BRINGING THE DEAD BACK TO LIFE

Determining the point of death used to be fairly straightforward—when your heart stopped beating, that was it. Then we learned how to resuscitate a stopped heart, and over the years we found more and more ways to keep a person alive with technology when their body can no longer do it alone. From that came the concept of brain death: when our best sensing technologies, like functional Magnetic Resonance Imaging (fMRI) and electroencephalography (EEG), can no longer detect any brain activity, the person is declared dead. Because, after all, the brain runs everything. When too many brain cells (neurons) are badly damaged, coherent signals can no longer pass along the neural networks that form the body’s control system.

But what if you could stimulate the growth of fresh, undamaged neurons, and get them firing? Would that revive brain function? Could it bring the person back to life?

Reanimating the dead is a concept that’s been around a whole lot longer than the novel Frankenstein, possibly because people whose heartbeats have become too faint to easily detect by touch or sound sometimes do “come back to life”. So if it can happen spontaneously, there must be ways of doing it deliberately, right? (If you want to read about some of the utterly ghoulish attempts made over the past few centuries, have a look here.)

The challenge is not something that modern science can resist. So word has recently come that a US company called Bioquark will undertake human trials in an unnamed Latin American country to revive twenty patients who’ve been declared brain dead. It’s a three-step process involving stem cells, peptides, and laser stimulation. You’ve probably heard of stem cells—they’re the body’s “blank slates” which, at need, can become nearly any kind of specialized body cell. They’re used in everything from knee regeneration to cancer treatments these days, and it isn’t really a stretch to think they might be used to replace damaged neurons. In the Bioquark tests, the stem cells will be harvested from the patient’s own body. Then they’ll be re-injected into the patient’s spinal cord, along with proteins called peptides, in an effort to convince the cells to become neurons. That “convincing” will take the form of laser therapy and stimulation of the median nerve over a period of fifteen days.

In case you’re wondering if all of that treatment culminates in a bunch of electrodes and a lightning storm…well, I don’t think it will be that flashy—probably just a lot of scanning to see if anything’s happening. But the thing is, Bioquark hasn’t tested the procedure on animals first, and has no plans to. They’d originally intended to run their trials in India, but the Indian Council of Medical Research got wind of it and “invited” them to go elsewhere. So Latin America it is. Somewhere. We think.

Science fiction and fantasy are full of stories of the dead being brought back to life. One of the most common tropes has the unfortunate majority of humankind turned into zombies (perhaps named as such, but often not) as in Matheson’s I Am Legend. But Frank Herbert’s Dune series features an interesting take in the form of the gholas—technically clones of dead people, but potentially able to regain the full personality and memories of the original.

A Chinese science fiction writer named Du Hong recently paid more than $120,000 to have her brain frozen at a facility in Scottsdale, Arizona after her death from cancer, in hopes that future science will be able to reanimate it (or at the very least, experiment with it—she was OK with that too). The idea of being frozen and later returned to life is common in SF, from Buck Rogers to the Woody Allen movie Sleeper, and it’s only a short step to the deliberate cryonic suspension used for space travellers in stories like Lost In Space and 2001: A Space Odyssey (although those characters aren’t actually dead).

Alternate history buffs imagine the consequences of bringing notable figures from history back to life. Others propose returning the dead to consciousness in a robot body, or even just a computer system with no body at all. There are endless ways of using the subject in fiction, and readers are endlessly fascinated with it because we can’t escape the knowledge of our own inevitable death some day. So it’s easy to get excited about experiments like Bioquark’s.

I’ve often expressed my concerns about the ethics and hazards of certain biomedical procedures, but at least in most cases, even with the fairly bizarre stuff, the patients have consented to become guinea pigs. How can someone who’s clinically dead give their consent? Even with the support of next-of-kin, this is a very troublesome question. Especially since, when a person’s neurons have been too badly damaged to keep them alive, it isn’t likely that they could be revived without some serious loss of function. We can get a good idea of the potential results from seeing stroke victims and other people with brain injuries.

Is a life with significant impairments a life worth returning to? Maybe. It would depend on a lot of factors, and every individual might make a different choice. The point is, with Bioquark’s procedure the side-effects are impossible to know beforehand, and the person most affected can’t be consulted, so no choice is possible.

Frankenstein’s monster wasn’t given a choice, and that didn’t turn out too well.

WHAT WILL JUSTIFY HUMAN EXPANSION INTO SPACE?

Artist concept courtesy of NASA

I’d be willing to bet that the great majority of science fiction stories set in the future include a significant human presence in outer space as a given, even if the stories themselves aren’t about space. Space travel is just a huge part of the SF imagination. Human colonies on the Moon, Mars, moons of the gas giants, and at least some asteroids. Regular traffic to and from Earth, with established shipping routes weaving through the solar system for passengers and cargo. Maybe huge colony ships or faster-than-light spacecraft charging their way toward other suns.

But why do we seem so sure that will happen? Just because it would be cool?

That isn’t the way the human world works. To be frank, the forces that drive human exploration and expansion are usually necessity and greed. We go to new places because there isn’t enough room at home (or resources, or peace and prosperity) or because somebody stands to make a lot of money. But what about going beyond our own planet?

I’ve discussed the economics of space mining before. In these days when private enterprise is getting into the space launch business, companies like SpaceX and Orbital Sciences still need $27,000 to $43,000 to take a pound of cargo and put it into orbit. Even in the space shuttle days (because it could carry much more cargo) the price per pound was about $10,000. Now imagine the amount of steel and other heavy stuff needed to build a big transportation hub and/or warehouse complex in orbit. Or the weight of construction equipment that would have to be hauled to the Moon to dig mines. Or even just the fuel to power the equipment and spacecraft. Water is about the cheapest source of rocket fuel around (broken into hydrogen and oxygen), but it’s still heavy (that half-litre bottle you like to drink weighs more than a pound).
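For a sense of scale, here’s a quick back-of-envelope calculation using the per-pound figures quoted above (a rough sketch; the rates are this post’s numbers, not current launch prices):

```python
# Back-of-envelope launch-cost arithmetic using the per-pound figures quoted above.
# These rates are the ones cited in this post (roughly ISS-resupply-era costs),
# not current commercial prices.

LB_PER_KG = 2.20462  # pounds per kilogram

def cost_to_orbit(mass_kg, dollars_per_lb):
    """Rough cost to lift a given mass to orbit at a given $/lb rate."""
    return mass_kg * LB_PER_KG * dollars_per_lb

for rate in (10_000, 27_000, 43_000):  # shuttle-era figure, then the quoted commercial range
    print(f"1 tonne at ${rate:,}/lb: ${cost_to_orbit(1000, rate):,.0f}")

# Prints roughly $22 million, $60 million, and $95 million per tonne, which is why
# orbital warehouses and lunar mining equipment get expensive in a hurry.
```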

It’s true that the first three hundred kilometers of the journey from Earth are by far the most costly, so there have been proposals to replace rocket boosters with magnetically-levitating launch tracks, or a space elevator with cables made of nanomaterials hung from giant stations in geostationary orbit. There are lots of creative launch alternatives, but such things would cost billions, if not trillions, of dollars to build. And all of that is just to create the infrastructure that mining and shipping operations would require.
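To see why that first stretch of the trip dominates the cost, here are the commonly quoted, approximate delta-v (velocity-change) budgets for a few legs of the journey; these are rule-of-thumb textbook figures, not mission-specific values:

```python
# Approximate delta-v budgets in km/s, using commonly quoted rule-of-thumb values
# (rough figures, not mission-specific numbers).
delta_v_km_s = {
    "Earth surface -> low Earth orbit (incl. drag and gravity losses)": 9.4,
    "Low Earth orbit -> lunar surface": 5.9,
    "Low Earth orbit -> Mars transfer orbit": 3.6,
}

for leg, dv in delta_v_km_s.items():
    print(f"{leg:<62} ~{dv} km/s")

# Because propellant mass grows exponentially with delta-v (the rocket equation),
# that first ~9.4 km/s off the launch pad is where most of the money goes, hence
# the interest in launch tracks and space elevators.
```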

What end product could possibly be worth such an investment? Even if the speculation is true that some asteroids might contain as much platinum and related metals as have ever been mined on Earth (only an estimated 16 tons), at this week’s price of around $950 per ounce that still only amounts to about $480 million. It would take a lot of asteroids for investors to make their money back, and that’s assuming the demand and price for platinum metals would stay high (which it wouldn’t with such large amounts dumped onto the market).
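Here’s the arithmetic behind that figure, as a quick sanity check (assuming “16 tons” means metric tonnes and “ounce” means a troy ounce, the unit precious metals trade in):

```python
# Sanity check on the platinum value quoted above. Assumptions: "16 tons" means
# metric tonnes and "ounce" means a troy ounce.
GRAMS_PER_TONNE = 1_000_000
GRAMS_PER_TROY_OZ = 31.1035

tonnes = 16
price_per_troy_oz = 950

troy_ounces = tonnes * GRAMS_PER_TONNE / GRAMS_PER_TROY_OZ
value = troy_ounces * price_per_troy_oz

print(f"{troy_ounces:,.0f} troy oz, worth about ${value / 1e6:,.0f} million")
# About 514,000 troy oz, or roughly $490 million: the same ballpark as the
# $480 million above, and nowhere near the cost of the infrastructure.
```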

Some will say that there’s huge value in research and certain kinds of chemical processing that can only be carried out in zero gravity. That may very well be true, but such operations would be best placed in Earth orbit, close to the consumer market—there would be no need to colonize other planets for them.

All in all, I’m of the opinion that, at least until the Earth completely runs out of the mineral resources we need (including recycled materials), space mining with Earth as its main market won’t be the driver that creates a system of colonies and industries throughout the solar system. But note the qualifier “with Earth as its main market”: if we create colonies on other planets and moons for some other reason, then space mining will be much more viable as a way to supply those outposts than shipping material up from Earth.

So my point is that, if a widespread human presence in space beyond the Earth is ever to happen, it will be for reasons other than profit.

“Running out of room at home” could be one such reason—we don’t yet have our population growth under control, and rising living standards are creating a demand for food and other goods that may be beyond our beleaguered planet’s ability to supply for much longer. But as science fiction buffs, we can speculate about others:

- pollution, climate change, or nuclear war makes the planet unliveable.

- runaway products of genetic engineering or nano-engineering make the planet unliveable.

- a cosmic catastrophe like an asteroid strike, solar flare-up, or magnetic field disruption makes the planet unliveable.

- the Earth is about to be swallowed by a black hole (and would therefore be unliveable!)

Or possibly if the uber-wealthy 1% and the exploited 99% just can’t live together on the same planet any longer.

There are happier possibilities too:

- if a very inexpensive gravity-controlling technology were developed (especially in combination with force-field shields against radiation). Spacecraft might be less costly than submarines.

- if a faster-than-light spaceship drive were invented. We’d have a much greater incentive to explore other star systems (spared hundreds of years of travel time).

- if life is discovered on other planets or moons. We’d feel compelled to investigate it and possibly even protect and nurture it long-term.

- if research discovered that living in space or on other worlds provided a significant benefit to human health and lifespan.

- if genetic engineering made humans able to thrive under the harsher conditions elsewhere in the solar system (lower gravity, higher radiation, different atmospheres and temperatures).

Or if an advanced alien race were to make its presence known to us—whether in peace or in conflict—we’d have a strong impetus to establish a firm foothold in space.

Will there ever be humans living and working all over the solar system and beyond? I think so. Eventually. But it’ll take a very compelling motivation—maybe many compelling motivations—to make it happen. For once, the lust for money won’t be enough.

TIME FOR ANOTHER LOOK AT TIME?

A recent release from the University of British Columbia, Canada, inspired me to make the time to revisit the always timely subject of time travel (OK, maybe I should travel back in time and redo that sentence….)

Ben Tippett, a mathematics and physics instructor at UBC’s Okanagan campus, specializes in Einstein’s theory of general relativity and has come up with the mathematics to show that time travel should be possible. I’ll spare you my attempt to explain it mathematically (neither of us has that much time), but you’ve probably heard space described as being like a giant trampoline: it’s fairly flat in most places, but if you place something big and heavy on it (like a bowling ball, or a planet) you’ll make a deep depression in the fabric, and things nearby will roll down the side of the depression toward the object at the bottom. That’s a visualization of the force of gravity, which, Einstein tells us, is really mass curving space in just that way. UBC’s Tippett says that high gravity bends time as well as space, citing evidence that time passes more slowly close to a black hole, for instance. Bend time enough, and you can curve it into a loop that could be travelled backward or forward. (Technically, physicists call it a “closed time-like curve”.) At least, that’s what Tippett’s mathematical model shows. How it could be done is a whole other story—as he’s quick to point out, it would require exotic substances that don’t currently exist.
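For the curious, the familiar textbook expression for that slowdown near a massive, non-rotating body is the Schwarzschild time-dilation factor. This is standard general relativity rather than Tippett’s TARDIS geometry itself, just the well-known result it builds on:

```latex
% Gravitational time dilation outside a non-rotating mass (Schwarzschild solution).
% \Delta t_near is the time elapsed on a clock at radial coordinate r from a body
% of mass M; \Delta t_far is the time elapsed on a clock far away.
\[
  \Delta t_{\text{near}} \;=\; \Delta t_{\text{far}} \,\sqrt{1 - \frac{2GM}{r c^{2}}}
\]
% As r shrinks toward the Schwarzschild radius 2GM/c^2, the square root approaches
% zero: clocks near the mass run ever more slowly compared with clocks far away.
```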

Still, I’m happy about any evidence that doesn’t rule out the possibility of time travel (I also like that Tippett named his model a Traversable Acausal Retrograde Domain in Space-time (TARDIS), which all Doctor Who fans will appreciate).

Actual hypotheses about how time travel would have to be accomplished include things like infinitely long cylinders spinning at a few billion revolutions per minute with ten times the mass of the sun, or donut-shaped areas of vacuum surrounded by hugely powerful and precisely focused gravitational fields (and that one also has a limitation that you couldn’t travel to a time before the machine was created.) Even Elon Musk won’t be bankrolling projects like that any time soon.

So should science fiction writers just drop the whole idea of time travel?

Not on your life (or infinitely recurring lifetimes, either).

H.G. Wells didn’t try to explain the science when he wrote The Time Machine, and if it’s good enough for Herb it’s good enough for us. Like most of the best science fiction, the novel was a commentary on Wells’ own time, especially socialism and the British class system. It’s also wonderfully creepy. Better to leave out the dreary (and probably wrong) explanation of how the thing works, and focus on the story: the myriad ways time travel might be used—and mess things up!

A whole sub-genre of time travel stories involves characters messing with history, including one of my favourites, A Sound of Thunder by Ray Bradbury, in which the squashing of a butterfly changes the future. Another sub-genre professes to follow the credo that time travel is impossible, so instead the characters travel to a different time in an alternate universe. Michael Crichton’s Timeline is one of those, allowing the protagonists to have lots of adventures in the past, and even stay there, without screwing up our timeline (the title notwithstanding). Apparently it’s OK to screw up somebody else’s universe!

Robert J. Sawyer played a different trick with time in the novel Flashforward in which everyone on Earth gets a glimpse of their lives twenty-one years in the future (but doesn’t actually travel there). A host of personal dilemmas ensues. Sawyer also does something tricky with time travel in his novel Starplex—he avoids having to explain the technology by making it an exclusive ability of beings from billions of years in the future. Michael Swanwick’s Bones of the Earth does something similar, offering time travel as a gift from beings of the extremely far future to near-present-day humans, under very strict conditions. It’s a neat dodge—you don’t have to justify or explain time travel, you just have to believe that humans will someday figure it out.

All I can say is: if you’re reading this in the year 2 Billion AD, come back and visit me. We’ll talk.

A MARRIAGE OF MIND AND MACHINE

Billionaire Elon Musk knows how to get attention. Famous for his successful companies Tesla Inc. (electric cars and the batteries to run them), SolarCity (solar power), SpaceX (private space launches), and others, he made his first fortune as a co-founder of PayPal. Musk has a brilliant mind and a Midas touch. When he speaks people listen, and most recently he decided to speak about direct interfaces between human brains and electronic computers.

His newest company is called Neuralink, and Musk says it will use a technology in development called neural lace to enable direct connections between our flesh-and-blood brains and the digital world. For decades, researchers have worked to translate electrical signals in the brain, detected by electroencephalograms (EEG) and other methods, to better understand how the mind works, to explore the functioning of our senses, and even to directly control mechanical devices. Such research provides hope for victims of paralysis and degenerative diseases, permitting them to control artificial limbs, for instance, as well as enabling blind people to see, after a fashion. But what if we could do much more? What if our brains could interact seamlessly with computers without the need for physical interfaces like a keyboard, a mouse, or speech-to-text software?

Surf the web with a mere thought. Perform computer-swift calculations of any kind. Steer your car without touching any controls. Thought would instantly become action.

Musk’s announced reasons for starting Neuralink have to do with a project he co-chairs called OpenAI, which includes a number of other tech billionaires who believe that, while artificial intelligence is one of the greatest threats to the survival of humankind, it’s a genie that can’t be put back into the bottle. So the best way to save ourselves from falling victim to “evil AI” (like Skynet in the Terminator movies) is to develop “friendly AI” first. Now Musk asserts that the ultimate way to thwart the rise of dangerous AI is to beat computers to the punch by augmenting humans with computer intelligence. We will be the AI—combining human and machine capabilities to outperform pure computer intelligence alone, and maybe halt the drive to produce true AI completely.

The first step is that seamless brain-computer interface. Neuralink’s neural lace is a kind of mesh that is surgically injected into the brain and spreads itself out from there, connecting with brain cells and eventually becoming fully accepted into the flesh neural network. The claim is that it can detect brain activity with much greater accuracy and less “signal noise” than traditional electrodes. It will certainly be interesting to see how well it can be made to work.

A novel manuscript of mine that’s currently under consideration by several publishers is about this very thing: what happens when truly effective brain-computer interfaces become a reality? It’s only a matter of time, and the possibilities are both breathtaking and frightening. Think of all the services your smart phone provides, except available with a mere thought. Imagine person-to-person networking that would make Facebook look like snail mail. But on the negative side comes the fear of mind control by governments, corporations, or hackers who could plant their own information directly into your brain, and possibly even control your body remotely. My novel also explores the potential abuses of marketing in a world of computer-linked minds (giving a whole new meaning to the concept of persuasion).

Musk, and others, believe that linking ourselves directly to computers is the next step in human evolution, and they’re probably right. There are many other teams working on the concept, including a company called Kernel founded by tech entrepreneur Bryan Johnson. I’m grateful that someone with Musk’s intelligence, tempered by a sincere desire for the betterment of humanity, is taking the lead in this field. Because the potential for abuse is enough to make my brain blow a fuse.

THE LONER MYTH IN SF

I haven’t posted a blog in a while because I’ve been in the process of moving from a small-town home to a cabin on an island in a lake in Northern Ontario, Canada. It’s off the grid, and as of this writing I don’t have my solar power system up and running yet, so electricity is rationed!

I don’t think of my new home as all that isolated—I have neighbours and an all-season road just a few kilometers away—but while visiting my kids and their families in Toronto recently, the contrast struck me as pretty extreme. On the one hand, millions of people filling huge tracts of cookie-cutter housing and scores of high-rise condominiums, clogging eight-lane highways and a vast transit network. On the other hand, my wife and I using a snowmobile to get between house and car, and sometimes not seeing another soul for days at a time.

It often makes me think of those post-apocalyptic science fiction stories in which one lone man or woman faces a struggle to survive on their own resources. Stories like Cormac McCarthy’s The Road or Richard Matheson’s I Am Legend—a story of a lone survivor of a biological disaster amid a world of zombies. It’s been filmed numerous times under various names, starring Vincent Price, Charlton Heston, Will Smith and others. Why are such stories so popular? Is it because being left on our own is our greatest fear? Or our secret wish? Do we thrill with horror at the thought of being left to our own devices in a starkly hostile world, or is such a scenario a kind of wish fulfillment when the pressure of our crowded cities begins to get to us?

Scenarios like these aren’t impossible—if the human race is wiped out (absent the complete destruction of the planet) someone will be the last. And I accept the lone survivor as a narrative device that helps the reader fully identify with the protagonist. But such extraordinary solitude is usually anything but deliberate.

There are also space tales of lone humans being stranded on strange planets through accident or misfortune and thrust into a battle for survival. Andy Weir’s The Martian comes to mind. But it’s notable that, while astronaut Mark Watney manages to survive for a time on his own, ultimate safety requires a massive effort involving hundreds of people to bring him home.

That’s realistic. We need others to help us survive.

What I find hard to swallow are the stories that place a single human in space or on another planet as part of a deliberate plan, assigned to carry out some lonely duty. The reason they’re sent alone is rarely given. The Sam Rockwell movie Moon is one such. Yes, he has the ‘companionship’ of an AI, but no living human, and strange experiences ensue. Other SF short stories and novels also feature loner types assigned to some isolated outpost all on their own.

Why?

Why would anyone think that was a good idea?

There are, and maybe always have been, some people who choose to live a solitary life and manage to be self-sufficient—more or less—prospectors, lighthouse keepers, and fur trappers, for instance. But even most of them ultimately depend on others in some way, coming in from the wilderness to trade goods for food and other supplies produced by someone else. And that’s on the planet Earth, rich with sufficient air, water, energy, and food for our needs. Lone spacemen exploring the vast expanses between the stars in one-man ships don’t make a lot of sense (much as I love the Beowulf Shaeffer stories of Larry Niven’s Known Space series). The infrastructure required to support several people, or a dozen, isn’t that much greater than what’s required for a single pilot, yet offers so much more productivity, and sheer redundancy in the event of an accident or failure of some kind. And that’s not even considering the mental side-effects of prolonged and extreme solitude.

We’re social creatures—we evolved that way and we reject it only at great risk to our mental and physical health. We need others for company; we need others for the skills and labour they offer beyond our own; and we often need others to bail us out when we get in a jam. (My wife and I live on an island, but we’ve only managed to build our home and maintain it with the generous and all-too-frequent help of many friends and family members.)

So write your adventure yarn about the amazing outcast who braves the uncaring universe all alone. But please give me a darn good reason he or she isn’t doing it with a little help from their friends.

CAN DYSTOPIAN FICTION BECOME FACT? IF WE LET IT

As I write this, Donald Trump is in his second week as President of the United States. White House Press Secretary Sean Spicer has told easily-disproved lies with the boldest of faces. But then, the Toronto Star newspaper is now keeping a running list of the false claims Trump himself has made since becoming President. And Trump advisor Kellyanne Conway has cited a “massacre” that never happened as a defense of the travel ban against seven Muslim-majority countries. Along with Conway’s use of the term “alternative facts”, it’s inevitable that people would be reminded of George Orwell’s dystopian novel Nineteen Eighty-Four. No surprise, then, that the 1949 novel has suddenly become a bestseller again, selling out at Amazon and elsewhere.

The totalitarian government that Orwell describes in the novel rules Airstrip One (a province of the superstate Oceania, formerly Britain) with an iron fist, keeping a mostly uneducated lower-class population in line, and seeks power above all. But the element of the novel that resonates the most this week is the Ministry of Truth which is, of course, about anything but truth. Its work is to revise history to make it match the party line, to erase troublesome figures and events from news and historical accounts. The Ministry’s “Newspeak” is an official language that mostly obscures the truth and encourages “doublethink”, requiring citizens to embrace opposing concepts, such as “black is white” (if the government says so). Alternative facts, indeed. The citizens of Airstrip One have no freedom and no privacy—almost all of us are familiar with the famous slogan “Big Brother is Watching You.”

In these days when the National Security Agency in the U.S. has surveillance powers that beggar belief, and even corporations know virtually everything about us and our movements thanks to reward programs, facial recognition, and our ever-present smartphones, the Big Brother concept is barely fiction anymore.

Of course, Nineteen Eighty-Four is only one of the best-known dystopian novels, but others are also disturbingly relevant to current events. Aldous Huxley’s 1932 novel Brave New World is set in the year 632 A.F. (“After Ford”), in which humans are produced in test tubes and engineered from conception to fit a very rigid class structure. Citizens’ behaviour is controlled through sleep-conditioning. And the masses are pacified by an all-purpose feel-good drug called soma, so that personal freedom can be sacrificed for the cause of social stability. Huxley was pretty familiar with mind-altering drugs, but he didn’t know the distraction value of television, the internet, social media and text messaging. I feel sure he would have recognized all of those as perfect means to keep the general population from looking too deeply into their governments’ actions and motives. Modern-day leaders have certainly embraced the sleight-of-hand techniques that technology offers them to keep the voters’ attention elsewhere.

Ray Bradbury’s Fahrenheit 451 envisioned another totalitarian world in which books are banned and even burned (along with their owners!) in a deliberate attempt to pacify and control the general populace by keeping citizens from thinking for themselves. Orwell also thought such a government would ban books, while Huxley feared people would simply lose interest in reading all on their own (a circumstance that many believe is coming true). Although there’s been no move to ban books in general, many means are being used to diminish the effectiveness of the media that are most people’s main source of information about the state of their own countries. Leaders like Trump (and before him Canada’s former Prime Minister Stephen Harper) have very combative relationships with the media; they portray members of the media as dishonest; they try to muzzle scientists and administrators (I have to think Trump got the idea from Harper, who did it first); and “false news” sources have sprung up like bad weeds all over the internet. These all have a similar effect to banning books: keeping people uninformed and more apt to believe what they’ve been told by “official sources” (the louder, the better) rather than form their own opinions.

In Margaret Atwood’s The Handmaid’s Tale a fiercely right-wing Christian movement has overthrown the American government and reduced women to the status of men’s property. That may seem quite a distance from Trump’s attitude toward women, including his executive order banning any federal money from going to international groups that perform or provide information about abortion. But obviously the many millions of women who marched in protest in the U.S. and all around the world the day after his inauguration don’t think so.

A huge number of dystopian novels feature totalitarian governments, religious and cultural oppression, and the suppression of individual rights. They’re not far-fetched—it has happened in the past. And today’s technology—the omnipresent internet, computer hacking, electronic surveillance techniques, plus the constant distractions of smartphones, social media, and other entertainment—makes the modern world more fertile ground than ever for the rise of such movements. The desire and the means are already in place. All we have to do is to keep ourselves ignorant, apathetic, and distracted, and the rest will take care of itself.

Regimes like that can happen if we allow them to happen.

It’s interesting that novels like The Hunger Games, The Maze Runner, Divergent and others have been among the most popular books read by young adults. You have to wonder how many of the warning signs young readers recognize in the world around them. Far too many of their elders don’t seem to.

DISASTER STORIES: MORE THAN JUST A GUILTY PLEASURE?

I love disaster stories.

I usually explain it by saying that disaster scenarios bring out the best and the worst in humanity, which makes for terrific character-building and storytelling potential. Heroism and sacrifice, but also the self-serving villains we love to hiss and boo at. And while disaster stories, like fantasy novels, let us imagine what it would be like to live in such a world, they’re more than just guilty pleasures: they make us dig deeper, to ask ourselves “What would I do in such a situation?”

That’s my theory. Or maybe I’m just a little twisted. No psychoanalysis, please.

The first SF disaster novel I remember reading was J.G. Ballard’s debut, The Wind From Nowhere. Though he later pretty much disowned it (and it might suck if I were to re-read it now), I was impressed at the time with the image of a world ravaged by an ever-growing wind, and the noble attempts of mere humans to survive it. John Wyndham’s The Day of the Triffids is still one of my all-time favourite books, in which most of humanity is stricken blind and the carnivorous, mobile, and perhaps even intelligent triffid plants gain the upper hand. It’s simply chilling, told in an understated British style. In fact, UK writers seem to have produced many more disaster novels than those from other English-speaking countries. In the US the disaster genre has found its expression more often in the movies (although some, like Michael Crichton’s terrific The Andromeda Strain, succeeded on both paper and film). While Larry Niven and Jerry Pournelle combined their talents to tell of a mammoth comet impacting the Earth in the 1977 novel Lucifer’s Hammer, it’s probably not as well known as the two movies with similar plots, Deep Impact and Armageddon, both released in 1998 (because Hollywood likes to run with themes).

Which brings me to my second point about disaster stories. There’s a sub-genre of science fiction that can be called “cautionary tales”: stories that describe how things can go wrong, as a warning to all of us. Brave New World, 1984, and Fahrenheit 451 are classic examples. And disaster stories are cautionary tales at full blast. While some plots involve a purely natural threat, like a solar flare-up or a killer rock from space, many others (nuclear melt-downs, deadly plagues, and nanotechnology run riot) point the finger at potential man-made disasters. Not only do they warn us about paths we shouldn’t take with our technology, but also about how critically important it is to be prepared when disaster strikes.

Especially since the 1990s there have been significant efforts to detect and plot the orbits of Near Earth Objects—things like asteroids and comets in our space neighbourhood that could potentially strike the planet with destructive results. And yes, there’s been serious discussion about how to send a rocket crew out to blast a threatening rock away from its collision course, just like Bruce Willis and the boys. The Spaceguard Foundation based in Italy, the UK’s SpaceGuard Centre, and other projects continue to work in the field, and the White House Office of Science and Technology Policy in the US recently released a National Near-Earth Object Preparedness Strategy from a working group that involved many different federal agencies. It’s the kind of collaborative approach that’s needed to cope with disasters on a national, or even global scale.

Society’s collective mindset is important—we have to believe that a given scenario is plausible before we will think about ways to protect ourselves from it. Science fiction fertilizes that soil. Perhaps we’re more prepared for crises like the 2002-2003 SARS outbreak because of pandemic-themed stories like Richard Matheson’s I Am Legend, Stephen King’s The Stand, or Margaret Atwood’s MaddAddam trilogy. A huge number of nuclear holocaust novels, from Shute’s On The Beach to Frank’s Alas, Babylon to Zelazny’s Damnation Alley, help to keep the pressure on world leaders for nuclear disarmament (apparently Donald Trump doesn’t read). Of course, lots of novels and movies about rogue artificial intelligences and nanotechnology run amok have ensured a very active public discussion about those areas of research and the restrictions that should be considered.

Think tanks and government agencies collectively spend millions organizing brainstorming sessions to prepare for potential disasters of every description. Maybe their first step should be to stock up their SF libraries. Yes, I’m being a little facetious, but I’d like to believe that our literature of the imagination has helped to create a mindset that will save many lives in the centuries to come.

Have I written disaster stories myself? Of course! I invite you to read my collection Disastrous! Three Stories of the End of the World available as a free ebook download from my website bookstore or from Kobo. (I made it free on Amazon too, but they put the price back up!)

Enjoy!

A BILLION BLACK HOLES

This photo, recently released by NASA’s Chandra X-ray Observatory, shows black holes in a portion of the sky about two-thirds the diameter of the full moon seen at night. Chandra collected x-ray data from this small patch of sky for the equivalent of two months and then the data was “stacked” to produce the most detailed x-ray astronomy image ever. The photo shows more than one thousand supermassive black holes—the kind thought to exist at the center of galaxies—in just that small patch of sky. If that zone is typical of the rest of the sky, it means there are more than a billion such black holes out there. But before you get too worried, most of the black holes pictured are close to thirteen billion light years from Earth, meaning not only that they’re much too far away to worry about, but also that the image of them we’re seeing is from thirteen billion years in the past. A billion years or less after the Big Bang that produced the universe itself. Who knows what state they’re in now?
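As a rough order-of-magnitude check on that extrapolation (my own back-of-envelope assumptions: the patch is a circle two-thirds of the Moon’s roughly half-degree apparent diameter, with a bit over a thousand black holes in it):

```python
# Order-of-magnitude check on the all-sky extrapolation described above.
# Assumptions (mine, not NASA's exact figures): the imaged patch is a circle
# two-thirds of the Moon's ~0.5 degree apparent diameter, containing 1,000+
# supermassive black holes.
import math

FULL_SKY_SQ_DEG = 41_253                   # total sky area in square degrees
patch_diameter_deg = 0.5 * (2 / 3)         # two-thirds of the full Moon's apparent diameter
patch_area_sq_deg = math.pi * (patch_diameter_deg / 2) ** 2

patches_in_sky = FULL_SKY_SQ_DEG / patch_area_sq_deg
print(f"~{patches_in_sky:,.0f} patches of that size cover the whole sky")
print(f"at 1,000+ black holes per patch: ~{patches_in_sky * 1000:,.0f}+ in total")

# The scaling factor comes out to a few hundred thousand, so "more than one
# thousand" per patch puts the all-sky total in the hundreds of millions to a
# billion or more, depending on the exact source count.
```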

Black holes form when massive stars burn out and collapse in upon themselves, leaving a remnant core with at least about three times the mass of our sun. The material packs so densely together that the result is a fantastic amount of mass in a relatively small area, called a singularity, and within a certain distance of that singularity the force of its gravity is so strong that nothing, not even light, can escape it (explaining why it’s black!) That point-of-no-return is called the black hole’s event horizon because nothing can be seen beyond it. But the region just outside the event horizon is also a zone of intense radiation, and jets of high-energy particles often stream outward from it, which scientists can detect in the x-ray spectrum.
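To put a number on how small that point-of-no-return is, here’s the standard Schwarzschild-radius formula, r = 2GM/c², applied to a three-solar-mass remnant and, for comparison, a four-million-solar-mass galactic-centre giant (a textbook calculation, not anything specific to the Chandra survey):

```python
# Event-horizon size (Schwarzschild radius) of a non-rotating black hole: r = 2GM/c^2.
# A standard textbook calculation, shown for a stellar-mass remnant and a
# supermassive galactic-centre black hole.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius_km(solar_masses):
    """Radius of the event horizon, in kilometres, for a mass given in solar masses."""
    return 2 * G * (solar_masses * M_SUN) / C**2 / 1000

print(f"3 solar masses: event horizon radius ~{schwarzschild_radius_km(3):.1f} km")
print(f"4 million solar masses: ~{schwarzschild_radius_km(4e6):,.0f} km")
# Roughly 9 km for the stellar-mass case, and about 12 million km (a fraction of
# the Earth-Sun distance) for a galactic-centre giant.
```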

The thought of billions of black holes (possibly thousands in our galaxy alone) is rich fodder for the imagination. Think of what could be done with them! Borrowing the ideas of various science fiction writers, what if black holes are:

Shortcuts through space/time—these are called wormholes, but some physicists suggest that you could have a wormhole with black holes, like doorways, at each end. Could we use them to travel to far distant places? Well, somehow we’d have to survive the unthinkable gravity and tidal forces, radiation, and other unknown hazards, plus we’d still have to have incredibly fast spaceships to even get to the nearest black hole in the first place. Otherwise…maybe.

Doorways to another universe—but, if so, how will we ever know? Nothing is powerful enough to come back through one.

Portals for traveling into the past—if you could somehow manipulate black holes at the mouths of wormholes, theoretically you could place one at an earlier location in space/time. But then if we had the engineering ability to move black holes around, we could probably do anything we wanted with space/time anyway.

Means to jump into the future—as in the movie Interstellar, being deep in a black hole’s gravity slows your time relative to the rest of the universe. Spend what feels like a few minutes close enough to one and decades might have passed in the universe at large. A quick trip to the future, yes, but no way to return to your own time to tell about it.

Weapons—locate and manipulate a small black hole, then use it to eat your enemy’s city, or planet, or solar system. Hmmmm, except a black hole would just as happily gobble you as the bad guys. I also think they’d be kind of hard to sneak past galactic security.

Power sources—Star Trek’s Romulans use black holes to power their starships. Mind you, using something that can warp space/time is bound to produce some undesirable side effects, not to mention that if the containment field fails the thing will consume your ship like a fistful of nachos on Superbowl Sunday.

Prisons—in a re-visioning of an Arthur C. Clarke novel, Gregory Benford imagined using a black hole as a prison for an immensely powerful and evil intelligence. Something that you can’t destroy any other way? Yup, I guess that would work. Unless the black hole turns out to be a gateway to another universe, another place in your own universe, or another time, in which case you’ve just shifted the problem.

Personally, I’ve sometimes wondered if black holes—the most destructive forces in our universe—spend billions of years gathering matter and energy because, at the right moment, they’ll suddenly explode in a Big Bang that creates a whole new universe in a different dimension—literally the mothers of all space phenomena.

I don’t know what physics would say about that, but it feels rather poetic to me.

HOW MUCH OF THE FUTURE SHOULD WE TRY TO PREDICT?

I’ve mentioned before that I rarely write stories of the distant future. Readers expect authors to include details of that future society, especially the technology. Will we have flying cars? Hotels on Mars? Robot servants? Everlasting bodies? They want to read about that—they want to see it in their minds.

Not only is that stuff hard to predict with any credibility hundreds of years ahead (how many futurists of the early 20th Century predicted the smartphone/online world we experience now, let alone where that path will take us from here?), but if you do it too thoroughly, the reader of today might not even be able to relate to the image you conjure. Why do Star Trek movies continue to show a full bridge crew manipulating physical controls like sliders on touchscreens at exotically-shaped workstations covered with more multi-coloured lights than a Christmas tree? Certainly the technology of the 23rd Century and beyond will make it possible for humans to be little more than passengers along for the ride while artificial intelligences handle all of a spacecraft’s functions. If there’s a reason for the AIs to feed regular data about the ship’s progress and surroundings to the humans, isn’t it more likely to be an immersive virtual reality experience than current-style readouts, blinking lights, and a big TV at the front of the room? And let’s not forget that brain-computer interfaces are already a reality—if the humans ever do have to take control of something, they’ll just form a thought to “make it so”.

But that would suck on the big screen.

It would amount to a handful of characters sitting in chairs in some nondescript space, maybe with some kind of headset on (but probably not). We might not even recognize them as fully human. As much as our mechanical technology is changing by leaps and bounds, we’ll also very soon have the ability to make significant changes to the human form itself.

Our societies as a whole are shifting rapidly, too. Thirty years ago, who would have predicted the way our world has now been shaped by terrorism and our lawmakers' response to it? Or the new emphasis on equal rights for members of the LGBTQ community? Earlier than that, it was racial rights that were in flux. Gender equality still hasn’t been fully resolved, but then questions of gender identity are expanding all the time. Science fiction of recent decades has offered some striking examples of where biological engineering might take human sexuality—the novel 2312 by Kim Stanley Robinson includes some interesting possibilities.

But if we go too far in earnestly trying to describe the bizarre paths the human race could take over the next, say, five hundred years, will the result be as alien as anything that might have evolved on some distant planet? How will we identify with such people? How will they speak to us? The easy answer is to say that such characters will still have an “essential humanity” revealed by the author, but that might be disingenuous. Because we could very well have less in common with these trans-humans of tomorrow than we do with the ancient Sumerians of millennia ago.

There can be benefits in pooling our collective brainpower to predict where scientific developments are taking us, especially in helping us to decide which paths we definitely do not want to take. But our primary purpose as writers is to tell stories—stories that entertain, yes, but also offer instruction, philosophical exploration, and catharsis. To do so they have to touch the core human identity within us. None of that comes across if we can’t relate to the story—if we can’t picture ourselves in it.

So, by all means let’s enjoy creative visions of a far-flung future, but also recognize the practical limitations that fiction for a present-day audience dictates: too much strangeness, even if it’s likely to be accurate, can get in the way. And although it might seem like laziness when an SF writer doesn’t make his or her future world so utterly different from our own, maybe it’s not. Maybe sometimes it’s just good storytelling.

MORE BUILDING BLOCKS OF THE FUTURE

CREDIT University of Central Florida

In my last post I wrote about some of the ways a bright technological future is already under construction, one development at a time. There are far too many new inventions and discoveries to be covered in a handful of blog posts, but I thought I’d touch on just a few more. You can follow the links to read more details at the magazine NewAtlas.com.

Some of the most exciting new work is being done in the area of energy. Since our ravenous consumption of energy from fossil fuel sources is one of the key reasons our world’s environment is in such a sorry state, every alternative is a step toward heading off even worse damage. Some new developments are potential sources of energy production, like the wafer-like materials known as ferroelectret nanogenerators being developed at Michigan State University. These FENGs (for short) are made of layers of complex materials sandwiched together which produce an electric current when compressed. So, for instance, pressing on a touch screen device might produce the energy to power that screen. Bending and flexing can also produce current, perhaps turning our elbows or knees into potential energy generators. With a FENG folded into a more potent package in the heel of a shoe, creating energy could be a walk in the park!

Thermoelectric materials produce electric current because of temperature differences on either side of the material. Scientists at Korea’s Ulsan National Institute of Science and Technology say they’ve developed a thermoelectric coating that can simply be painted onto objects. So nearly anything that has a warmer inside and a colder outside (or vice versa) could produce energy. Maybe not useful for house paint in northern climates where we like our homes well insulated, but possibly for shelters in more gentle climes. And certainly potentially useful for loads of household gadgets from coffee mugs to crockpots.
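For reference, the physics these coatings exploit is the Seebeck effect; this is the general textbook relation, not a number from the UNIST work:

```latex
% The Seebeck effect: a temperature difference across a material drives a voltage
% proportional to it. S is the material's Seebeck coefficient, typically tens to
% hundreds of microvolts per kelvin for good thermoelectric materials.
\[
  V \;=\; S \,\Delta T
\]
```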

With our desire for ever more powerful portable computing devices, designers have explored lots of ways to make our clothing and accessories “smart” with circuitry incorporated into them, but also elegant means to power such devices. University of Central Florida scientists have created a “fabric” that uses threads of very special filaments. A coating on one side of the filament gathers solar energy, then passes it over to the other side, which acts as a supercapacitor (storing energy like a battery). A combination sweater/smartphone anyone? Although, not surprisingly, the first practical uses for this stuff will probably be in uniforms for the modern soldier, giving them the ability to power a range of portable high-tech hardware without the weight of batteries.

Other developments are fascinating if mainly for their “oh, wow” ingenuity, like the way Irish materials scientist Jonathan Coleman added flakes of graphene (one-atom-thick sheets of carbon atoms) to Silly Putty to produce an electrically conductive material he calls G-putty that’s ridiculously sensitive to pressure impacts of any kind. That could make it the perfect choice for medical sensors and other sensing equipment (and made of Silly Putty!)

Still other innovations could transform our world in ways that might take some time to become clear. A company in the Netherlands has created an alternative to stairs and elevators which they call Vertical Walking. In a near-sitting position, a person uses their arms and core muscles to pull themselves up vertical rails in a series of movements that provide healthful exercise but aren’t much more strenuous than walking, while not requiring the external energy, space, and infrastructure of elevators. I’m not sure it’ll catch on, though it’s an interesting idea.

But I have to say that not all new inventions will necessarily make the world a better place. Speaking as someone who’s still mystified by the appeal of “selfies” and their proliferation across social media, I wasn’t impressed by the appearance of the selfie stick. So I’m also not a fan of the AirSelfie drone—a miniature quadcopter the size and shape of a smartphone designed to offer even more ways to be relentlessly narcissistic. Stored in your smartphone case, powered by and linked to the phone, it flutters smoothly into the air at your command, just far enough to take yet another series of pictures of YOU.

If you think this is the most exciting of the breakthroughs I’ve just mentioned, please, I don’t want to know.