I attended the Ad Astra science fiction convention in Toronto over the weekend, and the first two panel discussions I caught both involved imagining the future and how wrong our predictions often are. Of course, the subject was implied in other sessions too, because SF is a forward-thinking literature (alternate history notwithstanding). One of the most notable things that SF writers and filmmakers got wrong was the evolution of computing. Almost no one predicted that we’d all have personal computing devices, especially not the size of a watch. Computers in the ’50s and ’60s were monstrous, and the expectation was that increasingly sophisticated models would be even bigger. That seems laughable now, as we check our email and surf for a movie to watch on our phones. But then we also figured we’d have flying cars, eat a dinner of pills, and at least have a permanent base on the Moon by now, if not hotels (The Jetsons pretty much covered the expectations of the time).

We shouldn’t be too hard on those early futurists. As Ad Astra panellists like Eric Choi and Neil Jamieson-Williams pointed out, the technology for such things has often become available, but we’ve discovered we don’t actually want them. We like real food. We know how dangerous most of our fellow drivers are on paved roads—it doesn’t bear thinking about them swooping around us through the air. In the case of Moon bases or flights to Jupiter, a whole complex of reasons has delayed them, mostly political and economic (recessions and an endless string of armed conflicts).

Some writers nail it when predicting future technology, but I don’t think accuracy is that important. No matter how far into the future they’re set, SF stories are always about us, here and now: our reaction to the future society and the priorities of its people, the ways future tech would change our lives, the things we’re doing now that might be creating a future we don’t want. In our stories we say, “Here’s where this technology seems to be heading, here are the implications of that, and if we don’t want those results we should act now to make sure they don’t happen.”

The idea of too-powerful governments monitoring and controlling nearly every aspect of our lives is a common trope of cautionary SF. In reality, we’re voluntarily surrendering more and more of our privacy and free will all the time: to governments in return for promised (though dubious) protection from over-inflated threats to our security, and even more puzzlingly, to corporations in return for a better shopping experience! We could have learned our lesson from science fiction, but we obviously haven’t.

The possibility that artificial intelligence will arise and want to wipe out the “inferior” human race is another major trope (the Terminator movies being the most famous example). While authors like Karl Schroeder and Madeline Ashby feel that’s mostly about the way we anthropomorphize machines and expect the worst from them based on our experience with other humans, such SF stories are effectively saying that now is the time to build in safeguards for AI, limit its development, or simply come to a better understanding of consciousness to ease our fears. (Karl, Madeline, and Hayden Trenholm rightly point out that we probably have more to fear from the mindless computer algorithms currently being used by our financial systems and the like than from anything with a mind.)

So, while it’s entertaining to imagine future technology, science fiction is really about our world and the way we’re shaping it, day by day. The actual predictions—bullseyes and duds alike—are mainly useful as answers to trivia questions.

Which is too bad, because I really wouldn’t have minded a flying DeLorean powered by a Mr. Fusion.