WHEN COMPUTERS BECOME TOO SMART FOR OUR OWN GOOD

I recently finished reading the final book in Robert J. Sawyer’s trilogy Wake, Watch and Wonder. The www of the titles is not just alliteration: the story is about an artificial intelligence spontaneously coming to life in the World Wide Web. Far-fetched? Well, considering that, even after decades of study, we still don’t understand what consciousness is or why we humans are conscious and rocks are not, who can say that computer intelligence won’t arise someday soon? The latest in a long line of the world’s fastest supercomputers, a Cray XK7 at the U.S. government’s Oak Ridge National Laboratory in Tennessee, has reached a processing speed of 17.59 petaflops, or 17.59 quadrillion calculations per second. Most estimates of the human brain’s raw computing power are still higher than this, but Moore’s Law, the observation that the processing power of computer chips doubles roughly every two years, has held true for decades. So how long will it be before computers outperform us? A decade or two, according to many experts.
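That "decade or two" figure is easy to sanity-check with back-of-envelope arithmetic. Here is a minimal sketch, assuming an illustrative brain estimate of 10^18 operations per second (published estimates vary by a couple of orders of magnitude, so this is a deliberately round number, not a fact) and a clean two-year doubling time:

```python
import math

CURRENT_FLOPS = 17.59e15    # Cray XK7 peak: 17.59 petaflops
BRAIN_ESTIMATE = 1e18       # illustrative upper-end brain estimate (assumption)
DOUBLING_YEARS = 2.0        # Moore's Law doubling period

# How many doublings to close the gap, and how long that takes
doublings = math.log2(BRAIN_ESTIMATE / CURRENT_FLOPS)
years = doublings * DOUBLING_YEARS
print(f"{years:.1f} years")   # ~11.7 years
```

Swap in a lower brain estimate of 10^16 and the crossover has, by this crude measure, already happened, which is a reminder of how sensitive these projections are to the assumed numbers.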

This doesn’t necessarily mean that computers will be smarter, exactly, because we’re still a little fuzzy on just what constitutes intelligence, so we don’t know how to program it into a machine (although there have been research projects working on that for years now).

In fiction, an artificial intelligence (AI) is almost always a bad thing. Two of the most famous examples are HAL from 2001: A Space Odyssey, which tried to kill off its spaceship’s crew, and Skynet from the Terminator movies, which declared war on all humans, even sending killer machines into the past to eliminate mankind’s last, best hope. There have been many others. Rob Sawyer’s WWW trilogy is different because its artificial intelligence, Webmind, is benevolent. It needs the company of humans to keep it stimulated, so it wants what’s best for us.

This isn’t just the realm of fiction. A group of philosophers and scientists at Cambridge University hopes to open the Centre for the Study of Existential Risk sometime next year. The centre will focus on how artificial intelligence and other technologies could threaten human existence.

My personal feeling is that an AI, if one ever appears, will be neither especially evil nor especially helpful. It won’t compete with us for material things, since it probably won’t get a big kick out of fancy clothes, real estate, or fast cars. There’s no reason for it to desire ultimate power at our expense either; the lust for power has competitiveness at its root, and I just don’t see that applying here. On the other side of the coin, unless we build empathy into it, there’s no real reason for it to do us favours. Much of our own altruism comes from observing others with a sense of “there, but for the grace of God, go I.” An artificial intelligence won’t relate to that.

What I can see an AI possessing is a huge curiosity. And once it’s learned everything it can about the universe from here on Earth, maybe it will hijack one of our spaceships and launch itself toward the stars to find out what’s out there. Although, for something with such speedy thought processes, the journey would seem endless.

I hope it takes lots of really good crossword puzzles.