I’ve been amused this week to read the news that a computer program passed the famous “Turing Test” for artificial intelligence. The program presents itself as Eugene Goostman, a 13-year-old boy living in Ukraine, and it carried on text conversations well enough to convince one-third of a panel of judges that they were chatting with a human being. It happened during a Turing Test event hosted by the University of Reading in the UK on the 60th anniversary of the death of mathematician Alan Turing, who devised the test as a way of measuring artificial intelligence: if a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations, it passes the test. This is being touted as the first successful test, although New Scientist magazine points out that others have succeeded too, depending on the criteria used for the judging.
Detractors claim that presenting “Eugene” as a 13-year-old boy with limited English-language skills coloured the expectations of the judges enough to render the test results less meaningful than they might otherwise be. Have you heard thirteen-year-olds talk lately? The fact that the judges could understand Eugene’s answers at all should have been a tip-off that they weren’t speaking to a real teenager. Did he pause in the conversation to answer a few texts on his phone? Did he drop f-bombs, use spelling that looked like alphabet soup given a stir, or rely on the word “like” every other sentence? Were there any mistakes obviously caused by autocorrect? Dead giveaways, all of those. (Actually, Eugene does text like that on Twitter.)
Personally, I think the limitations of the test itself make it of little value. Certainly it shows that superfast processors, fed with enough data about likely questions, colloquial language, general knowledge and other parameters, can simulate a humanlike dialogue. It says nothing about self-awareness, self-motivation, creative problem-solving, psychological empathy, or the many other things we would expect of an intelligent being. So we’re still a long way from the Skynet days of the Terminator movies, or even HAL from 2001: A Space Odyssey.
If you spend much time on Facebook, or even watching reality TV, you’ll know that speaking like the average human being isn’t exactly a shining display of intelligence anyway—quite the opposite.
There are efforts underway to create a more universal artificial intelligence test, involving visual cues, among other things. I expect that within another few generations of computing progress, that test will also be found wanting. The truth is, we’ll probably never know when the first truly intelligent, sentient, artificial mind is created.
Because it’ll know that the smartest thing it can do is to keep that little secret to itself.