Alan Turing was probably the world’s smartest British man. Among his accomplishments are winning World War 2 for the Allies and inventing computers. But that’s a story for a different time. What I want to discuss is a brilliant and simple little idea he had.
Alan Turing was interested in the idea of artificial intelligence. That is, machines with an actual mind rather than just your average pull-string Barbie or AIMbot. The problem with attempting to make an intelligent machine, of course, is figuring out how to tell whether it's intelligent. For that matter, say you never intended to make an intelligent machine in the first place, but you're convinced that you never told this version of Word to scream "help" in 180 languages over and over. You can't just ask "are you intelligent?", because I can write a program that answers "yes" to that question in about ten minutes and five lines.
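For the skeptical: here's roughly what that five-line program looks like (the function name and replies are my own, just for illustration):

```python
def answer(question):
    # No understanding happening here: it just pattern-matches the question.
    if "intelligent" in question.lower():
        return "yes"
    return "I don't know."

print(answer("Are you intelligent?"))  # prints "yes"
```

It passes the "are you intelligent?" quiz with flying colors, which is exactly why that quiz is worthless.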
Turing came up with a quick way to figure things out, so you can just calm down, it'll be okay. Put someone in front of a computer screen. Tell them that there's either another person or a computer in a different room, and the two of them are going to talk via text (so you don't need to make some freaky robot body to stick your AI in). Their job is to tell you whether the thing on the other end is a computer or a person. Then you just sit back on Easy Street while some rube does your work for you. If the AI convinces them that it's a human being, it's definitely an intelligent being.
The idea was that this was a task that would prove reasonably easy for anything intelligent with access to enough data. All you have to do is pick a character and do a bit of role-playing. At the same time, it's a task that's pretty difficult for anything without intelligence. After all, when was the last time an AIMbot fooled you? An intelligent being can take a couple of simple axioms ("I'm a white guy from the East Coast who is about 34 and likes baseball") and respond appropriately. It won't need a long if/then/else script of responses that breaks down when someone says "SantaBot, what do you think about the performance of President Obama thus far?"
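To see why that scripted approach falls over, here's a sketch of what such a bot amounts to under the hood (SantaBot and its canned replies are hypothetical, invented for this example):

```python
def santabot(message):
    msg = message.lower()
    # Canned if/then/else responses, one per anticipated keyword.
    if "hello" in msg:
        return "Ho ho ho! What would you like for Christmas?"
    elif "present" in msg:
        return "Only good boys and girls get presents!"
    else:
        # Anything off-script falls through to a generic non-answer.
        return "Ho ho ho!"

santabot("Hello, Santa!")  # stays on script just fine
santabot("SantaBot, what do you think about the performance of President Obama thus far?")
# falls through to the generic "Ho ho ho!", fooling no one
```

The script works only as long as the conversation stays inside the keywords its author anticipated; one unexpected question and the illusion collapses, which is exactly the gap Turing's test is designed to probe.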
It's important to make one thing clear, though. It's not a pass/fail test. It's a conclusive/inconclusive test. If something convinces you it's a human being, it's intelligent. If it can't, it may just be a comically shitty role-player. After all, some actual human beings probably couldn't convince you that they were human based on this sort of exchange. But odds are, if you're at the point where you're testing a computer program for sentience, it's at the very least something you should keep an eye on. Probably you should avoid letting it watch Terminator.