It’s been a decade since world chess champion Garry Kasparov was first defeated by a computer. Since then, even as top players have retooled their games to counter computers, the machines have held their own, drawing matches against the world’s best. It seems only a matter of time before computers win every time — if humans are still willing to play them, that is.
But each time computers have shown their remarkable abilities, detractors have claimed that the computers are really inferior because they apply brute-force tactics: methodically tracing every possible move instead of creatively reasoning toward a solution. Daniel Dennett says we shouldn’t be so quick to dismiss the computers. After all, human chess champions study and memorize sequences of moves, encyclopedically sorting through thousands of variations as they play. Isn’t this the same thing the computer is doing?
As soon as computers became good at chess, chess was dismissed as a valid test of intelligence — ironically, precisely because computers could do it. A classic case of moving the goalposts.
Similarly, I’ve recently heard a few people say, “If computers could beat us at poker, that would be a genuine example of artificial intelligence.” As it happens, a poker-playing computer just narrowly lost to two pros.
Presumably, ‘genuine intelligence’ is just whatever computers can’t do yet.
I think this all points to a deep-seated discomfort with the idea of artificial intelligence. What if someday people built a virtual robot that was indistinguishable from a human, except that it had no physical form? If you watched and interacted with the robot via a video monitor, you simply couldn’t tell that you weren’t seeing a real human. Would it be human? You might argue that the robot has no consciousness, but how would you confirm that? The robot would constantly assure you that it was conscious, and any test you attempted would support its assertions. Would the fact that the robot wasn’t “real” be the deciding factor? Okay, then what if it were real — independently mobile, humanlike in appearance?
Sure, if you took the thing apart, you’d be able to see that it was made of nuts and bolts, but outwardly the creation would look and act like any other human. Would such a creation qualify for human rights? Should it be able to vote?
What if it behaved like a human — but a human with an IQ of 70?
Perhaps we can make chess-playing and poker-playing robots, but not human-seeming ones. Maybe somewhere there’s a line that science will never cross, so we’ll never have to face those questions. At this point, however, that’s not a bet I’d be willing to make.
[Update: While I was writing this post, Jonah Lehrer made an excellent post on the same topic. His take: Chess computers aren't actually very good examples of human-like intelligence.]