Daniel Dennett, in the latest Technology Review, argues that there’s no meaningful difference between the chess cognition of Deep Blue and that of Garry Kasparov. Both are functionalist machines, employing mental shortcuts to settle on an optimal strategy:
The best computer chess is well nigh indistinguishable from the best human chess, except for one thing: computers don’t know when to accept a draw. Computers–at least currently existing computers–can’t be bored or embarrassed, or anxious about losing the respect of the other players, and these are aspects of life that human competitors always have to contend with, and sometimes even exploit, in their games. Offering or accepting a draw, or resigning, is the one decision that opens the hermetically sealed world of chess to the real world, in which life is short and there are things more important than chess to think about. This boundary crossing can be simulated with an arbitrary rule, or by allowing the computer’s handlers to step in. Human players often try to intimidate or embarrass their human opponents, but this is like the covert pushing and shoving that goes on in soccer matches. The imperviousness of computers to this sort of gamesmanship means that if you beat them at all, you have to beat them fair and square–and isn’t that just what Kasparov and Kramnik were unable to do?
Yes, but so what? Silicon machines can now play chess better than any protein machines can. Big deal. This calm and reasonable reaction, however, is hard for most people to sustain. They don’t like the idea that their brains are protein machines. When Deep Blue beat Kasparov in 1997, many commentators were tempted to insist that its brute-force search methods were entirely unlike the exploratory processes that Kasparov used when he conjured up his chess moves. But that is simply not so. Kasparov’s brain is made of organic materials and has an architecture notably unlike that of Deep Blue, but it is still, so far as we know, a massively parallel search engine that has an outstanding array of heuristic pruning techniques that keep it from wasting time on unlikely branches.
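The “heuristic pruning techniques” Dennett mentions have a concrete counterpart in chess programs: alpha-beta pruning, which abandons a branch as soon as it’s clear the opponent would never let the game reach it. Here’s a minimal sketch in Python; the toy game tree and its scores are invented for illustration, and Deep Blue’s real search was vastly more elaborate:

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning hopeless branches."""
    if depth == 0 or not isinstance(node, list):  # leaf: a static score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:   # the opponent already has a better option:
                break           # prune the rest of this branch unexamined
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A toy two-ply tree: each inner list is a choice point, numbers are scores.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # → 6
```

Searching the third branch, the routine sees the score 1, realizes the maximizing player would never go there, and skips the remaining move entirely. Scale that trick up and a machine can ignore billions of “unlikely branches,” which is exactly the kind of wasted-time avoidance Dennett attributes to Kasparov’s brain as well.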
I’m a little more skeptical of Deep Blue’s psychological realism. The massively parallel machine was capable of analyzing over 200 million possible chess positions per second. Kasparov’s brain, on the other hand, evaluated only about five moves per second. And that gap in raw processing power came at a steep cost in energy efficiency: Deep Blue was a fire hazard, and required specialized heat-dissipating equipment to keep it cool during chess matches. Meanwhile, Kasparov barely breaks a sweat. His biological computer is a model of efficiency. So that’s one thing evolution gets you.
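Taking those two figures at face value (the five-positions-per-second estimate for Kasparov is a rough one, not a measured quantity), the disparity is easy to put a number on:

```python
# Back-of-the-envelope comparison using the figures quoted above.
deep_blue_per_sec = 200_000_000  # positions Deep Blue searched per second
kasparov_per_sec = 5             # rough estimate for a human grandmaster

ratio = deep_blue_per_sec // kasparov_per_sec
print(f"Deep Blue searched ~{ratio:,}x more positions per second")
# → Deep Blue searched ~40,000,000x more positions per second
```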
And then there’s the mystery of how, exactly, Kasparov managed to compete with Deep Blue even though his mind was tens of millions of times slower. His brain was clearly relying on something besides raw computational power. And I think this gets at a crucial difference between how Kasparov and Deep Blue learned to play chess. Kasparov’s neurons were effective because they had trained themselves. They had been refined by years of experience to detect subtle spatial patterns on the chessboard, which allowed him to rely on a powerful set of heuristics (unconscious mental shortcuts). Most of Deep Blue’s intelligence, on the other hand, was derived from other chess grandmasters, whose wisdom was painstakingly programmed into the machine. (IBM programmers also studied Kasparov’s previous chess matches, and engineered the software to exploit his recurring strategic mistakes.) But the machine itself was incapable of learning.
So while the brain is just a “protein machine,” it’s a profoundly different kind of machine. Personally, I think a much better example of human intelligence can be found in the AI work of Gerald Tesauro, who has created one of the great backgammon players in the world. Its name is TD-Gammon (the TD stands for “Temporal Difference”), and it learns just like our dopamine neurons.
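The temporal-difference idea at the heart of TD-Gammon can be stated in one line: nudge your estimate of a state’s value toward the reward you just got plus your estimate of the state that followed. The sketch below applies that update to a toy five-state random walk; it is a bare TD(0) illustration, not Tesauro’s actual program, which trained a neural network with TD(λ) through self-play.

```python
import random

# TD(0) on a 5-state random walk: start in the middle, step left or
# right at random; falling off the right end pays reward 1, the left
# end pays 0. Each value estimate drifts toward that state's true
# probability of eventually exiting on the right.
random.seed(0)
N_STATES = 5           # non-terminal states, indexed 0..4
ALPHA = 0.1            # learning rate
values = [0.5] * N_STATES

for episode in range(5000):
    state = N_STATES // 2                  # start in the middle
    while True:
        next_state = state + random.choice([-1, 1])
        if next_state < 0:                 # exited left: reward 0
            target = 0.0
        elif next_state >= N_STATES:       # exited right: reward 1
            target = 1.0
        else:                              # bootstrap from the next state
            target = values[next_state]
        # the temporal-difference update: move V(s) toward its target
        values[state] += ALPHA * (target - values[state])
        if next_state < 0 or next_state >= N_STATES:
            break
        state = next_state

# the estimates drift toward the true exit probabilities:
# roughly 1/6, 2/6, 3/6, 4/6, 5/6 from left to right
print([round(v, 2) for v in values])
```

Nothing here was programmed in by a grandmaster: the value table starts out flat and is sculpted entirely by experience, which is what makes the temporal-difference approach a better analogue for Kasparov’s self-trained neurons than Deep Blue’s hand-coded evaluation ever was.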