Is Deep Blue Human?

Daniel Dennett, in the latest Technology Review, argues that there's no meaningful difference between the chess cognition of Deep Blue and that of Garry Kasparov. Both are functionalist machines, employing mental shortcuts to settle on an optimal strategy:

The best computer chess is well nigh indistinguishable from the best human chess, except for one thing: computers don't know when to accept a draw. Computers--at least currently existing computers--can't be bored or embarrassed, or anxious about losing the respect of the other players, and these are aspects of life that human competitors always have to contend with, and sometimes even exploit, in their games. Offering or accepting a draw, or resigning, is the one decision that opens the hermetically sealed world of chess to the real world, in which life is short and there are things more important than chess to think about. This boundary crossing can be simulated with an arbitrary rule, or by allowing the computer's handlers to step in. Human players often try to intimidate or embarrass their human opponents, but this is like the covert pushing and shoving that goes on in soccer matches. The imperviousness of computers to this sort of gamesmanship means that if you beat them at all, you have to beat them fair and square--and isn't that just what Kasparov and Kramnik were unable to do?

Yes, but so what? Silicon machines can now play chess better than any protein machines can. Big deal. This calm and reasonable reaction, however, is hard for most people to sustain. They don't like the idea that their brains are protein machines. When Deep Blue beat Kasparov in 1997, many commentators were tempted to insist that its brute-force search methods were entirely unlike the exploratory processes that Kasparov used when he conjured up his chess moves. But that is simply not so. Kasparov's brain is made of organic materials and has an architecture notably unlike that of Deep Blue, but it is still, so far as we know, a massively parallel search engine that has an outstanding array of heuristic pruning techniques that keep it from wasting time on unlikely branches.

I'm a little more skeptical of Deep Blue's psychological realism. The set of linked mainframes was capable of analyzing over 200 million chess positions per second. Kasparov's brain, on the other hand, evaluated only about five positions per second. This difference in processing power leads to profound differences in computational efficiency. Deep Blue is a fire hazard, requiring specialized heat-dissipating equipment to keep it cool during chess matches. Meanwhile, Kasparov barely breaks a sweat. His biological computer is a model of efficiency. So that's one thing evolution gets you.

And then there's the mystery of how, exactly, Kasparov managed to compete with Deep Blue even though his mind was, by the numbers above, some 40 million times slower. His brain was clearly relying on something besides raw computational power. And I think this gets at a crucial difference between how Kasparov and Deep Blue learned to play chess. Kasparov's neurons were effective because they had trained themselves. They had been refined by years of experience to detect subtle spatial patterns on the chessboard, which allowed him to draw on a powerful set of heuristics (unconscious mental shortcuts). Most of Deep Blue's intelligence, on the other hand, was derived from other chess grandmasters, whose wisdom was painstakingly programmed into the machine. (IBM programmers also studied Kasparov's previous matches and engineered the software to exploit his recurring strategic mistakes.) But the machine itself was incapable of learning.
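
To make that contrast concrete, here is a minimal sketch, in Python on a toy game tree, of the kind of depth-limited alpha-beta search at the heart of brute-force engines like Deep Blue (the real engine, with its custom chess hardware, was vastly more elaborate). The pruning step is what keeps the search from wasting time on unlikely branches, but notice that the evaluation of the leaves is supplied from outside rather than learned:

```python
# Minimal sketch of depth-limited alpha-beta search, the brute-force
# technique at the core of engines like Deep Blue. The "game" here is
# a toy tree in which inner lists are choice points and integers are
# leaf scores; in a real engine those scores would come from a
# hand-tuned evaluation heuristic, not be given directly.

def alphabeta(node, depth, alpha, beta, maximizing):
    # Leaves (or the depth horizon) are scored by the evaluation
    # function; here the "evaluation" is just the number at the leaf.
    if depth == 0 or isinstance(node, (int, float)):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the minimizing opponent will avoid this line
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # prune: the maximizing player will avoid this line
        return value

# A tiny two-ply game: I choose a branch, my opponent then picks the
# worst leaf for me. The best I can guarantee is 6 (middle branch).
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # -> 6
```

The point of the sketch is that everything the search "knows" about chess lives in the evaluation function and the move generator; the search itself is just exhaustive enumeration plus pruning.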

So while the brain is just a "protein machine," it's a profoundly different kind of machine. Personally, I think a much better example of human intelligence can be found in the AI work of Gerald Tesauro, who has created one of the great backgammon players in the world. Its name is TD-Gammon (the TD stands for "Temporal Difference"), and it learns just like our dopamine neurons do: by continually nudging its predictions toward what actually happens.
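
For the curious, here is a minimal sketch of the temporal-difference update at the heart of TD-Gammon, demonstrated on a hypothetical toy problem (a random walk) rather than backgammon. The prediction error, delta, is the quantity that dopamine neurons are widely thought to signal:

```python
# Minimal sketch of TD(0) learning on a toy 5-state random walk.
# TD-Gammon combined this update rule with a neural network and
# self-play; here a plain table of values is enough to show the idea.

import random

N_STATES = 5                  # states 0..4; exiting right pays reward 1
values = [0.0] * N_STATES     # learned predictions of future reward
alpha, gamma = 0.1, 1.0       # learning rate and discount factor

for episode in range(5000):
    state = N_STATES // 2     # start each walk in the middle
    while True:
        next_state = state + random.choice([-1, 1])
        if next_state < 0:                 # fell off the left edge
            reward, next_value, done = 0.0, 0.0, True
        elif next_state >= N_STATES:       # reached the right edge
            reward, next_value, done = 1.0, 0.0, True
        else:
            reward, next_value, done = 0.0, values[next_state], False

        # The TD error: how wrong was the current prediction?
        delta = reward + gamma * next_value - values[state]
        values[state] += alpha * delta     # nudge prediction toward reality

        if done:
            break
        state = next_state

# The learned values approximate the true exit probabilities 1/6..5/6.
print([round(v, 2) for v in values])
```

Nothing backgammon-specific is wired in: the same loop, pointed at a different environment, learns different predictions, which is exactly the sense in which TD-Gammon, unlike Deep Blue, taught itself.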

While I generally agree with your conclusion, I take issue with the claim that Kasparov was relying less on "raw computational power" than Deep Blue. What is computational power? It's a lot more than clock speed or the number of cycles per second. It also has to do with the number of computational elements that can be brought to bear, the amount of data likewise, the speed of communication between all the parts, the efficiency with which all of these resources can be utilized, and so on. Imagine for a moment that someone designed a chess computer that was more like Kasparov, analyzing orders of magnitude fewer positions in orders of magnitude more detail. Wouldn't that also require a lot of computational power? Of course it would. The brain, with its billions of richly interconnected neurons and its capacity for handling real-valued quantities as chemical levels without converting to and from binary, has plenty of raw computational power in its own right. It is indeed a fundamentally different kind of machine, but it's a no less powerful one.

If there really were some kind of meaningful equivalence between the way Deep Blue and Kasparov played chess, then there should have been lessons learned that transfer to other cognitive tasks, like walking or visual perception, unless Dennett would argue that chess cognition is highly compartmentalized and specialized. The fact is, though, that no cognitive scientists are working with large-scale models of cognition based on the approach of Deep Blue, in part or in whole.

"But the machine itself was incapable of learning."

Not true! The initial evaluation function was determined by having the machine examine thousands of high-level matches. There was a lot of additional manual tuning as well, but it's wrong to say categorically that the machine was incapable of learning.

Kevin, was the machine able to do something beyond what "she" had been programmed to do? Probably not.

Having said that, the whole field of "machine learning", often based on statistical techniques such as regression for applications like spam filtering, shows that machines can learn... within pre-set parameters.
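
As an illustration of learning within pre-set parameters, here is a minimal, hypothetical sketch of a regression-style spam classifier of the sort the comment alludes to (the vocabulary and toy corpus are invented). The machine adjusts weights, but only over features its designers chose in advance:

```python
# Minimal sketch of a logistic-regression spam classifier trained by
# gradient descent. The vocabulary and the tiny corpus are invented
# for illustration; the "pre-set parameters" are the hand-picked
# features the model is allowed to weigh.

import math

VOCAB = ["free", "winner", "meeting", "report"]   # hand-picked features

def featurize(text):
    words = text.lower().split()
    return [1.0 if w in words else 0.0 for w in VOCAB]

# Tiny labeled corpus: 1 = spam, 0 = not spam.
data = [
    ("free money winner", 1), ("you are a winner", 1),
    ("free free free", 1), ("quarterly report attached", 0),
    ("meeting at noon", 0), ("report for the meeting", 0),
]

weights, bias, lr = [0.0] * len(VOCAB), 0.0, 0.5

for _ in range(500):                       # gradient descent on log-loss
    for text, label in data:
        x = featurize(text)
        z = bias + sum(w * xi for w, xi in zip(weights, x))
        p = 1.0 / (1.0 + math.exp(-z))     # predicted spam probability
        g = p - label                      # gradient of the log-loss
        weights = [w - lr * g * xi for w, xi in zip(weights, x)]
        bias -= lr * g

def predict(text):
    z = bias + sum(w * xi for w, xi in zip(weights, featurize(text)))
    return 1.0 / (1.0 + math.exp(-z))

print(round(predict("free winner prize"), 2))       # high: looks like spam
print(round(predict("monday meeting report"), 2))   # low: looks legitimate
```

The classifier can get better at exactly one thing, and it will never notice a feature nobody told it about, which is the sense in which such learning stays inside its pre-set parameters.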

It was determined by Adriaan de Groot, a Dutch psychologist, in the 1940s that chess grandmasters are not statistically better than average adults at memorizing randomly placed pieces, but are essentially perfect at memorizing positions from tournament games.

de Groot concluded that grandmasters have learned to "instantly" recognize roughly 30,000 templates of semantically meaningful configurations of a few key pieces.

Ahh, but imagine if the machine learned to behave like Bobby Fischer. Now that would be an interesting chess-playing machine.

It is true that computers could be said to imitate a limited type of human "thinking".

One major difference is that humans can process exactly the same input in multiple parallel ways and form numerous associations (though some computer processes may be said to imitate this to a degree, albeit crudely).

The big human difference is motivation, whatever that is. Kasparov wants to play chess, and is emotionally rewarded by playing chess.

Although animals do not play chess per se, Kasparov shares this general trait with other animals - and NOT with Deep Blue or other computers, even though they can play chess far better than the animals can.

Nothing in modern computer science attempts to address this; indeed, this issue has nothing to do with the field of computer science as it has developed.

It would be silly to assume that emotions will "emerge" from complex and sophisticated computing power alone, when computers are already way ahead of all animals, including humans, in many such ways, and when species that seem to have far less "intelligence" than humans show motivation and emotional reinforcement. Certainly there is no suggestion that, in the history of life, "computing power" came first and motivation, emotions, empathy and the like followed. Almost certainly they developed in parallel. (Of course, conscious motivation, learning, and emotional reactions wouldn't be selected for unless there were some type of capacity for flexible behavior.)

Here's a nonscientific POV: my general observations make me think that "thinking" always contains elements of random, illogical routines mixed in. I doubt that what we consider human "thinking" could be produced without that ever-present element of illogic. We can willfully suspend the omnipresent illogic routines when we do logic-oriented tasks, but strictly logic-oriented tasks by themselves wouldn't get us far. Conversely, we can also suspend logic routines (seemingly situationally).

Until we can introduce that self-adjusting randomness into machines, they won't be anything like us.

"Here's a nonscientific POV: my general observations make me think that 'thinking' always contains elements of random, illogical routines mixed in. I doubt that what we consider human 'thinking' could be produced without that ever-present element of illogic."

I strongly agree with this.

I see a lot of behavior as being what I refer to as "biorational". What I mean by that is that even insect behavior patterns can seem highly "logical" when they play out in the exact environment they were selected for.

Humans are often guided by involuntary or semi-voluntary instinctive and emotional reactions. These reactions may be "illogical" in the context of a technological society, but perhaps they were selected for because they "made sense" for survival and reproduction among our ancestors (including many of our pre-human ancestors).

Rushing to do the same thing that everyone else is doing, even if you're not sure why they're doing it, is an example. Not a very good idea, sometimes, in financial transactions. A great idea for a small primate who hasn't seen a hawk yet, but has already noticed that everybody is taking cover.

In a game between Petrosian (an ex-world champion, from the USSR days) and another master, a knight was sacrificed, and the material had to be returned several moves later as a forced counter-sacrifice (to offset a winning position); the game had to end in a draw.

I seriously doubt any computer could think like that.