Chess and Artificial Intelligence

Daniel Dennett just wrote an article on chess-playing computers and Artificial Intelligence, and a few bloggers are already talking about it. I'm sort of surprised that the concept is getting so much attention. To me, the answer to the question, "Does a computer that can play chess demonstrate artificial intelligence?" is obvious: it does, but only in a very trivial sense.

Discussions of the methods used by chess-playing computers and how they compare (or don't) to the way(s) that the human brain plays chess are interesting, but I don't really find them all that relevant to the whole "artificial intelligence" discussion. The idea that a single game - hell, a single test of any sort - can adequately assess whether something has "intelligence" is quite simply absurd. The development of a chess-playing computer demonstrates that a computer can be developed that can play chess. That's all.

That's not to say that the development of a chess-playing computer is unimportant or unimpressive. Far from it. Chess has an elegantly simple set of rules that create a game that is anything but simple. Playing chess well requires the ability to see and evaluate an enormous number of possibilities before each move. Developing a computer that can do that was an enormous accomplishment.

But Deep Blue didn't learn how to play chess. It was taught how to play chess. It can do nothing other than play chess. Put a royal flush in front of the thing, and it won't even register it. It's a machine that has been constructed for a single purpose. It can do that one thing very well, but it can do nothing else.

At least for me, the key to intelligence is the ability to learn. Not the ability to be fed instructions, the ability to learn. The ability to gather information and use that information to develop entirely new skills - without being instructed to do that. My dog can do that - he learned from the cat how to get out a window. A chess- or poker-playing computer can't.

When we get to the point when a computer decides that chess is boring, and holds out for a game of five-card stud instead, I'll start to think about the deeper philosophical questions. Right now, it looks like that's a long way away.

Comments

Part of Dennett's article was about things outside the rules of chess that human players do and computers do not -- like tossing the board and holding out for five-card stud.

Your dog is capable of learning things its body can do -- Deep Blue will never be able to haul its hardware out the window, no matter how many instructions are fed in. The ability to learn is limited by the environment and the organism's (or entity's) ability to interact with it.

Would it be impossible for some neural net, genetic program, or other data-mining software to do "learning", at least as "intelligently" as some organism? Would they need bodies, or at least something worth optimizing for themselves?
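For what it's worth, even a toy program can do "learning" in the narrow statistical sense. Here is a minimal sketch, with invented data and parameters, of a perceptron learning the logical AND function from examples -- no claim that this is organism-level learning, just an illustration of what the word means on the software side:

```python
# Toy perceptron learning logical AND from examples.
# The data, learning rate, and epoch count are invented for this illustration.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, one per input
b = 0.0         # bias
lr = 0.1        # learning rate

for _ in range(20):                      # a few passes over the examples
    for (x1, x2), target in data:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out               # classic perceptron update rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# After training, the weights separate AND's positive case from the rest.
print(w, b)
```

Whether adjusting a couple of numbers until the outputs match counts as "learning" in Mike's sense is, of course, exactly the question.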

Isn't "the key to intelligence is the ability to learn" just another example of moving the goalposts, though, redefining intelligence yet again to mean whatever they can't yet do? It's easy to imagine a not-too-distant future in which computers do exist that can learn, while the rearguard still insists on pointing out the kind of learning they do isn't general enough, that they can't learn how to learn, that they still don't feel, that they don't have souls, etc. Maybe "intelligence" is just too murky a concept for us to go around saying whether computers do or do not have it.

Platypus, I think your last sentence hits the nail on the head: "Maybe 'intelligence' is just too murky a concept for us to go around saying whether computers do or do not have it."

I agree, and for this reason I don't think Mike's statement that intelligence is about learning is really a case of "moving the goalposts". I think the goalposts can't be properly set at this point, although I don't think this stops people from trying.

For my part, to be convinced a computer is intelligent (whatever that is), it will have to be able to learn and apply that knowledge to novel situations. It need not necessarily come up with the same response a human would, but it ought to be able to "learn" whether its response is appropriate and adjust its knowledge store.

One problem with choosing some goalposts for machine intelligence, like 'be able to learn and apply that knowledge to novel situations', is that you may set them higher than the abilities of some humans. Would a learning machine deserve more rights than Terri Schiavo? Turing's imitation game is good enough for me, but the implications of a positive result for a machine intelligence, or a negative result for a human, are still murky.

There are two sides to the "moving the goalposts" complaint. The way I see it, the real gold standard for AI is to have artificial minds that are capable of everything that human minds are capable of: understanding jokes, recognizing objects, understanding real natural language with its ambiguities, grammatical errors, etc. Turing's original goalpost has never really been moved at all. Yes, there are programs that can briefly fool someone into thinking he is talking with another human, but in an extended back-and-forth exchange on some topic with actual subject matter, there is no computer program that can come anywhere close to human-level competence at conversation.

Much more modest goalposts can be and have been proposed: face recognition, the ability to read handwritten text, the ability to play chess at the grandmaster level, the ability to distinguish human speakers by their voices, the ability to distinguish painters or composers by their styles, etc. These goalposts are much more testable than Turing's, because there is an objective criterion for whether they have been met or not. But I don't think any of these really work as benchmarks for how far we have come toward the goal of truly humanlike artificial intelligence.

What would be nice would be to have an actual plan for humanlike AI that had a series of objective milestones that could be checked off one at a time. But the successes in AI really don't seem to be headed anywhere. Yes, chess-playing is nice, but can we apply the lessons learned from chess-playing to get a head start on the next milestone? It's not clear whether we can.

Our definition of intelligence is strongly connected to our notion of consciousness. And we haven't a clue (yet?) what the latter really is, or really means. We may be able to quantify the 'intelligence' of a system only when we separate it from the concept of consciousness.

By Wouter Lievens on 31 Aug 2007

Dennett is right about the goalposts in a limited sense. The problem, however, is not with the movers; it is with the designers themselves.

AI programs are designed with particular end points in mind. That they do well is no surprise, considering that whole subgroups of AI programming have arisen to solve specific problems.

However, such improvement over humans is not entirely new. Take manufacturing, for instance: machines regularly make better products than we humans can ever hope to. A computer chip, for example, can never be made by hand.

Does this mean that the machine beats man? We make specific machines for specific tasks and expect them to do better than us. This is what we term automation -- any doubts? We designed these machines to replace us, and it is a measure of our knowledge that we have built them so well.

Therefore, conclusions like "machine betters man" are inherently stupid. Does the blacksmith ever wonder and worry about machines making more accurate artifacts than he can?

In Deep Blue's win over Kasparov, we have just done what we have done with manufacturing techniques: taken the best of what we know and incorporated it into our machines. Why read more into it?

Dennett's views are actually irrelevant to such questions, because I suspect that they arise more from a "push materialism" agenda than from understanding the moves or the mechanisms behind Deep Blue. I am willing to bet that he doesn't understand them.

If he can see that a calculator can compute faster than a human, then why shouldn't a program that processes decision trees quickly beat a human? There is little more to it. Let us not delude ourselves.
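To make "processes decision trees" concrete, here is a minimal, runnable sketch of the minimax search at the heart of game-playing programs, run over an invented toy game tree. Real engines like Deep Blue add alpha-beta pruning, specialized hardware, and a hand-tuned evaluation function on top of this skeleton; nothing below is Deep Blue's actual code:

```python
# Plain minimax over a toy game tree: leaves are heuristic scores,
# interior nodes are lists of children. The tree is invented for the example.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: return its evaluation
        return node
    scores = (minimax(child, not maximizing) for child in node)
    return max(scores) if maximizing else min(scores)

# Depth-2 tree: the maximizer moves, then the minimizer replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))    # -> 3, the best guaranteed outcome
```

The "intelligence" here is nothing but exhaustive enumeration plus a scoring function -- which is exactly my point.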

Such delusions arise from our image of chess as a superhuman exercise and a measure of how brainy we are. Chess may look superhuman, but it has more to do with pattern recognition and memory than with real-time problem solving.

Einstein, widely acknowledged as one of the better brains to pop up in the last epoch-making century, found this out the hard way when he tried playing against masters of Go. It is not only about brain power.

It is also about pattern power. It is similar pattern-based efficacy that drives much of nature: birds fly for miles based on such patterns, and there is nothing new about that.

If we are able to mimic these pattern-processing abilities in our machines, it just means that everything is normal. We have been transferring knowledge to automatons for quite some time now, and this is a natural extension.

Let us stop deluding ourselves and work to put more of these patterns into our machines. We need more such pattern-recognition machines to ease our busy lives.

If we ever want to make humanoids, or nature-like self-sustaining, self-perpetuating machines, then we need to improve our knowledge of and ability to handle patterns. Such pattern cognition lies behind many natural entities' facile handling of life's challenges.

We do such pattern handling all the time, and most of it is non-conscious. As an illustration, notice how easy it is to walk around your own room even when it is dark and the power is off, and how difficult it is in an unfamiliar place.

Read Nick Humphrey's account of a visually disabled monkey and its adaptation to blindness. AI seriously needs to find better and faster pattern-recognition algorithms. I am not an expert on Deep Blue's processes, but I tend to believe that it does a fair amount of pattern matching and cognition. Can anyone enlighten me on that?

As we learn more about these things, newer and better goalposts will arrive, and I bet that many of them will be ones we cannot even imagine today.

Dennett and the others are reading too much into Deep Blue-like incidents, like the seers and soothsayers of old who looked to the stars for portents. Heard the story about Aristotle and the porcupine?

I tend to think that these people are still wrapped up in 1950s Turing-style mathematical thinking about man, consciousness, etc. Searle laughing at them has not seemed to help either them or Searle.

Perhaps the right thing to do is to take the lessons of Deep Blue's algorithms and apply them to more mundane problems. Sufficient breadth and depth exist in AI today to perhaps even create strong AI.

Such strengths were not available in the '80s and early '90s, when these philosophers made their names and their theories. I am sure that with expanding research on the brain and the nervous system, many of their favorite theories are going to take a beating. So let us continue to work on real issues rather than arguing with golden oldies.

For instance, take the syntax vs. semantics argument... well, more on that some other time.
