Chess computers beat humans: Does this mean computers are "creative"?

It's been a decade since world chess champion Garry Kasparov was first defeated by a computer. Since then, even after humans retooled their games to match computers, computers have managed draws against the world's greatest players. It seems only a matter of time before computers will win every time -- if humans are willing to play them, that is.

But each time computers have shown their remarkable abilities, detractors have claimed that the computers are really inferior because they apply brute-force tactics: methodically tracing every possible move instead of creatively reasoning toward a solution. Daniel Dennett says we shouldn't be so quick to dismiss the computers. After all, human chess champions study and memorize sequences of moves, encyclopedically sorting through thousands of variations as they play. Isn't this the same thing the computer is doing?

Vaughan at Mind Hacks points out that these detractors are using a classic denialist tactic:

As soon as computers became good at chess, it was dismissed as a valid example because, ironically, computers could do it. A classic example of moving the goalposts.

Similarly, I've recently heard a few people say "If computers could beat us at poker, that would be a genuine example of artificial intelligence". Recently, a poker playing computer narrowly lost to two pros.

Presumably, 'genuine intelligence' is just whatever computers can't do yet.

I think this all points to a deep-seated discomfort with the idea of artificial intelligence. What if someday people built a virtual robot that was indistinguishable from a human, except that it had no physical, human form? If you watched and interacted with the robot via a video monitor, you simply couldn't tell that you weren't seeing a real human. Would it be human? You might argue that the robot has no consciousness, but how would you confirm it? The robot would constantly assure you that it was conscious, and any test you attempted would support those assertions. Would the fact that the robot wasn't "real" be the limiting factor? Okay, what if it were real -- independently mobile, humanlike in appearance?

Sure, if you took the thing apart, you'd be able to see that it was made of nuts and bolts, but outwardly the creation would look and act like any other human. Would such a creation qualify for human rights? Should it be able to vote?

What if it behaved like a human -- but a human with an IQ of 70?

Perhaps we can make chess-playing and poker-playing robots, but not human-seeming robots. Maybe somewhere there's a line that science will never cross, so we won't ever have to deal with those questions. At this point, however, that's not a bet I'd be willing to take.

[Update: While I was writing this post, Jonah Lehrer made an excellent post on the same topic. His take: Chess computers aren't actually very good examples of human-like intelligence.]


I went to a lecture on AI some years ago, and the speaker defined the computer science discipline of artificial intelligence as "trying to get computers to do things that humans do better." Obviously, once you get computers to do something better than humans, it's no longer AI!

As a student in Computer Science with an AI focus, I've been thinking about similar things ever since I read about the Turing Test for AI - that is, a computer is considered intelligent if its responses are indistinguishable from those of a human, given a particular scenario consisting of the passing of messages through a wall. It seems to me that the internet provides a unique "wall" of anonymity which would be the ideal testing ground for an AI, for example the AIM-bots of last decade ("SmarterChild", etc.).

I argue that since it is unprovable that the machine is "conscious" as we understand it (just as it is unprovable to any reader that I am conscious, and not just a dream/figment/hallucination/AI program), we must therefore treat it as having basic human rights. However, since it will be part of our society, we must also require it to uphold basic human responsibilities - such as respect for other "beings of assumed consciousness", that is, anyone who acts human. As soon as legislation is proposed that would affect the lives of AIs, they should have the right to vote on such legislation, because how can you prove that they don't have such a right?

If you turned off an AI, would it be equivalent to killing it? Would the "thread" of being be interrupted, or would it continue once the AI was rebooted? If the AI with an IQ of 70 was made aware of a newer model that had an IQ of 170, would it want to be upgraded? Would it continue to be the same "person" if it were upgraded? What if it was only a hardware upgrade that made it "smarter" (ie faster processor, greater bus throughput, more RAM)? What if it was limited to software, a change in its basic workings?

By Andy Hight (not verified) on 27 Aug 2007 #permalink

My issue with the chess computers is that all their chess knowledge was programmed into them by a team of programmers and chess experts. Human chess experts don't have their knowledge directly programmed into their brain, but learn it through experience.

For most problems it will not be practical to directly program expertise into the AI. Computers will show 'genuine intelligence' when they are able to learn how to play these games through their own experience.

As Todd mentioned, I think the learning aspect is a big part of whatever human intelligence is. A human chess player still has the choice of making an unusually brilliant, unorthodox play or, conversely, a blindingly boneheaded mistake. As far as I'm aware, the best a computer chess player can do is generate a random number to choose among a set of equally preferable moves. It can't jump out of its programming (what might be referred to in humans as an intuitive leap).
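
To make that concrete, here's a minimal sketch of what "generate a random number to choose among equally preferable moves" amounts to -- purely illustrative Python, with a hypothetical evaluate() scoring function and move list, not how any real engine is written:

```python
import random

def pick_move(moves, evaluate):
    """Score every candidate move, then break ties among the best at random."""
    scored = [(evaluate(m), m) for m in moves]
    best_score = max(score for score, _ in scored)
    best_moves = [m for score, m in scored if score == best_score]
    return random.choice(best_moves)  # the only "choice" left is a coin flip
```
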
In the end, although I haven't yet read the article, I'll agree with Jonah's point that chess computers aren't the best example of human intelligence. Not sure what would be though.

Ever read Gödel, Escher, Bach?
Hofstadter laid out a good case for artificial intelligence requiring several related properties to be like us. It would need to be able to recognize futility. It would need to be able to avoid falling into recursive traps. And be able to deal with self-reference.

Much of what intelligence is, or human intelligence at least, is more about what is not processed than what is.

I still think GEB is the best book on intelligence that I have yet read.

Mark - I've heard of GEB, and it's steadily climbing my must-read list. One of the most computationally difficult problems is determining whether a given function will terminate, which would be integral in avoiding recursive traps and recognizing futility. It's something we either have to solve intuitively or "ignore" in the sense that we stop trying to solve it because we realize the futility.

By Andy Hight (not verified) on 27 Aug 2007 #permalink

A few months ago there was a really interesting piece in Slate by William Saletan about computer chess programs (link). The thrust of the piece is that it's a mistake to think of events like the Kasparov-Deep Blue chess match as human-vs-computer showdowns, since the computers and the programs they run are ultimately of our own invention. The real accomplishment is a very human one: a subtle and elegant distillation of a domain of highly expert human knowledge.

This point is similar to that made by Todd above, but I think it's also different in important ways. For instance, Saletan points out that one of the difficulties human grand masters face when taking on chess robots is that the games are often organized very differently at the stratospheric levels of abstraction grand masters think in terms of. In a match between Vladimir Kramnik and Deep Fritz, Kramnik overlooked an instant checkmate by Deep Fritz, an error which cost him the game. More generally, Saletan observes that "In postgame press conferences, players swore they'd been winning right up until the moment when, for unclear reasons, they lost."

So, here, it seems that (touching on the point made by Noodle above) humans are the ones that are unable to "escape their programming." The difference is that that programming is a result of the massive amount of history and precedent surrounding chess strategies that informs the training of future grand masters, and consequently the way that they conceive of the game at the high levels of abstraction that allow them to play so expertly.

I don't think knowing how to play chess is a good example of AI, so a human or computer playing chess well isn't an indication of "true" intelligence. AI nowadays is more like quasi-intelligence; it is not directly linked to consciousness, nor does it imply it. This is what I think.

Personally, I'll be convinced that machines have reached human-like status when they are capable of engaging in reasoning clearly driven by emotional processes.

With the possible exception of KISMET ( http://www.ai.mit.edu/projects/humanoid-robotics-group/kismet/kismet.ht… ), it seems that the majority of AI research is driven by the assumption that human thought is based purely on "logical" processes. If that were true, phenomena such as denialist thinking (rooted in beliefs fundamentally driven by emotional states) would not exist. Further, I'd bet the recursive trap problem mentioned in a couple of posts would be an easy one to solve if machines could somehow be taught to feel; the recursive trap (i.e., no logical solution to a problem) is likely experienced by humans as futility -- an emotional (rather than logical) state. Kismet appears to have emotional processes built into it (e.g., homeostatic mechanisms), but it is arguable whether this is the same as experiencing emotional qualia.

Further, machines should be able to show other complex emotion/cognition interactions. As an example, having their internal programs exist in one state, but their behavioral programs exist in another (e.g., feeling frustrated at someone, but showing no such indications behaviorally).

So in short, when machines are capable of experiencing psychological states, then it will be convincing that they are human. But that opens up another can of worms, because there is still no final word on what it means to be a conscious/feeling being, even though most humans believe we are. Perhaps that's denialist thinking of another sort.

By Tony Jeremiah (not verified) on 27 Aug 2007 #permalink

I think that Turing had the right idea when he stressed human-like language use as the gold standard for when a computer is as smart as a human. If a computer program were able to process natural language as well as any human -- meaning it could write letters to the editor, novels, blog posts, etc. as well as any human, take verbal instructions, and figure out hidden assumptions and resolve ambiguities (or ask intelligent follow-up questions) as well as a human -- then I think most people would consider it to be true AI.

In the internet age, most of us only experience others' intelligence through their verbal abilities, so if that isn't good enough, it's hard to know what could possibly be good enough.

Andy -
What you are alluding to is a version of the Halting Problem, which asks if there is a general procedure to determine whether a given machine / program combo will halt.

It turns out there isn't one. No general procedure will give a correct answer for every instance, and even a procedure that works on a usefully large subset of cases is, for all practical purposes, out of reach.

The best you can do is say that if a machine enters a state that it was in previously, then the machine will never halt. But due to the sheer number of possible states a modern computer can go through, this isn't a workable solution.
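
As a bare-bones illustration of that last idea -- remember every state you've seen and flag a repeat -- here's a toy sketch in Python. The step() transition function and the use of None as the halting state are assumptions made for the example, and the ever-growing `seen` set is exactly why this doesn't scale to a real computer's state space:

```python
def halts(state, step, max_steps=1_000_000):
    """Watch a deterministic, input-free machine for repeated states.

    `step` is a hypothetical transition function and states are assumed to be
    hashable. Returns True if the machine reaches the (assumed) terminal state,
    False as soon as any state repeats (a guaranteed infinite loop), and None
    if we give up -- which is where the halting problem bites.
    """
    seen = set()
    for _ in range(max_steps):
        if state is None:       # reached the halting state (by convention here)
            return True
        if state in seen:       # revisited a state: it will cycle forever
            return False
        seen.add(state)
        state = step(state)
    return None                 # undecided within the step budget
```
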

I've often thought that emotions are a way of avoiding such infinite loops. Essentially (semi) random inputs that ensure that at some point, the state of the machine changes in such a way as to avoid a repeating loop.

By R. Landau (not verified) on 28 Aug 2007 #permalink

Landau - I don't think that I would characterize emotions as random, semi- or otherwise. I am often made aware of the fact that human brains work on many different levels, and (I suppose due to the natural human tendency to assume causes for everything) that there is a reason behind even our most inexplicable emotions. Though of course I agree with you that emotions would be a marvelous way to avoid many such traps. This brings to mind the possibility of an emotional machine having, say, a nervous breakdown, which would be an emotional infinite loop of some sort. For example, if a machine were afraid of a particular condition, and even more afraid of the persistence of that condition, the positive feedback of fear would probably drown out all other emotional and intellectual processing should that machine ever enter such a condition. Though that could probably be halted by changing the environment such that the condition of fear was no longer met.

Soo... apparently nobody who reads CogDaily is afraid of a massive robot uprising? If humans can kill humans, why wouldn't a robot with human intelligence be capable of killing a human?

By Andy Hight (not verified) on 28 Aug 2007 #permalink

This isn't nearly as technical as some of the responses, but given what is mentioned in the post, I can't help but think of the movie Bicentennial Man. For those of you who haven't seen it (it's several years old, starring Robin Williams), it deals with a robot who increasingly acquires human characteristics including (but not limited to) intelligence, curiosity, yearning, and more. Eventually, he begins to incorporate human body parts, making himself bionic. The question here is both physical as well as mental/psychological. Is it possible for a robot to be psychologically human but physically robotic? Or for a human to be physically robotic but psychologically human? Which do we as a society give credence to?

Ultimately what is decided by the "World Council" in the movie is that the capacity for death -- the risk of dying -- is what constitutes being human. But as medicine progresses, surely this cannot be what defines us?

Andy -

I agree that emotions aren't entirely random or semi-random. The main point, which you did pick up on, is that they offer an extra piece of input to avoid repetitions in state.

As to your hypothetical about a machine caught in a self-reinforcing infinite loop of fear drowning out other emotions, I want to say that this does happen to certain humans that have mechanical defects in their brains.

By R. Landau (not verified) on 28 Aug 2007 #permalink

I wouldn't dismiss the argument that chess players and chess computers rely on different mechanisms. While Kasparov was famous for developing a mental database of chess openings, human chess players can't memorize or calculate all midgame permutations, and instead rely on abstract pattern matching to identify likely lines of play. Within the relatively narrow constraints of chess, brute-force calculation wins. However, it's clear that brute-force artificial intelligences are fragile outside of bounded domains, or even within domains that have a much larger solution space.
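
(For the record, the brute-force calculation in question is, at its core, exhaustive game-tree search along the lines of the toy sketch below. This is a purely illustrative skeleton, not Deep Blue's actual program, which combined deep search with huge opening books, a hand-tuned evaluation function, and custom hardware; the helper functions named here are hypothetical placeholders.)

```python
def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Toy depth-limited minimax: score a position by assuming both sides pick
    the best move the search can see. legal_moves, apply_move, and evaluate
    are stand-ins for whatever game representation you have."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    scores = [minimax(apply_move(position, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves]
    return max(scores) if maximizing else min(scores)
```
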

For that matter, multiple chess masters, including Fischer, have believed that chess under the classic rules was close to being theoretically solved, and proposed rule variants to broaden the solution set and force tactical improvisation. Even before Deep Blue, there was some grumbling that chess theory had reached the point where winning a game was just a matter of who blundered first.

By CBrachyrhynchos (not verified) on 28 Aug 2007 #permalink

Soo... apparently nobody who reads CogDaily is afraid of a massive robot uprising?

You are a fool to assume CogDaily commenters are not robots.

Todd (#3):
The distinction you're making - between 'learned' intelligence and 'hardcoded' intelligence - isn't a very useful one. Learning algorithms are in common use, and all they truly are is a way to automatically distill patterns without humans having to find them first. You use learning algorithms when humans *can't* give them appropriate knowledge.

What's more, learning algorithms are certainly being used on games. Anaconda is a very successful checkers program that was created with the absolute barest minimum of human involvement. The researchers used genetic algorithms to evolve a neural net that could play checkers - both of these are completely hands-off methods. The only thing the researchers did was tell the programs how many points they received after a series of games - they didn't even know how well they did on each game! Anaconda ended up performing *very* favorably against one of the top human-crafted checkers programs.

I used very similar methods to create an AI to play 4d tic-tac-toe against. It worked well. ^_^
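
For anyone curious what "hands-off" means in practice, the recipe is roughly the sketch below -- a simplified illustration, not the actual Anaconda code. The play_tournament() function is a hypothetical stand-in for "play a series of games with these weights and report only the total points":

```python
import random

def evolve(play_tournament, n_weights, pop_size=20, generations=100, sigma=0.1):
    """Evolve neural-net weight vectors using nothing but aggregate tournament scores.

    `play_tournament(weights)` returns a single total score for a whole series
    of games -- no per-game or per-move feedback, mirroring the hands-off setup
    described above. Everything here is illustrative.
    """
    pool = [[random.gauss(0, 1) for _ in range(n_weights)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pool, key=play_tournament, reverse=True)
        parents = ranked[: pop_size // 2]                    # keep the better half
        children = [[w + random.gauss(0, sigma) for w in p]  # mutate each parent
                    for p in parents]
        pool = parents + children                            # next generation
    return max(pool, key=play_tournament)                    # best evolved weights
```
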

By Xanthir, FCD (not verified) on 29 Aug 2007 #permalink

Xanthir - what were the dimensions of your playing field? Also, seriously 4-D? I had enough trouble making a playable 3d tic-tac-toe game... mine was 4x4x4, in case you were wondering.

By Andy Hight (not verified) on 29 Aug 2007 #permalink

They still can't touch us at go, where the number of choices at any given moment can be as high as 361 and the number of possible (ko-free) games is something like 361! (361 factorial). It's too computationally intensive and, dare I say, too intuitive for the old silicon cerebellum. So we can stave off Skynet with your average Japanese kid (if Skynet plays by Bergman/Bill and Ted rules).
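
For a rough sense of scale (a quick back-of-the-envelope check; the chess figure is Shannon's oft-quoted estimate of the size of the chess game tree):

```python
import math

# log10(361!) via the log-gamma function: lgamma(n + 1) = ln(n!)
log10_go = math.lgamma(362) / math.log(10)
print(f"361! is roughly 10^{log10_go:.0f}")          # about 10^768
print("Shannon's estimate of the chess game tree: ~10^120")
```
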

By Doug Henning (not verified) on 29 Aug 2007 #permalink

We tend to mix up human learning with our intrinsic intelligence, an intelligence we share with the rest of nature. Intelligence indicates the general ability to tackle problems, which is common to all entities that survive and have survived nature's pressures. Intelligent solutions arise from within natural learning systems, and they are not always in the form of action alone; even hiding is an intelligent solution.

Human learning arises from the fact that we are able to see how these solutions arise, and to use these solution paths to create newer solutions, resulting in a positive feedback loop. We presume that entities lower than humans are not equipped with such capacities, at least not to the level that humans have reached.

What goes into a machine is what we have learnt, which as we know is severely limited. That the machine can extend that into beating Kasparov is more an indication of the thoroughness of our learning than of its depth. We still do not know how Kasparov thinks; I bet even he doesn't...

If we want to create human-like machines, we are a long way off; however, if we want to create machines that can do what humans do, then we are closer.

As to the question of whether computers are creative: creativity is always with respect to a problem. If a computer finds a new, non-programmed solution under pressure, and that solution helps it or its designer, then the computer is creative. Who says chess is creative?

Computers are creative in being creations, no more and no less (or no less and no more): mirrors of their creators' discoveries, either before or after the act. Who knows what the future holds for them (computers or humans)? Win, lose, or draw.