What does it even mean to pass the mirror test?

The mirror test is a well-known indicator of some degree of self-awareness: surreptitiously mark an animal's face, show it a mirror, and see whether it recognizes that the reflected image is of itself, as shown by reaching up to touch or remove the mark. We see that behavior and infer that the animal has some knowledge of itself and can recognize that the mirror image is not another animal.

But now robots are being specifically programmed to pass the mirror test.

Ow. It makes my brain hurt.

So this is a computer that has no other indicators of consciousness or awareness or autonomous "thought" (whatever that means…my brain is hurting again), and is being coded to respond to a specific kind of visual input with a specific response…to literally pass the mirror test by rote. Does that really count as passing?

I think that all it actually accomplishes is to subvert the mirror test. The test has always been a proxy for a more sophisticated cognitive ability: the maintenance of a mental map of the world around us that includes an entity we call "self". I don't think that training a visual-processing system to identify a specific shape unique to the robot's design counts.

I'd also like to see what happens if two identical robots are made and put in the same room. To recognize "self" you also have to have a concept of "other".


How about making robots that attack each other if put in the same room? They would look the same.

Then put up a mirror and see if it attacks the mirror.

"Does that really count as passing?"

That depends on how the behavior arises. Is it the result of AI that actually resembles cognition (a 'strange loop', as Hofstadter calls it), or is it the result of just plain old 'if-then-else' programming?

Something very similar to this was actually present in the world's first autonomous robot. Its creator was a bit of a showman, and loved having the media around, so we've got a lot of information on it, surprisingly.

This robot not only behaved in an unusual, and not always predictable, manner when left to its own devices in controlled environments, but could handle complex environments without too much trouble, and new behaviour would emerge there as well (for instance, the original reporters were convinced they should describe it as "male" because it would ignore the men in the room and pursue the women).

When presented with a mirror, its behaviour changed into a very distinctive dance. This behaviour would not show up unless the robot spotted its own reflection.

Similarly, two robots like this produced a characteristic dance when they were in the same room and allowed to move freely; that dance never showed up unless both robots were present.

Would that count as passing?

...Oh, I forgot to mention. This was 1948. The robots were described completely in W.G. Walter's The Living Brain, if you're interested - they're commonly called the Grey Walter tortoises (or, because Walter himself was a showman, as I said, he referred to them as "Machina speculatrix"). Check 'em out.

"...and I don’t think that training a visual processing task to identify a specific shape unique to the robot design counts."
I agree. Your response reminds me somewhat of John Searle's Chinese Room argument, except that he was talking about machines understanding language.

By Charles Sullivan (not verified) on 24 Aug 2012 #permalink

I’d also like to see what happens if two identical robots are made and put in the same room.

PZ, I don't think you're giving this experiment enough credit. From what I'm seeing, they're trying to teach the robot to a) understand what a mirror is, b) recognize what it looks like, and c) develop a concept of self as a tangible entity in real space and time.

So yes, the machine is being programmed to pass the mirror test, but it's being programmed to do so in the same logical way that chimps, elephants, and humans do it.

By Greg Fish (not verified) on 24 Aug 2012 #permalink

Oh, and to answer your hypothetical quoted in my previous reply: if the identical robot is not facing the mirror, it should be recognized as a different robot. What's really tricky would be to put two identical robots in front of a mirror and have them look at each other in that mirror. That should really throw them for a loop...

By Greg Fish (not verified) on 24 Aug 2012 #permalink

This whole thing goes way back for us computer scientists. The Turing Test (http://en.wikipedia.org/wiki/Turing_test) was one of the earliest attempts at defining "artificial intelligence", and nowadays we have the same dilemma, since "stupid" chatbots such as Cleverbot and Siri are getting closer and closer to passing the test.

Does it pass because it knows what it looks like? That's an obvious cheat. Does it recognize itself by deducing that the reflection is an inversion of its own body model, and moves the same way? That's slightly less of a cheat.
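Here's a toy sketch of that slightly-less-cheaty version, the way I imagine it: wiggle at random and check whether the motion seen in the mirror tracks your own motor commands. Every name and threshold here is invented, and a real robot would also have to account for the mirror's left-right flip:

    import random

    def mirror_self_test(command, observe, trials=20, threshold=0.9):
        # Issue random motor commands and check whether the motion observed
        # in the mirror tracks them; a high hit rate suggests "that's me".
        hits = 0
        for _ in range(trials):
            move = random.choice(["left", "right", "stop"])
            command(move)              # move the body
            if observe() == move:      # watch what the reflection does
                hits += 1
        return hits / trials >= threshold

    # Toy usage: a "mirror" that simply echoes the last command back.
    last = {"move": "stop"}
    print(mirror_self_test(lambda m: last.update(move=m),
                           lambda: last["move"]))  # -> True

Even that only proves the reflection's motion is contingent on your own, not that there's a self-model behind it, which is exactly my point.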

A cognitive task whose mechanics we know will always seem like a cheat. Everything the brain does is far more emergent, with no clear chain of events.

I'm guessing we'll see robots with sophisticated emergent behavior before we figure out the precise mechanics of how the brain does its cognitive tricks. But what if it happens in the reverse order? Will we look at ourselves as cheating the mirror test?

By Bjørn Konestabo (not verified) on 24 Aug 2012 #permalink

Robots who attack each other when in the same room? 'Celebrity Big Brother'. (Cultural snobs don't know what they're missing.) Julian Clary: 'I stayed in a gay hotel once. It was called the White Swallow'. The hilarious part was that none of them saw it was a joke.

By xmaseveeve (not verified) on 24 Aug 2012 #permalink

The mirror test and the Turing test are both artificial, incomplete proxies for aspects of Nature's own fitness test, a.k.a. the struggle for existence.

The ability to cheat at such a limited test (I wrote a program some 20 years ago that passed a simple Turing test; so imagine what better programmers than I could do with access to more powerful computers over the intervening decades) is not evidence of survival ability in the real world.

Actually, now I come to think of it, it's only cheating because passing artificial tests isn't a reliable indication of intelligence.

By BecomingJulie (not verified) on 24 Aug 2012 #permalink

I think the assumption has always been that 'self-awareness' in a limited sense remains the parsimonious explanation for animals passing the mirror test.

After all, there are not a lot of likely reasons for animals to be 'rigged' to pass this test in ways that differ greatly from our own internal workings.

Am I wrong?

By Don Druid (not verified) on 24 Aug 2012 #permalink

It reads like they're attempting to program the ability to pass the mirror test in a general way. As in, if they change its appearance, it'll still work out that it's itself in the mirror. I suppose they're programming how to pass the test, not what the test answer is.

@Brian D: I looked up Grey Walter's tortoises, fascinating! Seems like they used two-cell neural nets. I might have to break out the analogue kit in my electronics workroom.
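Before I break out the soldering iron, here's roughly how I picture the tortoise's control logic in software: one photocell, a touch sensor, and a couple of thresholds. The numbers are pure guesswork on my part, not Walter's actual circuit:

    def tortoise_step(light_level, touching):
        # One step of a Walter-style tortoise: reflex first, then phototaxis.
        if touching:
            return "back up and turn"    # obstacle reflex overrides everything
        if light_level > 0.8:
            return "steer away"          # dazzled: too bright, veer off
        if light_level > 0.2:
            return "drive toward light"  # moderate light attracts
        return "scan"                    # darkness: wander and keep scanning

    for level in (0.0, 0.5, 1.0):
        print(level, tortoise_step(level, touching=False))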

>To recognize “self” you also have to have a concept of “other”.

What, are there no solipsists?

...or am I the only one? ;)

By Buck Field (not verified) on 25 Aug 2012 #permalink

From the article, it sounds like the whole "mirror test" angle is a clever way to spin the research in order to capture a little public fancy. The real goal here seems to be to allow robots to *use* information gleaned from looking in a mirror, and self-recognition is just a step in that direction. The goal of making robots use mirrors is very interesting if you understand a bit about computer vision, but it's not going to cause laymen to say silly things at parties, and we all know that making laymen say silly things at parties is positively correlated with keeping your research funded.

So the philosophical question of whether it counts to pass the test "by rote" is sort of moot, but in any case, that doesn't seem to be what this is doing. The article says that so far the robot is only able to recognize its arm, which indicates to me that this is not hard-coded recognition. Building a robot hard-coded to recognize itself in a mirror would be trivial; you'd just pick key points on its body, place distinctive symbols at those points, and then use any number of mature feature detection algorithms to identify those points.
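For the record, that trivial hard-coded version would look something like this, assuming fiducial tags such as the ArUco markers in opencv-contrib (the marker IDs are made up, and the exact detection call varies by OpenCV version):

    import cv2

    MY_MARKERS = {0, 1, 2}  # IDs of the tags pasted onto this robot's own body

    def i_see_myself(frame):
        # "Self-recognition" by cheating: just look for our own tags.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
        return ids is not None and any(int(i) in MY_MARKERS for i in ids.flatten())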

So it sounds like they're doing something much more difficult and interesting. Obviously, passing the mirror test without any other form of cognition still does not make a robot "sentient", but it is still philosophically interesting in the sense that it dissolves yet another silly "five words or less" definition of consciousness.

Based on http://en.wikipedia.org/wiki/William_Grey_Walter, I would guess that Walter's mirror behavior was a simple feedback effect, rather than a pre-programmed gimmick. The wiki page says that it jiggled in a certain way when a light was put on its "nose" and it observed itself in a mirror. If it already had a tendency to track light sources, then its attempt to track a light that varied with its own movement could result in such an oscillation. This is not very much like self-awareness, and it does sound as if Walter may have overstated the significance.
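A toy simulation of such a feedback loop produces exactly this kind of on/off "dance". One concrete coupling, often described for Walter's machines, is that the pilot light switches off whenever the photocell locks onto a source, so locking onto its own reflection extinguishes the very light it's tracking (the details below are illustrative, not Walter's circuit):

    def mirror_dance(steps=8):
        lamp_on = True              # the tortoise's own pilot light
        for t in range(steps):
            if lamp_on:             # the only light around is its reflection
                lamp_on = False     # locking on switches the lamp off...
                print(t, "steer toward light")
            else:
                lamp_on = True      # ...target gone, lamp back on, rescan
                print(t, "scan")

    mirror_dance()  # alternates endlessly: the characteristic jiggle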

Was it Hofstadter or Dennett who coined the word sphexish? The robot seems to be passing the test sphexishly, sort of?

This gameability is a characteristic of all tests, and it's why No Child Left Behind is such a stupid idea: teaching to the test subverts whatever the test was a proxy for.