Our brains react differently to artificial vs human intelligence

With their latest film WALL-E, Pixar Studios have struck cinematic gold again, with a protagonist who may be the cutest thing ever committed to celluloid. Despite being a blocky chunk of computer-generated metal, it's amazing how real, emotive and characterful WALL-E can be. In fact, the film's second act introduces an entire swarm of intelligent, subservient robots, brimming with personality.

Whether or not you buy into Pixar's particular vision of humanity's future, there's no denying that both robotics and artificial intelligence are becoming ever more advanced. Ever since Deep Blue trounced Garry Kasparov at chess in 1997, it has seemed almost inevitable that we will find ourselves interacting with increasingly intelligent robots. And that brings the study of artificial intelligence into the realm of psychologists as well as computer scientists.

Jianqiao Ge and Shihui Han from Peking University are two such psychologists and they are interested in the way our brains cope with artificial intelligence. Do we treat it as we would human intelligence, or is it processed differently? The duo used brain-scanning technology to answer this question, and found that there are indeed key differences. Watching human intelligence at work triggers parts of the brain that help us to understand someone else's perspective - areas that don't light up when we respond to artificial intelligence.

[Image: WALL-E, copyright Pixar]

I, for one, welco... oh whatever

Ge and Han recruited 28 Chinese students and made them watch a scene in which a detective had to solve a logical puzzle. The problem-solver was either a flesh-and-blood human or a silicon-and-wires computer (with a camera mounted on it). In either case, their task was the same - they were wearing a coloured hat and had to deduce whether it was red or blue. As clues, they were told how many hats of each colour there were in total and how many humans/computers had also been given hats. They could also see one of these peers, and the hat that peer was wearing.

It's an interesting task, for both the human and the computer in this mini drama were given the same information and had to make the same logical deductions to get the right answer. The only difference was the tools at their disposal - the human used good, old-fashioned brain power while the computer relied on a program.
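To make that deduction concrete, here is a minimal sketch of the logic both solvers had to run through. The post doesn't give the exact hat counts used in the study, so the numbers and the `deduce_own_hat` helper below are illustrative assumptions rather than the actual stimuli.

```python
# A minimal sketch of the deduction, assuming illustrative hat counts
# (the post doesn't say exactly how many hats of each colour were used).

def deduce_own_hat(total_red, total_blue, peer_hat):
    """Return 'red', 'blue', or None if the observer's own hat colour
    can't be pinned down from what they know and see."""
    # Remove the visible peer's hat from the pool of hats that could be worn.
    remaining_red = total_red - (1 if peer_hat == "red" else 0)
    remaining_blue = total_blue - (1 if peer_hat == "blue" else 0)

    # If only one colour is left available, the observer's own hat is forced.
    if remaining_red == 0 and remaining_blue > 0:
        return "blue"
    if remaining_blue == 0 and remaining_red > 0:
        return "red"
    return None  # both colours are still possible, so no deduction is made


# Hypothetical example: 1 red and 2 blue hats shared between two wearers.
# Seeing the peer in red forces your own hat to be blue; seeing blue does not.
print(deduce_own_hat(1, 2, "red"))   # -> blue
print(deduce_own_hat(1, 2, "blue"))  # -> None
```

Whether the inference is carried out by neurons or by a short program like this, it is the same inference - which is exactly why any difference in the watching students' brains has to come from how they model the solver, not from the difficulty of the puzzle.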

The students' job, as they watched this scene, was to work out if the problem-solver was capable of divining the colour of their hat. As the volunteers reasoned their way to an answer, Ge and Han scanned their brains using a technique called functional magnetic resonance imaging (fMRI).

They found that the group who watched the humans showed greater activity in their precuneus; other studies have suggested that this part of the brain is involved in understanding someone else's perspective. The scans also revealed a fall in the activity of the ventral medial prefrontal cortex (vMPFC), an area that helps to compare new information against our own experiences. These two reactions fit with the results of other studies, which suggest that we understand someone else's state of mind by simulating what they are thinking, while suppressing our own perspective so it doesn't cloud our reasoning.

But neither the precuneus nor the vMPFC showed any change in the group who watched the computer. And the connections between the two areas were weaker in the students who watched the computer compared to those who saw the humans.

[Figure: human vs. artificial intelligence comparison, from the PLoS paper]
The differences weren't for lack of deductive effort; when the students were asked to work out the colour of the problem-solver's hat for themselves, the scans showed equally strong activation in the brain's deductive reasoning centres, regardless of whether the students were watching human or machine.

Two strategies

It seems that the technique of placing yourself in someone else's shoes doesn't apply to artificial intelligence. Because we are aware that robots and computers are controlled by programs, we don't try to simulate their artificial minds - instead, Ge and Han believe that we judge them by their actions.

Indeed, when Ge and Han gave the students the simpler task of just saying which hat colour the problem-solver could see, those watching the computer showed stronger activity in the visual cortex than those watching the humans. That suggests they were paying closer attention to the details of the scene, such as where the computer's camera was pointing. Their precuneus, however, remained unexcited.

These results may help to explain why autistic people seem to enjoy interacting with computers and playing with robots. Autistic people face social difficulties because they find it hard to put themselves in other people's shoes. Indeed, their vMPFCs fail to tune down in the normal way, suggesting that they cannot stop their own experiences from interfering with their deductions about someone else's state of mind. But when they interact with robots, they don't have to do that - remember that the vMPFC's activity didn't drop in the students who watched the problem-solving computers either.

Ge and Han conclude that humans understand artificial intelligence and other humans using very different mental strategies. But I wonder if their result applies to all types of AI. In this case, the world of artificial intelligence was represented by a camera linked to a computer, neither of which actually interacted with the study's participants. Would the results be different if the robot in question were more human in design? What would happen in the precuneus and vMPFC of someone playing with a Robosapien toy or watching WALL-E? A question for next time, perhaps.

Reference: PLoS ONE doi: 10.1371/journal.pone.0002797

Image: Wall-E copyright of Pixar; figure by PLoS


"But I wonder if their result applies to all types of AI'

I wonder if this applies to all types of people, Ed! I mean, the sample is so small, and it's entirely students, and it's entirely Chinese. What confidence do we have that we can generalize this sample to all demographics?!

This is interesting; however, once robots become mainstream, humans will treat their robotic pets with as much love and attention as their real counterparts. The same will probably go for human-acting robots. Once we get used to them, we will simply treat them that way. We love to anthropomorphise.

By Richard Eis on 04 Aug 2008

The conclusion may be accurate for some limited situations, but I am not sure it would apply to some of the many humanoid robots currently under testing and development. Tests in Japan have shown that children and nursing home residents who were given a robot assistant or friend quickly bonded with it.

I suspect that tests with a robot capable of interaction with the testers would elicit a different response. I hope that Jianqiao Ge and Shihui Han continue to pursue this area of research.

Somehow I don't think simply making the AI look more lifelike will change this result, though I'm sure the big goofy eyes will trigger all sorts of other changes. I suspect this result arises because the programs of living things are predictable and intuitively understood - seek food, crave sex, flee danger and so on - while all the AI programs I've seen (in addition to being pretty simplistic) don't have an obvious, predictable purpose I could apply any sort of theory of mind to. Unless I have reason to think or observe that the curvaceous robot in front of me is in fact designed to be a Futurama-style fembot, I'm not going to treat it as much more than a mannequin.

Children also bond with completely inanimate stuffed animals.

Researchers are working to increase the cute-and-cuddlitude of robots. This is not the same thing as recreating human intelligence.

It would be interesting to see a Harry Harlow-type experiment here - an unfriendly, mechanical-looking device controlled by a human versus a cute and furry device controlled by a computer. Which would elicit which response?

When I saw this in PLoS ONE, I also immediately thought of Wall-E. As for the study itself, I think we all have to be pretty careful when reading fMRI results like this. I don't at all doubt the study authors' methods or results, but fMRI interpretation can be pretty tricky. It's easy to map the findings of other studies onto your own results (i.e., XXX found activation here which they attributed to YYY cognitive function; since we found diminished activation here, YYY cognitive function must be suppressed, or whatever variant you like), but this isn't necessarily valid.

Keep in mind that even the smallest, highest-resolution voxels (3D pixels) in an fMRI scan are measuring, at best, the hemodynamic response of hundreds of thousands of neurons. It's very possible (even likely) that similar voxels may show an increased BOLD signal due to the activity of very different, though spatially indistinguishable, networks - or that the temporal dynamics of activity are completely different, but the different patterns happen to lead to a similar BOLD response.

I think D is on to something with the theory of mind interpretation, though - maybe an interesting control condition could have involved tools? Right now, computers are generally seen as problem-solving tools with no capacity for interaction or 'intelligence'. Maybe this will change in the next few years?

Nice job covering this story, though, Ed. I enjoy your writing a great deal.

Thanks for the interesting comment, Tommy - a lot of people have strong opinions on the limitations of fMRI studies, but few put the point across in such a cogent way.

I wonder whether this type of brain reaction is hardwired or acquired over time with experience.

It also makes me wonder: if children in the future are raised around robots (whether humanoid or classically "robotic"), will their brains and their ability to empathize and "place themselves in the other's position" adapt? Will their brains display the same fMRI signal as children of today? Or will they be more "in tune" with the mental ways and actions of the robots?

Interesting stuff. To echo what some others have said: perhaps the reason the subjects react differently is that they really only know simple (relative to the human brain) robots and programs. I'd be interested to see if people living in a world with more human-like robots would have the same reaction. Also, I wonder how the subjects would react to watching a small child or a chimpanzee solve the problem, as compared to the computer. Or how about some kind of cyborg :), like Ghost in the Shell-type stuff. Or how about if, as another poster alluded to, the robot used was one the person had a previous 'relationship' with?

I also really wonder whether one's perspective on the existence of the soul, the mind (Cartesian duality, etc.) and even free will would have any effect on this. If people didn't see a machine as something totally unrelated to the mystical goings-on of the human condition, perhaps they would react differently (although, if that had anything to do with it, you'd expect the effect to be more pronounced when dealing with "human-level" AI). Basically, though, I think this may be more a result of the practicalities of life in 2008 than a deep insight into the Homo sapiens brain. Really interesting research, though.

I think the point about the existence of a previous relationship with the robot is crucial. Even with today's (relatively) simple computers, not to mention motor cars, bicycles and so on, familiarity with our own kit allows us to troubleshoot it better. Among human beings, we find it much easier to predict the views and feelings of our family and friends than those of strangers.

As for the effect on us of close relationships with hypothetical advanced future robots, I explore this in my book Automatic Lover (ISBN 978-1-4092-0554-8).

By Ariadne Tampion on 19 Aug 2009