How to make robots more "human"

You might think the best way to make a robot seem more "human" is to reproduce human features as precisely as possible, like in this YouTube video:

But most people are creeped out by robots this "real." We're actually more comfortable interacting with less realistic robots that exhibit some human traits, like this adorable robot named Leo:

So why is this less realistic robot so much more endearing? A fascinating article in this week's New York Times Magazine may offer some answers:

If a robot had features that made it seem, say, 50 percent human, 50 percent machine ... we would be willing to fill in the blanks and presume a certain kind of nearly human status. That is why robots like Domo and Mertz are interpreted by our brains as creaturelike. But if a robot has features that make it appear 99 percent human, the uncanny-valley theory holds that our brains get stuck on that missing 1 percent: the eyes that gaze but have no spark, the arms that move with just a little too much stiffness. This response might be akin to an adaptive revulsion at the sight of corpses. A too-human robot looks distressingly like a corpse that moves.

Domo and Mertz are among several robots discussed in the Times article, which covers a lot of ground in considering whether robots can ever be made into real substitutes for humans.

Despite the issue of the too-realistic robot, one grad student actually claims she would prefer a robot to a real boyfriend, if it could be made to simulate caring about her, since this behavior would be much more reliable than the real thing. Another student countered with this:

"Anyone who tells you that in human-robot interactions the robot is doing anything -- well, he is just kidding himself.... Whatever there is in human-robot interaction is there because the human puts it there."

The Times reporter had been especially impressed with a video of Leo apparently showing that he had mastered the "false belief" task: a graduate student, Jesse Berlin, looked for an object in the wrong box, and Leo seemed to know both what he was looking for and how to help him get it. But all was not as it seemed:

Leo did not learn about false beliefs in the same way a child did. Robot learning, I realized, can be defined as making new versions of a robot's original instructions, collecting and sorting data in a creative way. So the learning taking place here was not Leo's ability to keep track of which student believed what, since that skill had been programmed into the robot. The learning taking place was Leo's ability to make inferences about Gray's and Berlin's actions and intentions. Seeing that Berlin's hand was near the lock on Box 1, Leo had to search through its internal set of task models, which had been written into its computer program, and figure out what it meant for a hand to be moving near a lock and not near, say, a glass of water. Then it had to go back to that set of task models to decide why Berlin might have been trying to open the box -- that is, what his ultimate goal was. Finally, it had to convert its drive to be helpful, another bit of information written into its computer program, into behavior. Leo had to learn that by pressing a particular lever, it could give Berlin the chips he was looking for. Leo's robot learning consisted of integrating the group of simultaneous computer programs with which it had begun.

In other words, Leo had to be given a massive head start by its programmers in order to solve the problem any five-year-old can easily manage. What's more, the robot can only be programmed with one instruction set at any given time -- so it can't do the false-belief task, for example, on the same day it does the button-pushing task.
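
To make the scale of that head start concrete, here is a minimal, purely illustrative sketch of the kind of pre-programmed task-model lookup the Times passage describes. Everything in it -- the TaskModel structure, the trigger strings, the canned "helpful" responses -- is a hypothetical stand-in, not Leo's actual software; the point is only that every goal and response the robot "infers" was already written into a table by its programmers.

```python
from dataclasses import dataclass

@dataclass
class TaskModel:
    trigger: str         # observed human action that activates this model
    inferred_goal: str   # what the human is presumed to want
    helpful_action: str  # the pre-programmed "helpful" response

# The set of task models is written in ahead of time -- the "massive head
# start" described above. Nothing here is learned from experience.
TASK_MODELS = [
    TaskModel(trigger="hand near lock on Box 1",
              inferred_goal="open Box 1 to get the chips",
              helpful_action="press the lever that unlocks Box 1"),
    TaskModel(trigger="hand near glass of water",
              inferred_goal="take a drink",
              helpful_action="do nothing"),
]

def choose_helpful_action(observation: str) -> str:
    """Match an observed action against the stored task models and return
    the canned response; the 'drive to be helpful' is just this lookup."""
    for model in TASK_MODELS:
        if model.trigger == observation:
            return model.helpful_action
    return "no applicable task model"

print(choose_helpful_action("hand near lock on Box 1"))
# -> press the lever that unlocks Box 1
```

Integrating several such tables into one running system is bookkeeping rather than anything resembling a five-year-old's understanding of false beliefs, which is exactly the distinction the article is drawing.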

Clearly, however, once the concept is proven, it's a much simpler matter to integrate the entire set of possible behaviors into a single robot. Watching these technologies develop is riveting stuff, and I'd highly recommend reading the entire Times article.

As a bonus, here's one more video (not included in the Times story), featuring COG, a robot discussed extensively in the article:


The relationship you're describing is popularly known as the Uncanny Valley, first postulated by Japanese roboticist Masahiro Mori in 1970.

Contrary to the first two paragraphs of your posting and to most media stories about this aspect of human/robot interaction, the Uncanny Valley is neither necessarily accurate nor the best way of characterizing human emotional responses to increasingly humanoid robots. Many contemporary roboticists criticize it as overly simplistic, subjective, and pseudoscientific.

I speculate that as humanoid robotics and artificial intelligence become increasingly sophisticated, more and more cracks will appear in Mori's assertions, forcing a reappraisal of the topic. Just food for thought for the coming years.

I noticed this several years ago with animated films. As an artist I of course love animation and have followed the techniques over the years, and I was very excited when people began trying to make them realistic -- until I watched the first one. It just felt "wrong" and made me uncomfortable to watch. While it's easier to believe Shrek is real, a realistically animated human-type character just screams "wrong" the whole time you are watching. In that vein, I've long thought that The Incredibles gave the most realistic depiction of super powers. Films with human actors have always had problems making powers look real, but The Incredibles really made me feel what a fight between superpowered beings would be like.

I recall from some years back that researchers -- at MIT, if I am not mistaken -- found that people interacting with a computer program were more comfortable when the program offered verbal indications that it was listening rather than sitting in silence and waiting for them to finish.
On a more philosophical note, are we working this problem in the wrong direction, creating perfect robots and then "making them more human"? Is it better to try to duplicate the complexity of the human brain, or to create something that can become as complex as the human brain? I guess for that to work we'd have to hope that technology can speed up evolution by a few million percent.
Matt

I don't know. I bet if they made a bunch of the robots in the first film for the next Republican National Convention, nobody would notice anything amiss.

While there is something scary and unsettling about very lifelike human robots, I think the weirdness in the videos shown on this page is that the "human" isn't quite human. She just does a few human-like movements, then reminds us she's robotic. If it were Data from Star Trek, I would feel more comfortable.

With respect to comment #3:

I agree, to a point. I honestly feel that modern CGI is actually good enough to properly duplicate live actors and make them look, well, alive. This wasn't achievable a few years ago.

For example, Final Fantasy: The Spirits Within tried very hard to be realistic, but couldn't quite achieve it. I could *feel* something akin to the Uncanny Valley effect when I was watching it.

On the other hand, the recent Beowulf film was so realistic that I (having gone into it with absolutely no information about the film itself) actually thought that the closeup scenes were live actors. It wasn't until after leaving the theater and looking up details online that I found out the entire thing was CG.

As a programmer with a keen interest in animation, the fact that a film can *do* that is absolutely astounding, and indicative of tremendous advances in the medium.

By Xanthir, FCD on 12 Dec 2007

Scott McCloud, the comics critic and author of Understanding Comics, suggests that cartoon-y (i.e., more generically featured) characters are easier to empathize with because people can project themselves into them. As characters become more highly rendered and "realistic," they take on more individual characteristics that make them seem other. This could account both for the greater likability of the characters in The Incredibles over those in Final Fantasy and for our preference for cartoon-y robots.
