How we know what someone else can see

Developmental psychologists since Piaget have been interested in how well children can take another person's perspective. Piaget's laboratory had a large table topped with elaborate models; children who could explain what the scene looked like from the viewpoint of a doll seated at the table, rather than from their own viewpoint, were said to be at a later developmental stage.

But understanding whether a doll can "see" something doesn't always literally require taking her perspective. Take a look at this simple arrangement of objects on a table:

[Figure: photo of a simple arrangement of objects on a table, with a doll seated at one side and a key among the objects]

You don't have to imagine yourself sitting in the doll's place in order to decide whether she can see the key. All you have to do is trace an imaginary line from the doll's eyes to the object in question. But line tracing won't tell you whether the key is on the doll's right or her left. For that, you do have to imagine sitting in the doll's place. Children are typically able to do the "can the doll see X?" task at a younger age than the "is the key to the doll's left or right?" task.
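To make the difference between the two strategies concrete, here's a rough sketch in Python. The coordinates, the doll's facing direction, and the crude occlusion test are all invented for illustration; nothing here comes from the study itself. Checking visibility only needs the straight line from the doll's eyes to the object, while a left-right judgment needs to be made in the doll's frame of reference rather than yours:

```python
import math

# Made-up table-top coordinates (centimeters, viewed from above);
# none of these numbers come from the actual stimuli.
doll_eye    = (0.0, 0.0)    # where the doll (or asterisk) sits
doll_facing = (0.0, 1.0)    # unit vector: the direction the doll looks
key         = (10.0, 40.0)  # the target object
other_objects = [(-12.0, 15.0), (18.0, 22.0)]  # potential occluders

def blocks(p, a, b, radius=3.0):
    """Crude occlusion test: is point p within `radius` of the segment a-b?"""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy)) < radius

# Strategy 1: line tracing. Visibility depends only on the segment from
# the doll's eyes to the key; no change of reference frame is needed.
visible = not any(blocks(obj, doll_eye, key) for obj in other_objects)

# Strategy 2: perspective taking. "Left or right" is defined in the
# doll's frame, so her facing direction matters: the sign of the 2-D
# cross product says which side the key falls on from where she sits.
to_key = (key[0] - doll_eye[0], key[1] - doll_eye[1])
cross = doll_facing[0] * to_key[1] - doll_facing[1] * to_key[0]
side = "left" if cross > 0 else "right"

print(f"visible to the doll: {visible}; key is on the doll's {side}")
```

The cognitive claim, of course, isn't that people compute cross products; it's that the left-right judgment requires adopting the doll's frame of reference, which the coordinate transform above merely stands in for.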

But while there has been an abundance of research on when children master these tasks, little work has been done to learn how people -- children or adults -- actually do them. It might be that even though line tracing is possible, people always place themselves in the other person's perspective anyway. Or there may be a completely different explanation.

Pascal Michelon and Jeffrey Zacks have devised a set of experiments to help uncover exactly how we perform these tasks. In one experiment, they showed college students a set of photos like the example above, asking them to indicate either whether the highlighted object was visible to the doll, or whether it was to her left or right. Participants were instructed to respond as quickly as possible, and reaction times were measured.

In a new experiment, they substituted an asterisk symbol for the doll, like this:

[Figure: the same style of table arrangement, with an asterisk marking the viewpoint in place of the doll]

Again, reaction times were measured. If the participants are imagining themselves in the position of the asterisk, then the farther the table is rotated from their own perspective, the longer it should take them to do the task. But if they are simply imagining a line from the asterisk to the object, then the orientation of the table shouldn't matter. Here are the results:

[Figure: chart of reaction times by table orientation for the left-right and visibility tasks]

As you can see, reaction times for the left-right task got slower as the table was rotated farther away, while reaction times for the "can the asterisk see it?" task stayed the same regardless of table orientation.
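Here's one toy way to see the logic in numbers. The reaction times below are invented purely to mimic the two patterns just described -- they are not the study's data -- but they show what "slower with more rotation" versus "flat across rotations" looks like when you fit a simple slope:

```python
def slope(angles_deg, rts_ms):
    """Ordinary least-squares slope of reaction time on rotation angle."""
    n = len(angles_deg)
    mx, my = sum(angles_deg) / n, sum(rts_ms) / n
    num = sum((x - mx) * (y - my) for x, y in zip(angles_deg, rts_ms))
    den = sum((x - mx) ** 2 for x in angles_deg)
    return num / den  # extra milliseconds per degree of rotation

rotations = [0, 60, 120, 180]             # how far the table is turned away
left_right_rts = [900, 1000, 1150, 1300]  # invented: grows with rotation
visibility_rts = [850, 860, 845, 855]     # invented: essentially flat

print(f"left-right slope: {slope(rotations, left_right_rts):.2f} ms/deg")
print(f"visibility slope: {slope(rotations, visibility_rts):.2f} ms/deg")
```

A reliably positive slope is what perspective taking predicts; a slope near zero is what line tracing predicts.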

But if the participants were tracing a line from the asterisk to the objects to do the "can the asterisk see it" task, then the longer the distance from the asterisk to the object, the longer the reaction times should be. Here's a chart showing the relationship between object distance and reaction time for both tasks:

[Figure: chart of reaction time by object distance for both tasks]

For the "can the asterisk see it" task, the results are as expected: faster reaction times when the object is nearer. But the results were similar for the left-right task, which at first glance doesn't make sense: if participants are imagining what they see from the perspective of the asterisk, it shouldn't take longer to decide if objects that are farther away are to the left or to the right.

But take a look back at the diagram with the asterisk.

[Figure: the asterisk arrangement from above, shown again]

Because of how the table is arranged, objects that are closer to the asterisk, like the scissors or the pencil, also sit more dramatically to the left or right than, say, the key does. In fact, when Michelon and Zacks reanalyzed the data taking each object's offset from the midline into account, the distance advantage for the left-right task disappeared. It appears that participants were using line tracing for the visibility task and perspective taking for the left-right task.
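To get a feel for the confound, here's a sketch with invented object positions (again, not the real stimuli): in a layout like the one above, the closer an object is to the asterisk, the farther it tends to sit off the asterisk's midline, so distance and "how far left or right" are tangled together.

```python
import math

asterisk = (0.0, 0.0)
facing   = (0.0, 1.0)   # the midline runs straight "up" the table
objects  = {             # hypothetical (x, y) positions in cm, not the real stimuli
    "scissors": (-18.0, 12.0),
    "pencil":   ( 16.0, 14.0),
    "mug":      (-10.0, 30.0),
    "key":      (  4.0, 45.0),
}

def distance(p):
    return math.hypot(p[0] - asterisk[0], p[1] - asterisk[1])

def angle_from_midline(p):
    """Unsigned angle (degrees) between the facing direction and the object."""
    dx, dy = p[0] - asterisk[0], p[1] - asterisk[1]
    dot = facing[0] * dx + facing[1] * dy
    return math.degrees(math.acos(dot / math.hypot(dx, dy)))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys))
    return num / den

dists  = [distance(p) for p in objects.values()]
angles = [angle_from_midline(p) for p in objects.values()]
print(pearson(dists, angles))   # strongly negative: near objects sit far off-midline
```

An analysis that includes both predictors, distance and off-midline angle, rather than distance alone, is one way to perform the kind of control the authors describe.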

To confirm this suspicion, Michelon and Zacks created a new experiment, with a new arrangement of objects that allowed them to control not only the distance from the asterisk, but also the distance of each object from the midline. Here's a sample arrangement of the table:

[Figure: photo of the new table arrangement, which controls both distance from the asterisk and distance from the midline]
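Here's one way such an arrangement can break the confound, sketched with made-up numbers rather than the authors' actual layout: place objects at every combination of a few distances and a few off-midline offsets, so that each lateral offset appears at each distance.

```python
import itertools, math

# Invented layout, not the authors' actual stimuli: cross forward
# distance with lateral offset so the two factors vary independently.
distances = [20.0, 40.0]              # forward distance from the asterisk (cm)
offsets   = [-15.0, -5.0, 5.0, 15.0]  # lateral position relative to the midline (cm)

for y, x in itertools.product(distances, offsets):
    print(f"object at ({x:6.1f}, {y:5.1f})  "
          f"distance from asterisk = {math.hypot(x, y):5.1f} cm  "
          f"|offset from midline| = {abs(x):4.1f} cm")
```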

As before, they found that reaction times were slower for the left-right task as the table was rotated farther from the viewer's perspective. However, now the reaction time advantage for near objects was much more dramatic for the visibility task compared to the left-right task.

[Figure: chart of reaction times by object distance for the visibility and left-right tasks in the new arrangement]

Michelon and Zacks argue that these experiments offer substantial evidence that we use at least two different methods to understand another person's perspective. When we're deciding whether someone else can see what we can see, we appear to use the line-tracing method; when we're trying to understand where objects sit relative to that person, we use the more cognitively demanding perspective-taking approach.

Michelon, P., & Zacks, J.M. (2006). Two kinds of visual perspective taking. Perception & Psychophysics, 68(2), 327-337.

