Conversational partners coordinate eye movements -- and nose-scratching

When you have a conversation with someone, you're doing a lot more than just interpreting the meaning of the words they say. You're also trying to figure out what they intend to say and integrating that into your understanding. You're working together with them to decide whose turn it is to speak. Your accents become similar. Your body movements become synchronized. You even scratch your nose at the same time as your conversational partner.

It makes sense, then, that if you're both looking at the same picture while you talk, you'll look at the same parts of the picture at the same time. If you're together in the same room, you can point to the areas you want to talk about, or substitute pointing for speech. But even if two partners are conversing remotely while looking at the same picture, their eye movements are still synchronized. In 2005, Daniel Richardson and Rick Dale recorded college students speaking about pictures of the cast of The Simpsons and Friends while tracking their eye movements. They then played back those recordings to a different set of students looking at the same pictures. Here are the results:

[Figure: results of Richardson and Dale's 2005 speaker-listener study]

The dark green line shows when the eye movements of a speaker-listener pair matched. So at any given moment, there was about a 17 percent chance that the speaker and the listener were looking at the same part of the picture (whether it was the Simpsons or the Friends). The listener was actually more likely to look where the speaker had looked a little earlier -- 20 percent of the time after a two-second delay. But when speakers were matched up randomly with other participants (who had listened to different speakers talking about the same pictures), the effect disappeared, and the likelihood of looking at the same part of the picture dropped to around 14 percent regardless of the delay.
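For readers curious about how this kind of lag analysis can be computed, here is a minimal sketch -- not the authors' actual code -- assuming each person's gaze has been coded as a sequence of picture-region labels sampled at regular intervals. The function names and the random re-pairing scheme are my own illustrative assumptions, not details taken from the paper.

```python
import random

def overlap_at_lag(speaker, listener, lag_samples):
    """Fraction of time points where the listener looks at the same
    region the speaker looked at `lag_samples` samples earlier.
    `speaker` and `listener` are equal-length lists of region labels
    (e.g., which cast member each person is fixating)."""
    matches = total = 0
    for t in range(len(listener)):
        s = t - lag_samples              # speaker's gaze lag_samples earlier
        if 0 <= s < len(speaker):
            matches += speaker[s] == listener[t]
            total += 1
    return matches / total if total else 0.0

def overlap_curve(speaker, listener, max_lag):
    """Overlap probability at every lag from -max_lag to +max_lag samples."""
    return {lag: overlap_at_lag(speaker, listener, lag)
            for lag in range(-max_lag, max_lag + 1)}

def shuffled_baseline(speakers, listeners, max_lag, n_iter=100, seed=0):
    """Baseline curve: pair each listener with a randomly chosen *other*
    speaker, mirroring the random re-pairing described in the post."""
    rng = random.Random(seed)
    curves = []
    for _ in range(n_iter):
        for i, listener in enumerate(listeners):
            j = rng.choice([k for k in range(len(speakers)) if k != i])
            curves.append(overlap_curve(speakers[j], listener, max_lag))
    # average the overlap across all shuffled pairings, lag by lag
    return {lag: sum(c[lag] for c in curves) / len(curves)
            for lag in range(-max_lag, max_lag + 1)}
```

With gaze sampled at, say, one label per second, the peak the study reports would show up as the largest value of `overlap_curve` at a lag of about two samples, while `shuffled_baseline` would stay roughly flat.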

But what about when conversants interact in real time? In a new pair of experiments, Richardson and Dale, along with Natasha Kirkham, had volunteers converse over the telephone while looking at the same sets of pictures. Here are the results:

[Figure: results of the live-conversation experiment]

The results were similar to Richardson and Dale's 2005 study, except that eye gaze matched most closely with no delay at all -- at the very moment the speaker looked at a given part of the picture -- rather than after a two-second delay. The researchers attribute the overall lower levels of gaze matching to the difficulty of capturing the rapid eye shifts that occur in the course of live conversation.

But how much does the common eye gaze depend on the common knowledge of the two participants? Nearly every American college student is familiar with The Simpsons and Friends. What if the subject of conversation is less familiar? In a second experiment, Richardson's team had conversants look at Salvador Dali's painting Nature Morte Vivante. Before their conversation, each participant listened to one of three 90-second explanations of one aspect of the painting, focusing on either its history, content, or meaning -- so some conversants had listened to the same explanation of the painting, while others had listened to different explanations. When people had listened to the same explanation, they looked at the same region of the painting as their conversational partners more frequently than those who had heard different explanations.

Interestingly, among pairs who had heard the same explanation, it made no difference which explanation it was: listening to the history of the painting had the same effect on eye gaze as listening to a description of its content or meaning. This opens a fascinating question: how much common ground is necessary for conversational partners to look at the same parts of a picture? Perhaps just listening to a general discussion of art history would have caused the effect. Or perhaps listening to a completely irrelevant explanation would have affected the results, as long as both conversants had heard the same recording before viewing the picture. What do you think?

Richardson, D.C., Dale, R., & Kirkham, N.Z. (2007). The art of conversation is coordination: Common ground and the coupling of eye movements during dialogue. Psychological Science, 18(5), 407-413.


What is the confidence level? 20% to 14% may not be a significant difference.

In relation to your query about common ground between partners, it would be interesting to re-run the experiment from a cross-cultural perspective. Specifically, to examine what the data pattern looks like when the speaker-listener and/or conversational pairs are from different cultural backgrounds. Other research has shown that persons from different cultural backgrounds (i.e., American and Chinese; http://www.pnas.org/cgi/content/abstract/102/35/12629 ) have differences in eye movements when looking at a scene.

So I wonder what impact an American-Chinese pairing would have on similarity of eye movements when every other aspect of the experiment is kept the same.

By Tony Jeremiah (not verified) on 11 Aug 2007

mirror neurons, dude. it's all about the mirror neurons.

This is really interesting in that it provides some evidence as to how factors relating to embodiment contribute to two agents' ability to share their intentions and engage in a shared activity. rtl, while I too love the mirror neurons, there's more to it than the activity of those cells. In general, there seems to be an elaborate coordination between two agents that incorporates salient aspects of the situated physical environment (e.g., the painting), as well as prior knowledge (the lecture in the second case). One other tremendously important factor, as hinted at in the beginning of the review, is the ability of each agent to assess the intentions of the other. If a listener is observing the speaker look at something, the listener's mirror neuron activity might predispose them to look at the same thing, and thus give them a predictive stance on what the speaker intends, which is supported by the content of the speech. The shared attention given to the painting, then, could be seen as emerging from the interaction between imitative behaviors, shared environmental cues, physical interactions, and assessments of salience in the environment. Very interesting stuff.

Michael, the results are significant, with p < .001 in the 2005 experiment and the first experiment in this study, and p < .05 in the last study.

The participants' purpose, which may affect the results greatly, was ignored in these experiments; instead, the focus was on moments when the words implied where participants should look.
Maybe what the speaker said simply made the listener curious enough to search through the picture.
I can't see the value of research that directs participants' attention in this instruction-like way and then reports the resulting match as an attractive finding, rather than asking why we are curious enough to behave according to the speakers' words.