More and more human conversations are taking place online. While I don’t do instant messaging the way my kids like to, I’m much more likely to contact a friend via e-mail than to pick up the phone. Here at Cognitive Daily and at other online discussion forums, I’ve built relationships with commenters whom I’ve never seen or even e-mailed.
While the next leap in online communications—videoconferencing—is in its infancy, an intermediate form is beginning to show promise. Called a Collaborative Virtual Environment (CVE), it enables people to have a virtual online conference by creating digital representations of everyone they’re meeting with. Instead of sending video images across the Internet, only voice and a little data about movements are transmitted. Computers on either side of the connection translate the motion data into a realistic animation of an avatar—an electronic image of each conferee. Even if fast Internet connections eventually allow widespread true videoconferencing, CVEs will still be necessary for situations when a fast connection isn’t available, such as when one conferee is using a cell phone.
This brings up a serious issue: could one or more members of a CVE hack the network, sending motion data designed to win over the other participants? It’s not as far-fetched as it sounds. Research on face-to-face interaction has shown that people who mimic the gestures of the people they talk to are judged to be more likeable than those who don’t. In an online setting, a conferee could program his avatar to behave differently for each person viewing the conference, custom-mimicking the other conferees for maximum likeability.
But perhaps likeability doesn’t work the same way in a virtual environment. Jeremy Bailenson and Nick Yee designed an experiment to test whether people respond to computers the same way they respond to humans. They gave students course credit to watch a persuasive presentation “read” to them by a computerized virtual reality embodied agent. The agent looked like either a man or a woman and read the same script (a persuasive speech advocating that students be required to carry ID on campus at all times) in a corresponding male or female voice. The VR equipment allowed the researchers to monitor the body movements of the students participating in the experiment. For half the participants, the head and body movements of the agent mimicked the motions of the viewer, delayed by four seconds so participants didn’t notice. For the other half, the agent used the motions of a different viewer, recorded during a separate session. After the session, viewers rated the agent on three dimensions: social presence (how realistic the agent appeared), whether they agreed with the agent’s proposal, and their overall impression of the agent (how positively they viewed it). Here are the results:
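To make the mimicry mechanism concrete, here is a minimal sketch of how a delayed-replay mimic might work: buffer the viewer’s tracked head poses and play them back on the agent four seconds later. This is my own illustration, not code from the study; the class and pose format are hypothetical.

```python
from collections import deque

MIMIC_DELAY = 4.0  # seconds; long enough, per the study, that viewers didn't notice


class DelayedMimic:
    """Buffers a viewer's head poses and replays them after a fixed delay,
    so the agent appears to move on its own rather than mirroring."""

    def __init__(self, delay=MIMIC_DELAY):
        self.delay = delay
        self.buffer = deque()  # (timestamp, pose) pairs, oldest first

    def record(self, timestamp, pose):
        """Store the viewer's current head pose (e.g. a dict of pitch/yaw/roll)."""
        self.buffer.append((timestamp, pose))

    def agent_pose(self, now):
        """Return the viewer's pose from `delay` seconds ago, or None if
        not enough motion has been recorded yet. Older entries are discarded."""
        target = now - self.delay
        result = None
        while self.buffer and self.buffer[0][0] <= target:
            result = self.buffer.popleft()[1]
        return result
```

For the control condition, the same replay logic would simply be fed a recording of a different viewer’s motion instead of the live feed.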
For all three measures, participants viewed the mimicking agent as more effective than the agent that displayed recorded movements. The viewers were all aware that this was simply a virtual reality presentation—that there was no real person behind the avatar—and yet they still found the mimicking agent to be more effective. So it does appear that a simple computer program can manipulate complex social behavior. Perhaps as people get accustomed to CVEs, they will become more aware of the possibility of social manipulation, but in the short run, this experiment shows the potential for danger in computer-mediated communication.
Bailenson, J.N., & Yee, N. (2005). Digital chameleons: Automatic assimilation of nonverbal gestures in immersive virtual environments. Psychological Science, 16(10), 814-819.