Recognizing faces: New evidence on how we put it all together

What are we looking at when we recognize faces? The shapes of the individual components of the face -- eyes, nose, mouth? Or are we recognizing the larger patterns of how those parts relate to one another -- the distance between the eyes, the position of the mouth relative to the nose? We're probably doing some of each, with those configural patterns playing a slightly more important role.

But this raises an important question for perception researchers, because recognizing details and recognizing overall patterns rely on two different components of the visual system. Researchers have known for some time that "spatial frequency" is an important tool for understanding how the visual system works. High spatial frequencies are used for perceiving details like eyes and lips, whereas larger-scale differences and the overall configuration of an object are perceived through low spatial frequencies. Just as you might use different instruments to measure the height of waves in the ocean and the changing of the tides, different parts of the visual system are responsible for high and low spatial frequencies.
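To make the idea concrete, here's a minimal sketch (my own illustration, not part of any study described here) of how an image can be split into its low- and high-spatial-frequency components with a Fourier transform. The cutoff of 8 cycles per image is an arbitrary choice for demonstration:

```python
import numpy as np

def split_spatial_frequencies(image, cutoff=8):
    """Split a 2D image into low- and high-spatial-frequency parts.

    cutoff is the radius (in cycles per image) dividing the two bands;
    8 is an illustrative value, not one taken from the research.
    """
    # Move to the frequency domain, with the DC term at the center
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Distance of each frequency component from the center (DC) term
    radius = np.hypot(yy - h / 2, xx - w / 2)
    low_mask = radius <= cutoff
    # Keep only one band, then transform back to image space
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * low_mask)))
    high = np.real(np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)))
    return low, high

# Toy "photo": random texture standing in for a face image
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
low, high = split_spatial_frequencies(img)
# The two bands are complementary: together they reconstruct the image
assert np.allclose(low + high, img)
```

The low-pass output is the blurry, coarse-structure version of the image (roughly what the low-spatial-frequency face stimuli look like), while the high-pass output keeps only edges and fine detail.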

But there is a third aspect to recognizing a face -- holistic processing, which is a combination of the two other methods. Holistic processing explains why it's harder to recognize faces we see upside-down or broken into pieces. Which part of the visual system manages holistic processing of faces?

A 2006 study by V. Goffaux and B. Rossion suggested that holistic processing might rely primarily on low spatial frequency visual processing. They showed viewers a composite photo made from two separate faces, then a blank screen, and then a new composite, which either did or did not include part of the original face. They also varied the spatial frequency of the images, like this:

[Image: composite-face stimuli shown at full, low, and high spatial frequencies]

Viewers were asked if the top half of the image was the same or different from what they had seen before. In all cases, they were to ignore the bottom half. There was one other key to this experiment. Sometimes the faces were aligned, as above, and sometimes they were misaligned, something like this:

[Image: the same composite stimuli with the top and bottom halves misaligned]

What Goffaux and Rossion found is that misalignment helped people recognize similar faces. And alignment appeared to interfere more for low spatial frequency images than for high spatial frequency images. They concluded that holistic face recognition relies primarily on low spatial frequency processing. This makes some sense, since misalignment alters the configuration of the parts of the face more than it alters the details. If the part of the face viewers are supposed to ignore is physically separated, it's easier to ignore, and this should matter most when the part of the visual system responsible for holistic processing is being used.

But a research team led by Olivia Cheung wondered whether Goffaux and Rossion's research really offered the final word on how holistic processing is done. Goffaux and Rossion measured accuracy at identifying the same face, but they didn't measure errors when the faces were different.

Cheung's team repeated Goffaux and Rossion's study, but added several additional test conditions to make sure that errors were accounted for. Here are the key results:

[Image: graphs of overall accuracy and response bias across the test conditions]

The top graph shows accuracy overall. This replicates Goffaux and Rossion's results: aligning the faces appears to disproportionately cause errors in low spatial frequency images compared to the other images.

But the lower graph shows something else: for low spatial frequency images, viewers are significantly biased to say the faces are different, both when the faces actually are different and when they're not. In fact, this bias accounts for the entire pattern in the first graph.
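Signal detection theory is the standard way to pull apart genuine sensitivity from this kind of response bias. Here's a sketch with invented numbers (not the study's actual data), coding a "hit" as a correct "same" response and a "false alarm" as a "same" response when the faces differed, so a positive criterion c means a bias toward answering "different":

```python
from statistics import NormalDist

def d_prime_and_criterion(hit_rate, false_alarm_rate):
    """Compute sensitivity (d') and response bias (criterion c).

    With "same" coded as the signal response, c > 0 indicates a
    conservative bias toward answering "different".
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2
    return d_prime, criterion

# Hypothetical numbers: two conditions with similar sensitivity but
# different bias -- raw accuracy on "same" trials differs anyway.
d1, c1 = d_prime_and_criterion(0.80, 0.20)  # unbiased condition
d2, c2 = d_prime_and_criterion(0.60, 0.09)  # biased toward "different"
```

In the second condition both the hit rate and the false alarm rate drop, so accuracy on "same" trials looks worse even though d' is nearly unchanged; only the criterion has shifted. That is the distinction Cheung's second graph makes visible.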

Why is this bias occurring? It's clear that it's not due to actual differences in the perceptual process, so what could account for it? While this study can't tell us, the researchers speculate that it may have something to do with our real-world experiences. We see low spatial frequency faces all the time -- when we view someone from a distance, through foggy glass, or while not wearing corrective lenses. But high spatial frequency faces are rarely seen outside of a lab.

Perhaps we're just as good at getting holistic information from high spatial frequency images, but since we're unfamiliar with them, we're less likely to reject close calls as different. Overall testing accuracy goes up, but not because of true differences in the perceptual system.

Olivia S. Cheung, Jennifer J. Richler, Thomas J. Palmeri, Isabel Gauthier (2008). Revisiting the role of spatial frequencies in the holistic processing of faces. Journal of Experimental Psychology: Human Perception and Performance, 34 (6), 1327-1336 DOI: 10.1037/a0011752


Thanks for this post! I've been following the blog for a while and always found it interesting. But this post was a bit of an enlightenment -- I've spent a few years at art schools in Sweden and read a lot on the visual system; I never found any information on the relation between spatial frequency and face recognition anywhere.

Thanks for sharing
/ Mats

By Mats Halldin (not verified) on 23 Jan 2009