Are faces really different from the other things we perceive?

Many studies of the brain's electrical activity have found consistent differences when people look at faces compared with other stimuli such as cars or tools. This has led some researchers to conclude that face processing is fundamentally different from other visual processing. But a recent study has found evidence to challenge that notion, and the Phineas Gage Fan Club has the details:

Many studies have compared faces presented at the same angle and size to a control category presented at widely differing angles and sizes. If you then find that certain characteristics appear for the face group but not the control group, you can no longer be sure that the results are caused by some feature intrinsic to faces, rather than by differing amounts of what has been termed interstimulus perceptual variance.
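To get an intuition for the confound, here is a toy sketch (my own illustration, not anything from the paper) of one way to quantify interstimulus perceptual variance for an image set: simply as pixel-wise variability across the stimuli.

```python
import numpy as np

def interstimulus_variance(images):
    """Crude index of interstimulus perceptual variance:
    pixel-wise variance across the stimulus set, averaged over pixels."""
    stack = np.stack(images)           # shape: (n_images, height, width)
    return stack.var(axis=0).mean()

# Toy illustration: a "face" set shown at one angle/size varies little
# from image to image; a control set shown at many angles/sizes varies a lot.
rng = np.random.default_rng(0)
homogeneous = [rng.normal(0.5, 0.05, (64, 64)) for _ in range(40)]
heterogeneous = [rng.normal(0.5, 0.30, (64, 64)) for _ in range(40)]

print(interstimulus_variance(homogeneous))    # small
print(interstimulus_variance(heterogeneous))  # large
```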

So the difference in brain activity between looking at faces and other stimuli might simply reflect the fact that there is more variety among the other stimuli. One notable example of the observed difference is the N170, a negative deflection in the event-related potential that peaks around 170 milliseconds after stimulus onset and is typically much larger for faces than for other objects. In the new study, the researchers found that when they equated the variety of the face and non-face stimuli, the difference in the N170 response disappeared. So does this mean there's no difference in the way faces and non-faces are processed? Not necessarily:
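For readers unfamiliar with how components like the N170 are measured: event-related potentials are obtained by averaging many EEG epochs time-locked to stimulus onset, then comparing amplitudes in a latency window of interest. Here is a minimal sketch with synthetic data (real analyses use toolboxes such as MNE-Python, with proper baselining and statistics):

```python
import numpy as np

TIMES = np.arange(-100, 400)  # epoch timepoints in ms, 1 kHz sampling assumed

def erp(epochs):
    """Average single-trial epochs (trials x timepoints) into an ERP."""
    return epochs.mean(axis=0)

def mean_amplitude(waveform, start_ms, end_ms):
    """Mean amplitude in a latency window, e.g. 150-190 ms for the N170."""
    window = (TIMES >= start_ms) & (TIMES < end_ms)
    return waveform[window].mean()

# Synthetic single-trial data: noise everywhere, plus a negative
# deflection peaking near 170 ms in the face trials (a toy N170).
rng = np.random.default_rng(1)
noise = rng.normal(0, 2, (2, 60, TIMES.size))       # 2 conditions, 60 trials
toy_n170 = -5 * np.exp(-((TIMES - 170) ** 2) / (2 * 15**2))
face_epochs = noise[0] + toy_n170
car_epochs = noise[1]

print(mean_amplitude(erp(face_epochs), 150, 190))   # clearly negative
print(mean_amplitude(erp(car_epochs), 150, 190))    # near zero
```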

Thierry et al. (2007) also found that two microstates in the P1 range, P1f and P1c, did differ significantly between cars and faces, but not between the high- and low-perceptual-variance conditions.

So on the downside, the commonly accepted truth that the N170 is face-specific may be wrong. On the upside, Thierry et al. (2007) have identified two microstates that may prove more resistant to changes in method. While such surprising results need replication, it is quite striking that a face-specific component appears so early. By comparison, a stimulus presented for only 140 ms (the late end of the P1 range) would be considered borderline subliminal. Or to take another example, perception experiments sometimes use stimulus presentations of less than 200 ms to ensure that the subject doesn't have time to shift their gaze, since that's about the minimum latency needed to initiate a saccade. So in less time than it takes to execute a saccade, the brain appears to be carrying out face-specific processing.
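To make the logic of these comparisons concrete, here is a toy sketch with fabricated per-subject amplitudes (not the study's data), assuming a within-subject design: the P1 microstates should show a category effect (faces vs. cars) but no variance effect, the reverse of what happened to the N170.

```python
import numpy as np
from scipy import stats

# Fabricated per-subject mean amplitudes (uV) in the P1 window, one
# value per subject for each cell of the category x variance design.
rng = np.random.default_rng(2)
n = 16
p1 = {
    ("face", "high"): rng.normal(4.0, 1.0, n),
    ("face", "low"):  rng.normal(4.1, 1.0, n),
    ("car", "high"):  rng.normal(2.5, 1.0, n),
    ("car", "low"):   rng.normal(2.6, 1.0, n),
}

# Category effect: average each subject over variance conditions,
# then compare faces with cars (paired, since subjects see both).
faces = (p1[("face", "high")] + p1[("face", "low")]) / 2
cars = (p1[("car", "high")] + p1[("car", "low")]) / 2
print(stats.ttest_rel(faces, cars))   # expected: significant

# Variance effect: average over category, compare high vs. low variance.
high = (p1[("face", "high")] + p1[("car", "high")]) / 2
low = (p1[("face", "low")] + p1[("car", "low")]) / 2
print(stats.ttest_rel(high, low))     # expected: not significant
```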

So while one pattern that was thought to be specific to faces may have been debunked by this study, two new patterns specific to faces have emerged. Make sure to read Johan's entire article for additional analysis of the study and comparison to other neuroimaging results.

Ah yes, the long-awaited 'object expertise' microstates!

I always have a problem with the assertion that faces are special being treated as something in need of proof -- for the obvious fact that there are no other visual objects of comparable importance and relevance to us humans, all throughout our lives. So it makes perfect sense to have evolved neural and computational modules specific to face processing, and it is no surprise if some of these modules are sometimes recruited to help process other special visual objects (in the context of the Thierry et al. paper: 'special cars' equated for low variance). There is nothing to prevent me from arguing, for example, that what the paper shows is the recruitment of a face module (the N170) to help identify cars in an unnatural setting (normal cars as we see them in everyday life are obviously subject to high variance), is there?

We evolved to focus on faces; it's instinctual. There is value in being able to read faces and respond appropriately to the signals we read there. Witness the trouble autistic people have. Our faces are expressive. Our faces tell us much, and much of our communication is through our faces. If you don't pay attention to the look on another's face, you could wind up in deep shit. So faces matter.

Tirta:

I'm not sure if faces are special or not, personally. The best evidence that they are probably comes from prosopagnosia, which is an inability to recognise people's faces following lesions to brain areas corresponding to (wait for it) the fusiform face area.

It's hard to explain why these patients are so specifically impaired at recognising faces without positing damage to a face-specific area.

Johan, I agree that prosopagnosics make a strong case for the existence of face-specific processing modules. However, the nature of most neuropsychological cases is such that it's not easy to generalize from one case to another. The most striking case for me is that of patient CK (Moscovitch et al., 1997), whose face perception was intact despite object agnosia and dyslexia -- thus completing a double dissociation.

What I was saying previously is that I have a problem with the word 'special'. As a class of visual stimuli, it's clear that faces are. So I guess the debate boils down to the exclusiveness of the computational mechanisms involved, and I personally find it hard to differentiate between these two alternatives: (1) face-specific processing modules that are sometimes recruited for processing other visual objects (of expertise), and (2) more general object-expertise modules that are used primarily for face processing.