Beyond change blindness: Change deafness works almost the same way

We've talked a lot on Cognitive Daily about change blindness: the inability to spot visual differences between images, and even between real people and objects, right before our eyes. The most dramatic demonstration might be Daniel Simons' "experiment" that took place before participants even knew they were being studied.

More recently, researchers have uncovered a similar phenomenon for sounds: Change deafness. Listeners hear two one-second clips separated by 350 milliseconds of white noise. Each clip is a composite, a combination of four different familiar sounds.

If one of the sounds is simply omitted from the second clip, most people notice the difference. But if it's replaced by a different sound, the change becomes much harder to notice.
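To make the setup concrete, here's a minimal sketch of how such a trial could be assembled, using synthesized sine tones as stand-ins for the recorded familiar sounds; the sample rate and frequencies are my own assumptions, not values from the study:

```python
import numpy as np

SR = 44_100        # assumed sample rate; not specified in the post
CLIP_LEN = 1.0     # each composite clip lasts one second
GAP_LEN = 0.35     # 350 ms of white noise between the clips

def tone(freq_hz, dur_s=CLIP_LEN, sr=SR):
    """Stand-in for a recorded familiar sound: a plain sine tone."""
    t = np.arange(int(dur_s * sr)) / sr
    return np.sin(2 * np.pi * freq_hz * t)

def composite(sounds):
    """Mix equal-length sounds into one clip, scaled to avoid clipping."""
    mix = np.sum(sounds, axis=0)
    return mix / np.max(np.abs(mix))

# Four "familiar sounds" for the first clip (frequencies are arbitrary).
chicken, dog, phone, bird = tone(300), tone(440), tone(800), tone(1200)
clip1 = composite([chicken, dog, phone, bird])

# Second clip: the identical mixture except that one component is substituted.
ship = tone(150)
clip2 = composite([chicken, ship, phone, bird])

noise = 0.1 * np.random.randn(int(GAP_LEN * SR))  # the white-noise gap

trial = np.concatenate([clip1, noise, clip2])
print(f"trial duration: {len(trial) / SR:.2f} s")  # about 2.35 s
```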

But why? Are we simply unable to parse out the different sounds in the mishmashed sound clip? Or do the substituted sounds sound too similar?

Melissa Gregg and Arthur Samuel took 12 different sound pairs, each pair consisting of two different examples of the same sound, and analyzed them along two acoustic dimensions: Harmonicity (roughly, how periodic, or tone-like, the sound is) and Fundamental Frequency (pitch). For example, here are two different "chicken" sounds:

Chicken 1: [audio clip]

Chicken 2: [audio clip]

Pairs of sounds that were equivalently different in Harmonicity and Fundamental Frequency were targeted for substitution in the study:

[Figure: the sounds plotted by Harmonicity and Fundamental Frequency, showing the acoustic distance between each pair]

Here you can see that the two dog sounds were about as different from each other as Dog 2 and Ship 2. So when the researchers played mishmashed groups of four combined sounds, if Dog 2 was played in the first clip, then sometimes Dog 1 was played in the second clip, and sometimes Ship 2 was played.
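As a toy illustration of that matching logic, here's a sketch with made-up Harmonicity and Fundamental Frequency values (the real measurements are in the paper); the idea is simply that the within-category and cross-category replacements are chosen to be about equally far apart in this two-dimensional acoustic space:

```python
import numpy as np

# Hypothetical (Harmonicity, Fundamental Frequency in Hz) values; the
# real measurements come from Gregg & Samuel's acoustic analysis.
sounds = {
    "Dog 1":  (0.60, 300.0),
    "Dog 2":  (0.70, 250.0),
    "Ship 2": (0.80, 310.0),
    "Bird 1": (0.95, 2400.0),
}

# Normalize each dimension so the two measures contribute comparably.
vals = np.array(list(sounds.values()))
lo, hi = vals.min(axis=0), vals.max(axis=0)
norm = {name: (np.array(v) - lo) / (hi - lo) for name, v in sounds.items()}

def dist(a, b):
    """Euclidean distance in the normalized (Harmonicity, F0) space."""
    return float(np.linalg.norm(norm[a] - norm[b]))

within = dist("Dog 2", "Dog 1")   # a different example of the same sound
across = dist("Dog 2", "Ship 2")  # an entirely different sound

print(f"Dog 2 vs Dog 1 : {within:.3f}")
print(f"Dog 2 vs Ship 2: {across:.3f}")
# If the two distances are roughly equal, the pair counts as
# "equivalently different acoustically" in the sense used above.
```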

Gregg and Samuel played 144 different combinations of sounds over headphones to 25 college student volunteers. Half the time the first and second clips were the same, and half the time one of the sounds was replaced, either by another version of the same sound or by an entirely different sound. Here are the results:

[Figure: accuracy at judging whether the two clips were the same or different, broken down by type of change]

So while listeners were quite accurate at identifying when the two clips were the same, they weren't very good at all at noticing when a sound had been changed (remember, you can get 50 percent right just by guessing). More importantly, they were significantly worse at noticing the change when a different example of the same sound had been substituted. Since the researchers were careful to substitute only sounds that were equivalently different acoustically, the listeners must have been responding, at some level, to the meaning of the sound.

It's as if listeners were making note of the combination of the meanings of the sounds (Chicken-Dog-Phone-Bird), and only noticing a difference if one of them was different in the second clip (Chicken-Ship-Phone-Bird but not Chicken-Dog2-Phone-Bird).
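Put in toy-code terms, if listeners retained only the category label of each component sound, a within-category substitution would leave their memory of the clip unchanged while a cross-category substitution would not (the labels below are hypothetical):

```python
# Suppose a listener stores only the category label of each component sound,
# not which exemplar of that category was played.
clip1 = {"chicken", "dog", "phone", "bird"}

# Within-category substitution: Dog 1 -> Dog 2 leaves the label set intact.
clip2_within = {"chicken", "dog", "phone", "bird"}

# Cross-category substitution: Dog -> Ship changes the label set.
clip2_cross = {"chicken", "ship", "phone", "bird"}

def change_noticed(first, second):
    """A change is detected only if the remembered labels differ."""
    return first != second

print(change_noticed(clip1, clip2_within))  # False: change goes unnoticed
print(change_noticed(clip1, clip2_cross))   # True: change is noticed
```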

The results are quite similar to some change blindness studies where participants don't notice when one white male is substituted for another white male, but do notice when a female is substituted for a male. The meaning of the sounds makes a difference in whether we notice the change.

Gregg, M., & Samuel, A. (2009). The importance of semantics in auditory representations. Attention, Perception, & Psychophysics, 71(3), 607-619. DOI: 10.3758/APP.71.3.607



Since the researchers were careful to substitute only sounds that were equivalently different acoustically, the listeners must have been responding, at some level, to the meaning of the sound.

This is quite interesting. It would suggest that our brains are capable of "un-mixing" the mixed sound clips within a mere second, categorizing them (also within that time, plus the 350 ms of white noise), and using that categorization as a frame of reference for the second mixed clip. Amazing. Dave, did the study indicate how much time the participants had before responding, or whether response time was a measured variable? I feel there may be some interesting correlations if RT data were analyzed.

"equivalently different acoustically"

Ummm... this presents just two metrics out of a vast number of ways of measuring "different acoustically".

Probably more interesting is what types of "acoustic difference" are easy to perceive and which are difficult.

In conversation with a musician friend of mine I once said that, due to my own lack of training in music, I suspect I cannot perceive sound in the same way that she does. This came up because she was trying to explain a musical idea to me and I was having difficulty understanding what she was getting at.

She would play a chord and then talk about the component notes, and I eventually realized that I hear one undifferentiated complex of sound whereas she seemed to hear multiple differentiated sounds working harmoniously together in complex relationships. Whatever idea she was trying to communicate about chords seemed to depend on the ability to differentiate the components of those complex relationships. I failed to get what she was talking about because I could not reliably pick out the components of the auditory events she was referring to.

I wonder if there would be a difference in the performance of musically trained subjects versus those without musical training in this particular task?

Ummm... this presents just two metrics out of a vast number of ways of measuring "different acoustically".

Probably more interesting is what types of "acoustic difference" are easy to perceive and which are difficult.

Yes, excellent point. Actually, in another experiment in the same study, the researchers asked people to subjectively rate how similar the clips were, and then repeated the original experiment using those perceived differences rather than the arbitrary acoustic measures of difference. The results were the same.

Dave,
Thanks for the added info. Yeah, it is an obvious thing to do.

Maybe that study is a better one to cite for a more general audience. Of course, the conclusion isn't really all that surprising... humans remember things by making categorical associations. Memory (even short term) appear to be/function more like a description than auto clips or photos. Of course, I like this since cognition based on classification systems and associations makes a lot of sense computationally to me (my educational background bias).