Dyslexia and the cocktail party effect

IMAGINE sitting in a noisy restaurant, across the table from a friend, having a conversation as you eat your meal. To communicate effectively in this situation, you have to extract the relevant information from the noise in the background, as well as from other voices. To do so, your brain somehow "tags" the predictable, repeating elements of the target signal, such as the pitch of your friend's voice, and segregates them from other signals in the surroundings, which fluctuate randomly.

The ability to focus on your friend's voice while excluding other noises is commonly referred to as the cocktail party effect. Although the effect was first described more than 50 years ago, the brain mechanisms involved remain unknown. But a new study by researchers at Northwestern University now shows that activity in regions of the brainstem is modulated by specific characteristics of the speaker's voice, and that this modulation is impaired in children with dyslexia.

Animal experiments have shown that auditory regions of the brainstem, such as the inferior colliculus, are involved in processing sound signals within noisy environments. These structures receive inputs from the cerebral cortex, which are thought to amplify relevant information in the sound signal while inhibiting irrelevant information, thus increasing the signal-to-noise ratio. The activity of brainstem neurons is known to be dynamic and modulated by experience, and the new work shows that this modulation occurs online, during speech perception, rather than only over longer timescales.
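
To make the signal-to-noise ratio idea concrete, here is a minimal numerical sketch in Python (purely illustrative, not part of the study): the target voice is modelled as a pure tone, the restaurant babble as random noise, and a hypothetical top-down gain boosts the target while attenuating the background, raising the SNR.

```python
import numpy as np

# Toy illustration of signal-to-noise ratio (SNR), defined here as
# SNR_dB = 10 * log10(P_signal / P_noise). Not the study's analysis.

rng = np.random.default_rng(0)
fs = 16000                               # sample rate in Hz (arbitrary)
t = np.arange(fs) / fs                   # one second of "audio"

target = np.sin(2 * np.pi * 150 * t)     # the friend's voice: a 150 Hz tone
babble = rng.normal(scale=1.0, size=fs)  # competing restaurant noise

def snr_db(signal, noise):
    """Ratio of mean signal power to mean noise power, in decibels."""
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

print(f"SNR before selective gain: {snr_db(target, babble):5.1f} dB")

# Hypothetical top-down modulation: amplify the target, suppress the noise.
print(f"SNR after selective gain:  {snr_db(2.0 * target, 0.5 * babble):5.1f} dB")
```

Doubling the target amplitude while halving the noise amplitude improves the SNR by 20·log10(4), roughly 12 dB, in this toy example.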

Bharath Chandrasekaran and his colleagues at Northwestern's Auditory Neuroscience Laboratory developed a non-invasive method for recording the electrical activity of the brainstem, and used it to investigate whether responses to auditory stimuli are modulated by the context in which speech is heard. In the first experiment, 21 children with no neurological abnormalities or learning disabilities were played a synthesized speech syllable ("da") while they watched a video of their choice. The syllable was presented either in a repetitive, predictable manner or in a highly variable, unpredictable one.
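
Purely for illustration, the two presentation contexts could be sketched as follows (the trial count and the filler syllables are hypothetical; the post does not give the actual stimulus parameters):

```python
import random

random.seed(1)
n_trials = 20                       # hypothetical number of presentations
target = "da"
fillers = ["ba", "ga", "du", "pa"]  # hypothetical competing syllables

# Repetitive, predictable context: the same syllable on every trial.
predictable = [target] * n_trials

# Variable, unpredictable context: the target embedded among random syllables.
variable = [random.choice([target] + fillers) for _ in range(n_trials)]

print("predictable:", " ".join(predictable))
print("variable:   ", " ".join(variable))
```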

The response of the auditory brainstem was found to depend on the context in which the speech sound was played: the neural representation of the sound became fine-tuned to the repetitive syllable, but not to the variable one. Repetition of the syllable induced plasticity in the brainstem, so that the response was automatically sharpened to elements of the signal related to voice pitch. This modulation is crucial for the ability to perceive speech in a noisy environment, because pitch is one of the cues used to distinguish between different voices. The adaptation of the brainstem response underlies the listener's ability to tag the speaker's voice and to segregate it from background noise.
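
The pitch cue referred to here corresponds to the fundamental frequency (F0) of the speaker's voice. As a generic illustration of how F0 can be extracted from a short voiced frame (a textbook autocorrelation method, not the analysis used in the paper):

```python
import numpy as np

def estimate_f0(x, fs, fmin=75.0, fmax=400.0):
    """Autocorrelation-based estimate of the fundamental frequency, in Hz."""
    x = x - np.mean(x)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)            # plausible pitch lags
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(int(0.05 * fs)) / fs   # a 50 ms frame
# Toy "voiced" signal: a 120 Hz fundamental, two harmonics, and a little noise.
x = (np.sin(2 * np.pi * 120 * t)
     + 0.5 * np.sin(2 * np.pi * 240 * t)
     + 0.25 * np.sin(2 * np.pi * 360 * t)
     + 0.05 * np.random.default_rng(0).normal(size=t.size))

print(f"estimated F0: {estimate_f0(x, fs):.1f} Hz")   # expect roughly 120 Hz
```

Two voices with different F0s produce autocorrelation peaks at different lags, which is what makes pitch a useful cue for keeping the voices apart.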

The same experiment was then repeated, but this time the children were divided into two groups of 15, according to their reading ability as defined by a standardized word reading efficiency test. The "good readers" group consisted of 15 children from the first experiment, all of whom had scored 115 or more on the reading test. The "poor readers" group consisted of 15 others, who had obtained scores below 85, had previously been diagnosed by a physician as having a learning impairment, and attended a private school for the learning disabled. In the good readers, the response of the auditory brainstem was again found to be modulated by the repetitive sound. In the poor readers, however, no adaptation of brainstem activity was observed.

Earlier behavioural studies suggested that a core deficit of developmental dyslexia is the inability to exclude background noise from the incoming stream of auditory information. The new work confirms this, and shows that the inability arises because neurons in the auditory brainstem do not fine-tune their responses to speech cues. As a result, dyslexic children apparently cannot filter out background noise, and so have difficulty paying attention in the noisy classroom environment. The findings suggest that such children would benefit from sitting at the front of the room or wearing noise-reducing headphones to help them concentrate, and they may also provide a new way of diagnosing the condition. The researchers are also investigating the possibility that musical training might improve speech-in-noise perception.

Related:


Chandrasekaran, B., et al. (2009). Context-Dependent Encoding in the Human Auditory Brainstem Relates to Hearing Speech in Noise: Implications for Developmental Dyslexia. Neuron 64: 311-319. DOI: 10.1016/j.neuron.2009.10.006.


Given that modulation deficits are much better known in other conditions, notably autism, I'd like to read more of a comparison between modulation deficits in dyslexia versus modulation deficits in other conditions. How are they similar/different?

To do so, your brain somehow "tags" the predictable, repeating elements of the target signal, such as the pitch of your friend's voice, and segregates them from other signals in the surroundings, which fluctuate randomly.

Do you only write for neurotypical readers? Because my brain doesn't do that, I'm autistic.
Also, why do you consider people with different neurologies abnormal and inferior?

Earlier behavioural studies suggested that a core deficit of developmental dyslexia is the inability to exclude background noise from the incoming stream of auditory information. The new work confirms this, and shows that the inability arises because neurons in the auditory brainstem do not fine-tune their responses to speech cues.

Poor Speech in Noise Perception (SiNP) can exist sans reading disability. So presumably can the contrary - reading problems sans a deficit in SiNP. The circuitry and mechanism described would appear to apply to everyone with SiNP problems, irrespective of whether reading deficits also exist. However, the subjects appear to be kids with reading difficulties, who appear not to have been tested individually for SiNP deficits.

The claim that deficits in the brainstem neural circuitry responsible for improving the Signal to Noise Ratio (SNR) of audio signals modulate reading difficulties is much stronger, and, to me at least, not obvious from your description of this work.

It would require that the auditory brainstem is somehow involved in reading - a process where the sensory input is visual. The implication would be that the visual cortex (or other downstream circuit) generates a mixture of candidate or approximate audio signals from the visual signal and then routes them through the audio brainstem for SNR boosting and subsequent identification.

Otherwise, it's hard to see the relevance of the audio circuitry in question, especially in relation to the ability to read clear, large-font print.

By Stagyar zil Doggo (not verified) on 17 Nov 2009 #permalink

Maybe I have that. Often I "fake" a conversation while fully engrossed in a nearby conversation that does not include me. I seem to have two modes: being unable to filter out any sensory information, or being completely oblivious to what is going on around me. Also, I love eavesdropping. I wish that it wasn't socially awkward to walk up to people having a conversation and just stand there and listen, without comment or acknowledgment. It is likely that my brain is pathological in multiple ways.

By Catharine (not verified) on 13 Nov 2009 #permalink

Interesting article; a little more info on the non-invasive method of measuring the brainstem response would've been useful.

On the conclusion though, I would think that the lower tuning of the brainstem response, if it is related to attention, would affect most cognitive operations, not just reading. If the lack of tuning is specific to reading (i.e. the dyslexic population), then it just means reading is closely related to sound perception. If the lack of tuning is seen in people with attention disabilities as well, then it is an attention problem. Basically, they need to run another test group to connect the response to attention; right now the effect is limited to reading.

Is this plasticity temporary or permanent?
Tuning to the syllable sounds like the FFR (frequency-following response). So what exactly do the neurons tune to?

This is very interesting for me as a dyspraxic (dyspraxia is a separate disorder from dyslexia, but closely related). I have noticed that I seem unable to hear people in cocktail party situations even when others can. It's nice to have a possible explanation (or maybe an excuse).

By Danielfrank (not verified) on 15 Nov 2009 #permalink

A non-scientist, I read this post with interest from two angles: 1. I have an infant granddaughter newly diagnosed with single-sided deafness, and I am struggling to figure out the implications of this. Having problems in noisy environments seems to be one outcome, but I am currently muddled about just where, in sensorineural deafness, the problem will lie. Given that so much of speech is perceptual rather than mechanical and relies on other cues, this was interesting - though I clearly still have a lot to learn.
2. As an English teacher, I am well acquainted with dyslexia, but a bit puzzled that in this study it seemed to be assumed that poor reader=dyslexic. Isn't it a little bit more complicated than that?

Interesting. For the past two years I've been working with a group of people who consistently speak over each other or hold individual conversations over broader conversations. Lately I'm finding I have no ability to concentrate on anything when this occurs... partially because it is hard, and partially because it annoys me and I just tune everybody out.

But I wonder how much harder this is for children today than it was years ago when I was a child. I hate to use the "when I was your age" argument, but there certainly seems to be so much more "noise" now than there used to be, I would think processing it would be exhausting by the end of the day. Throwing in any type of learning challenge would probably be unbearably frustrating.

First off, thanks for summarizing our paper so eloquently. This is a good representation of our study, and I do encourage readers to have a look at the actual paper (I'm happy to provide you a copy).

@ Claire - Your observations are excellent. Children in the poor reading group had to have an external diagnosis of dyslexia plus a poor reading score (as measured by a standardized reading test).

@ SC - Please do look at our lab webpage, brainvolts.northwestern.edu, for more info on the speech ABRs. I agree with your point regarding another group to test specificity; we are planning on doing this next.

@ TengXB - We believe plasticity can work at both levels, and that this is moment-to-moment. This is speculative, though, and we are working on experiments that will better tease apart the temporal aspects of this plasticity.

By Bharath Chandr… (not verified) on 20 Nov 2009 #permalink

Very good article.

To Fertanish's comment above:

It is my subjective impression that speaking over others in conversation is common in some Latin-language cultures, such as Spanish-speaking ones. I find it incredibly confusing and have no idea how people can keep track of that kind of conversation. They must have an extra fine-tuned ability to discriminate sounds, or maybe they guess what the other person is saying so they don't need to hear the rest. Alternatively, they simply don't care what others say :-)