A study doesn’t have to be brand-new to be interesting. Consider the situation in 1992: it was known that adults are much better at distinguishing between sounds used in their own language than sounds from other languages. Take the R and L sounds in English. In Japanese, these two sounds belong to a single sound category: they’re treated as variants of the same sound, which is why it’s difficult for native Japanese speakers to learn to hear the difference between them in English. In 1992, it was thought that this linguistic specialization occurred at about the age of 1, when infants learn their first words. But a new experiment by a team led by Patricia K. Kuhl changed that.
Kuhl’s team worked with 64 six-month-old babies: 32 from Sweden and 32 from the U.S. They focused on two vowel sounds, one drawn from each language. I’ve found an example online of the Swedish sound: it’s the sound the y makes in this word, “fyra”:
The closest way to spell the sound out in English would probably be ee: “feera.” Here’s a recording of me saying it:
As you can hear, it sounds different. But native Swedish speakers would probably give me the benefit of the doubt, especially since there’s no “ee” sound in Swedish [correction: actually there is. See comments]. In fact, native speakers of a language will accept a wide variety of vowel sounds similar to the “true” sound, as long as they don’t sound too much like another vowel in the language. This is called a “perceptual magnet,” and it helps us understand language spoken in a wide variety of voices and accents.
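One way to picture the perceptual magnet idea is as a warping of perceptual space: sounds near a native prototype get “pulled in” toward it, so small acoustic differences near the prototype are harder to hear than the same differences far away. The toy model below is just an illustration of that intuition, not the model from the paper; the positions, pull strength, and width are all made-up numbers.

```python
import math

def perceive(x, prototype=0.0, strength=0.8, width=0.5):
    """Toy perceptual-magnet warp: positions near the prototype are
    compressed toward it; the pull fades with distance.
    `strength` and `width` are illustrative assumptions."""
    d = x - prototype
    return prototype + d * (1 - strength * math.exp(-d * d / width))

# Two pairs of sounds with the SAME acoustic spacing (0.2 units):
near_gap = perceive(0.3) - perceive(0.1)  # pair close to the prototype
far_gap = perceive(2.2) - perceive(2.0)   # pair far from the prototype

# The near pair ends up perceptually closer together than the far pair,
# which is why variants near a native vowel all sound "the same."
print(near_gap < far_gap)
```

In this sketch, two sounds sitting near the prototype come out perceptually closer together than an equally spaced pair far from it, which is the magnet effect in miniature.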
The researchers used a computer to generate 32 variants of each of the two sounds, the y sound and the ee sound. The variants became progressively more different from the prototypical sound, arranged in four “rings” around each prototype. This graphic might help you see how the sounds were created:
The bullseye of each target is the prototypical sound, and each ring of sounds becomes progressively more different from the original sound. The two bullseyes are about four rings apart from each other.
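The ring layout above can be sketched as points in vowel space at increasing distance from a prototype. The snippet below is a rough illustration, not the paper’s actual stimuli: the F1/F2 formant values, the Hz-based ring spacing, and the choice of 8 variants per ring (8 × 4 = 32) are all assumptions for the sake of the sketch (the real study worked in mel-scaled formant space).

```python
import math

def ring_variants(prototype, n_rings=4, per_ring=8, step=30.0):
    """Lay out (ring, F1, F2) variants in concentric rings around a
    prototype vowel. `step` is the assumed spacing between rings."""
    f1, f2 = prototype
    variants = []
    for ring in range(1, n_rings + 1):
        radius = ring * step  # each ring is farther from the bullseye
        for k in range(per_ring):
            angle = 2 * math.pi * k / per_ring
            variants.append((ring,
                             f1 + radius * math.cos(angle),
                             f2 + radius * math.sin(angle)))
    return variants

# Hypothetical F1/F2 values for the English ee prototype:
ee_prototype = (270.0, 2300.0)
stimuli = ring_variants(ee_prototype)
print(len(stimuli))  # 4 rings x 8 variants = 32 variants
```

The point of the layout is simply that “ring number” gives a single knob for how different a variant is from the prototype, which is what the training and testing phases manipulate.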
The babies sat on a parent’s lap and listened to one of the two prototypical sounds, which was repeated every two seconds. Occasionally a sound four rings away from the prototype was played, and if the babies looked in the direction of the new sound, they got to see a toy bear bang on a drum. They quickly learned to look when the sound changed! This was the training phase.
During actual testing, the sound would often change more subtly: the new sound could vary by one, two, three, or four rings.
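The testing procedure amounts to a simple trial schedule: a background prototype repeats, and on occasional “change” trials a variant one to four rings away replaces it. The sketch below just illustrates that schedule; the number of trials and the proportion of change trials are made-up numbers, not the paper’s design.

```python
import random

def test_trials(n=60, change_prob=0.25, seed=0):
    """Build an illustrative test session: mostly background
    (prototype) trials, with occasional change trials at a ring
    distance of 1-4. Counts and probabilities are assumptions."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n):
        if rng.random() < change_prob:
            trials.append(("change", rng.randint(1, 4)))  # ring 1-4
        else:
            trials.append(("background", 0))  # prototype repeats
    return trials

session = test_trials()
```

Scoring then comes down to how often a baby turns her head on change trials at each ring distance, which is what the graph below summarizes.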
Now, take a look at this graph showing how often the American babies turned and looked during testing:
Babies who had been trained and tested with the Swedish y sound noticed changes much more frequently than those trained and tested with the English ee sound. The authors argue that the babies are perceiving the sounds just like adults: they perceive many sounds similar to ee as the same. In just 6 months of life, and without speaking a word, they’ve learned that ee is a part of English, and y isn’t. The Swedish babies showed exactly the reverse pattern.
Thus, the authors reason, learning the sounds particular to a language is one of the fundamental steps of learning that language, and it’s done before the child can even produce words!
Kuhl, P.K., Williams, K.A., Lacerda, F., Stevens, K.N., & Lindblom, B. (1992). Linguistic experience alters phonetic perception in infants by 6 months of age. Science, 255, 606-608.