Infants as young as six months respond to words differently from other sounds

There is a growing body of evidence that very young children -- too young even to talk -- still know plenty of words. When our kids were very young, it was quite clear that they knew the meanings of many more words than they could actually produce. When they couldn't speak at all, they understood words like "Mommy," "bottle," and "diaper." When they were older and could say those words but not complete sentences, they understood more complicated phrases like "go into the kitchen and bring me your sister's sippy cup."

But is there something special about words? Or could babies learn to associate any sound with a meaning? So far the evidence suggests that words are special. Twelve-month-olds, for example, can be trained to recognize words much more quickly than other sounds. But this might simply be because twelve-month-olds have already learned a considerable amount of language themselves. Do younger babies show the same preference for words over other sounds?

Anne Fulkerson and Sandra Waxman enlisted 128 infants -- half of them six months old and half twelve months old -- and showed each of them eight slide-show pictures of either dinosaurs or fish. With each picture, half the babies heard a recording of a woman's voice identifying the picture with a nonsense word, like this: "Look at the toma! Do you see the toma?" The other babies heard a series of tones at a constant pitch for the same duration as the woman's voice. After viewing the series of pictures -- either eight different dinosaurs or eight different fish -- they saw one final slide with both a picture of a dinosaur and a picture of a fish (neither of which they had seen before).

If a child has been trained to recognize the category "dinosaur" and has seen many dinosaurs in a row, then another dinosaur shouldn't be as interesting as a fish. We would expect the babies to look at the fish longer than at the dinosaur. Here are the results:

[Figure: Looking times for 6- and 12-month-olds in the word and tone conditions]

Both younger and older infants spent significantly more time looking at the category they hadn't seen before when they had been trained using words. But neither group spent significantly more time looking at the new categories when they had been trained with tones. In both cases the babies saw the same animals and heard consistent audio along with each animal, but when that audio wasn't language, the children didn't appear to learn that all the animals belonged to the same category.
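
(For readers curious how a looking-time preference like this is typically quantified, here's a minimal sketch. It is not the authors' analysis, and the numbers are made-up placeholders purely for illustration; the idea is just to show a novelty-preference score -- the proportion of looking directed at the new category -- tested against chance.)

```python
# Minimal sketch of a novelty-preference analysis (hypothetical numbers,
# NOT the study's data). Each pair is (seconds looking at the novel-category
# picture, seconds looking at the familiar-category picture) on the test slide.
from scipy import stats

test_trials = [(6.2, 3.1), (5.8, 4.0), (4.9, 4.7), (7.1, 2.9)]  # placeholder values

# Novelty preference = proportion of total looking directed at the novel
# category; 0.5 means no preference.
scores = [novel / (novel + familiar) for novel, familiar in test_trials]

# One-sample t-test against chance (0.5), analogous in spirit to asking
# whether infants looked reliably longer at the new category.
t, p = stats.ttest_1samp(scores, 0.5)
print(f"mean preference = {sum(scores) / len(scores):.2f}, t = {t:.2f}, p = {p:.3f}")
```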

Fulkerson and Waxman argue that this shows that early language learning isn't simply a process of associating particular sounds with categories. Instead, even very young infants only appear to learn categories when those categories are associated with words. Note that this doesn't mean that tones themselves couldn't be seen as words. A baby could be taught that an electronic "beep" means dinosaur by showing her pictures of dinosaurs and saying "do you see the (BEEP)?" This study simply shows that tones in the absence of a language framework aren't treated as language. Even babies as young as six months can tell the difference between sounds that are intended to communicate and non-language sounds.

Fulkerson, A., Waxman, S. (2007). Words (but not Tones) facilitate object categorization: Evidence from 6- and 12-month-olds. Cognition, 105(1), 218-228. DOI: 10.1016/j.cognition.2006.09.005


Is the "toma" business necessary? What if the woman had just said "Look at that! Isn't it interesting?" or some such?

It's kind of sick to teach small kids to call dinosaurs 'toma'.

Your proposed script, sidheag, does nothing to demonstrate the importance of categories in language acquisition, which is one of the big things they studied. As for the use of nonsense words, I assume they wanted to rule out the influence of previously acquired vocabulary by using something new.

By libbyblue on 17 Mar 2008

The other babies heard a series of tones at a constant pitch for the same duration as the woman's voice.

Was it a pure tone, or were the overtones in the woman's voice presented alongside a pure tone? Was the woman saying "toma" speaking at a constant pitch or with normal speaking intonation?

Unless the tonality was controlled, the presence and absence of overtones and the speaking intonation are systematic differences between these two conditions. Perhaps the difference in the child's response can be attributed to either of these rather than to words/language.

I doubt the babies will remember the word "toma" for dinosaur. I'm sure it's fine, Frobbles/Sidheag.

So what is the prediction about what would happen if they did use my version? According to the experimenters' interpretation of their experiment (which I don't fully understand, lacking the linguistic background), would they expect comparing my words with non-word tones to produce the same effect that they saw here, or to produce no effect? If they'd expect the same effect, then their version seems overcomplicated. If they'd expect no effect -- i.e., that the babies would not learn the category if they heard my words -- then that experiment should surely be (have been?) done. Personally, from my experience with one baby, I strongly suspect that they'd see exactly the same effect that they saw here. That is, labelling the category ("toma") is irrelevant to the child's formation of it: having the adult express interest in the picture is enough. Is that what they expect?

So, what, if anything, does this have to say about the large number of people who are teaching their toddlers to use sign language with apparently great success? Apparently, babies/toddlers can easily learn and use sign language long before they can talk, thus allowing much easier communication as to their wants and needs.

This study simply shows that tones in the absence of a language framework aren't treated as language.

If that's really all it shows, then the study is incredibly uninteresting. Six months provides a lot of exposure to spoken (and, in some cases, signed) language, as well as to non-linguistic, environmental sounds.

Unless the tonality was controlled, the presence and absence of overtones and the speaking intonation are systematic differences between these two conditions.

The study did use pure tones (matched for timing [i.e., pause location and duration], duration [i.e., overall sentence duration], and volume [I assume they mean RMS amplitude or something like this]). Aside from the fundamental frequency and spectral shape of a voice, there are many other differences between speech and pure tones (assuming pure tones were used). In general, speech has a large amount of spectro-temporal change over time, whereas pure tones do not. However, the lit review mentions other work wherein more complicated non-speech sounds failed to induce categorization. On further reading, though, the discussion section mentions work in which non-linguistic sounds situated in a context in which they are clearly intended as names do induce categorization.
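
(In case the matching noahpoah describes is unclear, here is a rough sketch of what it might look like in code: generating a constant-pitch pure tone with the same duration and RMS amplitude as a speech recording. The 200 Hz pitch, the file names, and the use of the soundfile library are my own illustrative assumptions, not details from the paper.)

```python
# Rough sketch: build a pure tone matched in duration and RMS amplitude
# ("volume") to a speech recording. The 200 Hz pitch and the file names
# are illustrative assumptions, not details from the study.
import numpy as np
import soundfile as sf  # assumed dependency for reading/writing audio

speech, sr = sf.read("speech.wav")     # mono speech recording
t = np.arange(len(speech)) / sr        # same number of samples -> same duration
tone = np.sin(2 * np.pi * 200.0 * t)   # constant-pitch sine wave

# Scale the tone so its RMS amplitude matches the speech.
rms = lambda x: np.sqrt(np.mean(x ** 2))
tone *= rms(speech) / rms(tone)

sf.write("tone.wav", tone, sr)
```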

So what is the prediction about what would happen if they did use my version?

The lit review for the paper discussed in this post mentions a study wherein naming and non-naming (e.g., 'see the toma' vs. 'see that?') were compared. Apparently, categorization was exhibited only by the kids in the naming condition.

So, what, if anything, does this have to say about the large number of people who are teaching their toddlers to use sign language with apparently great success?

Almost certainly nothing. It seems pretty clear to me that there is a large 'social' component to language learning in that speech and/or signs are special at least in part because they are emitted by people (parents, siblings, etc.). All of the children in these studies have had at least six months of face-to-face interaction with speaking (and maybe signing) people. If you had kids exposed to sign language in such a study, you could probably demonstrate similar effects for signed/linguistic vs. visual/non-linguistic stimuli accompanying the dinosaur and fish pictures, and if you had neglected children with no exposure to speech, they probably wouldn't categorize well in either condition (though, of course, this could be for any number of reasons). The non-speech-intended-as-a-name research mentioned in the discussion section supports this, I think.

Interesting! Thanks, noahpoah.

Words are complex things. Tones aren't. What if you replaced the tones with specific (interesting) sound sequences -- *plunk* *rattle* *bong* -- or even just a short piece of music specific to each picture? It's not 'words', it's 'language', which can be represented in many different ways.

This is a very interesting study and there are certainly interesting directions in which to further explore the effect. The monotone sequence seems pretty different from babies' day-to-day experience and also very different from language in terms of complexity (as other commenters above have noted). Perhaps there are ways to address those issues.

Familiar household sounds might include music, the dishwasher, cooking, etc. Music might also approach language in terms of complexity, as might something like bird song or chimpanzee vocalizations.

I wonder if the observed effect is some type of arousal or attention triggered by the voice. Whether it's learned or innate, perhaps the sound of the voice triggers this response to facilitate learning. From an adaptive standpoint, the voice, and especially the mother's voice, is probably often associated with a "lesson" being given, perhaps more so at this age when there's repetition (e.g., "Look at the toma! Do you see the toma!").