Consonants tell us where words begin, but what about vowels?

The fact that infants are able to learn language without any help from adults can sometimes seem almost miraculous. Not only do children learn to speak and understand language completely on their own, but active teaching of language skills seems to make almost no difference in their ability to talk.

One of the first difficulties in learning a language purely by listening is figuring out where one word ends and the next one begins. Native speakers of a language typically leave no audible space between words at all. Even "motherese" doesn't leave any space between words -- if anything the spaces are diminished: "issntdatacutewittlebaby!"

So how do babies learn where one word ends and the next one begins? A group of researchers, including Luca Bonatti, Marina Nespor, Jacques Mehler, and Juan Toro, believes it has identified a key pattern that works in a wide range of languages: language learners look to patterns in the consonants for information about where words start and end, and they look to vowels to understand the role of words in a sentence. The first part of their explanation was explored in 2005. Their newest paper, led by Toro, considers the second part of the problem.

How did they do it? They invented a "language" that had a couple of very simple rules. See if you can figure out the rules by looking at the list of "words" below:

tapena
tapona
tepane
tepona
topano
topeno
badeka
badoka
bedake
bedoke
bodako
bodeko

Get it? To be a "word" in this language, a string must have three syllables, its first and last syllables must have the same vowel sound, and its consonants must follow the t-p-n pattern or the b-d-k pattern.
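If it helps to see that rule stated completely explicitly, here is a minimal sketch in Python of a checker for it. This is my own illustration, not anything from the paper; in particular, the assumption that every syllable is a consonant followed by a vowel comes from the word list above, not from the authors.

# Rough checker for the artificial language's word rule described above.
# Assumptions (mine, not the authors'): every syllable is consonant + vowel,
# and every word is exactly three syllables long.

CONSONANT_FRAMES = {("t", "p", "n"), ("b", "d", "k")}
VOWELS = set("aeiou")

def is_word(candidate):
    """Return True if candidate has three CV syllables, matching first and
    last vowels, and a t-p-n or b-d-k consonant frame."""
    if len(candidate) != 6:
        return False
    syllables = [candidate[i:i + 2] for i in range(0, 6, 2)]
    consonants = tuple(s[0] for s in syllables)
    vowels = [s[1] for s in syllables]
    if any(v not in VOWELS for v in vowels):
        return False
    return vowels[0] == vowels[2] and consonants in CONSONANT_FRAMES

print(is_word("tapena"))  # True: t-p-n frame, a...a vowels
print(is_word("penabe"))  # False: p-n-b is not a legal consonant frame
print(is_word("biduki"))  # True: the rule extends to new vowels
print(is_word("biduku"))  # False: first and last vowels don't match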

Paid Italian-speaking volunteers listened to a 10-minute recording of this language. In the recording, each word was separated by one or two syllables randomly selected from the possible syllables of the language, like this:

tapenabepobedakenotopanokadebadoka
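To give a rough sense of how a stream like that could be assembled, here is another small sketch (again mine, not the authors' actual procedure; the filler-syllable inventory and the stream length are assumptions):

import random

WORDS = ["tapena", "tapona", "tepane", "tepona", "topano", "topeno",
         "badeka", "badoka", "bedake", "bedoke", "bodako", "bodeko"]

# Filler syllables built from the language's consonants and vowels;
# the exact filler inventory used in the experiment is an assumption here.
FILLERS = [c + v for c in "tpnbdk" for v in "aeo"]

def make_stream(n_words=50, seed=0):
    """Concatenate randomly chosen words, each followed by one or two random
    filler syllables, producing a continuous stream with no audible boundaries."""
    rng = random.Random(seed)
    parts = []
    for _ in range(n_words):
        parts.append(rng.choice(WORDS))
        for _ in range(rng.randint(1, 2)):
            parts.append(rng.choice(FILLERS))
    return "".join(parts)

print(make_stream(4))  # produces something along the lines of the example above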

Then the listeners were tested to see whether they had learned the 12 words in the language. They were given complete words like "tepane" and partial sequences like "penabe." Even though "penabe" follows the vowel pattern correctly, it doesn't have the correct consonant pattern, so the correct answer is "no."

Respondents were an average of 63 percent correct on this test. They were also tested on whether they could generalize the pattern to other vowel sounds: "biduki" would be a word, but "biduku" wouldn't. Respondents were 67 percent correct on this test -- in both cases, significantly better than random chance.

In a second experiment, the roles of the consonants and vowels were reversed: words had to have an a-u-E or i-e-o vowel pattern, and the first and last syllables had to share the same consonant sound. This time, listeners couldn't accurately identify whole versus partial words, and they couldn't generalize the rule to other consonant sounds.

But maybe they couldn't generalize simply because they couldn't identify the words in the first place. So in a third experiment, the researchers added short pauses after each word. Now listeners were accurate in identifying words they had heard before, but they still couldn't generalize to other consonant sounds.

In a final experiment, the consonant pattern was made even easier to recognize. All real words repeated the same consonant sound three times: "bibebo" was a word, but "binebo" was not. Again there were pauses after each word. As before, listeners could distinguish whole words from partial words, but couldn't generalize the pattern to other consonant sounds.

The researchers conclude that listeners look to vowels and consonants for different types of information. This division of labor can help language learners begin to understand what a word is, and eventually to assign a meaning to that word and understand its role in a phrase.

Finally, I couldn't do better than the authors' own explanatory figure for the study, so I'm duplicating it here (click for larger version):

[Figure: the authors' explanatory diagram from Toro et al. (2008)]

Toro, J.M., Nespor, M., Mehler, J., Bonatti, L.L. (2008). Finding Words and Rules in a Speech Stream: Functional Differences Between Vowels and Consonants. Psychological Science, 19(2), 137-144. DOI: 10.1111/j.1467-9280.2008.02059.x


Native speakers of a language typically leave no audible space between words at all. Even "motherese" doesn't leave any space between words -- if anything the spaces are diminished: "issntdatacutewittlebaby!"

Is this true? My subjective impression is that child-directed speech often involves explicitly segmenting words, particularly nouns. Take an instance where a parent points at a dog and says "Dog!" or "Doggy!", often repeating. I'm not familiar with CDS corpus studies, but is there actually any evidence regarding how much of CDS is segmented versus unsegmented?

A group of researchers attempts to teach iCub, a robot, language. The approach is to work with language development specialists who research how parents teach children to speak. The iCub is supposed to learn in a way which is closer to the human experience.

In your article, however, you state that children learn without the help of an adult. Just by discerning vowels and consonants, a human can eventually learn a new language. So, is the ability to speak and understand a language innate? And does this mean a robot without this in-built language skill cannot learn how to speak?

Derek,
From my experience, it depends on the purpose of the interaction. When you're consciously trying to teach a child a word, you'll slow down like you describe. But most of the time, people interacting with a child will get all high-pitched and squeaky and their words will bunch together.

Note: "tepona" should be "tepone". You just threw that in to make the task difficult....

@June: The idea that the human capacity for language is innate (or, "the Innateness Hypothesis") isn't exactly new. The idea is that any human child has the capacity to learn a language as long as they have some contact with that language. In fact, case studies show that two isolated children with no outside human contact will spontaneously create a language.

The biggest mystery is how exactly this is done -- what cues we're keyed to listen for in a string of speech.