The allure of music has long been a puzzle for psychologists. Why do we need music at all? Is music like language, or is it something entirely different? Attempts to answer the latter question have produced mixed results. Musicians with brain damage have retained musical ability while losing language ability. Some patients with a condition called amusia can recognize songs from their lyrics but not from their melodies. On the other hand, healthy people remember melodies better when they are repeated with their original lyrics rather than with the words from other songs.
Listen to the following two short melodies:
The two songs are identical except for the last two chords (and it’s pretty clear that neither is going to be topping the charts any time soon!). Song 1 ends with a “tonic” musical progression, while Song 2 ends with a “subdominant” progression. While subdominant progressions are common in Western music, they are an unusual way to end this song. A team of researchers led by Bénédicte Poulin-Charronnat of the Université de Bourgogne used these two melodies to see if they could come to a better understanding of how language and music are related (Bénédicte Poulin-Charronnat, Emmanuel Bigand, François Madurell, and Ronald Peereman, “Musical Structure Modulates Semantic Priming in Music,” Cognition, 2005).
They asked volunteers to listen to a set of songs sung to these two melodies. The words to the songs were cleverly varied in two different ways. Sometimes the final word in the song was replaced with a word that did not make sense; for example, “The giraffe has a very long neck” would become “The giraffe has a very long foot.” In other instances, the final word was replaced with a nonsense word, e.g., “The giraffe has a very long veck” (these are rough translations—the actual experiment was conducted in French). The task was simply to indicate whether the last word was a word or a non-word.
In principle, the music played to accompany the words was irrelevant to the task, but in fact, participants responded differently depending on what melody was being played. When the final word was related to the rest of the song (“neck,” in our example), participants were more accurate when the tonic melody was played. But when the final word was unrelated (“foot”), participants were more accurate with the subdominant melody.
So clearly the music is affecting our understanding of language—but how? Poulin-Charronnat et al. ran their study on both musicians and non-musicians, and the results were the same for both groups, so the effect was not simply a product of musical training. The researchers speculate that music draws on some of the cognitive resources available for processing language, and that different musical structures require different resources, so each melody affects language processing differently. Music and language do appear to rely on some of the same cognitive processes. Unfortunately for aspiring American Idol contestants, however, being able to speak the language still doesn’t mean you can sing in tune.