Recently I attended a concert featuring the premiere of an up-and-coming composer’s work. She gave a brief talk before her piece was played, during which she explained the complex symbolism of her work. The musical notes weren’t just noises; they were intended to convey meaning above and beyond a mere sequence of sounds. But if her music really did convey such deep meaning, why did she have to explain it to the audience beforehand? Can music ever express semantic meaning directly, without requiring the composer or someone else to “translate” for us?
Certainly not all music is as difficult to interpret as the piece I heard that night, which featured such innovations as playing every note on the scale simultaneously on different instruments (I’ve already forgotten what this was supposed to signify). The Flight of the Bumblebee, for example, really does sound sort of like a bumblebee. Do people who don’t know that work’s title think of bees the first time they hear it? Perhaps more importantly, if they do think of bees, are they thinking about them the same way as if they’d heard the word “bumblebee,” or does musical “meaning” necessarily differ from meaning expressed in words?
A team led by Stefan Koelsch believes they have designed a set of experiments that can answer these questions. The experiments rely on a known brain response to “semantic priming.” Priming occurs when exposure to one word makes related words and concepts easier to process. Continuing with the “bee” example: if we read the word “bee” and are then asked to perform some sort of task on a related word or concept, we work faster and more accurately. We might, for example, be faster at unscrambling the letters VEIH to form the word “hive.” We have been primed to think about bees, and so we’re better at dealing with bee-related concepts, from honey to stings. But what if we heard Flight of the Bumblebee for the first time, without being told what the song was about? Would we still be better at handling bee-related language? In other words, does bee music prime as effectively as the word “bee”?
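Behaviorally, a priming effect like this is usually quantified as the difference in mean reaction time between primed and unprimed trials. Here is a minimal sketch of that calculation; the trial data and the size of the effect are invented for illustration, not taken from any study:

```python
# Toy illustration of how a semantic priming effect is quantified:
# mean reaction time on unprimed trials minus mean on primed trials.
# All numbers below are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def priming_effect_ms(trials):
    """trials: list of (condition, reaction_time_ms) pairs,
    where condition is 'primed' or 'unprimed'."""
    primed = [rt for cond, rt in trials if cond == "primed"]
    unprimed = [rt for cond, rt in trials if cond == "unprimed"]
    return mean(unprimed) - mean(primed)

trials = [
    ("primed", 540), ("primed", 560), ("primed", 550),        # e.g. "bee" -> "hive"
    ("unprimed", 610), ("unprimed", 590), ("unprimed", 600),  # e.g. "car" -> "hive"
]

print(priming_effect_ms(trials))  # 50.0: responses 50 ms faster when primed
```

A positive difference means the prime helped; the interesting question for Koelsch’s team is whether a musical prime produces the same kind of advantage as a verbal one.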
Neuroscientists have known for decades that a specific pattern of brain activity is associated with semantic priming for words. When brain activity while reading is measured with an electroencephalograph (EEG), the results show a strikingly different pattern for primed words compared to unprimed words. The pattern is revealed in a component of the EEG results known as the N400.
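The N400 itself comes out of averaging: EEG traces from many trials are aligned to the onset of the target word and averaged, and the mean voltage in a window around 400 ms is compared between conditions. The sketch below shows the shape of that analysis with tiny synthetic arrays standing in for real EEG epochs (one channel, coarse 100 ms samples, invented values):

```python
# Toy sketch of ERP-style analysis: average voltage traces across trials,
# then compare the mean amplitude in the N400 window (~300-500 ms).
# The "epochs" here are tiny synthetic arrays, not real EEG data.

def grand_average(epochs):
    """Average a list of equal-length voltage traces, sample by sample."""
    n = len(epochs)
    return [sum(trial[i] for trial in epochs) / n for i in range(len(epochs[0]))]

def window_mean(trace, times_ms, start_ms, end_ms):
    """Mean voltage between start_ms and end_ms (inclusive)."""
    vals = [v for t, v in zip(times_ms, trace) if start_ms <= t <= end_ms]
    return sum(vals) / len(vals)

# One sample every 100 ms from word onset, for illustration only.
times = [0, 100, 200, 300, 400, 500, 600]
primed_trials = [
    [0, 1, 2, 1, 0, 1, 0],
    [0, 2, 1, 0, 1, 0, 1],
]
unprimed_trials = [  # a larger negative dip around 400 ms
    [0, 1, 2, -3, -5, -2, 0],
    [0, 2, 1, -2, -4, -3, 1],
]

n400_primed = window_mean(grand_average(primed_trials), times, 300, 500)
n400_unprimed = window_mean(grand_average(unprimed_trials), times, 300, 500)
print(n400_unprimed < n400_primed)  # True: unprimed words yield a more negative N400
```

In the classic finding, it is the word that does *not* fit its prime that elicits the larger (more negative) N400, which is the comparison this toy example mimics.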
Koelsch’s team wanted to know if music that’s associated with a specific word can prime as effectively as the word itself. They recorded dozens of different musical clips from commercial CDs, each associated with a word in one of several possible ways. For example, a musical clip might literally sound like a “bird,” or it might sound “wide,” in a much more abstract sense. In a preliminary experiment, they selected the 88 clips that listeners consistently associated with the “correct” choice.
In their first experiment, listeners heard a musical excerpt or a sentence (a prime), then were presented with a word on-screen that was either related or unrelated to the prime. They then indicated whether the prime and the word were related, while brain activity was measured with an EEG. The chart below shows typical results for four different primes, each paired with the same German word, Weite (wideness):
The blue and purple plots represent the language primes; the graph on the right shows the relevant portion of the EEG results. A clear difference appears in the area labeled N400, which occurs just under half a second after the word is seen: when a word is unrelated to its prime, the N400 is significantly more negative than when the word has been primed by related language. The red and orange plots show a similar pattern for related (Strauss’ Salome) versus unrelated (Valpola’s E-minor piece for accordion) musical primes.
You can listen to these musical excerpts on the Nature site and judge for yourself whether you think they convey meaning. Here’s the Strauss, which the pre-testers found to be related to the concept “wide.”
Now here’s the Valpola, which was said to be unrelated to “wide.”
Do you hear “meaning” here? Arguably, the meaning in these musical works was only clear when listeners were overtly asked whether the words were related to the music. To address this concern, the same stimuli were presented to a new set of listeners, but this time, instead of judging whether the words were related to the primes, listeners were simply told to pay close attention because they would be tested on the music and words later. The same N400 activity was observed as in the first experiment.
Koelsch et al. conclude that for these examples, at least, music conveys semantic meaning in the same manner as words. While in some cases listeners weren’t as accurate at determining when musical primes were related to the target words (compared to linguistic primes), when the meaning was correctly established, it had the same priming effect as language.
Music, it appears, can convey much more meaning than we thought it did. Perhaps I didn’t need that composer to tell me what her work “meant” — her audience may have understood much more than she ever believed it could.
Koelsch, S., Kasper, E., Sammler, D., Schulze, K., Gunter, T., & Friederici, A.D. (2004). Music, language, and meaning: Brain signatures of semantic processing. Nature Neuroscience, 7(3), 302-307.