Last year Nora and I went on a hike in the remote Pasayten Wilderness in northern Washington state. Parts of the hike were extremely grueling, while other parts were quite easy and fun. I made this short video to try to capture the differences:
The music was added as an afterthought, but in the end I think it’s what makes the video so charming: without it, it would just be an ordinary walk in the woods. For each section of the trail, I chose a music clip that I thought expressed our feelings as we made our way along. Most people who watch the video agree: the music suits the accompanying footage perfectly.
We’ve discussed music a lot on Cognitive Daily, but one thing we haven’t talked much about is how emotion in music is produced. Does it take a music expert to convey emotion through music, or can anyone do it?
A team led by Filippo Bonini Baraldi designed a musical task so easy that even an untrained individual could do it: try to musically represent eight different “expressive intentions,” each described by three adjectives (like “slashing, impetuous, resolute” or “tender, sweet, simple”). There was just one limitation: for each expressive intention, the volunteer participants could use only a single note on an electronic keyboard, repeated as often as they liked.
Three of the volunteers were trained musicians, while the other three were completely untrained. Did the musicians’ performances differ from the non-musicians’? This chart shows the results:
With only one note to play, there are exactly four dimensions that can vary: Pitch (which single note is selected), Intensity (how loudly it is played), Articulation (how long each note is held and how much space falls between notes [more articulated = more staccato]), and Tempo (the number of notes played per unit of time). As you can see, while the performances were very different depending on the expressive intentions, there’s not much difference between non-musicians and musicians; the gap rose to statistical significance only for tempo.
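To make the four dimensions concrete, here’s a minimal sketch in Python of how a one-note performance might be encoded. The names (`NoteEvent`, `tempo`) and the MIDI-style encoding are my own assumptions for illustration; the paper doesn’t specify an implementation.

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int        # MIDI note number: the single note chosen (Pitch)
    velocity: int     # key velocity, 0-127: how loudly it's struck (Intensity)
    duration: float   # seconds the key is held (Articulation: short = staccato)
    onset: float      # seconds from the start of the performance

def tempo(events):
    """Average notes per second across the performance (Tempo)."""
    span = events[-1].onset - events[0].onset
    return (len(events) - 1) / span if span > 0 else 0.0

# A brisk, staccato pattern on middle C: short notes, evenly spaced onsets.
performance = [NoteEvent(pitch=60, velocity=100, duration=0.1, onset=i * 0.25)
               for i in range(8)]
print(tempo(performance))  # 4.0 notes per second
```

With a single repeated note, everything expressive lives in these four numbers, which is what makes the task so cleanly measurable.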
Next, the researchers placed progressively more limitations on the performers, each time asking them to convey the same eight expressive intentions. First, choice of pitch was removed: all participants had to play middle C. Next, intensity was controlled: no matter how hard the key was struck, the note played at the same volume. Finally, articulation was controlled: each note lasted exactly 250 milliseconds, and the performers could control only the tempo of their performance. Again, there was little measurable difference between the musicians and the non-musicians.
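The progressively tighter conditions can be sketched as transformations that clamp one dimension at a time. This is illustrative code under my own assumptions (a note as a small dict; the fixed velocity value is arbitrary), not the study’s apparatus; only the 250 ms note length and middle C come from the description above.

```python
MIDDLE_C = 60        # MIDI pitch for middle C
FIXED_VELOCITY = 80  # arbitrary loudness once intensity is controlled
FIXED_DURATION = 250 # ms: the fixed note length in the final condition

def constrain(notes, fix_pitch=False, fix_velocity=False, fix_duration=False):
    """Return a copy of the performance with the chosen dimensions clamped."""
    out = []
    for n in notes:
        n = dict(n)
        if fix_pitch:
            n["pitch"] = MIDDLE_C
        if fix_velocity:
            n["velocity"] = FIXED_VELOCITY
        if fix_duration:
            n["duration_ms"] = FIXED_DURATION
        out.append(n)
    return out

free = [{"pitch": 72, "velocity": 110, "duration_ms": 90},
        {"pitch": 72, "velocity": 115, "duration_ms": 90}]

# Final condition: only tempo (the spacing of note onsets) remains free.
tempo_only = constrain(free, fix_pitch=True, fix_velocity=True, fix_duration=True)
print(tempo_only[0])  # {'pitch': 60, 'velocity': 80, 'duration_ms': 250}
```

Each successive condition is just the previous one with one more flag set, which is why the researchers could compare the same eight intentions across all of them.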
If there’s this much agreement among performers, even non-experts, then listeners should have little trouble recognizing the expressive intentions behind the performances. So the researchers recruited 30 volunteers to listen to the most typical performance of each expressive intention. Half the listeners were musicians, and half were non-musicians. Could they accurately recognize the intended expressions?
They could. Each clip was rated on a scale of 0-4 for how much it matched each of the eight possible expressive intentions. When performances were constrained only by the one-note limitation, 5 of the 8 intentions were matched correctly by the listeners. Performance declined as the performers were more constrained, but not by much. Even when performers could control only the tempo of their performances, listeners matched 4 of 8 intentions correctly.
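One simple way to turn such ratings into a “matched correctly” count is to check whether the intended intention received the top mean rating among the eight candidates. This is a sketch of that scoring idea with made-up numbers (and only three intentions for brevity), not the paper’s actual data or analysis.

```python
# Hypothetical mean listener ratings (0-4 scale) for each clip, keyed by the
# intention the performer intended. Illustrative values only.
ratings = {
    "tender":    {"tender": 3.1, "impetuous": 0.4, "resolute": 1.0},
    "impetuous": {"tender": 0.3, "impetuous": 2.9, "resolute": 2.2},
    "resolute":  {"tender": 0.8, "impetuous": 2.5, "resolute": 2.1},
}

def matched(intended, scores):
    """A clip is matched when its intended label gets the highest rating."""
    return max(scores, key=scores.get) == intended

n_correct = sum(matched(i, s) for i, s in ratings.items())
print(f"{n_correct} of {len(ratings)} intentions matched")  # 2 of 3
```

In this toy example the “resolute” clip is mistaken for “impetuous,” the kind of confusion that would lower the matched count as performers lost control of more dimensions.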
Baraldi’s team says this work shows that expressive intentions in music are universal: they can be produced and perceived even by non-musicians. Even with only a single note, people can both generate and recognize expressions of emotion and other expressive qualities in music.
Baraldi, F., Poli, G., & Rodà, A. (2006). Communicating expressive intentions with a single piano note. Journal of New Music Research, 35(3), 197-210. DOI: 10.1080/09298210601045575