Even non-musicians can express musical intentions with just one note

Last year Nora and I went on a hike in the remote Pasayten Wilderness in northern Washington state. Parts of the hike were extremely grueling, while other parts were quite easy and fun. I made this short video to try to capture the differences:

The music was added as an afterthought, but in the end I think it's what makes the video so charming: without it, it would just be an ordinary walk in the woods. For each section of the trail, I chose a music clip that I thought expressed our feelings as we made our way along. Most people who watch the video agree: the music is totally appropriate to the accompanying video.

We've discussed music a lot on Cognitive Daily, but one thing we haven't talked much about is how emotion in music is produced. Does it take a music expert to convey emotion through music, or can anyone do it?

A team led by Filippo Bonini Baraldi designed a musical task so easy that even an untrained individual could do it: try to musically represent eight different "expressive intentions," each described by three adjectives (like slashing, impetuous, resolute or tender, sweet, simple). There was just one limitation: the volunteer participants could use only one musical note on an electronic keyboard for each expressive intention.

Six volunteers performed the task: three were trained musicians, and the other three were completely untrained. Did the performances of the musicians differ from those of the non-musicians? This chart shows the results:


With only one note to play, there are exactly four dimensions that can vary: Pitch (the single note selected), Intensity (how loudly it is played), Articulation (how long each note sounds relative to the silence between notes [more articulated = more staccato]), and Tempo (the number of notes played per unit of time). As you can see, while the performances varied greatly depending on the expressive intention, there's not much difference between non-musicians and musicians; the difference reached statistical significance only for tempo.
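As a rough illustration (not the study's actual analysis), those four dimensions could be summarized from a recorded one-note performance along these lines. The note-event format and the choice of simple averages are assumptions made for the sketch:

```python
# Illustrative sketch (not the authors' code): summarizing the four
# measurable dimensions of a one-note performance. Each note event is
# assumed to be a (onset_seconds, duration_seconds, pitch, velocity)
# tuple; the summary statistics are simple means.

def summarize_performance(notes):
    """Return one average value per dimension for a note sequence."""
    n = len(notes)
    pitch = sum(note[2] for note in notes) / n      # which key was chosen
    intensity = sum(note[3] for note in notes) / n  # how hard it was struck
    # Tempo: notes per second over the span from first to last onset.
    span = notes[-1][0] - notes[0][0]
    tempo = (n - 1) / span if span > 0 else 0.0
    # Articulation: fraction of each inter-onset gap that actually
    # sounds (near 1 = legato, near 0 = staccato).
    gaps = [b[0] - a[0] for a, b in zip(notes, notes[1:])]
    articulation = sum(note[1] / gap for note, gap in zip(notes, gaps)) / len(gaps)
    return {"pitch": pitch, "intensity": intensity,
            "articulation": articulation, "tempo": tempo}

# Six short, loud middle-C notes, two per second: a staccato performance.
notes = [(t * 0.5, 0.1, 60, 100) for t in range(6)]
print(summarize_performance(notes))
```

The point of separating the dimensions this way is that each one can then be compared independently between musicians and non-musicians, which is what the chart above does.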

Next the researchers placed progressively more limitations on the performers, each time asking them to generate the same eight expressive intentions. First, choice of pitch was removed: all participants had to play middle C. Next, intensity was controlled: no matter how hard the key was struck, the note played at the same volume. Finally, articulation was controlled: Each note lasted exactly 250 milliseconds, and the performers could control only the tempo of their performance. Again, there was little measurable difference between the musicians and the non-musicians.

If there's so much agreement among performers -- even non-experts -- then listeners should have little trouble recognizing the emotions the performers intended to convey. So the researchers recruited 30 volunteers to listen to the most typical example of each clip. Half the listeners were musicians and half were non-musicians. Could they accurately recognize the intended expressions?

They could. Each clip was rated on a scale of 0 to 4 for how well it matched each of the eight possible expressive intentions. When performances were constrained only by the one-note limitation, listeners correctly matched 5 of the 8 intentions. Performance declined as the performers were further constrained, but not by much: even when performers could control only the tempo of their performances, listeners still matched 4 of the 8 intentions correctly.

Bonini Baraldi's team says their work shows that expressive intentions in music are universal, produced and perceived even by non-musicians. Even with a single note, anyone can generate and perceive recognizable expressions of emotion and other musical attributes.

Baraldi, F., De Poli, G., & Rodà, A. (2006). Communicating expressive intentions with a single piano note. Journal of New Music Research, 35(3), 197-210. DOI: 10.1080/09298210601045575

Next we need to see this study done across different cultures to really know how universal music is. Do Chinese, Americans, French, Eskimos, Chileans, Australian Aborigines, etc., all express and experience the same emotions through music the same way?

Interesting stuff, that - as with most interesting stuff - opens even more interesting questions.

As far as cross-cultural musical expression: recently I was in Santa Fe, and hanging out with a friend of my ex-wife who is an Indian living in one of the nearby pueblos and very immersed in his own culture. He sang a few songs for us in the indigenous language. I found them very moving and expressive, but when I asked him what they were about, I was surprised. Songs that I would have thought expressed sadness, he said were about things like celebrating the fall harvest. Songs that seemed to sound joyous to me, he said were sad.

What this means, I don't know. But it certainly argues against a universal musical language.

By Ted Chabasinski (not verified) on 11 Aug 2009 #permalink

Ted -

Songwriting is, of course, completely different from what the volunteers were doing in this study, which was scoring. In songwriting, the subjective mood of the song doesn't have to have anything to do with the actual subject matter of the lyrics. If you didn't speak the language, would you think that

-"I Want You Back" is a regretful plea to a lost lover, or a party song?
-"Tramp The Dirt Down" is a sad song about lost love, or a rage-filled political indictment?
-"Semi-Charmed Life" describes a fun romp, or a desperate attempt to escape from reality?

We could go on and on, but as a professional songwriter, one of the first rules I follow is that the mood and feel of the music does not at all have to match the subject matter of the lyrics. Judging from everything from tribal songs to Latin American folk music to Motown, I'd say that this is a commonly followed practice, and that if you took the lyrics out of these songs, most people in the culture in which the song originated wouldn't be able to tell you the correct meaning either.

How can you possibly measure "tempo" when you only play one note? Where on earth did graph number 4 come from? It is totally meaningless.

By Philip Potter (not verified) on 11 Aug 2009 #permalink

In that case, how do you get a single value for articulation, intensity, or tempo? You can get a very agitated effect by starting long, soft, and slow, and gradually getting faster, louder, and more staccato. I still don't understand those graphs.

By Philip Potter (not verified) on 12 Aug 2009 #permalink

Philip: I was a little confused by that as well. I'm pretty sure they're averages. You're right -- a single value doesn't capture them. But still, interesting to see that there's little difference in the average value between non-musicians and musicians.

You are right; there is still value here. I do wonder, however, whether the lack of difference arises because the non-musicians are innately musical or because the musicians have had their wings clipped -- i.e., they don't have the freedom of expression they normally do.

I think I've got grumpy about this and am trying to see it in a bad light though. *goes and lies down for a bit*

By Philip Potter (not verified) on 12 Aug 2009 #permalink

I'm assuming that articulation and intensity are synonymous with the MIDI values of note duration and velocity. Articulation would refer to the length of the note, intensity to the volume. Length of note, velocity, and tempo can all be measured very easily as separate data points.
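The commenter's point can be sketched in a few lines: given a stream of MIDI-style events, each note's duration and velocity fall out as separate data points. The event format here is an illustrative assumption, not the study's actual logging format:

```python
# Sketch of recovering per-note data points from a MIDI-style event
# stream. The (time, kind, velocity) event format is an illustrative
# assumption; real MIDI carries note-on/note-off status bytes.

def extract_notes(events):
    """Pair note-on/note-off events into (onset, duration, velocity)."""
    notes, pending = [], None
    for time, kind, velocity in events:
        if kind == "on":
            pending = (time, velocity)  # remember onset and strike force
        elif kind == "off" and pending is not None:
            onset, vel = pending
            notes.append((onset, time - onset, vel))  # duration = off - on
            pending = None
    return notes

# Two notes: a loud, short one, then a softer, longer one.
events = [(0.0, "on", 90), (0.2, "off", 0),
          (0.5, "on", 60), (0.9, "off", 0)]
print(extract_notes(events))
```

Tracking a single pending note suffices here because a one-note task is necessarily monophonic.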

You don't get an agitated effect by starting legato and slow and then ramping things up - you get an effect of escalating agitation. For pure agitation, you'd start at the staccato, ff, presto. The subtleties of craft that come in shifting mood, delaying gratification, and subverting the fulfillment of expectations are where I believe musicians and non-musicians would differ, but these are not subtleties that the researchers were looking for. They were looking for how we convey simple emotions.

Almost 40 years ago, I was a teaching assistant for an undergrad course -- the music quarter of a Music, Art, Philosophy trio -- designed for non-majors. This lecture hall class required all students by the end of the quarter to produce a composition (done in groups). As one of the exercises, we had the teams create mini-pieces with only a single note (in any octave). By holding pitch constant, we were attempting to show/liberate the other components of the music. I think most of them got it. And those with some formal musical background were not necessarily any better at it than the others.

I agree entirely - I would love to see this study expanded to include many cultures and societies from around the world. This would be extremely useful information - I teach in a music school and am always keeping an eye out for this type of research. Thank you!

By lesetoiles (not verified) on 22 Aug 2009 #permalink

I actually agree with Philip -- it seems quite likely to me that the lack of difference is primarily due to musical "wing-clipping." Trained musicians don't typically study how to make a single note sound "slashing" or "resolute" or "sweet," not just because there's no "right" way to do this, but also because it's already pretty obvious how this should be done. In fact, music teachers frequently USE descriptive words like "slashing" and "resolute" to help students understand how they're supposed to be playing certain passages -- that is to say, it is assumed from the outset that the student will understand what is meant by those words and how to interpret them musically.

Musical training typically falls into two rather broad categories -- performance and theory/composition. Theory is usually almost entirely focused on the relationships of various pitches to each other -- polyphonic voice-leading, chordal movement, etc. -- with the occasional detour into topics such as rhythmic hierarchy and instrumentation. Such knowledge would obviously be useless when faced with the task of playing a single note on an electronic keyboard. Performance, meanwhile, requires both technical proficiency (which this study was intentionally designed to ignore) and musical expressiveness. The difference between the musicality learned by performance students and that tested in this study is that students must learn 1) how to remain musical when playing not just one note but droves of notes, and 2) how to play with many different articulations, dynamics, etc., sometimes at the same time (for instance, check out some of the solo piano literature of Ravel -- he has absurdly long slur/phrase markings that cross the staves and weave between contemporaneous phrases, three or more articulations and dynamics given for different voices played at the same time, and overall extremely dense textures, both in terms of sheer numbers of notes/chords and in terms of expressive markings). This test doesn't address any of these things.