Mixing Memory

I Can’t Understand Your Accent, So Keep Talking

I have this friend from New York who, most of the time, speaks in a normal (that is to say, southern) accent that she’s acquired as a result of being surrounded for so long by people who speak the King’s English (’cause Elvis was a southerner). Occasionally, though, usually after she’s been talking to someone back home, she slips into her old Jamaica, Queens accent, and when she does, I spend the first thirty seconds or so just trying to figure out whether she’s speaking English, and I don’t even bother trying to understand the meaning of those strangely accented words she’s uttering. After that period of complete incomprehension, though, I seem to get used to her relapsed accent, and suddenly I can understand her perfectly well. Of course, by this time, I’ve missed enough of what she’s saying that I have no idea what she’s talking about, but at least the words now make sense.

I’d noticed this happen several times, but never really thought about it, partly because I’m not a psycholinguist, so that sort of thing doesn’t interest me enough to think that deeply about it, and partly because I figure everyone should speak with a southern accent, and if they don’t, it’s not my fault I can’t understand them. But earlier this week, I read a paper by Maye et al. titled “The Weckud Wetch of the Wast: Lexical Adaptation to a Novel Accent” (1), because the title sucked me in, and learned a bit about how I adapt to my friend’s crazy Queens accent. And I thought I’d share what I learned with you.

Maye et al. begin by citing a bunch of research demonstrating that we humans are sort of accent experts. We’re adept at adjusting to people’s accents, and we are able to pick out subtle features of various accents that distinguish people culturally, geographically, etc. at a pretty fine grain (that’s why I can tell the difference between someone from Alabama and Georgia, for example, and not just between someone from Georgia and Maine, just by hearing them talk).

How do we adapt so easily, and so quickly, to a wide variety of accents that make the same word sound completely different (Maye et al. use the example of “dead,” which a selection of American accents would pronounce “ded,” “dad,” and “dayed”)? Maye et al. propose that what people do when hearing an accent different from one they’ve been listening to (or generally listen to) is to remap vowels — most of the differences between accents take place in the vowel sounds, as the example of “dead” illustrates — onto different areas of the “vowel space.” If you’re used to hearing people pronounce the vowel sound in “dead” as a short a, like in “dad,” and suddenly hear someone from Nashville pronouncing it with a long a (as in “day-ed”), you just remap your expectations for that vowel sound onto another part of the vowel space. You can think of the vowel space as sort of like a map with all of the different speech sounds for vowels (or at least a particular vowel), and when we hear an accent, we just move the flag representing the vowel to the part of the map where that accent tends to keep that vowel.
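If the map-and-flag metaphor helps, here’s a toy sketch of it in Python. The (F1, F2) formant coordinates are rough, textbook-style illustrations I made up for the example, not measurements from the paper:

```python
# Toy "vowel space": each vowel category is a flag pinned at an (F1, F2)
# coordinate (formant frequencies in Hz). These numbers are invented for
# illustration, not taken from Maye et al.
standard = {
    "eh (dead)": (550, 1800),
    "ae (dad)":  (700, 1650),
}

def remap(vowel_space, vowel, new_coords):
    """Move one vowel's flag to a new spot on the map, leaving the rest alone."""
    updated = dict(vowel_space)
    updated[vowel] = new_coords
    return updated

# After hearing a Nashville speaker say "day-ed" for "dead", shift where
# we expect that vowel to live; the other flags stay put.
nashville = remap(standard, "eh (dead)", (450, 2000))
print(nashville["eh (dead)"])  # (450, 2000)
```

The point of the sketch is just that adaptation only moves the one flag; the rest of the listener’s vowel map is untouched, which is what Experiment 2 below ends up testing.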

To test this hypothesis, Maye et al. conducted a clever little experiment. They had participants come in for two experimental sessions on different days. On the first day, the participants listened to a 20-minute passage from The Wizard of Oz spoken in a “standard American English accent.” On the second day, they listened to the same passage, but with some of the vowel sounds changed to produce a different accent (both accents were produced using a speech synthesizer). If you want the details of the vowel changes, here they are (from p. 547):

The artificial accent was created by lowering front vowels in F1-F2 vowel space, such that the vowel /i/ was produced as [I], /I/ as [ε], /ε/ as [æ], and /æ/ as [a]. The diphthong /ei/ was produced as [εI], and /a/ (a low central vowel) was unaltered, resulting in a merger with /æ/.

In other words (if I understand their symbols correctly), from the first session to the second session, the long-e sound in “sheet” became the i sound in “hit,” the i sound in “hit” became the e sound in “wet,” that e sound was shifted to the a sound in “bat,” and the a sound in “bat” became the o sound in “not.”
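Since the accent is just a chain shift — each front vowel moving one step down — it can be laid out as a simple lookup table. Here’s a toy sketch, using made-up ARPAbet-style phoneme codes rather than the paper’s IPA notation or actual stimuli:

```python
# Chain shift for the artificial accent: every standard front vowel is
# remapped one step down in the vowel space. Phoneme codes are invented
# for illustration ("iy" = vowel in "sheet", "ih" = "hit", "eh" = "wet",
# "ae" = "bat", "aa" = "not").
ACCENT_SHIFT = {
    "iy": "ih",  # /i/ ("sheet") -> [I]  ("hit")
    "ih": "eh",  # /I/ ("hit")   -> [ε]  ("wet")
    "eh": "ae",  # /ε/ ("wet")   -> [æ]  ("bat")
    "ae": "aa",  # /æ/ ("bat")   -> [a]  ("not"), merging with /a/
}

def apply_accent(phonemes):
    """Render a word (a list of phoneme codes) in the shifted accent."""
    return [ACCENT_SHIFT.get(p, p) for p in phonemes]

# "witch" in the standard accent comes out sounding like "wetch":
print(apply_accent(["w", "ih", "ch"]))  # ['w', 'eh', 'ch']
```

Note that the table only touches front vowels; anything else (consonants, back vowels) passes through unchanged, which is exactly the specificity the second experiment probes.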

In both sessions, after hearing the passage, participants completed a lexical decision task. This is a commonly used task (I have participants doing one as I type this) in which participants hear or read a letter-string and have to decide, as fast and as accurately as they can, whether it’s a word. In the Maye et al. task, participants heard strings, some of which would, given the change of accents, sound like non-words in the first session but like words in the second. For example, participants might hear the string “wetch,” pronounced with the e sound in “bet,” which would be a non-word in the first session. However, after shifting the i sound in “bit” to the e sound in “bet” in the second session’s passage, if participants adapted to the accent, “wetch” would be interpreted as “witch,” and participants would indicate that it was a word.

And of course, since I’m writing about it, that’s what participants did. In the first session, participants indicated that strings like “wetch” were words 39% of the time (actual words are correctly indicated about 90% of the time), but in the second session, they said “wetch” was a word 59% of the time, indicating that they had adapted to the accent and carried their remapped vowel sounds over to the lexical decision task.

In a second experiment, they looked at how specific the adaptation was. For example, if we hear a shift from “dad” to “dayed” in the pronunciation of the word “dead,” in addition to remapping this vowel sound, do we also remap the vowel sound in “dope”? This time, participants heard the same two passages in two sessions, but instead of strings like “wetch,” which, based on the shift in vowel sounds between the two passages, would sound like a non-word in one session and a word in the second, they heard strings like “weech,” which involved a completely different vowel sound. This time, participants’ responses to “weech” and related strings were no different between the two sessions (64% vs. 69%, a non-significant difference). Assuming that this data doesn’t reflect a ceiling effect, which would mean that responses to “weech” couldn’t get much higher, and thus potential differences might be missed, this result suggests that our adaptation is accent-specific. That is, our remapping is specific to the particular vowel sound changes from one accent to another.

Now I know why, when she starts speaking, I can’t understand a thing my friend says in her New York accent, but after listening for a bit, I can understand her perfectly. At first, my vowels are just mapped to the wrong speech sounds, but as she speaks, my brain remaps them to the appropriate speech sounds (perhaps aided by my knowledge of Jamaica, Queens accents, e.g., that they’re totally whack), and suddenly she sounds like she’s speaking English again.

(1) Maye, J., Aslin, R.N., & Tanenhaus, M.K. (2008). The Weckud Wetch of the Wast: Lexical Adaptation to a Novel Accent. Cognitive Science, 32(3), 543-562.