Developing Intelligence

We often assume that true understanding is conveyed through speech rather than gesture, but new research shows that “talking with your hands” can not only reveal information different from that in spoken language, but can also be more accurate and lead to better learning.

Goldin-Meadow and colleagues have previously shown that spontaneous gesture during speech contains a form of implicit knowledge – knowledge that cannot be verbally reported, but nonetheless affects performance. Similar phenomena are known as knowledge-action dissociations; for example, in Piagetian conservation tasks, children will fail to verbally recognize that two differently spaced sets of objects contain the same number of items, but will sometimes nonetheless point out their one-to-one correspondence in spontaneous, simultaneous gesturing. This might be seen as a cognitive counterpart to blindsight, in which cortically blind subjects nonetheless possess some “implicit” visual perception, revealed only when a ball is thrown at them or they are otherwise instinctively compelled to respond to a visual object.

Some previous work suggests that apparent knowledge-action dissociations indicate an “unstable” cognitive state in the midst of a transition from immature to more mature knowledge. For example, Broaders, Cook, Mitchell & Goldin-Meadow review evidence that children whose gestures mismatched their explicit answers in a Piagetian conservation task were more likely to benefit from subsequent instruction; likewise, previous work shows that conflicting information in gesture and speech predicts benefit from subsequent instruction in mathematics.

Other work by Goldin-Meadow and colleagues showed that gesturing teachers can be more effective than those who gesture less, but this suffered from an obvious alternative explanation: maybe better teachers naturally make more gestures, and so the causal element was not gesture itself, but rather another unmeasured quality of those teachers.

Now, new work by Broaders et al. goes further to demonstrate a causal relationship between learning and gesture: forcing children to gesture during speech can actually elicit knowledge-action dissociations and significantly improve subsequent learning!

To show this, Broaders et al. asked 106 3rd and 4th graders to solve six simple math problems and to explain their solutions (the baseline measurement). Then, children were split into three different groups to solve and explain another six math problems with slightly different instructions: one group was asked to gesture, another group was asked not to gesture, and a third group was given no instructions regarding gesture (a control group). The strategies generated by children in each group were classified into six types, three of which led to correct answers and three of which led to incorrect answers.

The results:

1) Across all three groups, children added more strategies to their repertoires in the second set of problems.

2) The new strategies were overwhelmingly those that would lead to a correct response.

3) Over 94% of these new strategies were produced uniquely in gesture.

4) Children who were told to gesture added significantly more strategies than the other two groups!

These groups didn’t differ at the baseline measurement or in their spoken strategies during the second set of six problems – revealing that the benefit of gesture remains at the level of implicit knowledge.

But when that implicit knowledge is addressed with subsequent instruction – as in Broaders et al.’s second study – gesturing children benefit more from that instruction on a subsequent paper-and-pencil post-test! Critically, this effect was mostly driven by the performance of children who were both told to gesture and who added strategies to their repertoire.

Intuition might suggest that information would be more consistent between gesture and speech when people are explicitly asked to gesture – but the opposite was found here, perhaps because forced gesture directs attention to previously unnoticed characteristics of the problems.

But as Broaders et al. note, gesture is not always helpful – previous work has shown that hand gestures can hurt performance in tasks where verbalized rules are a more efficient strategy (in one particular task, adults who gestured were worse at predicting how a particular gear would move in a larger configuration of gears – in contrast, the verbal rule is quite simple: for an odd number of gears, the first and last gear move in the same direction). It’s unclear which task characteristics render a task more or less amenable to verbal rules vs. spatial gestures.
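The gear rule above is a pure parity rule, which is why it beats mentally simulating each gear. A minimal sketch makes that explicit (the function name and representation are mine, not from the study):

```python
def same_direction(n_gears: int) -> bool:
    """Return True if the first and last gear in a chain of
    n_gears interlocking gears rotate in the same direction.

    Adjacent gears always turn in opposite directions, so the
    rotation direction alternates along the chain: the first and
    last gear match exactly when the gear count is odd.
    """
    if n_gears < 1:
        raise ValueError("need at least one gear")
    return n_gears % 2 == 1

# With 5 gears the directions run CW, CCW, CW, CCW, CW –
# the parity check replaces simulating that whole sequence.
```

The contrast with gesture is that gesturers appear to simulate the alternating sequence spatially, step by step, while the verbal rule collapses the problem to a single odd/even judgment.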

And there are other cases, undiscussed by the authors, where gesture can actually seem less sensitive to “hidden” knowledge, as in the Dimensional Change Card Sort (a card-sorting game involving a switch from one rule to another, in which children can often verbally report the correct rule but will nonetheless sort cards incorrectly). Evidence from similar tasks indicates that this apparent discrepancy between action and knowledge may be due to strength of working memory, but it will be important for future work to determine the role that gesture may play in this and similar cognitive control tasks.

Perhaps gesture is particularly helpful in the acquisition of rules, but unhelpful once those rules are fully acquired. A familiar example is the child who counts on his or her fingers; this initially beneficial strategy might impede later memorization. Mechanistically speaking, gestural speech may be influenced by weaker representations than vocalized speech, allowing for the expression of knowledge (i.e., activation of representations) that has not yet reached a threshold sufficient to guide overt goal-directed action or speech. (This claim is somewhat supported by arguments that speech evolved from gesture.) Then, this weakly expressed knowledge (i.e., these partially activated representations) can be further enhanced by instruction. This latter claim is supported by neural network learning algorithms such as Hebbian learning, in which representations that are not active tend to become dissociated from those that are active.
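The Hebbian intuition in that last sentence can be sketched in a few lines. This toy update (the function, learning rate, and decay term are illustrative assumptions, not anything from the paper) strengthens connections between co-active units while letting connections to inactive units fade:

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1, decay=0.05):
    """One step of a simple Hebbian rule with passive decay.

    weights : (n_post, n_pre) connection matrix
    pre, post : activation vectors with entries in [0, 1]

    Co-active pre/post pairs are strengthened via the outer
    product; all weights decay slightly, so connections involving
    units that are rarely active gradually weaken – a crude
    analogue of inactive representations dissociating from
    active ones.
    """
    return weights + lr * np.outer(post, pre) - decay * weights

# Two units; only the first is active during "instruction".
w = np.zeros((2, 2))
active = np.array([1.0, 0.0])
for _ in range(20):
    w = hebbian_update(w, active, active)
# The self-connection of the active unit grows toward lr/decay,
# while weights involving the inactive unit stay at zero.
```

On this reading, gesture's role would be to push weak representations above zero activation in the first place, so that instruction-driven Hebbian strengthening has something to act on.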

Broaders, S. C., Cook, S. W., Mitchell, Z., & Goldin-Meadow, S. (2007). Making children gesture brings out implicit knowledge and leads to learning. Journal of Experimental Psychology: General, 136(4), 539–550. DOI: 10.1037/0096-3445.136.4.539


  1. #1 Craig
    February 5, 2008

    Is the gesturing representative of visual or kinesthetic communication?

  2. #2 CHCH
    February 5, 2008

    interesting question… You could only test that AFAIK by administering an anaesthetic (so kinesthetic sense is knocked out) or by blindfolding subjects while they gesture (so it’s not visual feedback). but it could work just as well under both, if it’s an influence due to the motor planning.

    This approach is obviously not recommended for use with kids 🙂

  3. #3 Carlie
    February 6, 2008

    Interesting! So what implications might this have for online learning, which so many colleges are steamrolling towards?

  4. #4 Al Fin
    February 8, 2008

    fMRI combined with MEG might resolve the source of gesturing. Intentional vs. automatic gesturing. Bad actors and singers force gestures that the audience sees as unnatural.

    I suspect you’d have to use sophisticated mathematical techniques and clever design even with the best imaging to tease out the timing.
