Neural Bases of Hand-Eye Coordination: the "Parietal Reach Region" & the Intraparietal Sulcus

People often use the concept "hand-eye coordination" without appreciating its neural basis. Evidence collected by Andersen & colleagues over the past ten years indicates that different areas of parietal cortex are recruited to represent targets which require different effectors, all using a common eye-based coordinate system. Thus these areas are precisely those which contribute to "hand-eye coordination."

In a 1999 issue of Science, Batista, Buneo, Snyder & Andersen reported that 84% of neurons recorded from the "parietal reach region" of 3 macaques showed a peculiar pattern of sensitivity. The critical finding was that firing in these neurons was more correlated across situations where monkeys moved their arms in different physical directions but the same direction relative to gaze than across situations where monkeys moved their arms in the same physical direction but different directions relative to gaze. In other words, the neurons were specific to reaching movements yet insensitive to the starting position of the hand; their firing was modulated chiefly by reach direction relative to the direction of gaze. This region thus appears to transform an object's location from eye-centered coordinates (akin to those used in primary visual cortex) into the coordinates required for reaching.
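As a toy illustration of what "eye-centered coding" means here (this is an invented sketch, not the authors' actual analysis; the function name and 2-D vectors are made up for the example):

```python
# Toy model of gaze-relative (eye-centered) target coding.
# A target's eye-centered representation is its location relative to
# where the eyes are pointing, regardless of where the hand starts.

def eye_centered(target, gaze):
    """Target location expressed relative to the direction of gaze (2-D)."""
    return (target[0] - gaze[0], target[1] - gaze[1])

# Two physically different reach targets...
rep_a = eye_centered(target=(10, 0), gaze=(5, 0))   # target 5 units right of gaze
rep_b = eye_centered(target=(0, 0), gaze=(-5, 0))   # also 5 units right of gaze

# ...yield the same eye-centered representation, so a purely
# gaze-relative neuron would fire similarly in both situations.
assert rep_a == rep_b == (5, 0)
```

On this sketch, the two "different physical direction, same gaze-relative direction" conditions collapse onto a single representation, which is why firing correlates across them.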

Supposing that this area represents a kind of "module" for transforming "gaze direction data" to "reach direction data," one can make a strong prediction: these cells should be very sensitive to eye movements (saccades) which intervene between the appearance of the target and a reach. Out of 34 neurons tested for this capacity, all 34 in the parietal reach region showed strong modulation by intervening saccades.
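The modulation by intervening saccades follows naturally from an eye-centered code: if the eyes move, a remembered target's location relative to gaze must be updated. A minimal sketch of this "remapping" (again an invented illustration, not the recorded neurons' actual computation):

```python
def remap(target_eye, saccade):
    # After an eye movement, a remembered target's eye-centered
    # location shifts by the opposite of the saccade vector.
    return (target_eye[0] - saccade[0], target_eye[1] - saccade[1])

# A target initially 10 units right of fixation; the eyes then
# jump 6 units rightward before the reach begins.
assert remap((10, 0), (6, 0)) == (4, 0)
```

A cell holding an eye-centered plan would therefore have to change its firing whenever a saccade intervenes, which is exactly the strong modulation the authors report.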

The authors suggest that the nearby lateral intraparietal area, which is known to be involved in planning eye movements, may operate with similar representations of space, allowing for a computationally and metabolically efficient coordination of hand and eye movements.

Subsequent work from the same group has indicated that a nearby region - the dorsal aspect of Brodmann's area 5 - may represent hand location in eye-centered coordinates. The authors suggest that, computationally, activity from the two areas might simply be "subtracted" to arrive at an object's location in hand-centered coordinates, as would be required for reaching and as is observed in primary motor cortex.
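The proposed "subtraction" is just vector arithmetic: if both the target and the hand are coded relative to gaze, their difference is the reach vector from hand to target, and the gaze terms cancel. A hedged sketch (the function and coordinates are illustrative, not the authors' model):

```python
def hand_centered(target_eye, hand_eye):
    # Subtracting the hand's eye-centered location from the target's
    # yields the reach vector in hand-centered coordinates; any common
    # gaze-dependent offset cancels in the subtraction.
    return (target_eye[0] - hand_eye[0], target_eye[1] - hand_eye[1])

# Target 8 units right and 2 up from gaze; hand 3 right and 2 up.
# The required reach is 5 units rightward, independent of gaze.
assert hand_centered((8, 2), (3, 2)) == (5, 0)
```

This is why representing both quantities in the same (eye-centered) frame is computationally cheap: no explicit coordinate rotation is needed, only a difference.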

Similarly, the anterior intraparietal area is thought to be specific for grasping movements, and it too represents spatial information in eye-centered coordinates. An excellent review covers the possible interactions between - and anatomical proximity of - these areas, given their shared coordinate system.


Parietal is very cool and this subject has some zen aspects. For example, is your awareness of the location of your hand in eye-centered coordinates:

a) the location of your hand in eye-centered coordinates or
b) a prediction of the next location of your hand in eye-centered coordinates (e.g., a forward model)

Other areas raise a similar question. The activation properties of M1, for example, are tuned to the kinematics of the movement you are about to make, not the movement in progress. Which do you think you're aware of?

Churchland (2006). "Self-Representation in Nervous Systems"…

By Brian Mingus (not verified) on 25 Aug 2008 #permalink