Monkey see, monkey control prosthetic arm with thoughts

The realm of science fiction has just taken a big stride towards the world of science fact, with the creation of a prosthetic arm that can be moved solely by thought. Two monkeys, using only electrodes implanted in their brains, were able to feed themselves with a robotic arm complete with working joints.

Bionic limbs have been fitted to people before, but they have always worked by connecting to nerve endings in the chest. This is the first time that a prosthetic has been placed under direct control of the relevant part of the brain.

The study, carried out by Meel Velliste from the University of Pittsburgh, is a massive leap forward in technology that lets the brain interface directly with machines. Previously, the best that people could do was to use a cap of electrodes to control the movement of an animated computer cursor in three-dimensional space. Velliste's work shows that signals from the brain can be used to operate a realistically jointed arm that properly interacts with objects in real time.

The applications of the technology are both significant and obvious. Amputees and paralysed people could be kitted out with realistic prosthetics that afford them the same level of control as their lost limbs.

The fake arm that Velliste created is certainly a good step towards realism, and could rotate freely about the shoulder and flex about the elbow. Only the hand was simplified; the 'wrist' was fused and the 'hand' consisted of two opposing 'fingers'. Despite this claw-like tip, the monkeys learned to use the arm successfully and there were even signs that they came to accept it as their own.

Monkey reach

Velliste implanted an array of 116 tiny electrodes into the part of each monkey's brain that controls movement - the motor cortex. A computer programme analysed the readouts from these electrodes in real time and converted them into signals that moved the arm. All the while, the animals' own arms were restrained, forcing them to rely on their surrogate limbs.

A deceptively simple programme analysed the monkeys' thoughts. Every one of the 116 electrodes was tuned to a particular direction in three-dimensional space. Every thirty milliseconds, the programme added up the directional preferences of every electrode currently firing, in order to work out where the monkey wanted the arm to end up. It then calculated a set of movements for the arm's joints to get it to the right position.
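For readers who want to see the gist in code, here's a minimal toy sketch of that summing step in Python - my own illustration, not the study's software, and the preferred directions and firing rates are entirely invented:

```python
import numpy as np

rng = np.random.default_rng(0)

n_electrodes = 116
# Hypothetical 'preferred directions': one unit vector per electrode.
preferred = rng.normal(size=(n_electrodes, 3))
preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)

def decode_direction(firing_rates):
    """Sum the preferred directions, weighted by how strongly each
    electrode is currently firing, to estimate the intended movement."""
    return firing_rates @ preferred  # a single 3D direction vector

# Every thirty milliseconds, read out the rates and nudge the arm along.
position = np.zeros(3)
dt = 0.03  # seconds
for _ in range(10):
    rates = rng.poisson(5.0, size=n_electrodes).astype(float)  # fake readout
    position += decode_direction(rates) * dt
```

The point of the toy is simply the weighted vector sum; the real system then has to convert that decoded direction into joint angles for the arm.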

The monkeys' task was to reach a piece of food placed at a random position in front of them and bring it to their mouths. At first, they made large movements to get the arm in roughly the right place, followed by small adjustments to home in on the target, be it the piece of food or their own mouths.

After training and two days of trials, the monkeys successfully grabbed and ate the food on 61% of their attempts. It's not a perfect score, but the task is considerably more difficult than anything attempted in previous experiments. The same research group, led by Andrew Schwartz, had previously only managed to get subjects to move a virtual cursor towards a target 80% of the time. In this study, the monkeys succeeded in the equivalent challenge - moving the arm in the general direction of the food - 98% of the time. Beyond that, they also had to home in on the prize, grasp it and bring it back to their mouths.

Monkey learn

Velliste believes that the secret to the arm's success lay in making it as natural as possible. For a start, the monkeys could freely move their heads and eyes without affecting the signals controlling the arm. They could also move the fake arm in real time. There was only about a seventh of a second of delay between a burst of brain activity and the corresponding movement; natural arms have similar delays between thought and deed. This nigh-instantaneous control was obvious during one trial, when the animal dropped the food and immediately stopped moving the arm.

This natural responsiveness made it easier for the monkeys to accept the arm as their own. They learned behaviours that had nothing to do with the task, like licking remaining food off the fingers or using the hand to push food into their mouths. They learned to move the arm in arcs to avoid knocking the food off the platform, while bringing it back in a straight line. They even learned that the food (marshmallows and grape halves) stuck to the fingers: while they initially opened the hand only when it was near their mouths, one of them figured out that it could open the hand well before then.

Behaviour like this is very promising, for it suggests that interfaces between brains and computers could be used to drive prosthetics in real-world situations. So far, the main flaw with the fake arm is that it isn't quite as fast as a real one. The monkeys took about 3-5 seconds to complete the task, roughly twice as long as they would take with their own arms. This may be due to the need for small corrective movements once the arm was in roughly the right place.

The researchers also saw that one of the monkeys made small movements with its restrained right hand. Obviously, amputees wouldn't have that luxury, but Velliste argues that it's unlikely that these movements made controlling the arm any easier. For a start, the electrodes were only implanted in the right half of the monkey's brain, which controls the left arm, not the right. And we know from other studies that you don't need to move in order to control a virtual prosthetic.

Reference: Velliste, M., Perel, S., Spalding, M.C., Whitford, A.S. & Schwartz, A.B. (2008). Cortical control of a prosthetic arm for self-feeding. Nature. DOI: 10.1038/nature06996

Images: from Nature

That last paragraph made me think about even more potentially intriguing possibilities: you mention that it might be slightly different in amputees because they don't have the existing arm that the monkey moved in its restraint, but who's to say that this technology would be limited to existing/missing appendages? Do you think the same prosthetic arm could potentially be applied to a dog, which doesn't have an arm in the first place? Or better yet, maybe we could hook up some prosthetic tentacles to people! I don't know if an animal's motor cortex is only capable of controlling the types of limbs that the particular animal has, but I can't imagine it would be impossible...

I know it's in the interest of good science, but how did these monkeys lose their arms in the first place?

By mike spear on 28 May 2008

Whoops - looked at it again; looks like the arm wasn't actually attached.

By mike spear on 28 May 2008

I love that my readers look at scientific discoveries and think about how they could be used for evil. Max, for example, appears to be intent on turning himself into Doctor Octopus...

Oh and Warren, you're thinking of penguins.

"...maybe we could hook up some prosthetic tentacles to people" - PZ would definitely go for that!

I'm wondering if the TV series "Bionic Monkey" is on the fall schedule....

I'm wondering if you could clarify a bit on the program they are using to interpret the brain signals. As far as I understand, every electrode that is put in has a different orientation in space, which is known to the researchers. Basically they can be thought of as a set of 116 vectors pointing in different directions in space. And then they are measuring whether the neurons around each electrode are firing. And then they sum over all the vectors corresponding to the electrodes that are firing, and move the arm in that direction? Is that right, or am I completely off?

This means that there is no "pattern-matching" going on, i.e. looking at how the neurons are firing and trying to figure out what the monkey wants - it's just that the monkey is somehow learning that firing these sets of neurons does what it wants to do? Which really is even more impressive to me, if that is the case.

"This is the first time that a prosthetic has been placed under direct control of the relevant part of the brain."

Wait a minute - what about the work done at Duke 5 years ago?

Learning to Control a Brain–Machine Interface for Reaching and Grasping by Primates
Carmena JM, Lebedev MA, Crist RE, O'Doherty JE, Santucci DM, et al.
PLoS Biology Vol. 1, No. 2, e42 doi:10.1371/journal.pbio.0000042

Reaching and grasping in primates depend on the coordination of neural activity in large frontoparietal ensembles. Here we demonstrate that primates can learn to reach and grasp virtual objects by controlling a robot arm through a closed-loop brain-machine interface (BMIc) that uses multiple mathematical models to extract several motor parameters (i.e., hand position, velocity, gripping force, and the EMGs of multiple arm muscles) from the electrical activity of frontoparietal neuronal ensembles. As single neurons typically contribute to the encoding of several motor parameters, we observed that high BMIc accuracy required recording from large neuronal ensembles. Continuous BMIc operation by monkeys led to significant improvements in both model predictions and behavioral performance. Using visual feedback, monkeys succeeded in producing robot reach-and-grasp movements even when their arms did not move. Learning to operate the BMIc was paralleled by functional reorganization in multiple cortical areas, suggesting that the dynamic properties of the BMIc were incorporated into motor and sensory cortical representations.

By Owlmirror on 29 May 2008

Good point, Owlmirror - some of the coverage of this story has tended to downplay previous research, such as that done by Carmena et al. in 2003 (though Velliste et al. do cite that study in this week's paper). Andy Schwartz's group has also published previously on this topic, about the manipulation of virtual objects.

What makes this study important is that it demonstrated, for the first time, the ability to interact with and accurately manipulate real objects. Previous BMIs had manipulated virtual objects and moved prosthetic limbs, but did not manipulate real objects. This study also demonstrates finer control than has been achieved before, and shows that fewer neurons needed to be monitored than many researchers had thought.

Perhaps it's a more incremental step than a revolutionary one, but it's a pretty important increment!

Owlmirror and Paul - thanks for pointing out the earlier research.

Coriolis - Here's the actual description of the algorithm from the paper (it was a bit difficult to 'translate'!)

"The population vector algorithm28 (PVA) used here was similar to algorithms used in some cursor-control experiments15, 21. It relies on the directional tuning of each unit, characterized by a single preferred direction in which the unit fires maximally. The real-time population vector is essentially a vector sum of the preferred directions of the units in the recorded population, weighted by the instantaneous firing rates of the units, and was taken here to represent four dimensionsvelocity of the endpoint in an arbitrary extrinsic three-dimensional cartesian coordinate frame, and aperture velocity between gripper fingers (fourth dimension). The endpoint velocity was integrated to obtain endpoint position, and converted to a joint-angular command position, for each of the robot's four degrees of freedom, using inverse kinematics."

Thanks Ed - I looked up the actual paper as well, although I'm still not quite getting it; it's a bit of a jump from physics.

Do you know what they mean by "unit" and "tuning" in the explanation you quoted? Does it mean "electrode" and "orientation of that electrode", and by "firing", whether or not the neuron connected to that electrode is firing? And then the monkey learns that if it fires that neuron, the arm will move in the direction associated with that electrode?

Or are they somehow directly figuring out which neurons in the cortex are inherently associated with movements in which direction (I find this pretty unlikely).

Basically I'm trying to understand whether they are providing the monkey's brain with a fixed system for moving the arm, which the monkey then learns how to use - or whether they are instead figuring out which firing pattern means movement in which direction, and adjusting their program to make the arm move that way. From what I understand, the first is the correct notion, but I'm not sure.