Would you believe this brain?
Every few months, sometimes more often, someone tries to ramrod fMRI lie detection into the courtroom. Each time, it gets a little closer. Wired Science carries the latest alarming story:
A Brooklyn attorney hopes to break new ground this week when he offers a brain scan as evidence that a key witness in a civil trial is telling the truth, Wired.com has learned.
If the fMRI scan is admitted, it would be a legal first in the United States and could have major consequences for the future of neuroscience in court.
The lawyer, David Levin, wants to use that evidence to break a he-said/she-said stalemate in an employer-retaliation case. He’s representing Cynette Wilson, a woman who claims that after she complained to temp agency CoreStaff Services about sexual harassment at a job site, she no longer received good assignments. Another worker at CoreStaff claims he heard her supervisor say that she should not be placed on jobs because of her complaint. The supervisor denies that he said anything of the sort.
So, Levin had the coworker undergo an fMRI brain scan by the company Cephos, which claims to provide “independent, scientific validation that someone is telling the truth.”
Laboratory studies using fMRI, which measures blood-oxygen levels in the brain, have suggested that when someone lies, the brain sends more blood to the ventrolateral area of the prefrontal cortex. In a very small number of studies, researchers have identified lying in study subjects with accuracy ranging from 76 percent to over 90 percent. But some scientists and lawyers, like New York University neuroscientist Elizabeth Phelps, doubt those results can be applied outside the lab.
“The data in their studies don’t appear to be reliable enough to use in a court of law,” Phelps said. “There is just no reason to think that this is going to be a good measure of whether someone is telling the truth.”
Phelps, who’s one of the savvier, more careful imaging scientists around — though hardly an fMRI basher — almost certainly has it right here. One problem, as Phelps notes, is that we simply lack enough data to call this reliable lie detection.
Brooklyn Law School professor Ed Cheng, meanwhile, says that’s not quite the point:
Humans, [Cheng points out], are terrible lie detectors, and yet our legal system is based on allowing them to make those determinations. If slightly better than chance is the baseline, any improvement on that could be a reason to allow the evidence into court.
“The validation studies may have some problems,” he said. “But if we can help the jury make this decision even a little bit better, it’s hard to defend keeping this stuff out.”
A nice thought, but it misses something critical: As I noted in an earlier article on the overreach of forensic science, juries tend to be overly credulous about anything offered as forensic or scientific evidence. And other studies show that brain images add an extra layer of credulity. (On those, see Dave Munger and Jonah Lehrer.) So when an ‘expert’ shows a jury a bunch of brain images and says he’s certain the images show a person is lying (or not), the jury will give this evidence far more weight than it deserves.
Finally, bringing fMRI into the courtroom as a lie detector implies, per the rules of scientific evidence, that using fMRI as a lie detector enjoys “general acceptance” within the neuroimaging community. Anyone telling you that’s the case is … well, let’s just say they’re mistaken.