No-Lie fMRI

This is disturbing stuff. According to the Stanford Center for Law and the Biosciences, No-Lie MRI has recently produced a report that's being offered as evidence in a California court.

The case is a child protection hearing being conducted in the juvenile court. In brief, and because the details of the case are sealed and of a sensitive nature, the issue is whether a minor has suffered sexual abuse at the hands of a custodial parent and should remain removed from the home. The parent has contracted with No Lie MRI and apparently undergone a brain scan.

The defense plans to claim the fMRI-based lie detection (or "truth verification") technology is accurate and generally accepted within the relevant scientific community in part by narrowly defining the relevant community as only those who research and develop fMRI-based lie detection. [Note: California follows its own version of the Frye test of admissibility, not the current federal test under Daubert.]

Limiting the "relevant community" to only those who research and develop fMRI based lie detection is without merit, if only because such a definition precludes effective or sufficient peer-review. Indeed, it is arguable such a narrowly-defined community has a strong incentive to exaggerate its claims of accuracy and overlook unanswered questions for financial gain if such techniques are "legally admissible."

I think we need to tread very, very carefully when it comes to incorporating fMRI data into the legal system. Brain scans can be incredibly useful, and have generated lots of really exciting research, but I worry about juries and judges subscribing to a false metaphor, which is that these massive magnets are accurate "windows" into the brain/mind/soul. (This is the "myth of transparency," which I've written about before.) It's important to remind ourselves that every fMRI image is a highly processed snapshot of blood flow, not some magic readout of our secret thoughts.
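
To make that "highly processed" point concrete, here is a minimal, purely illustrative sketch in Python of the kind of modeling that sits between the raw blood-flow signal and a colored blob. Nothing in it comes from a real scanner; the task timing, the crude response kernel, and the noise levels are all invented. The idea is simply that a voxel's noisy time course gets fit against a predicted response, and the blob's color ultimately encodes a statistic from that fit.

```python
# Illustrative sketch only: why a "lit up" voxel is a statistic, not a
# direct readout. Every number below is invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200                                        # time points (TRs)
task = np.zeros(n_scans)
task[20:40] = task[80:100] = task[140:160] = 1.0     # hypothetical task blocks

# Crude stand-in for a hemodynamic response: smear the task signal forward.
kernel = np.exp(-np.arange(15) / 4.0)
predicted = np.convolve(task, kernel)[:n_scans]
predicted /= predicted.max()

# Simulated voxel: a weak task-related signal buried in drift and noise.
drift = np.linspace(0, 2, n_scans)
voxel = 0.5 * predicted + drift + rng.normal(0, 1.0, n_scans)

# Fit a simple general linear model: voxel ~ predicted response + drift + mean.
X = np.column_stack([predicted, drift, np.ones(n_scans)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
residual = voxel - X @ beta
dof = n_scans - X.shape[1]
se = np.sqrt(residual @ residual / dof * np.linalg.inv(X.T @ X)[0, 0])
t_stat = beta[0] / se

print(f"estimated task effect: {beta[0]:.2f}, t = {t_stat:.2f}")
# A map would color this voxel according to something like t_stat, after
# further smoothing and thresholding -- a model-based summary, not a window.
```

The particular numbers don't matter; the chain of assumptions does. Stimulus model, response model, noise model, threshold: each step is a judgment call made long before anyone sees a pretty picture.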

And then there's the bias that's introduced when people are shown silhouettes of the skull, complete with splotches of primary color:

Deena Skolnick Weisberg, a researcher at Rutgers University, recently demonstrated how referencing brain scans can bias the evaluation of scientific explanations. When she gave neuroscience students and ordinary adults a few examples of obviously flawed scientific explanations, people were consistently able to find the flaws. However, when the same explanations were prefaced with the phrase "Brain scans indicate," both the students and the adults became much less critical.

In short, I'd want to see a lot more peer-reviewed work on fMRI and truth detection before I'd feel comfortable seeing brain scan data in court. Otherwise, I think it's too easy to be seduced and convinced by data that looks scientific (the Latinate anatomy! the cortical references! the expensive machines!) but might actually be shoddy pseudoscience.

I was under the impression that neuroimaging data is already admissible as evidence in American law courts.

Neuroimaging, like any other kind of evidence, is not "admissible" or "inadmissible" as a blanket statement. It depends on the purpose for which it is being offered. Structural or medical scans can definitely show gross lesions or certain types of injury or degeneration, which is relevant if that is what is at issue. The use of functional brain imaging, particularly fMRI, for mental state or lie detection purposes has not yet gained general acceptance in the scientific community because it is not yet reliable or valid for such purposes. As such, it should not yet be admissible in court.

By Emily Murphy (not verified) on 14 Mar 2009

I was referring specifically to data from companies like No Lie MRI. There is at least one other company in the U.S. that offers brain imaging as a lie-detection service to the legal profession, even though, as you say, the method is unreliable.

Not that we know of - this is the first case we've had wind of where No Lie MRI is offering their product as evidence. The other player in this market is Cephos, which, according to its website, expects its technology to be admissible, but I find their arguments unconvincing. Of course, it doesn't matter what I think - all that matters is what the judge hearing the case thinks.

By Emily Murphy (not verified) on 14 Mar 2009

aha! tin-foil hat time!

:(

When I was teaching, I explained to my students that I believed each of us has a unique set of filters that our life experiences have formed. Each of these sets of filters specifically affects the way in which our brain is allowed to derive how we feel about something at any given time. Assuming that even a part of that theory is true, would this not undermine the exactitude of any attempt to read emotional responses in fMRIs given to different people?

This is terrible. We are not far from "1984" at last.
What will the world be like in a hundred years...

By David Burns (not verified) on 15 Mar 2009

I am very worried about the idea of using fMRI scans as evidence. It's definitely not ready for prime time, but comparing the technology to phrenology or palm-reading is just as nuts. The folks who study this are in the process of building a body of knowledge, and some general findings have already been uncovered; as they go along, many more will be added. The same could never be said of phrenology, palm-reading, or even tea leaves.

But some day neurotechnology for the identification of truth may pass the Daubert criteria for scientific admissibility, as once happened with the polygraph. And on that day even the judge may have to undergo a scanning session to check for possible breaches of fair procedure, etc.

Another candidate for the CSI-effect.

This is very scary stuff. fMRI data are generally quite noisy and variability between individuals can be huge. We're getting fantastic behavior-to-brain information by comparing groups of subjects, but we're far from being able to reliably go brain-to-behavior.

People in the public may blindly trust the phrase "brain scans indicate," while scientists in cellular and molecular neuroscience blindly distrust it. The truth is somewhere in the middle. For now, fMRI should stay in the hands of scientists and physicians, and far, far away from a court of law.
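
The group-versus-individual gap this last commenter describes is easy to illustrate with a toy simulation, again in Python, with every parameter invented rather than taken from any fMRI study: a small average difference swamped by between-subject variability can be statistically solid at the group level while leaving individual classification barely better than a coin flip.

```python
# Toy illustration: a reliable group difference does not imply reliable
# individual-level classification. All parameters below are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 500                          # subjects per group
effect = 0.3                     # small mean difference between groups
sigma = 1.0                      # large between-subject variability

group_a = rng.normal(0.0, sigma, n)          # e.g. a "truthful" condition
group_b = rng.normal(effect, sigma, n)       # e.g. a "deceptive" condition

# Group-level comparison: two-sample t statistic (equal variances assumed).
pooled = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
t_stat = (group_b.mean() - group_a.mean()) / (pooled * np.sqrt(2 / n))
print(f"group-level t = {t_stat:.2f}  (large samples make this 'significant')")

# Individual-level decision: label each subject by the midpoint threshold.
threshold = (group_a.mean() + group_b.mean()) / 2
correct = (group_a < threshold).sum() + (group_b >= threshold).sum()
accuracy = correct / (2 * n)
print(f"per-individual accuracy = {accuracy:.2%}  (barely above chance)")
```

With these made-up numbers the group comparison comes out clearly "significant," yet labeling any single person correctly happens only a little more than half the time, which is roughly the kind of gap that matters when the question in court is about one specific individual.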