Misleading "lie detector" science from ABC News

A report on ABC News suggests that using fMRI brain imaging to detect lies is as simple as comparing two "pictures" of brain activity:

[Image: the two side-by-side brain scans from the ABC News report (abc_mri.jpg)]

How do you tell which is the truthteller? It's easy, the article claims:

Who needs Pinocchio's nose to find a lie? The FMRI scan on the right detects a brain processing a false statement; the less colorful brain on the left corresponds to someone in the middle of a truthful statement.

According to the article,

When someone lies, the brain first stops itself from telling the truth, then generates the deception. When the brain is working hard at lying, more blood rushes to specific portions of the brain and that's what can be detected on the machine.

Accurate lie detection, it seems, could be conducted by a monkey with an fMRI. Why aren't the courts scrambling to adopt this new technology?

As several blogs have pointed out, things aren't really that simple.

Unless you're dead, an fMRI image never looks like the one on the left. An fMRI measures blood flow, and there's always blood flowing through the brain, even during sleep. As the Neurocritic points out, if we are to believe these images, "the only activity in the truth-telling brain on the left seems to be located mostly outside the cerebral cortex."

Reading between the lines of the ABC News article, I'm guessing that the technology really works by comparing two different fMRI images. You establish a baseline by asking the suspect to give truthful responses to questions like "Am I wearing a blue shirt?" or "Is it raining today?"

Then you move on to questions about the alleged crime ("Miss Scarlet, did you kill Professor Plum in the billiard room?"). It's only by comparing the baseline scans with the scans taken during the questions about the crime that you can generate images like the ones depicted above. Each pixel in the image (or "voxel," in three dimensions) is generated by subtracting the value in one scan from the corresponding value in the other. In fact, virtually every fMRI image you ever see is the result of a similar subtraction process, usually between a "resting state" and some other state that interests the researchers.
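To make the subtraction idea concrete, here's a toy sketch in Python. The array sizes and random values are invented stand-ins for real BOLD signal; this isn't anyone's actual analysis pipeline, just the arithmetic:

import numpy as np

# Two toy fMRI volumes, 64 x 64 x 30 voxels each, with random values
# standing in for blood-oxygen-level-dependent (BOLD) signal.
baseline_scan = np.random.rand(64, 64, 30)  # answering "Am I wearing a blue shirt?"
probe_scan = np.random.rand(64, 64, 30)     # answering "Did you kill Professor Plum?"

# The published "picture" is essentially this voxel-by-voxel difference.
difference_map = probe_scan - baseline_scan

# The difference is never cleanly zero: both scans are full of activity,
# so the raw subtraction is a noisy map, not a dark brain next to a
# colorful one.
print(difference_map.mean(), difference_map.std())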

But the brain processes responses to questions about shirt color differently than questions about murder. So subtracting two fMRI scans taken during responses to different questions, even when both responses are truthful, is never going to produce an image like the one on the left. fMRI lie detection involves a lot more than simple subtraction: the operator needs to know which areas of the brain are typically activated by a myriad of other processes. The problem comes with the decision to filter those out so that clear, "colorful" images can be created, which makes the business of lie detection seem simpler than it is.
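To get from that noisy difference map to the dramatic picture ABC showed, the operator has to pick a cutoff and throw away everything below it. A minimal illustration in the same spirit as the sketch above (the threshold value here is arbitrary, chosen only to show the effect):

import numpy as np

# Stand-in for a probe-minus-baseline difference map.
difference_map = np.random.randn(64, 64, 30)

# Keep only voxels whose difference exceeds the cutoff; everything else is
# painted "inactive". Where the cutoff sits is one of the many judgment
# calls the operator makes, and it largely determines how "colorful" the
# final image looks.
threshold = 2.0
activation_map = np.where(np.abs(difference_map) > threshold, difference_map, 0.0)

print(f"{np.count_nonzero(activation_map)} of {activation_map.size} voxels survive the cutoff")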

Who's to judge whether the fMRI operator made good decisions while analyzing the data? There are dozens of points where error or bias can be introduced. It's no surprise, then, that No Lie MRI can claim only 90 percent accuracy.

I wonder if, in the messy real world rather than a controlled lab environment, they can even be that accurate.

What is the current number of scans needed to get a "baseline" anyway? It used to be, I think, around 10. Then you would also need to ask the questions a number of times, I think, if you want a decent signal. I am having a hard time seeing this as effective in the short term. What if the recipient imagines lying? When the discrimination is that good, we'll just about be mind reading anyway.
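For what it's worth, the reason the questions get repeated is plain averaging: random noise shrinks roughly with the square root of the number of repetitions. A toy illustration (the signal and noise levels here are invented, not real scanner numbers):

import numpy as np

rng = np.random.default_rng(0)
true_signal = 1.0   # hypothetical response to one question
noise_sd = 5.0      # scanner and physiological noise, much larger than the signal

# Averaging more repetitions pulls the estimate toward the true value.
for n_repeats in (1, 10, 100):
    trials = true_signal + rng.normal(0, noise_sd, size=n_repeats)
    print(n_repeats, round(trials.mean(), 2))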

When someone lies, the brain first stops itself from telling the truth, then generates the deception.

But is that really always true? What if the deception is pre-prepared, and has been repeated so often that it's the first thing that comes to mind? That's what I do if I know I'm going to have to lie about something... And I'm not a particularly good liar. I have a friend who's an excellent liar, and he mostly doesn't seem to even realise he's doing it.

In the real world, what matters is that the need for numerous questions to establish a 'baseline' is a perfect opportunity to go on a fishing expedition, as frequently occurs with the 'polygraph'. If this fMRI 'no lie' technique is ever cheap enough for law enforcement, we'll see little to no restrictions on the questions used to establish a 'baseline'. (This of course will make any resulting 'baseline' useless, but once the victim admits to something incriminating that doesn't matter.)

It bothers me that self-styled experts will blindly assert that lying takes more thinking than telling the truth, and that therefore we can detect deception by detecting the increase in activity.

First off, we are always editing our speech, both ahead of and while speaking, even if we never give voice to our speech.

It takes real effort to get something just right, and little effort to get it all wrong.

For an idiot-simple demonstration, I will answer a question with a lie, and you are to try telling the truth. Our measure of deception will be the difficulty of answering.

Question: What exactly do we mean by deception?
My Lie: A giant bowl of green Jell-O.
Now, your truthful answer is what?

Secondly, try this thought experiment. We hook up to polygraphs two teams of people, the one being scientists defending evolution, the other being creationists attacking evolution. Which group will show the greater difficulty getting their story out?

BTW, in polygraphing, they begin with calibration, and once that's done all the rest of the questioning is outside the calibrated range -- which proves the calibration is just a gimmick.

By Watt de Fawke (not verified) on 04 Sep 2007 #permalink

This was shown earlier in a series of programmes on Channel 4 here in the UK. The subject is given a series of yes/no questions, and given ample time (weeks) to get to know the researcher, the experiment, and the questions which are to be asked.

The questions are asked several times while the subject is being scanned, and - crucially - the subject is asked to give BOTH sets of answers, both those which describe his version of events, and those which describe his accuser's. The scans then (if all goes well) fall into two clusters, one which looks more like a person lying, and one which looks more like someone telling the truth.

The method in practice seemed very good at homing in on weak points in the person's story, and (probably significant for any use in legal cases) each answer came with a confidence estimate of how well the method had worked for that question.

By Ian Kemmish (not verified) on 04 Sep 2007 #permalink
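A rough sketch of the two-cluster comparison Ian Kemmish describes above, in Python. Everything here, from the feature values to the distance-based confidence score, is invented for illustration; it is not the Channel 4 team's actual method:

import numpy as np

# Each repetition of a question yields a feature vector summarizing the scan
# (say, mean signal in a couple of regions of interest). Values are made up.
own_story = np.array([[1.0, 0.2], [0.9, 0.3], [1.1, 0.1]])       # subject's version of events
accusers_story = np.array([[0.4, 1.2], [0.5, 1.0], [0.3, 1.1]])  # accuser's version, recited by the subject

own_centre = own_story.mean(axis=0)
accuser_centre = accusers_story.mean(axis=0)

def classify(scan):
    """Assign a scan to the nearer cluster centre, with a crude confidence
    score based on the gap between the two distances."""
    d_own = np.linalg.norm(scan - own_centre)
    d_acc = np.linalg.norm(scan - accuser_centre)
    label = "own story" if d_own < d_acc else "accuser's story"
    confidence = abs(d_own - d_acc) / (d_own + d_acc)
    return label, confidence

print(classify(np.array([0.95, 0.25])))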

i'm surprised that they even reach 90% accuracy.

glad to see a post on this - there's been chatter on the dangers of lie detection using fMRI for years now, but scientists still haven't seemed to wade through the sex appeal to convincingly convey the inherent problems with using fMRI technology for such ends.

ugh.

From experience, and from some conversation I had with William Hirstein on confabulation, it is absolutely NOT the case that the brain "first stops itself from telling the truth". On the contrary, there is some very good evidence (from MRI studies, and additionally borne out by studies of stroke and traumatic brain injury victims) that the brain continuously generates "likely" stories about various phenomena or experiences, many of which explicitly serve the individual's interests. There is then some additional processing, mostly in the right hemisphere, which "fact-checks" these stories and weeds out the ones that aren't true. Damage that fact-checking and censoring process, and people both knowingly lie and unconsciously confabulate easily, fluently, and often.

By Luna_the_cat (not verified) on 06 Sep 2007 #permalink

There was a pretty interesting article on new brainscan-based "lie detectors" in a recent New Yorker (one I've already thrown out, so I don't know the date, but this summer) that looked at both the history of attempts at lie-detection and at the strengths and weaknesses (really, the preliminary state) of brain scanning. Interesting and pretty careful, I think. (Interestingly, the big market right now appears to be divorce cases and the like, not law-enforcement...)

Oh! here it is! yay, Internet! :)