Neuroethics and the Little White Lie

What if we lived in a world with no secrets? As the field of neuroscience matures, the need for a new brand of ethics--"neuroethics"--is becoming clear, as highlighted in this month's Nature.

Society would be a different place if all our lies, however trivial, were abandoned in favour of blunt honesty. In some areas, such as criminal prosecution, this might be advantageous. In others, where little white lies help life run smoothly, knowing all the facts might be uncomfortable. These thoughts are brought to the fore by the arrival of two US start-up companies, No Lie MRI and Cephos, which are about to offer functional magnetic resonance imaging (fMRI) brain scans in order to detect lies. The companies, which plan to launch their services later this year, say their goal is to help exonerate the innocent, and to replace the widely discredited polygraph machines used by US government agencies for screening their staff.

The mission of these two companies cannot be taken lightly, especially since many neuroscientists are downright skeptical of fMRI as a lie-detection tool. There is only the flimsiest of evidence that it can tell when someone is lying in the relaxed atmosphere of a lab test--let alone in high-pressure real-world scenarios.

Ethicists are troubled that fMRI may be used ineffectively to discern lies, but what's even more troubling is the prospect of it being used effectively.

Society would, for the first time, hold in its hands a reliable tool with which to finger deceit, and this could have a profound impact on individual privacy and human rights.

Which raises the question: do we have a right to keep our dirty little secrets, and to lie? The "privileged access" we have to our own thoughts is part of what makes us human and free. How might our thinking change if we knew someone else could discern it?

Last month, a group of scientists, lawyers, and ethicists decided to form the Neuroethics Society. This echoes the unrest the genetics community faced more than 30 years ago, when it came together to discuss the regulation of recombinant DNA, as well as the possibility that genetic information could be used as a tool of discrimination. But there is a key difference.

Whereas a genetic test can say something definitive about a particular genetic make-up, and therefore about predisposition to disease, for example, an fMRI scan is just an indirect measure of neural activity based on oxygenated blood flow. For now, neuroscientists have only the most basic grasp of what this says about how the brain processes information.

It is exciting and exhilarating to see science progress so far, so fast; the ethics community is desperately trying to keep up. Yet with this new-found power comes great responsibility for the applications we set this science to. This technology could be horribly abused.



I'm not sure I find the possible use of fMRI as interesting an ethical dilemma as you do. It seems to me that using fMRI for lie detection wouldn't be all that different in kind from, say, measuring galvanic skin response or respiratory rate with a polygraph. Perhaps more accurate, but not different in kind.

However, I agree with you that genetic screening might pose more of a concern, especially in light of history--for example, the way XYY males were considered predisposed to criminal activity during the mid-20th century. Moreover, given the faith that people often place in genetics as a predictor of behaviour (and a wide range of other things), sloppy science regarding genetics (which seems to be the kind most often cited by those in public authority) could result in discriminatory practices.

It's an interesting issue.

Here's where the problem arises, CK:

As with genetics, and many other forms of science, there exists a disconnect between a method's actual usefulness and accuracy and its perceived usefulness and accuracy in society. This is of serious concern when there are businesses that wish to exploit said method, and politicians who are in charge of many funding decisions as well as of setting the emotional tone in the public sphere.

It is a natural phenomenon, because American culture is by nature obsessed with new and exciting technology. This is especially true when it *may* give us some insight into the human mind. Many people (and scientists fall prey to this too) gloss over the failures of a thing in deference to its potential for good.

With fMRI, it is truly amazing that we can monitor blood flow to brain regions in real time. This is a major breakthrough. And it's easy to get caught up in the method's potential (I'm sure geneticists felt the same way when recombinant DNA was developed, and when the human genome was sequenced). The issue lies in interpreting the data the method provides and presenting a whole and accurate picture of what to expect.

The point? We don't know what blood flow to a region means, in most cases. Some actions/feelings/thoughts actually promote a DECREASE in blood flow, and individual variation is very high. So how do we interpret the data fairly? You can "fudge" or "gloss" the results when performing a general, exploratory study, but when a person's guilt or innocence is at stake, we have to own up to what we do not know. That seems to be the ethical thing, in my opinion.

Granted, the mistaken belief that fMRI can accurately identify deception when it can't (or when it's unclear whether it can) would likely be problematic, especially in light of the general faith in the progress of technology that many people have. However, current "lie detection" methods are inaccurate as well, perhaps even more so than fMRI might be (I admit I don't know much about fMRI or imaging technology; when I studied neuro, my interest was in development and electrophysiology).

Such being the case, I think if there's an ethical issue, it isn't one particular to fMRI but one applicable to the more general belief that accurate lie detection is possible, and to the reliance and faith that various organizations place in lie-detection technology.

On a bit of a tangent: there was a novel published by James Halperin way back in 1997, which I read (shock horror) when I was still in high school. It explores a near-future society in which a truth machine has been invented. It's been a while since I read it, but in the story the invention ultimately screws over the inventor. I remember it being a pretty absorbing read, though. It's called 'The Truth Machine' (Amazon).