Does the Brain Care About the Truth?

A friend of mine recently asked me a simple question that I couldn't answer:

i want to know if there is a physiological explanation for why we have an easier time remembering things that we perceive to be true.

bad example: suppose that you believed that the earth was flat. then i took you out into space and showed you that it was round. you would not forget that. i'm having a hard time expressing this very simple thought: we are constantly replacing old beliefs with new ones that are in some sense "more true". you can intuitively distinguish between those thoughts that you have that you think are true, and those that you think are false. it's a feeling. my question is if there is a physiological explanation. a "true" marker that your brain can put on memories, or a filing cabinet in your brain that is reserved for "true facts".

i don't care whether any facts are ever actually true or not. that's not the question. the question is how your brain sifts out the things YOU THINK are true from the things you think are false.

I did a quick PubMed search for "truth" and "fMRI" and came back empty-handed. On the one hand, you might expect the brain to have a neural pathway dedicated to ideas that are perceived as statements of truth. Such a pathway might allow the cortex to privilege such truths over less trustworthy ideas. But the truth doesn't seem to have a distinct neural correlate. This strikes me as a little odd, considering that we have mental modules designed to detect lots of other things, like happy faces, acts of agency and cheaters. But I guess the brain wasn't designed with epistemology in mind. Being able to recognize the truth isn't that useful, at least from an evolutionary perspective.

But what about an even more elementary distinction, like being able to tell the difference between statements of fact (a proxy for truth) and matters of subjective opinion? Again, the brain doesn't seem to care. Your limbic system can instantly tell which statements come with an emotional component - and I guess opinions tend to be more emotional than uncontested facts - but that hardly counts as a "physiological marker of truth".

So does this mean that our evolved brain doesn't care about the truth? Is the mind just a pragmatic machine, only interested in what works? How else could experiments even look for the neural correlates of truth?

This seems like an empirical question even though truth might be subjective. I'm surprised there are no publications on this.

Do it the classic way: Tell people two 'facts' where one is 'true' and one is 'false' (but where the 'facts' are emotionally neutral: i.e., 'the ball is blue', 'the ball is green') and then set up a reaction time test (without the balls visible during the actual test phase) based on giving the _false_ answer and a comparison test where they have to give the _true_ answer.
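For whatever it's worth, here is a minimal sketch of that design in plain Python, using console input as a stand-in for a proper stimulus-presentation package. The two ball 'facts', the condition labels and the yes/no keys are all just assumptions for illustration; a real study would counterbalance the order of the true-answer and false-answer conditions.

```python
import random
import time

# Study phase: the two 'facts' the participant learns (made-up example stimuli).
facts = {
    "The ball is blue": True,    # the 'true' fact
    "The ball is green": False,  # the 'false' fact
}

def run_trial(statement, answer_truthfully):
    """Show a statement, collect a yes/no keypress, and time the response.

    answer_truthfully=True  -> respond according to what you learned
    answer_truthfully=False -> deliberately give the opposite answer
    """
    prompt = "truthfully" if answer_truthfully else "with the OPPOSITE answer"
    start = time.monotonic()
    response = input(f"Answer {prompt} -- '{statement}'? (y/n) ")
    reaction_time = time.monotonic() - start
    return response.strip().lower() == "y", reaction_time

# Test phase: the balls are not visible; only the statements are shown.
statements = list(facts)
results = []
for answer_truthfully in (True, False):   # the two comparison conditions
    random.shuffle(statements)
    for statement in statements:
        said_yes, rt = run_trial(statement, answer_truthfully)
        results.append((statement, answer_truthfully, said_yes, rt))

# Compare mean reaction times between the true-answer and false-answer conditions.
for statement, truthful, said_yes, rt in results:
    condition = "true-answer" if truthful else "false-answer"
    print(f"{condition:12s} {statement!r:25s} yes={said_yes} rt={rt:.3f}s")
```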

By Benjamin Franz on 08 Mar 2007

The mind does not record events, it models them. Even our memories are models (and subject to adjustment). There is an intrinsic sense that a model is 'right', even when it is wrong. A better model replaces a lesser model simply because it is either 'right' while the other is 'wrong', or it is 'righter' and the other is less so.

As a simple modern example, you go into a large building, up and down elevators, through mazes of corridors, and take the fire stairs to leave, expecting to come out facing the street. However, somewhere inside your sense of direction got turned around. Until you found otherwise, your model of which way was which seemed right. Now that it's corrected, the sense seems as right as the earlier one had, the one that somewhere went wrong.

Thinking in terms of modeling should make it clear that we don't carry around models we think are wrong, along with those we think are right. We have models, period. We discard the ones that blunder. We may have models for how wrong-thinking people operate, but we think those models are true, not false.

Well, that's the rational part of us.

Many of us have models that have proven wrong time and time again, yet we hang onto them for some reason other than utility as guides. The gambler who believes he can change his luck is running with a faulty model; a statistician can prove he's wrong, but the gambler will stick to his model because it serves some purpose for him other than money management.

Many people live out their lives guided by a sense of duty, always doing what they're supposed to do, regardless of the cost to themselves. (I was one of them for over 50 years.) If you always do what you're supposed to do, there will be no end of people supposing that you do things.

Think of grownups who will take out their savings in cash so a gypsy fortuneteller can bless the money. Think of people who repeatedly get stung by the pigeon drop, or who always fall prey to chain letters. This behavior makes sense only if they cling to wrong models because there is a payoff in there somewhere, even if we can't see it.

So we do not have a number of models in mind that accomplish the same thing, one ringing true and the others ringing false. We discard models we think are wrong, and what remains always seems right, intrinsically, even if some of them will later prove wrong.

Your limbic system can instantly tell which statements come with an emotional component...

Can you go into more detail (maybe some non-gated references)? That sounds really interesting.

An emotional component for whom?

Also, how can you "remember" something you perceive to be false? If someone asks me a question about something I did yesterday, I only have "memory" of true facts. I could answer them something false, but I'm not "remembering" a false fact (unless I'm schizophrenic).

I've been able to find a few articles on "deception detection" but none of those use fMRI (at least, not in the sense of the phrase that we're talking about here).

Is "deception detection" the same as "truth detection"? The human brain is relatively adept at detecting cheaters and liars, so why isn't it also good at detecting the truth?

As for the detection of emotional components... There's a huge body of literature on this. When you see a fearful face, or hear a scream, your amygdala automatically lights up as if you were frightened. Fear is contagious. So is happiness, as smiles induce unconscious acts of motor mimicry. When we see somebody smile, a faint smile also passes across our face. (Sometimes, you need electronic sensors to detect this invisible mimicry, but it's there.) Perhaps mirror neurons are responsible for this sort of automatic sympathy, but that remains to be proven.

If you are looking for a general introduction to this sort of research, I would recommend "Emotional Intelligence" and "Social Intelligence" by Daniel Goleman.

And yes, we are designed to always believe our memory, even if the memory isn't true. The brain is gullible, which helps explain why it's so easy to implant false memories.

Okay, Goleman. But is "EI" different from "EQ"?

But the way you describe fear responses in the amygdala sounds pretty simple. And a frightened scream is pretty unambiguous (there's a pretty high signal-to-noise ratio because humans don't go around howling unless they want to call attention to something).

"Truth detection" is limited by your available evidence, though. And I don't know why you say the brain is worse at detecting the truth, anyway. To take an example from the first chapter of The Logic of Science, "Suppose some dark night a policeman walks down a street, apparently deserted; but suddenly he hears a burglar alarm, looks across the street, and sees a jewelry store with a broken window. Then a gentleman wearing a mask comes crawling out through the broken window, carrying a bag which turns out to be full of expensive jewelry. The policeman doesn't hesitate at all in deciding that this gentleman is dishonest. But by what reasoning process does he arrive at this conclusion?"

Plausible reasoning is easy, and we do it all the time.
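For what it's worth, the "reasoning process" the quoted passage asks about can be sketched as a posterior-odds (Bayes' rule) calculation. A toy version in Python, with every probability invented purely for illustration:

```python
# The policeman's inference as a posterior-odds calculation.
# Every number here is made up solely to illustrate the shape of the argument.

prior_burglar = 1e-4              # P(this man is a burglar) before seeing anything
p_evidence_given_burglar = 0.9    # P(mask, broken window, bag of jewels | burglar)
p_evidence_given_innocent = 1e-6  # P(same scene | some innocent explanation)

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
prior_odds = prior_burglar / (1 - prior_burglar)
likelihood_ratio = p_evidence_given_burglar / p_evidence_given_innocent
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"P(dishonest | evidence) ~= {posterior_prob:.3f}")
# ~0.99: even a tiny prior is swamped by evidence that is vastly more probable
# under 'burglar' than under any innocent story, which is why the policeman
# "doesn't hesitate at all".
```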

I guess neuropsychological/neurophysiological studies of people with various kinds of delusion and false belief (alien hand syndrome, anosognosia, somatoparaphrenia, and the like) will shed some light on this issue, speaking of the brain...

This is in press at NeuroImage; unfortunately, I don't think it is possible to get even an abstract or a proper link without a journal subscription.

They found inferior parietal and precuneus activity for both true and false belief tasks, but no areas specific to true belief only.

Neural correlates of true and false belief reasoning
In Press, Accepted Manuscript, Available online 12 February 2007,
Monika Sommer, Katrin Döhnel, Beate Sodian, Jörg Meinhardt, Claudia Thoermer and Göran Hajak

There are a few relevant studies (fMRI and others).
Most relevant:
Hagoort, P. et al. (2004). Integration of word meaning and world knowledge in language comprehension. Science 304: 438-441. [prior work by van Berkum also relevant]

fMRI study on the illusory truth effect:
fMRI Evidence for the Role of Recollection in Suppressing Misattribution Errors: The Illusory Truth Effect (Mitchell et al., 2005). But it's really about source memory for whether a statement was said to be true or false.

Numerous fMRI studies of deductive reasoning (Goel and others) have repeatedly looked at whether judging a conclusion's validity (i.e., does it follow from X) differs in any brain region from the simple reading of the same statement.

Ferstl et al., 2005 (Emotional and Temporal Aspects of Situation Model Processing) examine coherence (i.e., consistency) between a given statement, and prior context (which is not "truth" in the long-term knowledge sense, but the local discourse sense).

ERP studies *have* looked at the comprehension of true and false statements, but that literature has task-demand problems (verification paradigms).

I'm 30 years outdated on this aside from casual reading, but I recall "intentionality" as a keyword. Something recent, also, about using a video in which some things moved purely according to physical law (inertia, expected force of gravity) and others moved in ways that did not. The subjects (I forget if they were pigeons, rats, or undergrads) did immediately detect the difference.

Also, don't all the studies being done for politics and other forms of marketing, about how convincing advertising is, relate to this question?

I'd love to see a study done on Nat'l Public Radio listeners when that syrupy-voiced man who does the Archer-Daniels-Midland commercials comes on, for instance. Or any other clearly (how is it clear, I wonder) PR/BS ad.

Anecdote:

Long, long ago I interned at a mental health facility. One of the psychiatrists told us that during the Nixon impeachment story, it was most interesting to watch how the then inpatients reacted to the TV news. She said that quite consistently, the depressives tended to believe what they heard, and the active schizophrenics ignored the words, watched the faces on the screen --- and whenever Nixon showed up, would say "he's lying, you can see it on his face."

The latter was told to us youngsters to make the point that we should never try to lie to patients --- that it made them worse, because it imposed a double bind on them; they could tell we were lying, and wanted to act according to what we said rather than what we were clearly communicating by all means other than words.

Much later, after Nixon had dumped most of the long-term mental hospital inmates on the streets and called it "community mental health," I was working at one of those facilities, still as a youngster. The Director called the staff together and told us that they were short on budget, so were creating an activity program --- get a bunch of the outpatients together on a weekend day, take them to the zoo or the beach, and call it activity therapy.

We did that. Of course the patients asked, did this mean they were getting worse because they needed another day of treatment. Of course the young "VISTA" volunteer assigned told them the truth. And the patients, who'd been reluctant to do this, decided the clinic needed their help -- got organized on their own, made sure everyone showed up, and made the program hugely successful.

When the Director learned that, he fired the VISTA volunteer. The patients weren't supposed to know.

Aside -- it was one of the most successful things the place did. I recall one trip -- with mostly young inner-city "staff/volunteer" people managing it -- we drove a couple of vanloads of mostly elderly, mostly quite crazy-acting patients out to a farm that one of the staff doctors owned. The vans pulled up. The patients got out.

They were next to a barbed wire fenced pasture. And they all went over to the fence and the horses came over, of course. And they started petting them, talking to them.
The 'staff' got nervous being around these big strange animals.

Then the patients pulled up the barbed wire (quite effectively) and crawled into the pasture with the horses. They'd all grown up on farms, they all knew exactly what to do. The 'staff' freaked out but couldn't do a thing about it, they were scared to go in there.

I only heard about it the following week -- when I asked some of the people how the trip had been and why all the supposedly crazy people who'd been on it seemed so happy and so clear about where they'd been and what they'd seen and wanted to go back.

Long, long ago.

By Hank Roberts on 10 Mar 2007

Interesting article. Truth may simply be the memory patterns that have a consistency reinforced by new information, and a lie or deception a conflict with the pattern. How this relates to cognitive dissonance would be interesting. It may be similar to how the brain has an unconscious image of the body that becomes conscious when perceiving pain.

By Dan Slaby on 27 Jul 2011