Cognitive Daily

What is your mind doing when you think about something? For decades, the prevailing wisdom was that when you imagine, say, the scent of a flower or your lover’s perfume, your mind is doing something different from when you actually smell those things. The metaphor was a computer: The hardware for sensing things was distinct from the software for thinking about things.

More recent evidence suggests that the way we understand concepts relies on the sensorimotor system. When you think of the sound of a dripping faucet, the same parts of your brain are activated as when you are actually hearing a faucet dripping. (Computer geeks should see how the computer metaphor breaks down: it’s as if searching a database of images required the server to access its video card.)

But if conceptual thinking requires the sensorimotor system, then thinking about concepts should have the same limitations as our senses. For example, in 2000, Charles Spence, Michael Nicholls, and Jon Driver found that reaction times to signals were slower when the modality changed (say, from touch to hearing) than when it stayed the same (for example, a visual signal followed by another visual signal).

Diane Pecher, René Zeelenberg, and Lawrence W. Barsalou designed an experiment to see if thinking about different modalities showed the same reaction-time differences. Volunteers were shown a series of simple statements and asked to indicate whether the statements were true or false. The statements all followed the same pattern: OBJECT can be PROPERTY. For example:

BLENDER can be LOUD
TOAST can be WARM
MARBLE can be COOL
BUTTERMILK can be SQUEAKING

Participants rated 300 statements. Pecher’s team was interested specifically in cases where the modality of the property changed. In the list above, Blender-Loud involves an auditory property, but Toast-Warm involves a touch property, so the modality changes. The next transition, to Marble-Cool, is a case where the modality does not change. Buttermilk-Squeaking is a decoy, as were most items in the test, so that participants didn’t catch on to the real goal of the experiment. Here are the results:

[Figure: mean reaction times for same-modality versus different-modality trials]

Even though participants were engaged in a language task, reaction time was significantly longer when the property being verified came from a different sensory modality than the property on the previous trial.
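The switch/repeat logic of the design can be sketched in a few lines of code. This is only an illustration: the modality labels and trial list below are made-up stand-ins, not the actual stimulus set from Pecher et al.

```python
# Illustrative sketch of classifying consecutive trials as a modality
# "switch" or "repeat". The modality lexicon and trials are assumptions
# for the example, not the authors' materials.

MODALITY = {
    "LOUD": "audition",
    "WARM": "touch",
    "COOL": "touch",
}

trials = [
    ("BLENDER", "LOUD"),
    ("TOAST", "WARM"),
    ("MARBLE", "COOL"),
]

def classify_transitions(trials):
    """Label each trial after the first by whether its property's
    modality differs from the previous trial's (switch) or not (repeat)."""
    labels = []
    for prev, curr in zip(trials, trials[1:]):
        same = MODALITY[prev[1]] == MODALITY[curr[1]]
        labels.append("repeat" if same else "switch")
    return labels

print(classify_transitions(trials))  # Blender→Toast switches; Toast→Marble repeats
```

The finding, in these terms, is that "switch" trials carry a reaction-time cost even though the task itself is purely verbal.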

This appears to be compelling evidence that our thought process relies on the sensorimotor system, but the team conducted a second experiment to eliminate an alternate explanation. Perhaps same-modality pairs are verified faster merely because words from the same modality are more closely associated linguistically than other words. In the second experiment, the team selected pairs that were very closely related: for example, the words spotless and clean. When used in the form “SHEET can be SPOTLESS” and “AIR can be CLEAN,” these words aren’t tied to any specific modality—this is the “related word” condition. A pair such as “SHEET can be SPOTLESS” and “MEAL can be CHEAP” is an example of “unrelated words.” This type of word pair was inserted into a new experiment that also included same-modality and different-modality pairs. Here are the results:

[Figure: mean reaction times for related, unrelated, same-modality, and different-modality word pairs]

There was no difference between related and unrelated words, but once again, a significant difference between same-modality and different-modality words was found. Pecher et al. argue that these experiments offer compelling evidence that the way we process concepts is not independent of other systems of the brain; it appears, by contrast, that conceptualization requires the use of the sensorimotor system. Unlike computers, whose highly specialized hardware often performs only a single task, the mind appears to make use of sensory systems not only for sensing, but also for imagining.

Pecher, D., Zeelenberg, R., & Barsalou, L.W. (2003). Verifying different-modality properties for concepts produces switching costs. Psychological Science, 14(2), 119-124.

Comments

  1. #1 William Bell
    December 30, 2005

People who work in human services occupations are trained to be empathetic rather than sympathetic. They are being asked to make intentional use of their imaginings of emotional states.

    Would being asked to be empathetic of an emotional state that one had never experienced be like being unsighted and being asked to envisage (say) a Saharan landscape?

  2. #2 The Working Network
    January 3, 2006

    Back to work…

I hope your holidays were great. Mine worked just like they’re supposed to—I’m jazzed, pumped, ready…

  3. #3 Scott Reynen
    January 3, 2006

The computer=brain analogy you are debunking seems based on a few misunderstandings of computers. First, video cards are generally used for output (showing), not input (seeing), so a server would never get information from a video card. Nor would a server typically get data from a video camera directly. Such data is usually passed through a socket, an operating-system interface that buffers incoming data. So a computer actually *does* access current and recorded visual data in basically the same way, from memory. And it does that because most computers *are not* specialized. A computer could hardly be more general, building everything from a series of simple on/off signals. I would even speculate that the brain is actually *more* specialized than a computer. But I don’t know a lot about brains.

  4. #4 Dave Munger
    January 3, 2006

    Scott—The monitor/video camera distinction is well-taken. I was trying (rather crudely, it turns out), to suggest that database software is distinct from image processing software.

    Up until recently, the most commonly accepted theories about the way the brain worked envisioned a network of specialized systems—sensorimotor, language, emotion, memory, etc.—which, though connected, were independent. This experiment and other such work suggest that the memory system relies on the sensorimotor system at a most basic level. I imagine a very carefully designed computer could work the same way—a database using some of the same algorithms as Photoshop for calling up images, for example—but for now, these applications are not only distinct, they are created by separate teams of programmers.

    Of course, the brain can’t throw raw resources at a problem the way a computer can. We don’t store visual memories the same way Photoshop does, because, vast though the brain’s memory capacity is, storing images as bitmaps would quickly exhaust it. Instead, it seems that we use something like pointers to recreate the image using the visual system. Previously, theories speculated that memories were stored separately, in some kind of vast data warehouse that didn’t rely on the visual system at all. Does that clear things up?

  5. #5 Diane Kramer, Ph.D.
    January 5, 2006

Neuro-Linguistic Programming is a well-established attempt to construct a programming language of our experiences based on sensory modalities. By careful observation and calibration, it is possible to model human experience and translate that experience into teachable learning programs. For instance, an architect might look up to the right to access a creative idea for a building, look over to the left and talk to himself about the design, and then look down to the right to check out the feeling, which is often the decision point for yes and continue to the next step, or no and recycle. Recycle might include looking up left to access old pictures. While the above explains a general pattern, there are many variations, so this is not a simple canned program. NLPers develop the skills of observing language patterns, eye movements, body posture, and facial gestures to hypothesize an individual’s information processing strategy, and then check out their hypotheses with careful questions and observations. NLPers focus on learning the patterns of excellence of high-performance people. You can learn more by putting NLP Encyclopedia into Google.

  6. #6 Bob
    January 7, 2006

    For those who’d like to learn more about NLP, you may want to read the Wikipedia entry at http://en.wikipedia.org/wiki/Neuro-linguistic_programming.

Specifically, there’s a section entitled “Scientific analysis of NLP” that you may find interesting. Basically, NLP is not supported by scientific studies. Diane Kramer is a principal in a business which sells NLP as a solution and has a financial interest in it being taken seriously. Claiming that it is “well-established” raises the question: well established with whom? The scientific community? No. The fringe, perhaps.

  7. #7 Dave Munger
    January 7, 2006

    Thanks for the information, Bob. More to the point: don’t believe everything you read in a comment on a website. Check it out for yourself. We can’t be responsible for everything people say in comments here on Cognitive Daily, and though we do make an effort to respond to misinformation, we can’t be everywhere all the time.

  8. #8 Mihai
    January 10, 2006

    “searching a database of images required the server to access its video card”

    But we all know this is the case. In all the movies, when one searches a database for a bad guy, all the faces are displayed very fast on screen.

But joking aside, I don’t think anyone working daily with computers can take seriously the “mind like a computer” theory.

  9. #9 Paul W.
    May 7, 2006

    Aargh. As a computer scientist who’s studied cognition pretty seriously, it bugs me when people casually talk about how the mind “isn’t like a computer.”

    Sorry to quibble about that, but many people really think the brain “isn’t like a computer” in some deep, basic philosophical sense. (And they think “everybody knows” this “obvious” fact. Au contraire.)

    The brain literally is primarily a computer; it’s just a rather different kind of computer than the one you’re sitting in front of now.

    Even current (desktop and laptop) “computers” often use their hardware subsystems in ways similar to the brain’s, at a basic level; there are basic computer science reasons for various specializations and duplications of hardware, and for using the more general hardware for a variety of tasks.

For example, your laptop computer might have a fast number-crunching coprocessor that is used by both image-generating software and image-understanding software, but not used much or at all by various other kinds of software. This might be a separate DSP (digital signal processor) on the motherboard, or a vector processor embedded in the CPU chip itself. (Essentially all general-purpose commercial computers have that now, but it’s hidden inside the CPU chip. It’s not obvious to the user that a modern CPU includes a collection of semi-specialized hardware gadgets that may perform one higher-level task, or a few, or many—rather like brain circuits, some of which are more task-specific than others.)
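The sharing Paul describes can be illustrated with a toy sketch: one general-purpose numeric routine (standing in for a vector unit) is used by two otherwise unrelated tasks. Everything here is an invented illustration of the analogy, not a real API.

```python
# Toy analogy: a single shared "vector" routine serves both an
# image-understanding task and an image-generating task, much as a
# CPU's vector hardware is shared by different kinds of software.
# All names are illustrative assumptions.

def vmul(a, b):
    """The shared 'vector hardware': elementwise multiply."""
    return [x * y for x, y in zip(a, b)]

def brightness(pixels):
    """'Image understanding': weighted average intensity, via vmul."""
    n = len(pixels)
    return sum(vmul(pixels, [1.0 / n] * n))

def fade(pixels, factor):
    """'Image generation': dim every pixel by the same factor, via vmul."""
    return vmul(pixels, [factor] * len(pixels))

print(brightness([0.2, 0.4, 0.6]))  # roughly 0.4
print(fade([0.2, 0.4], 0.5))
```

Both tasks differ at the "software" level but funnel their arithmetic through the same underlying routine, which is the point of the hardware analogy.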

    The brain is rather different, of course, because it has a larger variety of computationally specialized circuits for fairly specific tasks. It’s not terribly surprising, though, if some of these are re-used for processes that are computationally very similar—such as high-level visual processing of either real or imagined images. (Others are pretty much duplicated for different tasks, so that they can perform computationally similar tasks simultaneously, and maybe with specialized tweaks, without having to time-share the slow neural hardware.)

    None of this has much to do with whether the brain “is a computer” in any deep sense; it’s all about which kind of computer architecture the brain has, and how the “software” is mapped onto that hardware.

    The details are mostly different, but the underlying principles are the same, because evolution had to solve many of the same kinds of architectural problems that human computer architects do.
