Ramachandran on Consciousness

Over at Seed, V.S. Ramachandran shares his thoughts on how science can solve consciousness. Color me unimpressed:

We know that awareness is not a property of the whole brain, so the problem can be reduced to, "What particular neural circuits are involved in consciousness? And what's so special about these circuits that they can explain consciousness?"

I suggest that a new set of brain structures evolved during hominid evolution, turning the output from more primitive sensory areas of the brain into what I call a "metarepresentation"... I believe the anatomical structures involved in creating this metarepresentation include the inferior parietal lobule, Wernicke's language comprehension area and the anterior cingulate cortex. Find out how these structures perform their job and we will have figured out what it means to be a conscious human being.

At first glance, this kind of approach seems obvious. Neuroscience is reductionist, and it plans to "solve" consciousness by finding its physical substrate. But I believe this method is hopelessly flawed, and for a childishly simple reason: self-consciousness, at least when felt from the inside, feels like more than the sum of its cells. Any explanation of our experience solely in terms of our neurons will never explain our experience, because we don't experience our neurons. To believe otherwise is to indulge in a simple category mistake.

If you read old science books on consciousness, they all use an obsolete phrase: bridging properties. Scientists used to assume that we would one day discover some completely magical neural phenomenon - the "bridge" - that allowed experience to arise from shuttling ions and squirts of neurotransmitter. Now, of course, we know better. In a telling shift of rhetoric, neuroscientists have stopped looking for bridging properties and have instead started looking for the "neuronal correlates of consciousness." What's the difference? Nothing less than the difference between the map of a place (its correlate) and the place itself. The fact is, we know just enough about our cortex to know that there is no wondrous answer awaiting us inside. All we are is several hundred billion neurons enclosed within a sphere of bone. There is no ghost in the machine, there is only the hum of the machinery. You are an illusion.

And yet, if we know anything, it's that we are not an illusion. Even if neuroscience discovers the neuronal correlates of consciousness one day - assuming they can even be found - the answer still won't be very interesting. It still won't explain how we exceed our cells, or how 40 Hz oscillations in the pre-frontal cortex create this, here, now. It is ironic, but true: the one reality science cannot reduce is the only reality we will ever know.

So what should neuroscience do? Scientists like Ramachandran don't give up, and for good reason. After all, there is always the slim chance that someone will figure out how a shudder of electricity becomes the word "becomes." But I wouldn't bet on it. In the meantime, neuroscience must be realistic about what it hopes to explain. It should stop talking about consciousness as if it's nothing but its cellular correlate. The most mysterious thing about the human brain is that the more we know about it, the deeper our own mystery becomes.

PS. But if neuroscience can't "solve" consciousness, then who can give us insight into our experience? Astute readers of this blog won't be surprised by my answer. As Noam Chomsky declared, "It is quite possible -- overwhelmingly probable, one might guess -- that we will always learn more about human life and personality from novels than from scientific psychology."


"We know that awareness is not a property of the whole brain..."

I don't think we know this at all.

By PhysioProf (not verified) on 06 Oct 2006 #permalink

So you are aware of the part of your brain that makes your heart beat? Or the part that controls perspiration? Tell us the secret -- we'll save millions in deodorant bills!

Well, we can hack into our own brains, or take drugs that paralyse certain bits, to check that fact.

On the other hand, I'm fairly convinced that the likes of Ramachandran are wasting their time. 'Awareness' is ill defined even as a concept. Philosophers have worked on it for ages, and the reality of the situation is that it is something that is simply not amenable to external measurement.

All that can be achieved through neuroscience is an idea of how to reproduce the behaviour associated with awareness, when philosophy tells us that there is really no empirical reason at all to believe that anyone other than oneself is conscious.

I think what PhysioProf was trying to say was that it's very difficult to isolate emergent phenomena (like consciousness) into discrete brain regions. To believe otherwise is to risk falling into the trap of the Cartesian Theater. Of course, this doesn't mean we have conscious access to all of our brain processes. (Even my amygdala doesn't obey my orders.) But it does mean that there's no clear dividing line between the brain regions responsible for awareness and those responsible for unconscious sensation, etc. All neural areas feed back onto each other; the brain is the universe's largest knot.

Any explanation of our experience solely in terms of our neurons will never explain our experience, because we don't experience our neurons. To believe otherwise is to indulge in a simple category mistake.

Compare: "Any explanation of heat in terms of molecular motion will never explain heat, because molecules are not hot. To believe otherwise is to indulge in a simple category mistake." This is obviously false.

The whole point of a "reduction" is to explain something in terms of something more fundamental, which does not involve the reduced concept. I see no reason to believe we cannot understand consciousness as a high-level, emergent property of systems of neurons, just as with heat and molecules.

I'm not critiquing reductionism in general, or advocating throwing out the molecular theory of heat. I'm only criticizing the application of neural reductionism to conscious experience. Simply stated, I don't believe it's possible to come up with a reductionist theory of experience that is capable of actually accounting for our experience. Sure, scientists can dismiss our consciousness as little more than an "epiphenomenon," or an elaborate illusion of 40 Hz oscillations in the PFC, but I hardly see what good this does. Are you really willing to say that nothing is lost when your conscious, subjective, first-person experience is described in reductionist terms? Are the two descriptions really equivalent? Compare that to the molecular theory of heat: no phenomena are lost when heat is reduced to molecular motion. All the peculiar properties of heat are still accounted for.
Donald Davidson said it best: "Mental characteristics are supervenient on physical characteristics. Supervenience of this kind does not entail reducibility through law or definition." Davidson's theory of mind, known as "anomalous monism," accepts the non-negotiable fact that the mind is the brain without believing that the mind is nothing but the brain. Imagine a painting that has been perfectly forged. While the two paintings are literally identical, at least from the perspective of their paint, they are also not at all identical. One is real, and one is an ersatz copy of that reality.

I've always regarded the new-mysterian style of philosophy, such as what you're promoting, as rather defeatist. As the classic saying goes, any sufficiently advanced technology is indistinguishable from magic. While we may currently be ignorant as to how we get from electrochemical signals to consciousness, I don't really see any reason why it shouldn't be possible to understand it.

I'm not saying it'll be an easy thing for anyone to understand. Hell, I don't pretend to completely understand how we get from the electrical on/off bits within this laptop to typing out this comment on your blog, but I know that someone out there knows how it all works.

Granted, going from electrochemical signals to consciousness is more complicated, but every day several new humans rise into conscious awareness from the knitting together of a clump of cells created by a DNA recipe. We may be a really long way off from completely understanding that construction process. However, one day we won't be. One day we'll know how that convoluted and complex mechanical process transforms into what we call consciousness.

I agree that a direct reduction of thoughts to neurons does not seem possible, but I think that one that uses functional/computational concepts as intermediate steps may be possible. That is: complex systems of neurons can have computational/functional properties that give them "meaning" (in the simple, non awareness-requiring sense that bits of a computer can have) and higher and more complex "intentional systems" can be built up from them, until we may end up with self-consciousness at the highest level. You may deduce that I am a bit of a fan of Dennett; this is more or less what he calls "homuncular functionalism". I obviously do not think Dennett's theory of consciousness is final, there is much to be done on both the neurological and the philosophical (and also the AI) sides; but I see no reason to believe that kind of program cannot be successful ultimatly. All the reasons given by mysterians and their like (zombies, Mary, Chinese room...) seem to me essentially question-begging.

Defeatist indeed... a stance that is informed by a lack of technical insight. The current state of experimental neuroscience--vClamp to 2-Photon to MRI--is stagnant with respect to systems processing. Perhaps patience is the best stance for the non-experimentalists interested in the "Hard Problem".

By Kellen Betts (not verified) on 07 Oct 2006 #permalink

"The current state of experimental neuroscience--vClamp to 2-Photon to MRI--is stagnant with respect to systems processing."

I am not as pessimistic as you. There are new approaches being applied for both measuring and perturbing neural function that operate at the systems level, but still make distinctions among the numerous functional subtypes of neurons--defined by their intrinsic biophysical properties and synaptic connectivity--present in any chunk of brain tissue. We are obviously just at the beginnings of this enterprise, but there is reason for hope that progress will be made.

This gets around what I consider to be the fatal flaw of fMRI: it cannot distinguish the metabolic activities of different cell types present in any chunk of brain tissue, no matter how small that chunk is.

By PhysioProf (not verified) on 07 Oct 2006 #permalink

"I think what PhysioProf was trying to say was that it's very difficult to isolate emergent phenomena (like consciousness) into discrete brain regions."

Yes, this was my point. I think it is highly likely that consciousness will turn out to be an emergent property of the entire brain, and will not be localizable to any particular brain region. Part of my reason for thinking this is that in all of the history of neuropsychological research on patients with localized brain lesions, none of them have ever given evidence of a lack of consciousness of their own existence while still displaying the capacity for other complex human behaviors, such as social interaction, language, problem solving, etc.

By PhysioProf (not verified) on 07 Oct 2006 #permalink

I think it is highly likely that consciousness will turn out to be an emergent property of the entire brain, and will not be localizable to any particular brain region. Part of my reason for thinking this is that in all of the history of neuropsychological research on patients with localized brain lesions, none of them have ever given evidence of a lack of consciousness of their own existence while still displaying the capacity for other complex human behaviors, such as social interaction, language, problem solving, etc.

I may not be understanding what PhysioProf wrote above...

There are no neurological syndromes I'm aware of in which a *complete* lack of consciousness is seen (although Cotard's delusion is a hard one to think about - if the patient thinks they are dead, clearly they are conscious of those thoughts about themselves...?...).

But unilateral neglect patients clearly show a lack of awareness of particular aspects of their conscious life, such as the existence of the left side of extrapersonal space. Anosognosia (often concomitant with neglect) is a relatively common deficit in which the patient denies they have any deficit with their hemiplegic arm, for example. What is this but an alteration, nay a reduction, in full consciousness, one that occurs in people who may have no other problems in the kinds of capacities PhysioProf mentions? Did I miss something?

I believe a reductionist, systems-and-circuits level approach will ultimately reveal many aspects of consciousness that we already read, write and think about. Furthermore, and this is crucial for me as a student of the field, there is no *necessary* loss of wonder and delight at the way the whole system works.

Dear Jonah and readers,

What a beautifully written little piece! I really think you captured the essence of the problem. It seemed a shame to me, however, that you stopped short of drawing what to me seems the obvious conclusion: that the question of consciousness is without meaning. You first write:

"There is no ghost in the machine, there is only the hum of the machinery. You are an illusion."

Then in the first line of the subsequent paragraph you seem to give up all the hard-won ground:

"And yet, if we know anything, it's that we are not an illusion."

What makes you say this? Devoid of any means to test this hypothesis, shouldn't you instead conclude that the question is meaningless? And with it all the philosophical baggage associated with questions about zombies and the experience of red, etc.? Should we not remain silent concerning that which we cannot, even in principle, test empirically?

Talk of emergent properties and synchronous binding and the like does nothing to illuminate the central issue, in my opinion. I think you did a nice job of summarizing what many neuroscientists, such as myself, actually believe. Let's just stick to answering the computational questions of information processing which we can answer and let the magic spark dwindle in the dark.

A live creature looking for the source of consciousness is akin to a shadow looking for its source. Science is probabilistic and can never reach an absolute truth invariant with time and place. Consciousness is such an invariant truth.

By ramaiah abburi (not verified) on 10 Nov 2009 #permalink