Bird brains


I'm teaching a course in neurobiology this term, and it's strange how it warps my brain; suddenly I find myself reaching more and more for papers on the nervous system in my reading. It's not just about keeping up with the subjects I have to present in lectures (although there is that), but also about unconsciously gravitating toward the subject in my casual reading.

"Unconsciously"…which brings up the question of exactly what consciousness is. One of the papers I put on the pile on my desk was on exactly that subject: Evolution of the neural basis of consciousness: a bird-mammal comparison. I finally got to sit down and read it carefully this afternoon, and although it is an interesting paper and well worth the time, it doesn't come anywhere near answering the question implied in the title. It is a useful general review of neuroanatomical theories of consciousness—even if it left me feeling they are all full of crap—but in particular it's an interesting comparative look at avian brain organization.

The paper briefly reviews four classes of models that attempt to locate the centers of consciousness, or "consciousness generators", in the mammalian brain. Alas, they all seem to contradict one another, and are all driven by the authors' hypotheses to invent validation in the structure, rather than by actual data that might lead to useful hypotheses. I won't get into these, other than to paste in the summary—while I'm not at all dazzled by any of them, it is handy to have such tidy summaries.

Classification of consciousness—brain theories:

                  Bottom-up   Top-down
Sensory systems       A          B
Motor systems         C          D

A: A representative bottom-up theory proposed by Crick and Koch concerns the visual system and asserts that visual awareness is associated with activity in higher order visual areas that are in direct contact with prefrontal cortex. Although the "cortical system" covered by this theory includes the entire cerebral cortex, dorsal thalamus, claustrum (a nonlaminated structure deep to the cortex), and dorsal striatopallidal complex (caudate, putamen, globus pallidus—also referred to as basal ganglia and involved in motor control) in the forebrain as well as the motor control-related cerebellum and various brainstem projection systems, the generator neurons seem to be limited to temporal, parietal and prefrontal regions of the neocortex. Crick and Koch limit the generator structure further by assuming that activity in a subpopulation of neurons in cortical layer V, characterized by firing in burst patterns, is crucial. A prominent feature in the theory of Crick and Koch is the insistence that the primary visual cortex is not a generator structure.

B: The theory of Edelman and Tononi appears to be an example of a top-down sensory approach and focuses on the general features of consciousness—such as complexity and unity. It asserts that consciousness is associated with activity in the temporal and frontal associative and motor regions of the cortex: a "dynamic core", characterized by "re-entrant" interactions within limited portions of the CNS. Possibly, thalamic neurons are included in the dynamic core. Structures supporting the dynamic core seem to be the septal region, amygdala, hippocampus, dorsal thalamus, and hypothalamus within the forebrain, as well as the reticular activating system in the brainstem. Edelman and Tononi seem to assume a larger and more dynamic population of generator neurons than Crick and Koch. In comparison to the latter, the inclusion of limbic system structures—septal region, amygdala, and hippocampus—that are related to emotion and learning makes this theory more broadly based.

C: The theory suggested by Eccles may be said to represent a bottom-up approach, largely based on studies of the motor system. It takes consciousness to be associated with activity in cortical columns of the pre- or supplementary motor areas, formed by groups of generator pyramidal cells, organized in a specific way (dendrons). This means that Eccles's theory assumes the smallest and most well defined population of generator neurons of all the theories discussed here.

D: The theory formulated by Cotterill may be said to be an example of a motor system top-down approach. According to this, consciousness is associated with activity in a circuit consisting of sensory and motor, cortical and thalamic structures. Fast feedback from muscle activity, making muscle spindles (sensory receptors for the degree of stretch within muscles) critical for the generation of consciousness, is a central part of the theory. Fundamental to Cotterill's, as well as to other top-down theories, is a set of proposals implying that the neural basis for consciousness lies in the ability of an organism to know itself, or its proto-self, within its environment, by bodily movements and homeostatic functions. In all these top-down theories, several non-cortical regions seem to be included in the generator structure of consciousness. Cotterill includes the amygdala, hippocampus, dorsal thalamus, subthalamus, hypothalamus, and dorsal striato-pallidal complex (caudate, putamen, and globus pallidus) within the forebrain as well as multiple brainstem structures—the superior colliculus (involved in sensory and motor mapping of space), various structures involved with motor system regulation (cerebellum, substantia nigra, pontine nucleus, red nucleus, and inferior olive), and part of the autonomic nervous system (for control of visceral functions)—thus forming a larger generator structure than any other theory discussed here. Additional motor system top-down approaches include that of John, who includes the thalamus, limbic system, and dorsal striato-pallidal complex, and that of Parvizi and Damasio (24), who incorporate the hypothalamus, intralaminar and reticular nuclei of the dorsal thalamus, basal forebrain, and various cholinergic, glutamatergic, noradrenergic, dopaminergic and serotonergic projection systems that regulate the activity of the cortex.

There's also a diagram of where these theories place the "consciousness generators" (in dark gray) in the human brain. As you can see, they all agree it is not in the brainstem or cerebellum, but from there anything goes.

i-869f6da3abb99555c0b4c8b369d4fc43-consciousness_regions.gif
Diagrammatic representations of the distribution of generator neurons (shaded in gray) for the four principal groups of brain-consciousness theories outlined in the present essay as they would appear for the human brain. For each theory, the lateral aspect of the human brain is shown on the left. The entire extent of the frontal lobe is shaded in B at the rostral (to the right) end of the cerebral hemisphere; within it lie motor cortices and, rostral to them, the prefrontal cortex. The occipital lobe is unshaded in both A and B, lying at the caudal pole (to the left) of the hemisphere. The parietal lobe (dorsally) and temporal lobe (ventrally) lie between the occipital and frontal lobes. The medial aspect of the hemisphere is shown for each theory on the right. The hippocampal formation, olfactory (piriform) cortex, and amygdala all lie within the temporal lobe, deep to the neocortical areas shown here.

The more productive part of the paper is the comparison between mammals and birds. Here's the premise:

We posit that, since highly complex cognitive abilities are correlated with presumed consciousness in at least some mammals, including but not limited to humans, and since highly complex cognitive abilities are evinced by birds, it is likely that consciousness is also present in birds. Given that hypothesis, we then can compare the anatomical organization of mammalian and avian brains. We reason that if (1) complex cognition and consciousness are present in both mammals and birds and (2) consciousness has any neural basis, then birds should have at least some neural features in common with mammals to generate consciousness.

That sounds promising, but several problems come to mind. 1) If no one can even agree on the neural features responsible for consciousness within mammals, how is this comparison going to identify commonalities? 2) Birds and mammals are related lineages, so many brain similarities are going to be consequences of shared history, not function. Why not go all out and compare more distant lineages…say, cats and octopuses? 3) Since we don't even know what features of these brain areas are responsible for consciousness, we aren't going to be able to recognize whether different regions in birds and mammals have independently acquired whatever mysterious property is involved. While I think the comparative approach is terrific, in this case it's premature and targeted at the wrong level.

But hey, you've got to start somewhere, and this wasn't one of those exasperating papers that I toss into the wastebasket. It has a good summary of the evidence for avian intelligence, listing the various features they've exhibited.

  • Transitive inference. You can train pigeons (Pigeons! Birds that are archetypically stupid!) to recognize rank order, such as A<B and B<C, and they can use that information to recognize that A<C.
  • Coherence. Pigeons can respond variably to the ambiguity in figures like the Necker cube. That suggests that they have a mental model of what it should be, and their impression of its orientation can "flip".
  • Episodic memory. Scrub jays can recollect when, where, and what is stored in their food caches.
  • Piagetian object constancy. Doves, magpies, parrots, and ravens aren't fooled if an object is hidden—despite being out of sight, they have a mental model of its position.
  • Cognitive abilities. Gray parrots are singled out as particularly brilliant, with individuals able to count to 7, recognize the concept of "none", and able to understand the concepts of "same" and "different".
  • Tool use. Ravens, parrots, and New Caledonian crows have all been shown to be able to make and use simple tools.
  • Theory of mind. Scrub jays are able to attribute their own predilections to other members of the same species. Jays that rob caches are more likely to move their own caches than "honest" jays.
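The transitive-inference result is easy to make concrete: the pigeons behave as if they compute something like the transitive closure of the trained pairs. Here's a minimal sketch of that idea; the stimulus labels and the closure routine are my own illustration, not anything from the paper.

```python
def transitive_closure(pairs):
    """Return every ordering (x, y) derivable from the trained pairs
    by repeatedly chaining x < y and y < z into x < z."""
    known = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(known):
            for (c, d) in list(known):
                if b == c and (a, d) not in known:
                    known.add((a, d))  # chain a < b and b < d into a < d
                    changed = True
    return known

trained = [("A", "B"), ("B", "C")]   # the bird is trained only on these
inferred = transitive_closure(trained)
print(("A", "C") in inferred)        # prints: True — the untrained pair follows
```

The point of the sketch is only that the inferred ordering is not in the training data; whatever the pigeon actually does neurally, its behavior matches the closure.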

While I have my doubts about the neuroanatomical comparisons, the authors bring up one very general point. In mammals, the neocortex—the hugely enlarged part of our forebrains that is the first thing you see when you crack open our skulls—is central to higher level thinking. The comparable structure in birds is called the hyperpallium, or Wulst (German for "bulge"), and the posterolateral portion of it is an important visual center, comparable to the striate cortex, or visual areas of our brain. The fascinating thing is that the cellular organization of these two areas, with similar functions and perhaps similar roles in generating consciousness, is very different.

The mammalian visual cortex is characterized by a beautifully layered organization. Different inputs segregate to different layers, and pyramidal neurons extend long dendrites orthogonal to those layers, like long antennae reaching up and sampling incoming data streams, each of which is segregated spatially. The Wulst, on the other hand, is organized like other nuclei of the brain, and the primary neurons are star-shaped, reaching out in all directions to contact their inputs. They lack the rigid but elegantly arrayed spatial segregation seen in us mammals.

The pictures below don't really do it justice, but a good neurohistologist (sometimes even a barely adequate one, like me) can pick out 6 discrete layers in an appropriate slice of mammalian visual cortex. The bird Wulst is alien-looking. Huge, but weird.

i-7f1fd15a2bc5d0f0c48b0d707676e153-cortex_comparisons.jpg
Photomicrographs of the visual lemnopallium in the barn owl (Tyto alba), common ferret (Mustela putorius) and eastern tube-nosed megabat (Nyctimene robinsoni). In the owl, the photomicrograph depicts a coronal section stained for Nissl substance (localized in cell bodies) through the visual hyperpallium, the Wulst (pial surface to the top of the page). Note the thick outer layer termed the hyperpallium apicale (HA), the high cell density of the nucleus interstitialis hyperpallii apicalis (IHA), and the deepest portion termed the hyperpallium densocellulare (HD). This region of the Wulst forms what are termed pseudolayers, which are best thought of as flattened and stretched out nuclei rather than true layers as is evident in the cerebral cortex of the ferret and megabat pictured here. This architecture contrasts with the typical 6-layered cerebral cortex found in the primary visual cortex of mammals.

In their conclusion, the authors are a bit vague about the relevance of the earlier theories of consciousness to bird neuroanatomy. Parts fit, others don't, but since I think the theories are so nebulous that it's nearly impossible to draw any conclusions from that, they can't come down strongly one way or another. One suggestive observation is that bird brains are more similar to reptilian brains than mammalian brains are to stem amniotes'. If birds are conscious, that makes the assumption that the capacity for consciousness arose at that stem amniote-mammalian border suspect. The capacity, in the sense of having neural circuitry that could be adapted to generate consciousness, could have been present earlier.


Ann B. Butler, Paul R. Manger, B.I.B. Lindahl, and Peter Århem (2005) Evolution of the neural basis of consciousness: a bird-mammal comparison. BioEssays 27(9):923-936.


I would be more convinced that neurologists had a clear notion of what they mean by the term "consciousness" if they worked with a symptom that involved the loss of it and it alone. I think it is telling that "unconscious" typically means various degrees of gross unawareness or unresponsiveness, and in that sense, it is as applicable to birds as to people. "The parakeet returned to consciousness when it was removed from the mine."

The various cognitive functions this paper lists make perfect sense. Neurologists are quite adept at detailing those. Consider how many different kinds of aphasia there are. I quite believe that neurologists know what they are talking about when they describe aphasias due to loss of motor control of the tongue or larynx, versus aphasias due to loss of the ability to locate words or form sentences. And people who have suffered strokes or brain injury of different kinds can present with different kinds of aphasia separately.

But what is this elusive "consciousness" that people allegedly have, that other animals don't, and that strangely never is lost by itself? How would a person behave who had lost just this higher-level consciousness, but who retained all other cognitive faculties?

Maybe I'm too much the skeptic. ;-)

Maybe I'm too much the skeptic. ;-)

No, I think you are spot-on.

One suggestive observation is that bird brains are more similar to reptilian brains than mammalian brains are to stem amniotes'.

And what is a stem-amniote then? The diadectomorphs are extinct, we know the shape of the braincase of some of them from the inside, but clearly that's not what you mean...

Amniote phylogeny (fossils cut out for brevity, relative time runs left to right, vertical dimension meaningless):

   ,--Mammalia
--|
   |  ,--Testudines (turtles)
   `--|
      |  ,--Lepidosauria (tuatara, lizards, snakes...)
      `--|
         |  ,--Crocodylia
         `--|
            `--Aves (birds)

By David Marjanović (not verified) on 10 Jan 2007

Wintermute notes, quite appropriately:

Why, then you would be a philosophical zombie.

Interestingly, it's absolutely impossible for anyone to know that they aren't a philosophical zombie. You think you feel pain and experience qualia. Of course. But a philosophical zombie thinks the same thing. Perhaps what you experience is merely the tepid stuff of neurons firing, and isn't the Really True Qualia that philosophers write about. ;-)

Indeed, the philosophical zombie seems as unconvincing to me as all other arguments I have ever heard used to defend dualism, particularly given the evidence we have from neurology to show that certain abilities or qualities of the mind reside specifically in certain physical areas of the brain.

By valhar2000 (not verified) on 10 Jan 2007

I wanted to tell Russell this: it may be that "conciousness" is an emergent property of all other cognitive processes working together, in which case the question you ask would be meaningless.

By valhar2000 (not verified) on 10 Jan 2007

valhar2000 writes:

I wanted to tell Russell this: it may be that "conciousness" is an emergent property of all other cognitive processes working together, in which case the question you ask would be meaningless.

OK. But how do we know that "consciousness" isn't just a word we give to things that have all those other processes working together? I deal with systems that have emergent properties, e.g., complex performance characteristics that are difficult to predict from the simpler pieces. Despite the fact that the properties are emergent, they are measurable.

Is there a database of people who have lost various parts of their brains due to strokes and tumors, and the corresponding symptoms? Or even a half-decent review of such literature? It seems like this could at least eliminate Brain Theory C, which has a comparatively small brain area responsible for consciousness.

Though I think I am on board with the idea that, until we find a person who has all cognitive functions except consciousness, cognition and consciousness are at least firmly linked, and may be the exact same thing. Which would mean that birds and octopi are conscious, and insects might be.

Are we unwilling to admit consciousness in non-human animals because of the false animal/human dichotomy? Or is it out of reluctance to anthropomorphize?

By Jeff Campbell (not verified) on 10 Jan 2007

But how do we know that "consciousness" isn't just a word we give to things that have all those other processes working together?

That was pretty much what I was getting at; perhaps I was not too clear.

Is there a database of people who have lost various parts of their brains due to strokes and tumors, and the corresponding symptoms?

I can't give you that, but I can give this: http://www.ebonmusings.org/atheism/ghost.html

There are examples there of damage to specific but quite different areas of the brain that produce noticeable differences in the thoughts, emotions and personality of those afflicted.

You can check his sources too.

By valhar2000 (not verified) on 10 Jan 2007

Consciousness is a thorny problem, but it's a cop-out to argue that it's not really real. There's something going on in our subjective experience (the fact that we *have* subjective experience, for starters) which, while not easily definable, nonetheless has an objective component. This makes it a difficult issue to study, but I believe the problem can be cracked. That will occur when we reach the level of seeing the forest of brain operational principles through the trees of neuronal wiring.

Having clear definitions of a phenomenon is useful but not entirely necessary. Before Darwin, Watson and Crick, no one could really define exactly what "life" was. Now we can state it explicitly and formally, and can speculate about other types of life processes. The definitions of nebulous ideas like "life," "heat," or "consciousness" follow from their scientific understanding, not the other way around.

When someone does manage to put together a workable and convincing theory of consciousness it will have a transforming effect on our culture that will make evolution look like a minor fad. Look at the influence Freud had on our thinking about ourselves, and his ideas were mostly wrong.

Personally, I would think "consciousness" would be best defined as, "When we run tests, all bits seem to work, but some key bits don't coordinate that information into a response." And that *can* be tested. I would say that there may even be some branches of neuroscience that are pinpointing specific areas responsible for the high level awareness and response needed. And then there is the whole "I act, then I figure out why I did it" feature of a lot of what we do, which implies that consciousness isn't so much a thought process, in situations where quick responses are needed, but a means to explain *why* we reacted the way we did in a way that lets us learn and predict future results. It's all very muddy, but as with anything that isn't well understood, there are as many fools looking in the wrong places and imagining silly definitions of where something happens as there are people looking in the right place but missing the details. Most of the real progress seems to be happening in examining people that have had injuries and lost certain specific features of cognition. Most of the papers are being written by groups of people doing what amounts to neurologic phrenology. It's not a big surprise that a) no sane definition of "consciousness" exists in that environment and b) they all have their own questionable approaches to finding something in the wrong places.

Oh, and who says something like consciousness can't arise from "different" structures with analogous function? That they must be very similar is a stupid assumption in and of itself, save that it should be possible to find "similarities" in structure and thus function, even if the two are completely different on the surface. The problem is looking at the surface structure, instead of the function within the system, which is way harder to manage.

No that's wrong. Zombies don't think.

Interestingly, it's absolutely impossible for anyone to know that they aren't a philosophical zombie. You think you feel pain and experience qualia. Of course. But a philosophical zombie thinks the same thing. Perhaps what you experience is merely the tepid stuff of neurons firing, and isn't the Really True Qualia that philosophers write about. ;-)

Your feeling of pain may be an illusion but experiencing even an illusion means you cannot be a Zombie. What philosophers make a distinction between types of qualia such that some feelings aren't really qualia? Zombies don't think they have qualia. Zombies don't think at all. But if you ask them if they think then they might reply that they do - if they are either liars or else they got confused about what you meant, since they have nothing to relate the words to.

However assuming the Zombie is an honest Zombie and you can work to overcome the confusion of terms there's no reason at all that a Zombie wouldn't be able to deduce it was a Zombie. Deduction and analysis after all are not conscious activities.

By DavidByron (not verified) on 10 Jan 2007

It seems like birds might be especially problematical for a comparison - so many of these structures have been adapted for maximum function and minimum weight, a feature that wouldn't have been as critical in mammals (but maybe it was in early dinosaurs, leading to big critters with itty-bitty heads, or just big critters in general). They might have co-opted the same bits for some of these cognitive tasks, but with the flexibility of the brain (just look at all of the possibilities for what parts generate consciousness!), they very easily could do it a completely different way.

Its not a big surprise that a) no sane definition of "consciousness" exists

The English word has a few meanings but clearly in the context it means exactly one thing.

By DavidByron (not verified) on 10 Jan 2007

jack write:

There's something going on in our subjective experience (the fact that we *have* subjective experience, for starters)..

Yes, of course. The question is what will count as an explanation of that? Imagine, at some point in the future, neurologists think they have a full accounting of that, so that they can explain "the subjective experience of foo is the cognitive functions x, y, and z." Their accounting is backed by experiment, e.g., they (harmlessly and reversibly, this being the far future) disable some cognitive function, and the person involved says, "yes, I hear the music, and I know it's music, but I can't recognize it." They disable another function, and the person involved says, "yes, I hear Beethoven's 9th, but somehow it doesn't feel like I'm hearing it, it's as if it's separate from me and of no account." They reenable that function, and the person involved hears the music and has the full emotional response to it.

The philosophical question then will be: Is that actually an account of consciousness? Some people will say, of course, on the same grounds that we think we now have a full accounting of phlogiston as a property, heat, rather than a substance, and no longer require an accounting of the élan vital. Others will say "no," for the same reason that there are still those who criticize biologists for thinking they understand what constitutes life. There are, in short, different questions of consciousness. Some of those questions I think are valid and will be answered. Others I think are philosophical fluff that will become more and more difficult to take seriously as the first questions are answered.

David Byron:

No that's wrong. Zombies don't think. .. However assuming the Zombie is an honest Zombie and you can work to overcome the confusion of terms there's no reason at all that a Zombie wouldn't be able to deduce it was a Zombie. Deduction and analysis after all are not conscious activities.

I hope David forgives me for running together his first and last paragraphs, because I think that they demonstrate some of the telltale inconsistencies in how philosophical zombies are used. Let's begin with the word "think," which I purposely chose rather than "feel." I'm not sure why we shouldn't use "think" to describe what any being does who is capable of the level of analysis and introspection required in the description above, after the ellipsis, to both behave as an almost-convincing philosophical zombie and reach some conclusion about its own zombieness.

Second, there are a variety of philosophical zombies, but most of them share an important quality: they are behaviorally indistinguishable from humans. This quality is key to their purpose, which is as a thought device about whether or not that elusive consciousness is fully explainable by neurology. In particular, this means that a true philosophical zombie would not be able to reach the conclusion that it was a zombie, rather than a human. The zombie David presents fails that test in the end, and that's why I said it was only "almost convincing." The zombie may not be conscious, but it cannot tell that it isn't. For example, a zombie will be able to distinguish green from red, and otherwise report correctly on what it sees, at least as well as we would expect of a human in its position. That, of course, doesn't mean that it sees the redness of red, the same qualia that I see. And presumably that you see. But it sees and is capable of discussing what it sees, despite that.

What I pointed out is that "behaviorally indistinguishable" means, alas, that you cannot tell that you aren't a zombie. You might think you're not, the kind of analysis required for that not being what distinguishes a zombie. And sure, you can tell green from red. But even though you might think you're seeing colors the way real humans see colors, that thought, perhaps false, being required for you to behave the same with regard to conversations such as this, the fact is you don't. There's a je ne sais quoi to real human experience that you lack. There's no way to put our finger on it behaviorally, since otherwise you wouldn't be a true zombie, but something else. Maybe a robot. You lack it nonetheless. Sorry 'bout that. ;-)

Russell wrote:
The question is what will count as an explanation of that?

Oh, there will be doubters. Just as there are vested interests today who bash and rant and generally foam at the mouth about how evolution is an inaccurate or at least incomplete account of human origins, there will certainly be those who will want to marginalize any scientific theory of consciousness, no matter how complete or compelling. The subjective self is perhaps even more sacrosanct than the human form -- being closely related to the so-called soul -- so any attempt at a mechanical explanation will be met with powerful resistance.

Even in less controversial domains the scientific account encounters difficulty with people at large. There are many who said, after the devastating tsunami a few years ago, that science could not explain it. How are plate tectonics and hydrodynamics *NOT* a complete explanation of that terrible event? Somehow the mechanistic description doesn't satisfy. Humans want a human story, and narratives include motive. They feel there must be a WHY, and the inexorable movement of a restless Earth doesn't seem to be enough.

I don't think that's a problem in the long term. Just as with evolution, the scientific story is complete and fascinating in itself. Eventually people will wrap their minds around it and grasp that yes, this explains my dog and my wife and our gut bacteria just as fully as gravity explains why stuff falls. Remember that we don't know what a theory of consciousness will look like. We're like biologists in 1807, understanding that there is something about living matter that makes it different from the non-living chemicals it's made from, but we don't exactly know what. When our species finally has that "Aha" moment about consciousness it's going to rock our world to its foundations.

I'm with Russell on this. If a zombie is behaviorally indistinguishable from a human, then there's no way to tell them apart. They can't tell that they aren't a zombie, and neither can you. This line of reasoning is solipsistic, because you soon realize that everyone around you may be a philozombie, and there's no way you can disprove it.

Functionalism wins again! ^_^

By Xanthir, FCD (not verified) on 10 Jan 2007

Xanthir:

Functionalism wins again!

Or to put it another way, a difference that makes no difference is no difference. I wish I could determine a citation for this famous dictum. I've seen it credited to William James, but never with a reference to a work that contains it. I know Spock said it. But I suspect it was in circulation before Star Trek. ;-)

I think Leibniz might be the source? The Identity of Indistinguishables and the Indistinguishability of Identicals. (on a triple word score, too!)

By Stephen Wells (not verified) on 10 Jan 2007 #permalink

Yeah, but someone gets credit for the nifty phrase. "Identity of indistinguishables" just doesn't have the same catchiness.

We don't know what is necessary for a zombie to be behaviourally indistinguishable. I suspect that at least some sort of abstract consciousness will automatically arise when complex processing is done, when there is a detailed self-model perceived as acting in a thoroughly modelled world (the models being created by the brain of the zombie).

So I think a zombie would be able to infer, and communicate, that he is not experiencing human qualia (but maybe different qualia arising spontaneously from, let's say, complex continuous visual processing).

I think a zombie is something we can imagine, but which doesn't make sense when you try to work out the details (like totally independent free will). I know I have no proof, but this is my bet.

In theory you could construct a gigantic lookup table which would always deliver the right human behavioral response, without any actual processing being done. This would really be a philosophical zombie. But I'm not sure about the storage requirements for such a lookup table; maybe they are totally infeasible.
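As a toy illustration of why the storage requirements look hopeless (every number below is an assumption chosen for the sake of argument, not a measurement), consider a table that has to cover conversational *histories* rather than single inputs:

```python
# Toy estimate of the storage a behavioral lookup table would need.
# All figures are illustrative assumptions, not measurements.

VOCAB = 10_000       # assumed vocabulary size
UTTERANCE_LEN = 10   # assumed words per utterance

# Distinct single utterances the table must cover:
single_utterances = VOCAB ** UTTERANCE_LEN        # 10**40

# But behavior depends on history, not just the last input.
# Even restricting histories to a mere three utterances:
histories = single_utterances ** 3                # 10**120

ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80           # common rough estimate

# One table entry per history already dwarfs the atom count:
print(histories > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True
```

Even with these deliberately modest assumptions, a table indexed by short conversational histories needs more entries than there are atoms to store them in, which is the sense in which "totally infeasible" seems right.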

Discussions of 'consciousness' tend to remind me of discussions of 'altruism.' The various phenomena denoted by these words are surely real and not illusions, but that doesn't mean they are best seen as aspects of one phenomenon.

If we define consciousness as self-awareness relative to the human world--more or less the older Latinate meaning of the word--the autistic, the mentally retarded, and Einstein when he was forgetting to put on his pants, might be considered to have scarcely any 'consciousness' at all.

If, on the other hand, we define consciousness as awareness of our own thought processes, our untrousered Einstein, the mentally retarded, and particularly those with autism, might be considered super-conscious, living as they seem to do in a reverie of concentration and fascination with their own thoughts.

These two kinds of awareness may be no more related than a talent for mathematics and a talent for tennis.

We seem to have inherited the notion from the Victorians, wherever they got it, that sentience, self-awareness, cognition and identity are aspects of a single thing called 'consciousness,' as we seem to have inherited from them the idea that dutifulness, courage, mercy and generosity are part of a single thing called 'altruism.'

I'm not trying to diss the Victorians, but Nature may not entirely agree with them. Or us.

Jim:

We don't know what is necessary for a zombie to be behaviourally indistiguishable.

Well, we do know one thing. When you ask a zombie whether it experiences human qualia, it must answer "yes," or "as far as I can tell," or "what's a qualia," but not "no, of course not, I'm a zombie," since the latter would be behavior that distinguishes the zombie from humans. In other words, to answer like a human, a zombie either must not be able to tell it is a zombie, or must pursue a sophisticated and clever ruse. If a zombie can tell it is not human, and accurately communicates that, then it fails to be a zombie. It's something else. Maybe a robot. Or a replicant. But not a philosophical zombie.

To put it another way, "behaviorally indistinguishable" sets a very high bar. I think you're right that a philosophical zombie likely doesn't make sense.

This is an interesting, though far from novel, approach. Comparative approaches to the neuroscience of consciousness have been popular for a couple of decades, at least. But birds are an interesting choice. Unfortunately, at least as described, there seem to be several problems with this paper. First off, the examples of neural theories of consciousness the authors chose are, well, odd. Edelman and Tononi's theory (detailed in A Universe of Consciousness) is, as far as I can tell, almost universally considered bizarre. Crick and Koch's theory has an even worse reputation (I think it's pretty much just ignored by most people). I probably don't need to tell you about the oddities of Eccles' theory, and the philosophy behind it ("dualist interactionism," if you need to look it up). I don't know Cotterill's theory all that well (I've read a few of his papers, but it's been at least 4 years), but it's safe to say that his is not a widely influential one. I suppose the authors were looking for examples of each of their types of theories, but since they chose odd, less than respectable theories to do so, one has to wonder about the schema itself.

I will say, quickly, that at least in practice, the thalamocortical loop is almost universally treated as one of the more important brain areas correlated with higher-order consciousness, and such theories would generally fit into the "top-down" theories in this paper's schema.

However, what's missing from this paper, which would be absolutely necessary for such comparative work (especially with non-mammals), is a treatment of "levels" or "types" of consciousness. Antonio Damasio, whose theories aren't exactly widely accepted, but who at least is doing good work on the subject, has three levels, and increasingly, theorists are treating different levels as conceptually and neurally distinct. If you want to compare neuroanatomy between species with consciousness, you have to be clear about what levels of consciousness you're talking about, or the comparisons will proceed without sufficient information.

The zombie argument has always bothered me. My feeling is that zombies are impossible and we can only imagine zombies as thought experiments because we don't know what consciousness is or how the brain works.

I can easily imagine a 747 without a splectstrummer flying from New York to London, because I have no idea what a splectstrummer is or how a 747 works. An airplane engineer who knows these things might tell you that a 747 that flies (or rather, can convince you it's flying) without a splectstrummer is impossible.

The zombie people seem to claim that because we can imagine zombies as logical possibilities, it tells us something about the nature of consciousness. I think all it tells us is we don't know what we're talking about.

Zombies don't think.

Philozombies are fraught with problems. If we are going to conclude anything about the world from them (except their likely impossibility), they need to be possible, and we can't ascertain that. Neurological clones are supposedly conscious, so it seems most like a last attempt to keep the door open for philosophical arbitrariness akin to solipsism.

But I'm not sure about the storage requirements for such a lookup table, maybe they are totally infeasible.

A behavioral zombie would need a potentially infinite lookup table, but since we can only make a finite number of tests, it is possible that we would never detect the difference. And you would need a miniaturized technology to pull it off, possibly too demanding.

Anyway, the only possibility to generate the lookup table would be to simulate the real behavior first, and such a simulator would very likely be the conscious entity, and part of the total system.

Deduction and analysis after all are not conscious activities.

I'm not sure this implies anything regarding consciousness. Contingent tasks, perhaps even low level, seem to demand symbolic processing and analysis for reinforcement or error-driven learning and generalization. Now they have found that biologically inspired neural nets can display this behavior. ( http://develintel.blogspot.com/2006/10/generalization-and-symbolic-proc… )

If symbols and analysis are low level and pre-language, it could well be part of basic consciousness.

Antonio Damasio, whose theories aren't exactly widely accepted, but who at least is doing good work on the subject, has three levels, and increasingly, theorists are treating different levels as conceptually and neurally distinct.

Thank you for the explanation and the reference.

By Torbjörn Larsson (not verified) on 10 Jan 2007 #permalink

Remember that Crick & Koch, unlike many other "consciousness theorists," have defined consciousness in a way that can be uncontroversially measured (motion-induced blindness, p.14 in Quest for Consciousness).

In contrast, the measures of "intelligence" used here are sorely lacking and should not be conflated with measurements of consciousness: for example, would a robot be conscious if it passed the same tests used to establish that birds have transitive inference, episodic memory, piagetian object constancy, counting and discrimination ability, tool use, and theory of mind? I think these are very different issues.

I share PZ's confusion that we haven't seen any work on consciousness in octopi, who have a radically different neural architecture than humans/primates/mammals. Most people probably wouldn't recognize octopus brain as a brain at all!

Russell: Emergence sensu Bunge (and most scientists) is an ontological notion, so the interesting area of research is precisely determining how it occurs.

DavidByron: No, a zombie will not get confused, unless it is a zombie like JJC Smart or myself, who often try to show how silly (strong, dualistic) qualia are by feigning ignorance about what they are supposed to be.

miko: For what it is worth, a lot of philosophers now agree with you. Dennett and the Churchlands are the most well known, but there are many others, including me.

Maybe I'm too much the skeptic. ;-)

No, I think you are spot-on.

One suggestive observation is that bird brains are more similar to reptilian brains than mammalian brains are to stem amniotes'.

And what is a stem-amniote then? The diadectomorphs are extinct, we know the shape of the braincase of some of them from the inside, but clearly that's not what you mean...

Amniote phylogeny (fossils cut out for brevity, relative time runs left to right, vertical dimension meaningless):

  ,--Mammalia
--|
  |  ,--Testudines (turtles)
  `--|
     |  ,--Lepidosauria (tuatara, lizards, snakes...)
     `--|
        |  ,--Crocodylia
        `--|
           `--Aves (birds)

By David Marjanović (not verified) on 10 Jan 2007 #permalink
