I've been reading a number of papers on the "science" of consciousness - I'll let the quotes express my skepticism - and I thought this clever metaphor from Francis Crick and Christof Koch, in their influential 2003 Nature Neuroscience review, was striking. They compare the competition among our sensations to a democratic election, in which all those fleeting stimuli must fight for our limited attentional resources:
It may help to make a crude political analogy. The primaries and the early events in an election would correspond roughly to the preliminary unconscious processing. The winning coalition associated with an object or event would correspond to the winning party, which would remain in power for some time and would attempt to influence and control future events. 'Attention' would correspond to the efforts of journalists, pollsters and others to focus on certain issues rather than others, and thus attempt to bias the electorate in their favor. Perhaps those large pyramidal cells in cortical layer 5 that project to the superior colliculus and the thalamus (both involved in attention) would correspond to electoral polls. These progress from early, tentative polls to later, rather more accurate ones as the election approaches. It is unlikely that all this happens in the brain in a fixed time sequence. The brain may resemble more the British system, in which the time between one election and the next can be irregular.
It's a revealing analogy. While there certainly is an intense competition among our multiplicity of neural representations - the sensation that wins is what you perceive - the metaphor of voting presupposes a vote. It assumes that, at some point, those pyramidal cells or the PFC or some other nub of flesh will settle the argument; a winner will be picked. The point is that, although Crick and Koch set out to demolish the old ghost in the machine - the ghost is just a trick of matter - they can't escape the allure of imagining a "voter" somewhere in your head.
I'm certainly not arguing that such a metaphysical ghost exists. (What Gertrude Stein said about Oakland is also true of the cortex: "There is no there there.") Like Crick and Koch, I believe our head holds a raucous parliament of cells that endlessly debate what sensations and feelings should become conscious. These neurons are distributed all across the brain, and their firing unfolds over time. This means that we are not a place: we are a process. As the influential philosopher Daniel Dennett wrote, our mind is made up "of multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go." What we call reality is merely the final draft. (Of course, the very next moment requires a whole new manuscript.)
And yet, and yet... There is the problem of the election. If this blink of conscious perception is a vote, then where is the voter? We can disguise the mystery with euphemisms (top-down attention, executive control, etc.), but the mystery still exists, as mysterious as ever. We deny the ghost, but still rely on models, metaphors and analogies in which the ghost controls the machine.
In these models, the neurons are the voters, right? So the voter isn't missing. Maybe you mean the vote counter, or some other kind of election official?
There are plenty of examples in engineering where simple elements vote to produce an outcome that has some useful properties that none of them has (or could have) individually. The individual voters are simple local "machines", entirely transparent in the systems we build. Their vote-based synthesis is less simple, less transparent, but more robust.
The examples I'm most familiar with are in computer science, where voting is used to decide what time it is (in distributed clock algorithms, including internet time), what version to use (in distributed storage mechanisms), what transaction to commit (in distributed databases), etc.
Voting is also used to merge distributed, noisy sensor results into a conclusion that is reported as "how things are" -- something closer to Crick and Koch.
Limiting this to voting is too narrow -- actually it is usually statistical merging of results, which can produce the same results as a voting algorithm given the right background assumptions. The actual assumptions used depend on the specific character of the system and sometimes the history of the elements. Some elements may be found over time to predict the result more reliably, just as our reporters find "bellwether" counties that have always voted with the majority in past elections. (This is actually the wrong word since the bellwether sheep leads the others.)
But to your point, this sort of voting is pervasive in distributed systems -- in fact it seems to be necessary to build distributed systems that can act coherently in the presence of errors. It could easily be implemented using neurons. Voting is typically used to create a more reliable synthesis of unreliable reports.
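As a minimal sketch of that kind of synthesis (in Python, with clock readings and reliability scores invented purely for illustration, not taken from any real protocol), each source's vote can be weighted by its learned reliability -- plain majority voting is the special case where every weight is equal:

```python
from collections import defaultdict

def weighted_vote(reports, reliability):
    # Merge unreliable reports into one answer: each source's vote
    # counts in proportion to its learned reliability. Majority
    # voting is the special case where every weight is 1.0.
    tally = defaultdict(float)
    for source, value in reports:
        tally[value] += reliability[source]
    return max(tally, key=tally.get)

# Three noisy "clocks" report the time; clock 'c' has drifted badly.
reports = [("a", "12:00"), ("b", "12:00"), ("c", "11:47")]
print(weighted_vote(reports, reliability={"a": 0.9, "b": 0.8, "c": 0.3}))
# -> 12:00
```

No single reporter is trusted outright; the merged answer is more reliable than any individual source, which is the whole point of the scheme.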
In the most interesting cases the voting doesn't just synthesize reports, but also filters them by relevance to current expectations and especially by anything likely to change those expectations (i.e. surprising). This kind of voting can direct scarce processing and retrieval resources -- "attention".
In all these systems there's no agent at the "top" that "uses" the vote. (Such an agent would be a potential single point of failure, so it would actually be very bad to have one.) The voting doesn't generate reports to "someone" who then decides to direct the resources. The voting directs the resources, just like some kind of direct democracy could direct resources. Addictions, etc. can corrupt the direction of attention, just as social voting can be corrupted.
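To make the "no agent at the top" point concrete, here is a toy sketch (the channel names and surprise scores are invented): a fixed processing budget is divided across channels in proportion to how surprising each one's report is, so the allocation is the tally itself, with nobody reading the votes and then deciding:

```python
def allocate_attention(surprise, budget=100.0):
    # Split a fixed processing budget across channels in proportion
    # to how surprising each channel's report is. No central agent
    # "reads" the votes and then directs resources; the arithmetic
    # itself is the direction of resources.
    total = sum(surprise.values())
    return {channel: budget * s / total for channel, s in surprise.items()}

# A surprising visual event grabs most of the budget.
print(allocate_attention({"vision": 8.0, "hearing": 1.5, "touch": 0.5}))
# -> {'vision': 80.0, 'hearing': 15.0, 'touch': 5.0}
```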
In your question "If this blink of conscious perception is a vote, then where is the voter?" the word choice makes understanding harder: a "blink of conscious attention" isn't a vote cast by a single voter; it is an election (many local votes, integrated in a complicated way), electing a coalition of mutually reinforcing summaries to represent "how things are" and "what we should do next".
While there is work to be done, at this point we can reasonably guess that such a coalition (lasting for maybe 300 ms) just is a "blink of conscious attention".
How about a wholesome dose of "Who"? (to be found in "Who shoves whom around inside the careenium...", compliments of Douglas Hofstadter)
Your brain operates as a question asker and answerer at the same time. And it can still be working on questions that it raised within itself from birth. And still questioning all its earlier answers as well as questioning the basis of its earlier questions. So when sensory inputs give it something that it "senses" might have a bearing on these earlier decisions, which in some sense are always open questions, it pays a form of conscious attention.
And that attention comes in the form of a question as well. Each question becomes a vote for itself as first in the order of importance. And the brain also has a hierarchy of managerial processors who by consensus rule on that importance, with a constant stream of such rulings going on.
All of this is arbitrated by the metaphorical executive centered in the emotional part of the brain, which has from the beginning offered the rest of the mechanism a set of purposes for both raising questions and seeking out their answers.
And that executive decides continuously, with assistance from the more conscious input of its rational section, which of those purposes will best be served by further examination of tentative answers, taken in the order of their tentatively determined importance - or significance, if you will. That order is gauged, in the end, by how these earlier purposive efforts can be expected to help with the most pressing purpose of all: assessing the consequences of any actions under consideration for serving the organism's most immediate needs. That is the purpose of which we will be the most conscious.
Or something like that.
And yet, and yet... What tbell1 said.
This is not exactly a new idea, even for 2003. Take, for example, Marvin Minsky's "Society of Mind" in the early 70's. Or one could look to theories spread over the intervening years on emergent herd/flock/mob behaviors. As Jed says, there is no need to invoke any higher function, or even assume only the majority vote wins; the behavior is the weighted sum of many independent (and likely more than somewhat chaotic) actors in a loosely coupled system.
I like this post. In your book I've been a bit troubled by examples of minds that somehow learn to apply intellect and intuition in some serendipitous combination - without an explanation of whether the intuitive or the reasoning mind (or something else) is exerting the control, or of just how such control is established.
Maybe I'm just not seeing what you intended to say but posts like this - and especially the comments - might eventually help me find what I'm looking for.
Maybe the answer to my question is your last paragraph above.
When I read your descriptions of top-down attention and executive control in the book, I thought you implied that they were the result of conscious cognition rather than intuition. Perhaps I read that wrong. I'll be watching for future installments of the ghost hunt.
Any model built specifically to counter the "ghost in the machine" model is probably very little aside from that. Right now, the theory of consciousness doesn't need a grand model, and it doesn't need grand counter-models. It needs some modesty.
Which you've shown--I suppose we're on our way!
Perception and attention are really the easy part of "consciousness". Jed does a decent job in comment #2. Anyway, anyone who has played with complex adaptive systems should have a decent intuition for how these can work, even if the language to describe it isn't terribly good.
What is harder to explain is introspection: the sense of identity, continuity, and self. I'll go ahead and claim that this "executive" is only an illusion... but it is a really powerful illusion. A just-so evolutionary story for how it might have been created isn't too hard. Memories, feelings, and other derived 'qualia' (I hate that word) getting fed back into the system as perceptions themselves makes sense when evaluating current perceptions, and sets up the needed recursion. The perception of a 'self' would just be a conceptualization (a classification/grouping of correlated perceptions) at the next level of recursion. You could say there is a 'self' because you perceive yourself thinking about yourself. Almost tautological, but not quite, since there are non-self-referential perceptual systems (visual, auditory, etc.) at the bottom level and not just turtles all the way down.
Nobody seems to consider that consciousness involves a hierarchy of the purposes for which the brain needs its higher forms of consciousness to emerge. Perhaps that's because the rage for computer modeling has ignored the problem that computers don't rank their own purposes; we do that for them.
The heart and the gut are also large neural networks with their own consciousnesses.
The brain likes to "think" it's the only "thinker." Some of the things my brain figured out late in the game were things I "knew" in my heart or my gut.
I really appreciate this short reminder that although certain faculties of the mind can be described, modeled, and looked for in some "nub of flesh", our "consciousness", the concept of our attention, our "train of thought", is something that simply cannot be subjected to such reductionism.
I think the quotes, and thus the skepticism, belong on the word "consciousness" rather than "science." We know that the science exists, but this consciousness, the self, the ghost, etc. are just concepts. They are words for describing not cognitive processes, but the emergent properties of their interactions. To quote Antonio Damasio, it's just "The FEELING of what happens."
When describing consciousness with language, you are reaching the limits of language. Crick IS demolishing the ghost in the machine by suggesting it's not only an empirically testable phenomenon but one that is potentially quantitative, with (n) neurons making up a vote.
But I wonder how much his view is limited by the metaphor, one which is so obviously rooted in the culture he believes in? Is it really an issue of majority rules? Always?
I wonder if there's any utility in a preference ranking/instant runoff voting model. This might help explain how neurons are recruited to support outcomes that differ from their initial position. (I've created a visualization tool that illustrates consensus-finding processes in ranked choice elections, if anyone's interested.)
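For anyone who wants to see the mechanics, here is a toy instant-runoff election in Python (the candidates and ballots are invented and not tied to any neural model): when a candidate is eliminated, its supporters transfer to their next preference, which is the recruitment idea mentioned above:

```python
from collections import Counter

def instant_runoff(ballots):
    # Each ballot is a list of candidates in preference order.
    # Repeatedly eliminate the candidate with the fewest first-choice
    # votes until one candidate holds a strict majority.
    ballots = [list(b) for b in ballots]
    while True:
        firsts = Counter(b[0] for b in ballots if b)
        total = sum(firsts.values())
        leader, top = firsts.most_common(1)[0]
        if top * 2 > total:
            return leader
        loser = min(firsts, key=firsts.get)
        for b in ballots:      # the loser's supporters are "recruited"
            if loser in b:     # by their next-ranked choice
                b.remove(loser)

ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))  # B is eliminated first; C then wins 3-2
```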
J Collin @14 says: "But I wonder how much his view is limited by the metaphor, one which is so obviously rooted in the culture he believes in? Is it really an issue of majority rules? Always?"
Majority rules? Isn't that just a neural algorithm? Wouldn't any other neural algorithm still do away with freewill just as effectively (unless it was a factor)?
This "democracy" metaphor seems to rely heavily on a logics-based rendering of the mind. (I suppose most neurological philosophizing does this?) Something like: If switch "A" does this, then switch "B" will do that... but more complex than that.
My question: How is forgetting calculated into this logics-based narrative of neural activity? I don't see how the democracy metaphor accounts for the fact that, in elections both real and imagined, our ability to make a practical choice usually (always?) coincides with a certain kind of forgetting.
Was it Socrates who said that he didn't want an invention for remembering, but for forgetting? And some other guy said, a way of seeing is a way of not seeing... so a way of remembering is also a way of not remembering?
I'm sure the "'science' of consciousness" has a theory of forgetting, but does it merge this theory with its theory of decision making? If making a decision is likened to this vast machine of levers and switches in our mind/body, how do we account for systematic forgetting--which tends to dissolve clarity and cause accidents of purpose?
Ray in Seattle: Wouldn't any other neural algorithm still do away with freewill just as effectively (unless it was a factor)?
Depends a whole lot on what one means by "free will"...
Lots of interesting comments, representing just about all the current points of view. No responses from our host, unfortunately. I'd very much like to know what Jonah thinks about some of the points made, and more generally why he's dubious about the scientific study of consciousness (some commenters have explained their doubts, but I don't want to attribute those to him).
In this comment I'll just introduce another useful metaphor, and then respond to Travc. Maybe I'll have a chance to respond to other points later.
Pinker, in his response (PDF) to Fodor's book The Mind Doesn't Work That Way, describes consciousness as like the film of a soap bubble; its shape is determined by the pulls from each local patch (of sensory input, memory, emotion, or whatever), and the bubble sums these all up into a beautiful, globally optimal solution. As he points out, this kind of integration is described as distributed constraint solving in computer science; it is largely equivalent to "voting" as we've been discussing it (you can think of each patch of the soap bubble as "voting" for the amount of stretch and curvature that it wants).
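As a cartoon of that constraint-solving picture (assuming, for illustration only, that each patch's pull can be reduced to a single number -- the patch names and values below are invented), every local patch repeatedly nudges a shared state toward its preferred value, and the state settles into a weighted compromise that no single patch dictated:

```python
def relax(pulls, weights, steps=200, rate=0.1):
    # Toy distributed constraint relaxation: each "patch" nudges the
    # shared state toward its preferred value with strength set by
    # its weight. For small rates the state converges to roughly the
    # weighted mean of the pulls -- a global compromise, like the
    # shape of a soap film.
    state = 0.0
    for _ in range(steps):
        for pull, weight in zip(pulls, weights):
            state += rate * weight * (pull - state)
    return state

# Three patches -- say, sensation, memory, emotion -- with unequal pulls.
print(relax(pulls=[1.0, 4.0, 2.5], weights=[0.5, 0.3, 0.2]))  # ~2.2
```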
Travc raises the very interesting question of our persistent "sense of identity, continuity, and self." The most basic source of these is the need for us animals to behave coherently, even when surprised or conflicted. If different parts of our mind / body reacted independently to events we'd fall in a heap, which is not likely to optimize our inclusive fitness. So our mind / body tries to keep this "soap bubble integration" active all the time, creating a unified "self". (Of course this breaks down in extreme situations but it is amazingly robust.) Ezequiel Morsella has done interesting work on this recently and he's continuing a tradition that goes back to Bernard Baars and arguably to William James.
I don't think this persistent soap bubble of consciousness is an illusion, any more than the persistent stability of our body temperature or blood pH is an illusion -- all three are created by multiple interacting, highly evolved mechanisms that maintain them through all kinds of stress.
But the persistence of the soap bubble alone doesn't fully explain the apparent continuity Travc mentions. As anyone who's done a little meditation knows, if we pay the right kind of attention we very quickly find that our consciousness "blinks" off and on again quite frequently, and often it is focused on very different things before and after a blink. In terms of the metaphor, the soap bubble is frequently breaking and always very quickly reforming, often in a very different shape.
Nonetheless, consciousness normally seems continuous. Partly, I think, we could reasonably call this an illusion: these blinks happen, but we aren't aware of them any more than we are aware of our visual saccades -- awareness of them would be distracting, so we edit them out.
There's also a sense in which the continuity is real, though not what it appears to be. Each blink of consciousness is partly shaped by memories of our previous similar periods of consciousness; it is continuous with "who we've been" before in a similar state of mind. However we have multiple "continuous" threads of memory and switch between them; often this leads us to forget something because it "belongs" in a different context. Circumstances that force us to deal with two previously separate threads at the same time can be disorienting -- for example meeting an acquaintance in a very different social context.
As Travc says, these mechanisms effectively involve "Memories, feelings, [etc.]... getting fed back into the system as perceptions themselves..." At the levels I've described, I'm quite sure these feedbacks exist in non-verbal animals, even ones with pretty small brains, such as mice. The mechanisms all have functional benefits simply at the level of integrating complex behavior, applying previous experience to the current situation, etc. so they are important to mice as well as men.
The rest of Travc's comment is a sketch of higher order theories of consciousness. I think these higher level theories do require language and are needed to explain some kinds of reflective thought, but shouldn't be confused with the lower level mechanisms described above.
Ray Ingles, Yeah, I should have said "freewill" in quotes ;-)
Your "if there is a vote, where is the voter?" question gets it wrong. In your analogy the votes and voters are unproblematic, they're just certain "sensory irritations" (Quine), which can be described purely physically; the problematic element in your analogy is something like 'what rules govern the parliament?' i.e. once you've taken all the votes (sensory input) put them through various processings and permutations, what do you end up with in the "final draft" other than just a certain grouping of individual votes (sensory inputs)? If you just describe the final collection of mediated inputs and say "THAT'S consciousness" then you've explained nothing, you might as well just point at the initial sensory inputs and say "we're conscious of them." Describing various procedural, computational, permutational steps in between the votes and the parliament doesn't add any explanation of concsiousness, it just gives us a rearranged set of inputs.
(For my money, these analogies and things like the multiple drafts model convince because they intuitively describe something like that of which we're conscious. However, they're still necessarily unable to answer the fundamentally different question of how we are conscious of our (processed) sensory inputs, or of what it means to be conscious at all.)
I love the work coming out of Francis Crick's and Christof Koch's lab. The experiments are clever and they know how to communicate with non-specialists.
tbell1: I think when JL asks rhetorically about the voters, he's not failing to understand the analogy. He's pointing out that you have to be careful in interpreting this analogy.
Otherwise the little "voters" (neurons or neuron complexes or whatever) become new homunculi or ghosts in our metaphorical machine. And that poses a very old philosophical problem: rather than answering any questions about consciousness, this kind of "explanation" begs the question entirely.
An overly rich interpretation of "voter" pushes the seat of consciousness one step lower, but the problem has simply been shifted. The answer (if there is one) still hangs tantalizingly just out of reach.
I think the use of the term "vote" makes it sound a lot more civilized and orderly - but I'm guessing that it's more a matter of which message gets shouted most loudly, or perhaps for the longest duration... and it might even depend on the nature of the set of neurons receiving the signal and making the "decision".
Probably many of the "transmitting" neurons fire multiple times, maybe with signal pulses at a higher frequency or amplitude - whatever's more successful at getting the receiving neurons to respond. There may be circuitous routes by which a single neuron can control many "votes" (where learning has taken place, and fibers have grown pathways to reinforce that learning).
You may end up with a decision from the loudest, most boisterous set of signals. Then there's that one weak signal that lingers on, the little voice in the back of your head saying "no, that's a bad choice" - maybe because it persists after the main electrochemical burst, it ends up stimulating the response that affects the whole organism's behavior ("he listened to his gut").
Who is the voter? The crowd in the brain. The AI guys in computer science simulate the flow of an impulse or decision from one node of a directed graph to another (a "neural network") by assigning a "weight" value to the line (or multiple paths) connecting the nodes; then an algorithm like Bayes' rule uses statistical functions to "infer a likelihood". Of course, computer hardware isn't really able to come up with "soft" values, or true random numbers... what do biological neurons do?
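For what it's worth, here is the standard artificial abstraction being gestured at - a single node computing a weighted sum squashed into a "soft" value between 0 and 1. The weights and the injected noise are arbitrary, and this is of course a cartoon of the computer-science model, not a claim about biological neurons:

```python
import math
import random

def unit(inputs, weights, bias):
    # One "voting" node: a weighted sum of incoming signals passed
    # through a sigmoid, yielding a graded value between 0 and 1
    # rather than a hard yes/no.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Gaussian noise on one input stands in for the biological
# "softness" the comment asks about.
signal = [1.0, 0.0, random.gauss(0.8, 0.1)]
print(unit(signal, weights=[0.9, -0.4, 1.2], bias=-1.0))
```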
So where does self-awareness fit into all this? Even with all our elaborate and sophisticated models of consciousness, we can't seem to synthesize or duplicate self-awareness. Indeed, it has been the one big hurdle that specialists in the field of A.I. have a hard time with! What causes it? What circumstances or conditions or systems create it? Any thoughts on this?
Addendum to my post above: maybe it's time to move completely beyond the possibly outmoded and outdated Cartesian metaphor to something more appropriate and relevant to the task at hand?
Interesting article on the influence of culture on brain development in the Winter issue of Tufts Magazine:
http://www.tufts.edu/alumni/magazine/winter2010/features/the-brain.html