Learning The Language of Thought: 4 Candidate Neural Codes

How does your brain represent the feelings and thoughts that are a part of conscious experience? Even the simplest aspects of this question are still a matter of heated debate, reflecting science's continuing uncertainty about "the neural code." The fact is that we still don't have a clear picture of the ways in which neurons transmit information. Here's a quick guide to current theories, beginning with the well-established and moving toward the more speculative.

The canonical model: firing rate

Clearly, neurons encode some information in the rate of their firing. Although individual neurons are relatively unreliable and noisy devices, average firing rates across hundreds or thousands of neurons provide a more reliable spatio-temporal code for conveying information. Some recent evidence, reviewed below, suggests that this simple conception of the neural code is in need of elaboration.
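The noise-averaging logic can be sketched in a few lines. This is a toy simulation, not a biophysical model: spike counts are treated as Poisson-distributed, and the "true rate" and population size are illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 20.0   # Hz -- the signal being encoded (illustrative value)
window = 0.1       # seconds of observation per trial

# A single neuron's spike count in the window is Poisson-noisy,
# so its rate estimate (count / window) fluctuates widely trial to trial.
single = rng.poisson(true_rate * window, size=10000) / window

# Averaging counts over 500 independent neurons shrinks that variability
# by roughly sqrt(500), giving a far more reliable rate estimate.
population = rng.poisson(true_rate * window, size=(10000, 500)).mean(axis=1) / window

print(f"single neuron:      mean={single.mean():.1f} Hz, sd={single.std():.2f} Hz")
print(f"500-neuron average: mean={population.mean():.1f} Hz, sd={population.std():.2f} Hz")
```

Both estimators are unbiased; the population average simply has a much smaller standard deviation, which is the usual argument for why rate codes become trustworthy only at the population level.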

Firing rate coherence

Our connection to the external world occurs only through the thalamus, through which all sensory signals (except olfaction) must pass in order to gain access to the neocortex. Although our sensory systems are exquisitely sensitive (9 photons or fewer are sufficient to induce a conscious visual experience, and observers can report indentations of their skin only 1-3 microns deep), connections between thalamus and neocortex are surprisingly weak (by some estimates, 30 times weaker than intracortical connections).

Recently, Bruno & Sakmann demonstrated that this incredible sensitivity may be enabled by the synchronous firing of neurons in the thalamus. According to this model, the relatively weak thalamocortical connections can be "amplified" by the coordinated and synchronous firing of populations of thalamic neurons.
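The intuition behind synchrony-as-amplification can be illustrated with a crude leaky-integrator neuron. All parameters here (membrane time constant, synaptic weight, threshold) are made up for the demonstration; the point is only that many weak inputs cross threshold when they arrive together but leak away when they are dispersed in time.

```python
import numpy as np

def cortical_response(spike_times, tau=0.005, weight=0.5, threshold=5.0, dt=0.0005):
    """Leaky integrator: each EPSP (weight) is far below threshold,
    so only coincident inputs can sum to a spike. Toy parameters."""
    t = np.arange(0, 0.1, dt)
    v = np.zeros_like(t)
    for i in range(1, len(t)):
        v[i] = v[i - 1] * (1 - dt / tau)               # membrane leak
        v[i] += weight * np.sum((spike_times >= t[i - 1]) & (spike_times < t[i]))
    return v.max() >= threshold

# The same 20 thalamic spikes, spread over 50 ms vs. packed into 2 ms:
asynchronous = np.linspace(0.0, 0.05, 20)
synchronous = np.linspace(0.024, 0.026, 20)

print(cortical_response(asynchronous))  # False -- dispersed input leaks away
print(cortical_response(synchronous))   # True  -- synchrony sums before the leak
```

Identical total input, identical firing rate over the window; only the timing differs. That is the sense in which synchrony can "amplify" weak thalamocortical drive without any change in rate.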

Synchronous firing also seems important for motor functioning in healthy subjects, and there are some indications that reduced synchrony, but not reduced firing rate, may underlie some symptoms of Parkinson's disease.

Precise Spike Timing: Phase relationships in Neural Firing

Two populations of neurons may fire at the same rate; these two populations may also show phase synchrony, where the peak activity in one population occurs at nearly the same time as that of another population. Yet there might be additional information hidden in the temporal relationship of these populations. Such a mechanism would be far more sophisticated than firing rate and neuronal synchrony alone: information might be conveyed in a "relational code" between the spikes comprising the activity in each population.

There is emerging evidence that even such tiny temporal differences might have an important computational role in neural networks. For example, synaptic efficacy may be particularly malleable when action potentials fire with a particular temporal relationship - known as "spike timing dependent plasticity," or STDP.
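The asymmetric learning window usually used to describe STDP is easy to write down. The exponential form is standard in the modeling literature, but the amplitudes and time constant below are illustrative, not measured values.

```python
import numpy as np

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Classic exponential STDP window (illustrative parameters).
    dt_ms = t_post - t_pre: pre-before-post (dt > 0) potentiates,
    post-before-pre (dt < 0) depresses, and the effect decays with |dt|."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau)
    return -a_minus * np.exp(dt_ms / tau)

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> dw = {stdp_weight_change(dt):+.4f}")
</n```

Note what the window implies: two neurons firing at identical rates can strengthen or weaken their connection depending purely on which fires first, and by how many milliseconds, which is exactly the kind of information a pure rate code throws away.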

Others have argued that behavioral responses can often be initiated too quickly for "firing rate" to be a reliable mechanism of generating these responses. According to this argument, "time-to-first-spike" might be a faster and more reliable carrier of information - and indeed, recent simulations indicate it may be 10-20 msec faster than firing-rate codes.
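A minimal sketch of the time-to-first-spike idea: if a stronger stimulus charges a neuron to threshold sooner, then downstream circuitry can identify the strongest input from whichever neuron fires first, long before enough spikes have accumulated to estimate a rate. The latency function and noise level here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_spike_latency(intensity, baseline_ms=50.0, jitter_ms=0.5):
    """Toy latency code: stronger input -> earlier first spike, plus jitter."""
    return baseline_ms / intensity + rng.normal(0, jitter_ms)

# Decode the strongest of three stimuli from first-spike latencies alone:
intensities = {"weak": 1.0, "medium": 2.0, "strong": 4.0}
latencies = {name: first_spike_latency(i) for name, i in intensities.items()}
decoded = min(latencies, key=latencies.get)

print({k: round(v, 1) for k, v in latencies.items()})
print("decoded strongest stimulus:", decoded)
```

The decision is available as soon as the first spike arrives (here, on the order of 12 ms), whereas a rate code needs a counting window long enough to separate the three rates reliably.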

N:M Nested Oscillations: phase-relationships in firing rate coherence

In their in-press TICS article, Jensen & Colgin review evidence from direct, intracranial electrode recordings in humans (perhaps the holy grail of current recording methods, allowing an unsurpassed combination of spatial and temporal precision). In one such study, the amount of activity occurring at relatively high frequencies (30-150 Hz, aka the gamma rhythm) was systematically modulated by a much slower frequency peak in the spectrum of neuronal oscillations (5-8 Hz, aka the theta rhythm), while no such relationship held for other slow oscillations. This "cross-frequency coupling" (also known as "n:m phase synchrony") was observed across a wide swath of cortex, and across a wide variety of tasks, suggesting it may have a central role in neural communication.
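Cross-frequency coupling is straightforward to demonstrate on synthetic data. In the sketch below the gamma envelope is tied to theta phase by construction, and for simplicity the theta phase is taken as known rather than estimated (real analyses would band-pass filter and use a Hilbert transform); binning gamma power by theta phase then reveals the coupling as an uneven distribution.

```python
import numpy as np

fs, dur = 1000.0, 10.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(2)

theta = np.sin(2 * np.pi * 6 * t)        # 6 Hz theta carrier
gamma_env = 0.5 * (1 + theta)            # gamma amplitude waxes at theta peaks
signal = theta + gamma_env * np.sin(2 * np.pi * 60 * t) \
         + 0.1 * rng.standard_normal(t.size)

# Bin gamma power by theta phase (phase known by construction -- a toy shortcut)
phase = np.angle(np.exp(1j * 2 * np.pi * 6 * t))
gamma_power = (signal - theta) ** 2      # crude gamma-power proxy
edges = np.linspace(-np.pi, np.pi, 9)
bins = np.digitize(phase, edges) - 1
mean_power = np.array([gamma_power[bins == b].mean() for b in range(8)])

print("gamma power by theta phase bin:", np.round(mean_power, 3))
```

With no coupling, gamma power would be flat across theta phase bins; here it is strongly concentrated near the theta peak, which is the signature Jensen & Colgin describe.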

This claim has been met with much resistance in the neuroscience community, partially because it's unclear how such precise "multiplexed" signals might emerge from real neural networks, with all their inherent noise and apparent randomness. As an implicit response to this question, Jensen & Colgin point to a neural network model in which slow and fast GABAergic feedback signals give rise to concurrent theta and gamma waves.

What computational role might these multiplexed, cross-frequency phase couplings have? The authors indicate that if the faster oscillation (gamma) functionally "divides" the slower oscillation into multiple time slots, then each of those slots might convey different information. This idea has given rise to a model for the physiological basis of working memory capacity (at least, the estimate of 7 +/- 2, now known to be a little high), for Sternberg scanning, and for phase precession in the hippocampus during spatial navigation.
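The capacity arithmetic behind that working-memory model is simple: if each gamma cycle is one "slot" nested within a theta cycle, the number of slots is just the ratio of the two frequencies. The particular frequencies below are illustrative picks from the theta and gamma bands.

```python
# One gamma cycle = one "slot" within a theta cycle, so
# capacity ~ f_gamma / f_theta (illustrative band-center frequencies)
theta_hz, gamma_hz = 6.0, 40.0
slots = gamma_hz / theta_hz
print(f"~{slots:.1f} gamma cycles per theta cycle")
```

The ratio lands in the neighborhood of the classic 7 +/- 2 estimate, which is exactly the coincidence the model trades on.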

Conclusions

This is a highly selective list of what I feel are the most likely candidates for helping us to understand the neural code. Certainly there are others I've left out - including Hameroff & Penrose's claims about neuro-quantum computation, and other recent proposals involving computation with soliton waves.

Related:
The Myth of Mind Control: Will Anyone Ever Decode the Human Brain?
How do we crack the neural code? (@ Brown)
Liam Paninski's research


why 'the' neural code? parsimony? don't biologists and neuroscientists expect a multiplicity?

From the little knowledge that I have about the Hameroff and Penrose argument, it seems slightly mystical - not that the brain isn't a mystery - I just wonder if their perspective is overly complex, and philosophically tepid. I seem to remember that Christof Koch (and others) poured cold water on the idea, so maybe that's why it appears that way to me.

As regards the soliton model - I wonder if in some ways it could still be seen as a signals-based model - there would still be resonance / interference etc. etc. if it were pressure waves instead of electrical energy.

As regards the structure of the signals, the Jensen & Colgin idea sounds very interesting.

By mickgrierson (not verified) on 18 Jun 2007 #permalink

Hey Chris,

This is nice. I had to skip all the other posts just to read this =)

I like the Jensen & Colgin theory. It seems very fitting if our mind would sort of organize things into "packages" which take up "phases" because each thing we do is computed as a "process" and not an "instance".

Though I don't know the full extent of their theory. But how would it be possible to differentiate between the firing of impulses from multiple tasks? Example: if a person is multitasking, which is almost all the time, how would Jensen and Colgin differentiate between which "package" in one single "phase" is for which task?

Thanks everybody - lots of good points here. MS, I think that multitasking would involve multiple regions in the PFC - see the temporal cascade model I just reviewed in today's post. But I agree with your implied point, which is that there's not a good "decoder" mechanism for the multiplexed signals, if such a thing even exists in the brain in the first place.

A decoder...that reminds me..could it be the hippocampus?

It's found to be responsible for consolidation...maybe those signals are decoded by the hippocampus.....and then dysfunction of the hippocampus can make the signals be computed wrongly...

:S but it's another array of variables to take into account.

By MoonShadow (not verified) on 22 Jun 2007 #permalink