Peter Hankins has written an excellent commentary criticizing the "positive comparisons" I make after contrasting brains with computers.
"... the concept of processing speed has no useful application in the brain other than that it isn't fixed."
While this statement may intuitively appeal to some philosophers, temporal limitations in neural processing are both critical for neuronal function and well accepted in both neuroscience and psychometrics. At the biological level, the membrane capacitance of neurons helps regulate their firing rate, which itself has an upper limit. Myelination is another feature of neurons that is clearly critical for the speed at which information can be processed. At the level of individual differences, "processing speed" is a well-established psychometric construct that can be reliably measured, reliably predicts higher cognitive function, may be related to myelination, and is thought by some to be a parameter critical for capturing age-related change in cognition.
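The biophysics here can be sketched numerically. The sketch below uses purely illustrative values (not measurements from any particular neuron) to show how the membrane time constant, set by resistance and capacitance, bounds how quickly a leaky integrator can reach threshold and hence caps firing rate:

```python
# Sketch: how membrane capacitance limits neuronal firing rate.
# All parameter values are illustrative, not measurements.
import math

R = 100e6    # membrane resistance (ohms), illustrative
C = 100e-12  # membrane capacitance (farads), illustrative

tau = R * C  # membrane time constant (seconds); here 10 ms

# Time for a leaky integrator to charge from rest to a threshold
# at 80% of the steady-state depolarization:
# V(t) = V_inf * (1 - exp(-t/tau))  ->  t = -tau * ln(1 - 0.8)
t_to_threshold = -tau * math.log(1 - 0.8)

# Ignoring the refractory period, this charging time alone caps
# the firing rate at roughly 1 / t_to_threshold spikes per second.
max_rate = 1.0 / t_to_threshold

print(f"tau = {tau * 1e3:.1f} ms")
print(f"charge time = {t_to_threshold * 1e3:.1f} ms")
print(f"approx. max rate = {max_rate:.0f} Hz")
```

With these toy numbers the cap lands in the tens of hertz; real neurons differ, but the point stands that capacitance imposes a temporal limit.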
Peter Hankins says:
"Are brains analogue? Granted they're not digital...I take digital and analogue to be two different ways of representing real-world quantities; I don't think we really know exactly how the brain represents things at the moment."
It's increasingly accepted that the brain uses a sparse, distributed code for representing information. Computational models based on these principles are able to account for an increasingly wide variety of interactions between cognition, pharmacology, and deep brain stimulation. Work on the sparse distributed nature of these representations, and the learning processes which generate them, has driven the development and partial success of pattern classifiers for deciphering fMRI data - providing converging evidence that we do have an increasingly good idea of how brains represent information. Peter is probably uncomfortable with equating "sparse distributed representation" with "analog," as I may have in my original post.
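To make the classifier point concrete, here is a toy illustration (not any published pipeline): two simulated "conditions" are each encoded as activity in a small subset of voxels, and a simple nearest-centroid classifier decodes noisy trials. All names and numbers are made up for illustration.

```python
# Toy illustration of decoding a sparse distributed code.
# Two "conditions" each activate ~5% of 200 simulated voxels;
# a nearest-centroid classifier decodes noisy trial patterns.
import random

random.seed(0)
N_VOXELS = 200

def make_pattern(active):
    """Dense vector with 1.0 at the active (sparse) indices."""
    v = [0.0] * N_VOXELS
    for i in active:
        v[i] = 1.0
    return v

# Each condition activates 10 of 200 voxels (a sparse code).
cond_a = make_pattern(random.sample(range(N_VOXELS), 10))
cond_b = make_pattern(random.sample(range(N_VOXELS), 10))

def noisy(v, sd=0.3):
    return [x + random.gauss(0, sd) for x in v]

def dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

# Classify noisy trials by distance to each condition's template.
correct = 0
trials = 200
for _ in range(trials):
    truth, template = random.choice([("a", cond_a), ("b", cond_b)])
    trial = noisy(template)
    guess = "a" if dist(trial, cond_a) < dist(trial, cond_b) else "b"
    correct += (guess == truth)

accuracy = correct / trials
print(f"decoding accuracy: {accuracy:.2f}")  # well above chance (0.5)
```

Because the two sparse patterns barely overlap, even a crude classifier separates them - the intuition behind reading out distributed representations from fMRI.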
Of course, there are arguments that portions of the brain do represent information in a digital fashion, though in my opinion the similarities to digital coding are somewhat superficial.
"... are brains 'massively parallel'? ... Chris is really warning against an excessively modular view."
It's true I abhor modular views of cognition, but it's hard for me to imagine how the brain is not a massively parallel device - in the intuitive use of that phrase. Most attempts to identify the flow of information processing through cortical networks end up positing bidirectional connectivity between most cortical regions. I think Peter's derision of the "massively parallel" phrase actually confounds the issue of "massively parallel" with "massively parallel computing". And what we mean by "computing" is, of course, the crux of the issue.
OK, so brains don't have a CPU clock. But some clocked CPUs can change speeds or pause to save power. Asynchronous computers have also been built that have no clock at all.
More interesting is that brains can change performance for a task, especially with practice. Is it higher speed connections? Is it more neurons? Is it better learned connections (which connections actually work)?
Take a task you know, such as adding pairs of numbers. Through practice, you can very likely quadruple your speed. Do you really know anything more about addition? Maybe. Maybe not.
It'd be nice to have a good substitute for brains to placate the zombies when they come.
Although sources claim they're a lot like tofu, texture-wise if not taste-wise.
I'm wary of unqualified claims like "the brain uses a sparse code." Which brains? Which parts of brains? Which types of neurons? During what task? Such questions are typically relevant. (Incidentally, Scholarpedia has a good entry on sparse codes here).
Some specific examples:
1. It is a little tricky to define 'sparse' for neurons that don't fire action potentials, as is common in many vertebrate systems. It is possible to give a definition, but it seems almost out of place to talk about sparse codes in such systems.
2. Many neuronal populations don't use sparse codes, such as motoneuron pools whose members maintain a relatively high basal firing rate.
3. While true that many cortical areas have a low proportion of neurons spiking at any given time (the definition of a sparse code), we should be a little cautious about saying this is evidence for a sparse code. When a stimulus relevant for that population is presented, you can sometimes see quite large population responses. Just because they are quiet when not being stimulated doesn't mean the code is sparse--what is more relevant is the proportion in a population activated by a driving stimulus. At one extreme, if I pluck out a rat's whiskers, its barrel cortex will likely have activity that looks very sparse, but this doesn't imply the cortex uses a sparse code.
4. Related to number 3, the question of sparseness probably makes the most sense in the context of naturalistic conditions (naturalistic stimuli in awake behaving animals), while much of the research used to support sparse codes is done in anesthetized preparations in which receptive fields are mapped using stimuli such as white noise or gabor functions. There are exceptions to this, though.
Sparse codes are interesting theoretically, are useful for engineers, and are suggestive about how brains might work, but this is still only an interesting possibility, not something to bet the farm on. I know when I am trying to sort units in my rats running around their little Skinner boxes, it sure seems all their damned neurons are firing at the same time! :)
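The "proportion activated by a driving stimulus" point can be made quantitative. One common choice is the Treves-Rolls population sparseness measure; a minimal sketch on made-up firing rates:

```python
# Sketch: quantifying population sparseness with the Treves-Rolls
# measure a = (mean rate)^2 / mean(rate^2). Values near 1 indicate
# a dense code (all neurons fire similarly); values near 0 indicate
# a sparse code (a few neurons carry the response). Rates are made up.
def treves_rolls_sparseness(rates):
    n = len(rates)
    mean_r = sum(rates) / n
    mean_r2 = sum(r * r for r in rates) / n
    return (mean_r ** 2) / mean_r2 if mean_r2 > 0 else 0.0

# A "sparse" population: 2 of 100 neurons respond strongly.
sparse_pop = [50.0] * 2 + [0.0] * 98
# A "dense" population: all 100 neurons fire at a similar rate.
dense_pop = [10.0] * 100

print(treves_rolls_sparseness(sparse_pop))  # 0.02 (very sparse)
print(treves_rolls_sparseness(dense_pop))   # 1.0  (fully dense)
```

Note this measures the distribution of rates across the population to a given stimulus, which is exactly the quantity at issue in point 3 above: a silent, unstimulated population tells you nothing about it.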
"Work on the sparse distributed nature of these representations, and the learning processes which generate them, has driven the development and partial success of pattern classifiers for deciphering fMRI data - providing converging evidence that we do have an increasingly good idea of how brains represent information."
That's pretty funny dude! What is the good idea then?
Here's a recent paper arguing for sparse coding in the auditory cortex of awake head-fixed rats.
Curiously, I agree with most of both your arguments and Peter's. I'm not being excessively agreeable; rather, each of you is correct about most things given slightly different uses of language. The only substantive issue I disagree with is Peter's claim that brains are not parallel processors - I think he is arguing for this point beyond merely conflating parallel processing with massively parallel computing. Brains work in parallel in two important senses. First is the sense that 'everything happens at once': every neuron does its computa- whoops, information processing, at the same time. The second important sense is that different brain areas are all working on separate computations at once. Visual cortex is processing visual information at the same time that auditory cortex is processing auditory information, while orbitofrontal cortex is calculating reward status and motor cortex is calculating a potential action. Each of these areas is separate, but interconnected: processing of auditory information informs processing of visual information as both progress, while both of these inform processing about potential reward values. In his argument that brains aren't massively parallel computers ( http://www.consciousentities.com/pseudodoxia.htm#parallel ) Peter rhetorically asks what good it would be if a massively parallel brain ran a virtual serial machine (as Dennett, among others, has proposed). The answer is that a virtual serial machine constrains the many parallel streams of computation to a single topic: the sight of a cat will help you identify the sound as a purr, and processing both aspects of the same thing will help your orbitofrontal cortex calculate a correct reward value for your potential cat-petting action. Without this virtual serial aspect, you might hear a nearby fan's sound as a cat's purr and get scratched.
Having a serial topic of attention ensures that the information constantly being passed by the many connections between areas is mutually informative and therefore helpful.
This thinking also offers an explanation of a (more-or-less) unified stream of consciousness.
I also agree with you that we can be pretty sure brains use some sort of distributed representation. The article "Fast Readout of Object Identity from Macaque Inferior Temporal Cortex" and the associated literature provide a wealth of empirical evidence, not to mention the theoretical evidence from artificial neural network simulations. While it is good to be cautious about how much is known, it is crucial for any scientific progress to recognize when we can make an educated guess; the brain all but certainly uses distributed representations.
Vlad, I said we have an "increasingly good" idea, not a good idea, of how the brain works ;)
Eric, I meant sparse in terms of the total number of neurons contributing to a given representation, not the average activity level of a neuron (some of your criticisms seem relevant to the latter rather than the former). Otherwise I completely agree with your points. I can't speak to the motor neurons you mention.
Fair enough. You can just define 'sparse' as the proportion that change their activation (rather than the proportion firing versus quiet) and this works.
I'm not sure how to square the evidence supporting a population code, in which each neuron is broadly tuned and contributes to the representation, with the evidence supporting sparse codes. It probably depends on the animal, the neuronal types, the recording conditions, the behavioral state of the animal, etc.
This tension between two popular ideas (population codes and sparse codes) is discussed here.
I know that the leech somatosensory system uses a population code. Also, 'sparseness' and 'population-ness' are both quantitative features of coding--there is no clear-cut proportion of neurons below which it is clearly a sparse code or above which it is clearly a population code (and for that matter, you can also have a sparse population code, so the terminology is not really well worked out yet).