To efficiently direct learning, it may be useful for the brain to attend to those items that are maximally novel - this novelty may obscure some predictive or rewarding value that has not yet been learned or exploited. This is widely acknowledged, usually in passing; but ambiguities in what counts as "novel" have profound and unacknowledged consequences for research on high-level cognition (generalization, rule-use, etc.).
But let's start with what we do know about novelty processing in the brain.
As would be expected from a creature that has much to learn, infants show a very robust "novelty preference." As demonstrated in a 2005 paper by Shinskey & Munakata, this novelty preference is intimately related to active maintenance: expressing a novelty preference requires a lasting representation of the novel item itself. Even for adults outside the laboratory, novel events are typically those which occur either very briefly or very rarely. Thus, for infants and adults alike, novel events warrant the rapid redirection of attention, the updating of working memory with that information, and sustained attention on the item in order to learn from it.
As pointed out in a 2006 paper by Wiebe et al., this process is the first phase in "familiarizing" oneself with a previously-novel stimulus. And it is readily observable: Wiebe et al. demonstrated that familiarity-related ERP components in 9-month-old infants change in shape after as little as 15 seconds of cumulative exposure to a novel stimulus - indicating remarkably fast execution of this chain of processes.
A number of brain regions have been implicated in the processing of novelty, including the anterior cingulate, prefrontal cortex (particularly its inferior aspect), posterior parietal cortex (including the anterior intraparietal sulcus and temporo-parietal junction), and the hippocampus. The involvement of this network in novelty processing jibes with a variety of previous research indicating roles for these regions in performance monitoring (for which novelty may predict performance errors), cognitive control (required for responding appropriately to novel events), maintenance of stimulus-response mappings (which may need to change based on novel events), and episodic memory (which encodes novel events more strongly), respectively. The prefrontal cortex may be particularly important for the detection of novelty or contextual change in the environment, even at the earliest stages of life, and even in the absence of top-down control over attention (as demonstrated via near-infrared spectroscopy in sleeping infants).
Given the breadth of these "novelty-sensitive" networks and their putative functions, "novelty detection" seems to be a very underdetermined construct. For example, external stimuli, internally generated actions, and internal representations may all be novel, and different neural substrates may process novelty of these different kinds. Furthermore, novelty may reflect an inherent unfamiliarity of a particular stimulus/action/representation, or a more complex "contextual" unfamiliarity. For example, items that are independently familiar may be perceived as novel if they are encountered in a novel spatial or temporal configuration. Finally, sensitivity to novelty may be a universal characteristic of neural tissue, as observed in habituation (in infants) and repetition suppression (in fMRI), perhaps implemented by the "accommodation" currents of neurons.
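To make the habituation idea concrete, here is a minimal toy sketch - the suppression and recovery parameters are arbitrary assumptions for illustration, not values fitted to any infant or fMRI data - in which each presentation of a stimulus attenuates the response to the next, loosely in the spirit of accommodation currents and repetition suppression:

```python
# Toy model of repetition suppression: each presentation multiplicatively
# suppresses responsiveness, which then partially recovers toward baseline.
# The parameter values (0.4, 0.1) are illustrative assumptions only.

def simulate_responses(n_presentations, suppression=0.4, recovery=0.1):
    """Return the response amplitude to each of n repeated presentations."""
    gain = 1.0            # current responsiveness (1.0 = fully novel)
    responses = []
    for _ in range(n_presentations):
        responses.append(gain)            # observed response this trial
        gain *= (1.0 - suppression)       # accommodation after responding
        gain += recovery * (1.0 - gain)   # partial recovery toward baseline
    return responses

resp = simulate_responses(6)
# The first presentation evokes the largest response; repeats are attenuated.
```

On this scheme the first presentation evokes the largest response and repeats decay toward a partially-recovered floor, mirroring the attenuation seen in infant habituation and in fMRI repetition suppression.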
These ambiguities are easily illustrated with a historical example of about 30 years of confusion in cognitive neuroscience and neuropsychology. Since the development of the EEG technique in the early 20th century, psychologists have furiously documented the variety of scalp electrical responses elicited by various novel stimuli. Here are a few, to give you a taste:
1) Task-irrelevant and highly novel stimuli elicit a positive waveform on the scalp called the "Novelty P3."
2) Task-irrelevant and less novel stimuli elicit a similar waveform called the "P3a."
3) Unexpected task-relevant stimuli elicit a similar "P3b."
4) Incorrect responses elicit a similar "Pe."
Although dimensionality reduction techniques have helped clarify how some of these ERPs interrelate, we still have basically no idea how these putatively different indices map to the cognitive processing of novelty, because none of the factors listed above (best summarized as novelty of what, with respect to what?) have been adequately disentangled. The result is widespread confusion among cognitive scientists about the cognitive processes involved in novelty detection, and hence the generators of these event-related responses to novelty.
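The dimensionality-reduction point can be illustrated with a hedged sketch on purely synthetic waveforms - the component shapes, mixing weights, and noise level below are all invented for illustration; no real ERP data or published decomposition is being reproduced. Four observed "components" built from two latent processes collapse onto two principal components:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)

# Two hypothetical latent "novelty" processes (purely synthetic shapes).
frontal = np.exp(-((t - 0.3) ** 2) / 0.005)   # earlier, sharper positivity
parietal = np.exp(-((t - 0.5) ** 2) / 0.02)   # later, broader positivity

# Four observed "ERP components" as different mixtures of the two latents,
# plus a little measurement noise (mixing weights are arbitrary).
mixing = np.array([[1.0, 0.2],    # stand-in for "Novelty P3"
                   [0.8, 0.4],    # stand-in for "P3a"
                   [0.2, 1.0],    # stand-in for "P3b"
                   [0.5, 0.7]])   # stand-in for "Pe"
latents = np.vstack([frontal, parietal])
observed = mixing @ latents + 0.01 * rng.standard_normal((4, t.size))

# PCA via SVD on the mean-centered waveforms.
centered = observed - observed.mean(axis=1, keepdims=True)
s = np.linalg.svd(centered, compute_uv=False)
var_explained = (s ** 2) / (s ** 2).sum()
print(var_explained[:2].sum())  # two factors capture nearly all the variance
```

The catch, as above, is that knowing four scalp waveforms reduce to two statistical factors still tells us nothing about which cognitive processes those factors index - that requires disentangling the experimental confounds, not just the covariance.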
The wider problem is that any paradigm with low-frequency content - with rare stimuli, requiring rare responses, or even recombining these in rare ways - is necessarily confounded with a form of "novelty detection" that we barely understand. This criticism contaminates almost all research on high-level cognition, including that on:
A) categorization (e.g., Rosch-like paradigms, in which atypical exemplars are compared with more typical ones; penguin vs. robin as examples of birds),
B) prepotent responding (e.g., Go/NoGo, in which a low-frequency alternative response is pitted against a more common one),
C) prepotent processing (e.g., Stroop, in which a low-frequency stimulus is presented, differing from a more common one),
D) rule-use and reversal learning (e.g., the Wisconsin Card Sort, which essentially combines B and C),
E) generalization (e.g., analogical reasoning, in which novelty appears in the form of two situations that are structurally equivalent despite many salient surface dissimilarities),
F) etc., etc.
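The frequency confound in such designs can be sketched in a few lines - the 80/20 split is an illustrative assumption, and Shannon surprisal is just one possible stand-in for a "novelty signal":

```python
import math

def surprisal(p):
    """Shannon surprisal (in bits) of an event with probability p."""
    return -math.log2(p)

# Illustrative trial frequencies: 80% Go trials, 20% NoGo trials.
p_go, p_nogo = 0.8, 0.2

# A rare NoGo trial carries more information-theoretic surprise than a
# common Go trial, so the condition contrast (NoGo minus Go) is confounded
# with novelty/surprise from the outset.
extra_bits = surprisal(p_nogo) - surprisal(p_go)
print(f"NoGo trials carry {extra_bits:.2f} extra bits of surprise per trial")
```

Any neural response that scales with surprise will therefore differ between conditions even if inhibition, rule use, and the like contribute nothing at all - which is exactly why the rare-condition contrast is contaminated.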
Perhaps people writing about these capacities mean to be referring to different kinds of novelty detection, but I don't think so. Instead, novelty detection appears to be a deep and unacknowledged confound in common experimental designs.
In many fields of research the "question" is slowly emerging: WTF counts as a "concept"?
That is, when is it beneficial to bundle a bunch of phenomena (experiences, samples, etc.) under a single nominal "handle"?
Platonism has been crippling scientific thought for over two millennia with the idiotic assumption that there are perfect entities "out there" in reality waiting to be "discovered" by us, and it is still harming research in its more recent variants of realism (*).
The metaphysical "existence" question (is there really a novelty concept, etc.) is wholly irrelevant to the purposes of research; what matters is how fruitful it is to "carve out" this or that concept with respect to the predictive power/adequacy of further experiments making use of the new concept.
It is very likely that the most useful concepts will not easily lend themselves to clear-cut definitions (platonic ideas), as in the good old times of early scientific research.
Even in mathematics there is no reason to suppose that we can definitively corral "all of reality" into a fixed set of rules, as Gregory Chaitin argues in The Limits of Reason; rather, there is an unbounded stock of possible models from which we have to pick the most convenient for the questions at hand, without hoping for any kind of "definitive answers" (no TOE in physics, for instance).
But there is no point being chagrined by such a "limitation": for one, this is a limitation of our discourse about reality, not a limitation of reality itself; and for two, it means that our investigation of reality is an endless game, with no risk of some kind of Thermodynamic Death putting an end to the adventure:
La chair est triste, hélas! et j'ai lu tous les livres ("the flesh is sad, alas! and I have read all the books") - not quite yet, idiot!
* - Beware of leaving comments on Turney's blog: he will edit them, and the result may not end up being what you meant.