developingintelligence en Performance Improves with Transcranial Random Noise Stimulation <span>Performance Improves with Transcranial Random Noise Stimulation</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Stimulating the brain with high-frequency electrical noise can surpass the beneficial effects observed from transcranial direct current stimulation, either anodal or cathodal (as well as those observed from sham stimulation), in perceptual learning, as newly reported by <a href="">Fertonani, Pirulli &amp; Miniussi</a> in the Journal of Neuroscience. The authors suggest that transcranial random noise stimulation may work by preventing the neurophysiological homeostatic mechanisms that govern ion channel conductance from rebalancing the changes induced by prolonged practice on this perceptual learning task.</p> <!--more--><p>Over several experiments, a total of 99 subjects underwent transcranial random noise stimulation, consisting of an AC current of 1.5 mA intensity at random frequencies between 0.1 and 100 Hz (for low-frequency stimulation) or between 100 and 640 Hz (for high-frequency stimulation). Direct current stimulation was similarly provided at 1.5 mA. (In case you wanted to replicate this experiment at home, <a href="">the company that sells the device used in this study</a> has made it clear they're perfectly fine with selling you one for your own personal use ["Unser Service für Sie persönliche Beratung" - "our service for you: personal consultation"] - not that I'm recommending that!) At any rate, no artifactual visual perceptions were induced by these stimulations, and all women were tested during their follicular menstrual phase (<a href="">at which point their cortical excitability is most similar to that of men</a>). Stimulation was provided for approximately 4 minutes over the occipital lobe (or vertex, for control stimulation conditions) during each block of a visual orientation discrimination task.
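</p>

<p>For concreteness, high-frequency tRNS of this kind is just band-limited current noise. Here is a minimal sketch of such a waveform in Python - the sampling rate and the peak-scaling convention are my own illustrative assumptions, not parameters reported for the actual stimulator:</p>

```python
import numpy as np

def trns_waveform(duration_s=240.0, fs=2000, band=(100.0, 640.0),
                  amplitude_ma=1.5, seed=0):
    """Band-limited current noise, as in high-frequency tRNS.

    fs and the peak-scaling convention are illustrative assumptions,
    not the actual parameters of the stimulator used in the study.
    """
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    noise = rng.standard_normal(n)

    # Zero out all Fourier components outside the stimulation band.
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(noise)
    spectrum[(freqs < band[0]) | (freqs > band[1])] = 0.0
    filtered = np.fft.irfft(spectrum, n)

    # Scale so the peak current magnitude equals the nominal intensity.
    return amplitude_ma * filtered / np.abs(filtered).max()

current = trns_waveform(duration_s=1.0)   # one second of high-frequency noise
```

<p>The key property is visible in the spectrum: all of the power falls between 100 and 640 Hz, with the peak current capped at the nominal 1.5 mA intensity.</p> <p>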
Subjects simply had to say whether a given stimulus was tilted clockwise or counterclockwise relative to a preceding reference stimulus.</p> <p>Over the course of five successive blocks of this task, subjects undergoing high-frequency random electrical stimulation performed consistently better than subjects undergoing any other kind of stimulation, including low-frequency random stimulation, cathodal or anodal direct current, control stimulation to the vertex, or sham stimulation. The rate of change in performance was also increased in high-frequency random stimulation relative to anodal direct current, which yielded no apparent learning effect - even though anodal direct current is typically thought to enhance neural activity and <a href="">is in other domains helpful to performance</a>. The authors even replicated these advantages of high-frequency random stimulation (relative only to sham stimulation) in a second experiment.</p> <p>And in case you think these effects are driven by demand characteristics, note that participants failed to correctly guess whether they received actual stimulation or placebo (sham) stimulation - indicating these effects are unlikely to be driven by any explicit perception arising from electrical stimulation. Moreover, anodal and cathodal direct current stimulation was associated with increased reports of itchiness, irritation and burning relative to the other conditions. In no case did reported sensations during stimulation correlate with performance (absolute R values &lt; .1), and random noise stimulation was never differentiable from sham, neither in terms of explicit report nor subject ratings of various subjective experiences like itching, burning, irritation, pain, heat or taste.</p> <p>So, how on earth is this happening?
Fertonani et al. suggest that repeated random stimulation at a high frequency can actually support temporal summation of neural activity, whereas anodal direct current will induce a facilitation that is followed by homeostatic re-regulation of the ion channel conductances and thus ultimately reduce neuronal excitability. I think the authors are reasonably careful to acknowledge that this particular scenario may be highly dependent on a number of factors, including the exact placement of reference electrodes, the exact stimulation parameters used, as well as possibly more interesting things like the cytoarchitectural features of the areas undergoing stimulation. </p> <p>Nonetheless, Fertonani et al. can't resist some speculation about another possible explanation for these effects: <a href="">stochastic resonance</a>. Stochastic resonance refers to the (apparently) paradoxical phenomenon by which the signal-to-noise ratio in a thresholded system can sometimes be enhanced following the addition of broadband noise, which may provide additional excitation that allows nascent signals to reach the criterial threshold for experiencing positive feedback. Originally, stochastic resonance was proposed as an explanation for the presence of ice ages throughout geological history; it has subsequently been <a href="">hypothesized to explain some neuropsychiatric phenomena</a>, and has been observed in the <a href="">hippocampus</a> and a number of sensory regions. Fertonani et al. carefully suggest that random noise stimulation could have beneficial effects by pushing the neuronal population "over the threshold" required for some form of positive feedback (perhaps due to recurrent activation, or perhaps thalamocortical in nature) or by preferentially recruiting additional subthreshold neurons to participate in such neuronal population coding.
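</p>

<p>The stochastic resonance idea is easy to demonstrate numerically. In this sketch (with illustrative parameters, not a model of the actual experiment), a sinusoidal "signal" never reaches a detector's threshold on its own; an intermediate amount of added noise makes the threshold crossings track the signal better than either very little or very much noise:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 20_000)
signal = 0.8 * np.sin(2 * np.pi * t)   # subthreshold: never reaches 1.0
threshold = 1.0

def detection_quality(noise_sd, trials=20):
    """Mean correlation between the hidden signal and the thresholded
    (spike-like) output, at a given level of added noise."""
    scores = []
    for _ in range(trials):
        crossings = (signal + rng.normal(0.0, noise_sd, t.size)) > threshold
        if crossings.std() == 0:           # no crossings at all: no information
            scores.append(0.0)
        else:
            scores.append(np.corrcoef(signal, crossings)[0, 1])
    return float(np.mean(scores))

quiet = detection_quality(0.01)   # too little noise: the signal never crosses
medium = detection_quality(0.3)   # moderate noise: crossings track the signal
loud = detection_quality(3.0)     # too much noise: crossings are mostly random
print(quiet < medium > loud)      # the stochastic-resonance signature
```

<p>Detection quality peaks at the moderate noise level - the signature of stochastic resonance in any thresholded system.</p> <p>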
</p> <p>This of course is not mutually exclusive with the idea that random noise stimulation eliminated homeostatic mechanisms for regulating ion channel conductances, but I do tend to prefer the stochastic resonance interpretation. It's unclear to me why anodal direct current stimulation should ever benefit performance if these homeostatic mechanisms are so perniciously counterbalancing any changes that are being induced, unless such homeostatic mechanisms are simply more operative in visual cortex than over other regions.</p> <p>An alternative explanation, unmentioned by Fertonani et al., is that their transcranial random noise stimulation effectively acted as a biological version of the simulated annealing process sometimes used to improve learning in artificial neural networks. In simulated annealing, the injection of random noise during learning can bump the system out of local minima in the energy landscape and promote better long-term performance. Although orientation discrimination is presumably a well-learned skill in the adults used in this experiment, there may be task-specific associative learning occurring over the course of the experiment, and such learning could conceivably be enhanced through this kind of annealing process. </p> <p>At the same time, there are new reports that random noise stimulation is <a href="">not effective</a> in improving performance in tasks relying crucially on more anterior cortical regions - including everyone's favorite area, the DLPFC, in everyone's favorite task, the n-back. It is difficult to integrate these failures with a weight-based interpretation of the short-term synaptic facilitation demonstrated by <a href="">Itskov et al.</a> to be important for stabilizing attractor states in the prefrontal cortex. Indeed, random noise stimulation may <a href="">decrease motor-related BOLD responses</a> even as it <a href="">increases corticospinal excitability</a>.
These confusing and sometimes conflicting results pose a significant challenge to any explanation of random noise stimulation invoking stochastically resonant, or annealing-sensitive, neurobiological mechanisms.</p> </div> <span><a title="View user profile." href="/author/developingintelligence">developinginte…</a></span> <span>Mon, 11/21/2011 - 05:04</span> Mon, 21 Nov 2011 10:04:44 +0000 developingintelligence 144078 at Attractors All the Way Up: Metastability, Rostrocaudal Hierarchies, and Synaptic Facilitation <span>Attractors All the Way Up: Metastability, Rostrocaudal Hierarchies, and Synaptic Facilitation</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>In their wonderful Neuroimage article, <a href="">Braun &amp; Mattia</a> present a comprehensive introduction to the possible neuronal implementations and cognitive sequelae of a particular dynamical phenomenon: the attractor state. In another excellent paper, just recently out in Frontiers, <a href=";utm_medium=email&amp;utm_campaign=Neuroscience-w46-2011">Itskov, Hansel and Tsodyks</a> describe how such attractor dynamics may be insufficient to support working memory processing unless supplemented by rapid synaptic modification - a mechanism which has in fact been described neuroanatomically and previously utilized neurocomputationally to describe cognitive phenomena. To see how these ideas tie together a number of different neuroanatomical and cognitive discoveries, let's start with the basics of attraction.</p> <!--more--><p> Attractors are those patterns in some abstract state space (e.g., a 3-dimensional space as defined by the firing rates of 3 different neurons) towards which a system will naturally converge over time as it loses energy.
These can be simple points (say, where regardless of its initial conditions, our 3-neuron system will always end up with firing rates of 0, 0, and 1 Hz respectively), or lines (where our 3-neuron system might end up with firing rates of 0, 0, and between .3 and .7 Hz respectively), rings (where our 3-neuron system might ultimately end somewhere along a path encircling the point (.5, .5, .5)), or shapes with fractional dimensions (e.g., the classic "<a href="">Lorenz attractor</a>"). Systems can have multiple attractors of any type; the "energy landscape" of a dynamical system can be plotted as a function of how different initial conditions may ultimately fall into the "basin of attraction" for various attractors. Here's an example of the energy landscape of a multi-point attractor system, where the point attractors are illustrated in red:</p> <p><img src="" width="300" /></p> <p>But attractor dynamics can be far more complex. As pointed out by Braun &amp; Mattia, neural dynamics may smoothly traverse multiple attractor states if, upon reaching a point of attraction, the energy landscape of the neural population changes (say, as a result of neuronal fatigue in the neurons supporting the pattern of firing that comprises the attractor). As such, neuronal dynamics might be understood as traversing an energy landscape that is itself composed of multiple such landscapes - that is, a kind of attractor dynamic of attractors, or what Braun &amp; Mattia term a "metastable state."</p> <p>As reviewed by Braun &amp; Mattia, slices of visual, auditory and somatosensory cortex demonstrate spontaneous patterns of firing that are almost identical to those observed following stimulation of the thalamic areas that innervate them. This observation suggests that the cellular architecture of these regions defines a state space that is remarkably metastable, with spontaneous activity reflecting a serial transition through attractors within this state space.
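</p>

<p>A point attractor and its basin can be made concrete with a toy Hopfield network - a standard textbook construction, not from Braun &amp; Mattia. The stored pattern sits at an energy minimum, and a perturbed starting state falls back into its basin of attraction:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Store one pattern in a tiny Hopfield network: stored states act as point
# attractors whose basins capture nearby (noisy) initial conditions.
pattern = np.array([1, -1, 1, -1, 1, 1, -1, -1])
weights = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(weights, 0.0)

def energy(state):
    return -0.5 * state @ weights @ state

def settle(state, sweeps=20):
    """Asynchronous updates descend the network's energy landscape."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(state.size):
            state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

# Perturb the stored pattern (flip two units) and let the dynamics converge.
noisy = pattern.copy()
noisy[[0, 3]] *= -1
recovered = settle(noisy)
print(np.array_equal(recovered, pattern))   # → True: the basin recaptures it
```

<p>The perturbed state has higher energy than the stored pattern, and the update dynamics carry it back downhill to the attractor.</p> <p>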
Spontaneous activity of this kind gives rise to a kind of "avalanche" dynamic in which synchronous neural firing in superficial cortical layers triggers a chain reaction of avalanches across interconnected cortical sites.</p> <p>These physiological dynamics, as well as those from the domain of perceptual decision making (and associated signal detection as well as diffusion models of this domain) are well-captured by neural network models that include lateral inhibitory competitive dynamics that support winner-take-all processing, when superimposed on sparse excitation. Diffusion of perceptual information into the system can be understood as the neuronal population being perturbed from its initial low-energy state, and haphazardly navigating the energy landscape of the state space until a basin of attraction is found and the lowest-energy attractor reached. </p> <p>One related perceptual domain is that of bistable perception, classically illustrated by the two possible depth interpretations of the Necker cube: </p> <p><img src="" /></p> <p>But a more fun illustration of this phenomenon is the "dancer" animation. Which way does the dancer turn in your perception? And can you see the dancer turn in the other direction?</p> <p><img src="" /></p> <p>(Take a minute or two with that one if you fail to see her reverse; it's a sudden but unpredictable shift). One last one, since these are so much fun:</p> <iframe width="420" height="315" src="" frameborder="0" allowfullscreen=""></iframe><p> One can understand the perceptual transitions between these various interpretations as the state space traversal of neuronal populations responsible for depth and motion (respectively in the above two examples) between two different points of attraction, such that the energy landscape is itself dynamic.
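</p>

<p>This fatigue-driven traversal between two attractors can be sketched with a minimal rate model (all parameters here are illustrative, and this is a generic competition model rather than Braun &amp; Mattia's): two units compete through mutual inhibition, while a slow adaptation variable erodes the winner's stability until dominance switches, much as the dancer's direction flips:</p>

```python
import numpy as np

def sigmoid(x, k=0.02):
    return 1.0 / (1.0 + np.exp(-x / k))

def simulate_rivalry(t_max=6.0, dt=0.001, drive=1.0, inhibition=0.7,
                     adapt_gain=0.9, tau_u=0.01, tau_a=0.8):
    """Two units with mutual inhibition plus slow fatigue (illustrative
    parameters): the dominant unit alternates, as percepts do in rivalry."""
    n = int(t_max / dt)
    u = np.array([1.0, 0.0])       # fast activities: unit 0 starts dominant
    a = np.array([0.0, 0.6])       # slow adaptation: unit 1 starts fatigued
    winners = np.empty(n, dtype=int)
    for step in range(n):
        net = drive - inhibition * u[::-1] - adapt_gain * a
        u = u + dt * (-u + sigmoid(net)) / tau_u   # fast competition
        a = a + dt * (u - a) / tau_a               # slow fatigue of the winner
        winners[step] = int(u[1] > u[0])
    return winners

winners = simulate_rivalry()
switches = int(np.count_nonzero(np.diff(winners)))
print(switches)
```

<p>With these (assumed) parameters the dominant unit switches several times over the six simulated seconds: each switch is a jump between the two attractors of a landscape that the adaptation variable is continuously reshaping.</p> <p>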
At a higher level, it could be understood as a kind of nested attractor, where there is a ring attractor that governs transitions between two point attractors.</p> <p>Interestingly, tri-stable percepts can also be found. Transitions between the three interpretations of these ambiguous stimuli (which we'll call A, B, and C) are temporally interdependent, as would be expected if neuronal fatigue is driving the transition among the various points in state space. Here's an example - you should be able to see motion towards the left, the right, or straight up.</p> <iframe width="420" height="315" src=";loop=1&amp;playlist=jQQsOnMlB8k" frameborder="0" allowfullscreen=""></iframe><p> As reported by <a href="">Naber et al.</a>, the shorter a percept has lasted and the longer since it has re-appeared, the more likely it is to re-appear. </p> <p>(Interestingly, it has been noted that the right ventrolateral prefrontal cortex, the focus of yesterday's <a href="">excruciating post</a>, tracks the duration of transitional states in these multistable perceptions [as newly reported by <a href="">Knapen et al.</a>], possibly suggestive of a role for the rVLPFC in detecting [but not initiating] shifts in the energy landscape of neuronal state space. Such a role would also be consistent with this area's <a href="">functionally-interposed membership</a> with the default and task-positive networks). </p> <p>Braun &amp; Mattia suggest that such nested attractors may also reflect a hierarchical structure of anatomical connectivity, either strictly corticocortical or those that may be more regionally-diverse (e.g., nested cortico-striatal loops).</p> <p>Particularly relevant to this latter point is a recent computational exploration of the details of such attractor networks, presented by <a href=";utm_medium=email&amp;utm_campaign=Neuroscience-w46-2011">Itskov, Hansel, and Tsodyks</a>.
Itskov et al. rightly point out that, while appealing in principle, attractor dynamics in what I'll call "runnable" neural network models can require exquisite hand tuning, and are particularly sensitive to noise in connectivity or activation. Although widely hypothesized to be a mechanism for the active maintenance of information over time, the noisy nature of the brain could be taken to imply that frameworks like Braun &amp; Mattia's cannot actually apply to working memory in the physically-realized brain. </p> <p>However, Itskov et al.'s elegant computational modeling work demonstrates that, so long as the connectivity is sufficient to support the presence of an attractor in response to a stimulus in the first place, that attractor can be stably maintained even in the absence of this stimulus so long as there is relatively minor, short-term, Hebbian-like synaptic facilitation of the weights of the units participating in the attractor. Without this form of short term weight change, plausible levels of noise in activation and connectivity are enough to damage the attractors so seriously that no delay-period stimulus maintenance is possible.</p> <p>What's particularly interesting about this solution - in conjunction with the meta-stable nested attractor framework of Braun &amp; Mattia - is that it confirms that attractor dynamics could indeed be a mechanism by which hierarchical frontal and frontostriatal processing occurs. It matches not only with previous computational models (e.g., that of<a href=""> Reynolds et al.</a>, who demonstrate that short-term synaptic facilitation in the prefrontal cortex may be important for capturing some important task-switching phenomena) but also with detailed neurophysiological investigations which confirm that, <a href=";Cmd=ShowDetailView&amp;TermToSearch=16547512">indeed</a>, prefrontal neurons contain more short-term synaptic facilitation effects than observed in posterior sensory cortex.
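</p>

<p>The flavor of that facilitation mechanism can be captured with the standard Tsodyks-Markram synapse model (a textbook formulation; Itskov et al.'s actual implementation differs in detail, and the parameters below are illustrative): a release-probability variable u facilitates with each spike and decays slowly, while a resource variable x depletes and recovers:</p>

```python
import numpy as np

def tsodyks_markram(spike_times, U=0.1, tau_f=1.0, tau_d=0.15):
    """Short-term plasticity at one synapse (Tsodyks-Markram style).
    Returns the relative synaptic efficacy (u * x) at each spike."""
    u, x = U, 1.0            # release probability and available resources
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)  # resources recover
        u = u + U * (1.0 - u)    # each spike transiently boosts release prob.
        eff = u * x              # efficacy actually expressed by this spike
        x = x - eff              # ...which consumes resources
        efficacies.append(eff)
        last_t = t
    return np.array(efficacies)

# A regular 20 Hz train: with slow facilitation (tau_f) and a small baseline
# release probability (U), efficacy builds over the first spikes.
eff = tsodyks_markram(np.arange(0.0, 0.5, 0.05))
print(eff[0] < eff[1] < eff[2])   # → True
```

<p>Efficacy grows over the first spikes of the train - a transient synaptic "memory" of recent activity, which is the kind of weight change Itskov et al. show can stabilize an attractor against noise.</p> <p>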
To be clear, I don't think this work settles the debate about whether the short-term facilitation is necessarily weight-based in nature, or whether it might instead be due to some kind of thalamocortical positive feedback loop (indeed, recent data from <a href="">Freyer et al. in the Journal of Neuroscience</a> appear to be suggestive of the latter, with respect to the human alpha rhythm in the resting state, and a related paper implies <a href="">this phenomenon may be functionally interdependent with stimulus-evoked BOLD</a>). Certainly both are present and operative, and this work provides further justification for believing there are important functional consequences to these neuroanatomical features.</p> </div> <span><a title="View user profile." href="/author/developingintelligence">developinginte…</a></span> <span>Fri, 11/18/2011 - 10:18</span> Fri, 18 Nov 2011 15:18:44 +0000 developingintelligence 144077 at Architecture of the VLPFC and its Monkey/Human Mapping <span>Architecture of the VLPFC and its Monkey/Human Mapping</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>If you ever said to yourself, "I wonder whether the human mid- and posterior ventrolateral prefrontal cortex has a homologue in the monkey, and what features of its cytoarchitecture or subcortical connectivity may differentiate it from other regions of PFC" then this post is for you.</p> <p>Otherwise, move along.</p> <!--more--><p>The mid/posterior ventrolateral prefrontal cortex (pars opercularis and pars triangularis, or Brodmann's Areas 44 and 45) is <a href="">very clearly different, both anatomically and functionally, from its anterior sector</a> (which involves the pars orbitalis, or Brodmann's Area 47).
It is also probably (though not yet certainly) true that these sections of posterior ventrolateral prefrontal cortex are functionally distinct from the inferior frontal junction area (i.e., at the junction of the inferior frontal sulcus with the precentral sulcus, and therefore dorsal to the pVLPFC areas I will be focusing on; it is <a href="">probably most similar</a> to "Walker's Area 45" or the "frontal eye field area 45" in the monkey, which has <a href="">more dorsal sources of parietal input</a> than the more ventral pVLPFC area of interest here). </p> <p>This area is also differentiable from more anterior (BA 47/12) and more dorsal (46v) areas by <a href="">virtue of its connectivity with the superior temporal sulcus</a>. Classically the pVLPFC is sometimes referred to as "Broca's Area", and although it turns out that's somewhat of a misnomer, it is a (largely) fortunate one for our purposes: there's lots of detailed neuroanatomical research done on this area, both in the human and the primate. </p> <p>In this light, our first problem may seem surprising: does this area exist in the monkey?</p> <p>Yes is the short answer, although about <a href="">10 years of debate</a> surround that simple answer (as stated by <a href="">Gerbella et al.</a>, "[this sector] occupies a cortical sector of a highly controversial architectonic attribution, assigned to areas 46 and 12 by Walker (1940), to areas 8 ventral and 46 by Barbas and Pandya (1989) and mostly to area 12 by Preuss and Goldman-Rakic (1991) and Romanski (2004, 2007)"). </p> <p>The long answer is most easily grasped visually (image from <a href="">Leh, Petrides &amp; Strafella</a>):</p> <p><a href=""><img src="" alt="i-d5ee3713bbab05af377fd33e988a4614-jpg-thumb-350x204-70662.jpg" /></a></p> <p>As reviewed by <a href="">Petrides, Cadoret &amp; Mackey</a>, it has been argued that human BA44 has no homologue in the monkey.
Others have argued that it does, such that human BA 44 corresponds to monkey area F5, sometimes termed PMv (ventral premotor cortex - behind the arcuate sulcus), with the monkey homologue of human BA 45 lying just anterior to the arcuate. But on the basis of their own careful analysis, Petrides et al. suggest that BA 44 actually lies within the arcuate sulcus in the monkey, with ventral BA6 lying behind it; BA 45 is anterior to 44. We will assume Petrides et al.'s view to be the correct one for the remainder of this post.</p> <p>Now that we have identified where in the monkey these areas exist, it is worth covering the noteworthy differences between 44 and 45. And there is one - although <a href="">perhaps</a> only <a href="">one: in terms of the presence of layer IV neurons</a>, which are only "incipient" in 44, but well developed in area 45. Otherwise, these two areas <a href="">share many features</a>, including large pyramidal cells in deep layers III &amp; V, the lack of a clear border between layers II vs. III, and a low cell density in layer VI. Gerbella et al., on the basis of cyto-, myelo-, and chemo-architectural studies, suggest that the relevant region (45B, although they did not report data from deep within the arcuate) can be defined solely on the basis of its extremely large outstanding layer III pyramidal cells, which are comparatively larger and more dense than those in layer V.</p> <p>The functional significance of these laminar features may be better understood with respect to general principles of cortico-basal ganglia and cortico-thalamic projections (as described by <a href="">McFarland &amp; Haber, 2002</a>). Layer V is reciprocally/bidirectionally connected with thalamus and represents a kind of positive feedback loop for corticothalamic processing. (A subset of these layer V neurons with bidirectional thalamic connectivity also have axon collaterals that project to the striatum).
Layer I tends to be a recipient of more diverse corticothalamic projections, and thus represents a kind of "open loop" in the thalamocortical architecture. Finally, Layer III neurons tend to project preferentially to the striatum in prefrontal cortex (whereas in posterior cortex this layer represents a source of more local, cortico-cortical loops). MD subregions in particular may receive nonreciprocal projections from ACC and pre-SMA. </p> <p>These claims, however, are not very specific to our particular region of interest. So what about the connectivity of this pVLPFC region in particular?</p> <p><big><strong>Human BA 44/45, aka pars opercularis and triangularis, of the human VLPFC, and its cortical/thalamic/striatal interconnectivity.</strong></big> </p> <p>Striato-thalamic input to pVLPFC has been investigated by <a href="">Tanibuchi, Kitano &amp; Jinnai 2009</a> who studied <a href="">precisely the area Petrides et al. consider to be the monkey homologue of the human pVLPFC (check out recording site PSvc)</a>. Yet the connectivity here is somewhat surprising: this area is innervated by thalamic area MDmf/pc, which is itself innervated by the caudal area of the substantia nigra pars reticulata, as opposed to the pallidostriatal pathway that is commonly thought to be the dominant striatal pathway for innervating the thalamic areas that project to more dorsal regions of premotor and prefrontal cortex. This is in turn reflected in the cortical input to these pathways; as noted by <a href="">Kitano, Tanibuchi &amp; Jinnai 1998</a>, SNr neurons with multisynaptic inhibitory input from dorsal prefrontal cortex are three times fewer than those with multisynaptic inhibitory input from ventral prefrontal cortex; conversely, dorsal prefrontal input to striatum is conveyed mainly through GPi.
Similar results were observed by <a href="">Middleton &amp; Strick 2002</a>, and <a href="">Middleton &amp; Strick 2001</a>, who said "Labeled neurons were found mainly in GPi after virus injections into area 46d [dorsal PFC], whereas labeled neurons were found mainly in SNpr after virus injections into area 46v [ventral PFC]." </p> <p>Tanibuchi et al argued that "signals emanating from the PSv [primarily PSvc, or our pVLPFC region - CHCH], via inhibitory caudatonigral and nigrothalamic pathways, have a disinhibitory effect on thalamic neurons in the rostrolateral MD, wherefrom they may eventually return to the same cortical area as positive feedback signals." These authors further argued that this PMv/SNr circuit is "concerned with recognition of the relationship between the visual stimulus and the behavior." </p> <p>But aren't GPi and SNr just interchangeable (except that maybe SNr is more involved in "oculomotor behavior" and GPi in "skeletomotor behavior")? If that were true, the observation that pVLPFC may interact rather preferentially with SNr has little functional punch. Moreover, everyone seems to write about GPi and SNr as though they're interchangeable - separated by the internal capsule by some evolutionary mishap, and the SNr simply more involved in oculomotor behavior. With respect to that latter point, I'll quote from <a href="">Shin &amp;<br /> Sommer, 2009</a>:</p> <blockquote><p>"When we began our study, the direct pathway through GPi and the indirect pathways through GPe had not been ruled out as oculomotor circuits; to our knowledge they simply had not been studied (with one exception: Kato and Hikosaka 1995)."</p></blockquote> <p>In fact, SNr and GPi can be differentiated in a number of ways. 
As extensively described by <a href="">Romanelli, Esposito, Schaal and Heit, 2005</a>, the SNr does not receive the same highly topographic input as GPi does, and as such represents a major departure from the highly topographic organization of the rest of the basal ganglia. Indeed, the SNr has been argued to be far more integrative or associative. Here I might as well just quote from <a href="">Kaneda, Nambu, Tokuno &amp; Takada 2001</a>:</p> <blockquote><p>It has long been believed that the GPi and SNr belong to a single entity that is split rostrocaudally by the internal capsule (Parent 1986). In this view, the two structures are likely to play exactly the same role in the processing of information along the cortico-basal ganglia loop. However, in terms of the parallel versus convergent rules of information processing, the present work provides anatomical evidence that the mode of dealing with corticostriatal motor information from the MI and SMA through the striatopallidal and striatonigral projections is target-dependent, such that the parallel rule governs striatopallidal input distribution, whereas the convergent rule determines striatonigral input distribution. This strongly implies that the arrangement of the striatopallidal system closely reflects the organization of the corticostriatal system, while that of the striatonigral system does not. It has also been reported that the firing pattern of SNr neurons is less affected in parkinsonian monkeys than that of GPi neurons, suggesting their functional differences in motor behavior (Wichmann et al. 1999).</p></blockquote> <p>In other words, we can't just conflate GPi and SNr, with the exception of domain (skeletomotor vs. oculomotor). 
Moreover, the intrinsic organization of these structures is quite different (GPi maintains segregation [i.e., follows the "parallel" rule of Kaneda et al.] whereas SNr is more convergent), and the inputs to these regions are quite different (with GPi afferents originating from motor and dorsal prefrontal cortex, and SNr afferents originating from orbital and lateral prefrontal cortex - perhaps pVLPFC predominantly). </p> <p>As mentioned in the above Kaneda et al. quote, GPi is more strongly implicated in Parkinson's and movement disorders. In contrast, the role of the SNr is widely considered to be more attentional, associative or sensory in nature. For example, it is more often implicated in so-called "sensory gating" than "motor gating" of the kind commonly thought to characterize dorsal prefrontal cortex. Indeed, as compared to GPi, SNr has <a href="">an abundance of visual (but not merely oculomotor) responses and a relative paucity of reward-related responses</a>.</p> <p>Perhaps the most compelling demonstration of this difference in function is <a href="">Wichmann et al. 1999</a>, who showed that administration of the toxin MPTP (which kills dopaminergic cells in the substantia nigra, which are concentrated in the pars compacta segment) actually had less of an effect on substantia nigra pars reticulata firing than on GPi firing! This is surprising given that SNr neurons are thought to be modulated directly by the SNc neurons, and yet the effects are far more pronounced in the structure on the other side of the internal capsule, the GPi. </p> <p><strong><big>Summary: Cytoarchitecture and Connectivity of pVLPFC</big></strong></p> <p>pVLPFC is preferentially interconnected with the MDmf nucleus of the thalamus and contains large layer V neurons, which seem in large part to support direct corticothalamocortical "positive feedback" loops in prefrontal cortex.
pVLPFC also contains large layer III pyramidal cells which project, via caudo-nigro-thalamic projections, back to the MDmf, through the substantia nigra pars reticulata. This connectivity pattern is distinct from other areas of PFC, notably from the more dorsal sector with which pVLPFC is sometimes lumped, insofar as those more dorsal prefrontal regions may more strongly interact with the other major output nucleus of the basal ganglia - the internal segment of the globus pallidus. The functional significance of this distinction is not yet perfectly clear, but does not solely reflect specializations for oculomotor vs. skeletomotor behavior in the SNr and GPi respectively. Instead, it appears that the nature of information processing in the SNr is substantially more associative or convergent than the more segregated somatotopic/corticotopic organization that occurs in the GPi; it may also be more sensory (or, at least, visual) in nature than motoric. This claim is paralleled by a reduced involvement of the SNr in Parkinsonian phenomena relative to the GPi, and the SNr's comparatively greater involvement in phenomena like sensory gating and visual processing.</p> </div> <span><a title="View user profile." href="/author/developingintelligence">developinginte…</a></span> <span>Thu, 11/17/2011 - 06:43</span> Thu, 17 Nov 2011 11:43:18 +0000 developingintelligence 144076 at Modus Tollens, Modus Shmollens! When people commit a fallacy so absurd that it's only recently been given a name. <span>Modus Tollens, Modus Shmollens! When people commit a fallacy so absurd that it&#039;s only recently been given a name.</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Suppose - rather reasonably - that soups which taste like garlic have garlic in them. You observe two people eating soup; one of them says to the other, "There is no garlic in this soup."
Do you think it's likely that the soup tastes like garlic?</p> <p>If you said yes, then congratulations! You've just committed a logical fallacy (from the premise "if p then q" and "not q," you have inferred p) so absurd that it's only very recently been given a name. But don't feel bad - this absurd inference, known as <em>modus shmollens</em>, can actually be elicited from a majority of adult human subjects when the situations are just right.</p> <!--more--><p>One such situation was demonstrated by <a href=";cpsidt=18708280">Bonnefon &amp; Villejoubert in 2007</a>. They point out that, conversationally, human speakers are likely to make negative statements when they will correct the erroneous inference of a listener. That is, <strong>unless there is a good reason to believe (for example) that it might be snowing</strong>, there is little reason to state that it is <strong>not</strong> snowing. </p> <p>In this example, why might a speaker believe that it might be snowing? One straightforward possibility is that both the speaker and listener have access to some other information - information we might call "p" - that supports the inference that it is snowing - which we might in turn call "q". So, in a case where a speaker <em>does</em> bother to say that it is not snowing, or that a soup doesn't taste like garlic (i.e., "not q"), one might intuitively guess that p is in fact true. Indeed, why else would the speaker bother to negate q?</p> <p>Bonnefon &amp; Villejoubert gave 60 young adults a series of situations just like this, which varied in whether the conditional "if p then q" premise was explicit in the situation or merely implicit, and whether the categorical "not q" premise was framed as an utterance by a human speaker or merely a fact of the world. In the situation where both the conditionals were explicit and the categorical premises were utterances, 55% of undergraduates actually endorsed the modus shmollens inference with high confidence.
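</p>

<p>The contrast with valid inference is easy to check mechanically: an argument form is valid just in case its conclusion holds in every truth assignment that satisfies its premises. In this sketch, modus tollens passes and modus shmollens fails:</p>

```python
from itertools import product

def implies(p, q):
    return (not p) or q

def valid(premises, conclusion):
    """An argument form is valid iff the conclusion holds in every
    truth assignment that satisfies all the premises."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus tollens: from "if p then q" and "not q", infer "not p" - valid.
modus_tollens = valid([implies, lambda p, q: not q], lambda p, q: not p)

# Modus shmollens: from "if p then q" and "not q", infer "p" - invalid.
modus_shmollens = valid([implies, lambda p, q: not q], lambda p, q: p)

print(modus_tollens, modus_shmollens)   # → True False
```

<p>The only assignment satisfying both premises is p false, q false - which makes "not p" inescapable and "p" flatly wrong, which is what makes the human endorsement rates so striking.</p> <p>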
In a second experiment, the number was even higher - 75% of undergraduates endorsed the patently absurd modus shmollens inference.</p> <p>To their credit, Bonnefon &amp; Villejoubert do not tout this behavior as a new logical fallacy. Their view is much richer. They view their work as deriving from an infamous and often-criticized schism in psycholinguistics research, where "core" psycholinguistic phenomena are investigated independent of what are viewed as merely "pragmatic" phenomena which do not reflect a core language system. The obvious criticism of such an approach is that psycholinguistic theories which do not actually work in practice can be redefined so as to refer only to a small subset of situations where putatively "core" processes can be observed, and all other mere "pragmatic" phenomena swept under a rug. Bonnefon &amp; Villejoubert suggest that for such an approach to be viable, we must take those pragmatic phenomena seriously as well, and begin to derive novel, falsifiable predictions based on them. As such, their demonstration of the problematic modus shmollens inference represents not merely a surprising and counterintuitive addition to the list of logical fallacies regularly committed by humans, nor merely insight into the context dependence of such fallacies, but also represents a more comprehensive approach to psycholinguistic theorizing.</p> </div> <span><a title="View user profile." href="/author/developingintelligence">developinginte…</a></span> <span>Wed, 11/16/2011 - 04:14</span> Wed, 16 Nov 2011 09:14:56 +0000 developingintelligence 144075 at Greater Performance Improvements When Quick Responses Are Rewarded More Than Accuracy Itself. 
<span>Greater Performance Improvements When Quick Responses Are Rewarded More Than Accuracy Itself.</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Last month's Frontiers in Psychology contains a fascinating study by <a href="">Dambacher, Hübner, and Schlösser</a> in which the authors demonstrate that the promise of financial reward can actually reduce performance when rewards are given for high accuracy. Counterintuitively, performance (characterized as accuracy per unit time) is actually improved more by financial rewards for response speed in particular.</p> <!--more--><p>The authors demonstrated this surprising result using a flanker task. In Dambacher et al's "parity" version of the flanker, subjects had to determine whether the middle character in strings like "149" or "$6#" was even or odd. "149" is an incongruent stimulus; for trials of that type, the correct answer is typically produced more slowly than, say, cases like "$6#" (a neutral stimulus) - where there is no conflict arising from the characters that "flank" the middle character. Three critical manipulations were made to this relatively well-understood task: </p> <p>1) The task progressed such that subjects were instructed to respond within 650ms to each of the first 192 trials. On the next 192 trials, subjects had to respond within 525ms. And, on the final 192 trials, subjects had to respond within 450ms. Correct answers within the deadline were associated with a gain of 10 points, and incorrect answers within the deadline were associated with a loss of 10 points. From this, Dambacher et al could calculate "speed-accuracy tradeoffs" - an estimate of the extent to which speed and accuracy are balanced across time - and so-called "accuracy-referenced reaction times" (ARRTs), or the reaction times that would be expected to yield a given level of accuracy.
</p> <p>2) Subjects were told that responses which fell outside of the deadline were associated with either a loss of 20 points (the so-called "Deadline Punished" condition, or DlP) or no change in points (the so-called "Deadline Not Punished" condition, or DlNP). Keep in mind that DlNP is effectively incentivizing accuracy - a subject can maximize his or her gains merely by waiting until he/she is perfectly certain of a response. In contrast, the DlP condition primarily incentivizes response speed - any response within the deadline is far better than no response.</p> <p>3) The total accumulated points were either related to the amount of money subjects would receive at the end of the experiment (the "money" condition) or these points were merely symbolic of their performance, with no bearing on the compensation they'd receive (the "symbolic" condition).</p> <p>In describing the results, I will often refer to the "accuracy-referenced reaction times" simply as performance. (This is reasonable shorthand, given that ARRTs can be understood to reflect "accuracy per unit time", which is about as pure a measure of "performance" as you can get in psychology.) The results:</p> <p>Regardless of whether "misses" of the deadline were associated with a loss of points (i.e., the DlP condition) or no net change (i.e., the DlNP condition), performance was better for neutral than incongruent flankers. This is not surprising - there's clear additional difficulty in these incongruent trials, and so there's lower accuracy per unit time - less "bang for the buck," to use that bizarrely lewd colloquialism - in that condition.</p> <p>Second, regardless of whether misses of the deadline were associated with loss of points or no net change, performance was better with shorter deadlines. This effect might seem odd: there is a decreasing benefit to accuracy as time elapses (or alternately, "decreasing bang for your buck").
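To make the incentive structure of manipulations 1) and 2) concrete, here is a minimal scoring function using the point values from the text (the function name and framing are mine, not the authors'):

```python
def trial_payoff(in_time, correct, deadline_punished):
    """Points for one trial: +10 correct / -10 incorrect within the
    deadline; a miss costs 20 points under DlP and nothing under DlNP."""
    if not in_time:
        return -20 if deadline_punished else 0
    return 10 if correct else -10

# Under DlNP, withholding an uncertain response is free -- waiting until
# perfectly certain can never lose points:
print(trial_payoff(False, None, deadline_punished=False))   # 0
# Under DlP, a miss is the worst outcome; even a coin-flip guess has a
# higher expected value than missing the deadline:
guess_ev = 0.5 * trial_payoff(True, True, True) + 0.5 * trial_payoff(True, False, True)
print(guess_ev, "vs", trial_payoff(False, None, deadline_punished=True))  # 0.0 vs -20
```

This is why DlNP effectively pays for accuracy while DlP pays for committing a response at all.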
Of course, this is not terribly strange; since people are rarely accurate 100% of the time even in <em>unpaced</em> tasks, an effect of this kind is almost a certainty.</p> <p>Strangest of all, however, is how the DlP and DlNP conditions differ. Monetary rewards (as compared to merely symbolic ones) actually <strong>reduced</strong> performance when misses of the deadline were associated with no net change in points. In other words, the use of monetary incentives actually yielded reduced accuracy-per-unit-time <em>when subjects were incentivized solely to get things right</em>; symbolic "points" were better in that case. Conversely, the use of real incentives yielded greater accuracy-per-unit-time when subjects were incentivized to get things right <em>and</em> respond quickly - that is, when misses of the deadline were penalized, in addition to errors within the deadline.</p> <p>Why might performance deteriorate when you're incentivizing people to perform well at their own pace, and yet improve when you're incentivizing them to respond correctly within a deadline?</p> <p>One straightforward possibility is simply that you have to reward <em>both</em> accuracy <em>and</em> speed to see an accuracy-per-unit-time benefit from monetary incentives over symbolic ones. Yet a second experiment demonstrates this not to be the case: performance was still worse for monetary rewards when errors and deadline misses were punished, but with a greater loss for the former than the latter. Here Dambacher et al have encouraged people to both respond correctly <em>and</em> to do so quickly, and yet accuracy-per-unit-time is still reduced with real incentives!</p> <p>But they didn't stop there. In a third experiment, subjects were rewarded solely for correct responses within the deadline, and not punished for either errors or misses. Here performance <em>was </em>improved by monetary rewards: accuracy-per-unit-time was once again greater for monetary than for merely symbolic rewards.
This manipulation doesn't particularly incentivize accuracy - there is nothing more to be lost by an incorrect answer than by one that is too late. Rather, it primarily incentivizes committing a response within the deadline, on the chance that it might be correct.</p> <p>The results are slightly less confusing when we move out of the deceptively "black-and-white" realm of these logical considerations, and instead think in more psychological terms. If response speed is not particularly important to you - that is, it's not associated with a "loss", and we know (thanks to Kahneman &amp; Tversky) that losses are more salient than "wins" - then you may be inclined to "check" your planned response before committing it. You might also be inclined to engage in such response monitoring when errors are disproportionately punished but response deadlines aren't. </p> <p>If such response monitoring is in some sense "wasted effort" (either because correct trials undergo too much slowing in this response monitoring process, or because incorrect responses are simply unlikely to be identified as such), then you're going to see diminishing returns on accuracy across time - that is, less accuracy-per-unit-time, or less bang for your buck. Conversely, if response speed is important, or if errors aren't associated with a loss, then you can just abandon your tendency to engage in inefficient response monitoring processes. As a result, you get better accuracy per unit time, and more bang for your buck. </p> <p>Viewed from this perspective, it makes sense that incentivizing response speed leads to better "accuracy-per-unit-time" than incentivizing accuracy itself. But things get really interesting when we consider how these incentive effects relate to activations previously observed along the medial surface of the prefrontal cortex during tasks with incentive manipulations.
</p> <p>For example, Kouneiher et al observed dorsomedial activations in response to an incentive manipulation that could be viewed as disproportionately punishing errors (which, according to the findings above, should engage greater response checking or action monitoring). Indeed, a distinct literature implicates dorsomedial prefrontal cortex in processes of this kind, variously called "<a href="">action monitoring</a>," "<a href="">action outcome monitoring</a>", "<a href="">conflict monitoring</a>," or "<a href="">error-likelihood prediction</a>." While it has been recently argued (by <a href="">Grinband et al</a>; <a href="">see also</a>) that dorsomedial PFC is strongly responsive to time-on-task - that is, the longer a subject takes to respond, the stronger the activation in these areas - such observations are not necessarily incompatible with the idea that dorsomedial areas are subserving some kind of response "checking" process.</p> <p>Could Dambacher et al's task shed some light on this domain? One possibility is that dorsomedial PFC activation could track the "diminishing returns" on accuracy that one receives as a function of increasing reaction time, particularly in cases where deadline misses are not punished. If this kind of effect were observed, it might differentiate what we could call a "modal hypothesis" of dorsomedial function (something like response checking) from more recent ideas - both the "pure" time-on-task effects pointed out by <a href="">Grinband et al.</a>, and possibly also the "negative surprise" calculations that underlie <a href="">recent incarnations of the action outcome monitoring hypothesis by Alexander and Brown</a>. Alternatively, dorsomedial activation may be more related to response conflict per se, in which case one might expect less cingulate activity as the returns of time on task on accuracy diminish (e.g., if there is increasingly less conflict resolution occurring per additional unit of elapsed time).
Finally, it is possible that a rostrocaudal gradient in dorsomedial prefrontal activation could be observed in a task of this kind, although I think the form of these gradients is currently too underconstrained by the extant literature for clear hypothesizing (e.g., <a href="">Kouneiher et al</a> vs <a href="">Venkatraman et al</a>).</p> </div> <span><a title="View user profile." href="/author/developingintelligence">developinginte…</a></span> <span>Tue, 11/08/2011 - 05:22</span> Tue, 08 Nov 2011 10:22:32 +0000 developingintelligence 144074 at Novelty May Dynamically Rearrange The Prefrontal Hierarchy <span>Novelty May Dynamically Rearrange The Prefrontal Hierarchy</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Owing to the low signal-to-noise ratio of functional magnetic resonance imaging, it is difficult to get a good estimate of neural activity elicited by task novelty: by the time one has collected enough trials for a good estimate, the task is no longer novel! However, a recent J Neurosci paper from <a href="">Cole, Bagic, Kass &amp; Schneider</a> circumvents this problem through a clever design. And the design pays off: the results indicate that the widely-hypothesized anterior-to-posterior flow of information through prefrontal cortex may actually be reversed when unpracticed novel tasks need to be prepared and performed. This result could have profound implications for our understanding of what aspects of the prefrontal "division of labor" are dynamic based on abstract task features like novelty.</p> <!--more--><p>The study itself is a tour de force. Cole et al used a task where the subject's actual behavior is a combination of three independent factors: what kind of semantic judgment they will have to make to a pair of stimuli, what fingers they will ultimately use to respond, and what logic they will use in determining the precise stimulus-response mapping.
A specific example will help: subjects would first be instructed they needed to make a judgment about whether items are sweet or not (the semantic judgment), that they will respond with their left hand (the response demand), and that they will respond with the index finger if both items are congruent (i.e., both either sweet or not sweet) but with their middle finger if incongruent (this is the stimulus-response mapping rule). They would next be presented with a series of trials, each consisting of a pair of words; in this example, if they saw "apple" and "grape" the correct response would be to respond with the index finger. Subjects also got practice with other semantic judgments (whether items were green or not, loud or not, etc) other stimulus-response rules (whether one and only one item matches the feature of interest; whether the second item matches the feature of interest; or whether the second item does not match the feature of interest), and other response demands (with the right, as opposed to left hand). </p> <p>The bottom line here is that after practicing a few examples, Cole et al could use novel combinations of these demands to produce 60 completely novel tasks in the scanner - enough to allow a reliable estimate of the hemodynamic response to such novel tasks - and contrast that with the hemodynamic response to more well-practiced tasks built from the same basic demands.</p> <p>The results showed that the dorsolateral prefrontal cortex (DLPFC) was more active when subjects were being instructed on what novel task to perform than when being instructed on what more well-practiced task to perform. Conversely, an area more anterior to this (so-called "anterior prefrontal cortex" or APFC) was more active during this instruction phase for the well-practiced tasks, relative to the novel ones. 
Incredibly, this double dissociation <em>reversed</em> during the performance of the first trial of any given task, such that APFC was more active for the novel tasks than the well-practiced ones, but DLPFC more active for the well-practiced than the novel tasks. These results were then replicated in a second experiment, using a magnetic rather than hemodynamic measure of neural activity (via magnetoencephalography, or MEG).</p> <p>The use of MEG had an additional advantage; its superior temporal resolution enables a finer-grained estimate of how fluctuations in activity in these areas may mutually influence one another. Through two different forms of effective connectivity modeling (<a href="">Granger causality</a> and <a href="">phase slope index</a>, or PSI) Cole et al demonstrate that the causal influence is from DLPFC to APFC during the encoding and performance of a novel task. Practiced tasks, by contrast, were associated with a complete reversal of these effects, with APFC primarily influencing DLPFC activation during preparation and performance.</p> <p>These results are somewhat discrepant with some hypotheses regarding the operation of hierarchical systems capable of this kind of "dynamic reconfiguration." Consider the view of cortico-striatal loops as hierarchically arrayed, such that prefrontal areas support the active maintenance of information that is increasingly "abstract" (e.g., perhaps in terms of <a href="">policy abstraction</a>) as one moves anteriorly in PFC. A correspondingly hierarchical set of striatal areas may flexibly gate this information (see <a href="">here</a> for evidence in support of this view, and <a href="">here</a> for a model). 
These models could predict that DLPFC would become more active during the instruction phase of a novel task because that area will track the constituent parts of the upcoming task - its stimulus-side processing (the semantic judgment), its stimulus-response mapping (the response rule), and its response-side processing (which hand to use). Some of this information may then be "shuttled" to APFC, so that APFC can guide subsequent performance based on this relatively abstract response policy (i.e., together the rules specify how to respond, but not exactly what response should be made). The problem here is that Cole et al actually find primarily <em>bottom-up</em> influences from DLPFC to APFC even during <em>performance</em> of the task - precisely the time where the models would seem to predict that <em>top-down</em> influences should predominate.</p> <p>I should say that these models are extraordinarily complex, and it is difficult to predict what they will do without actually running them. It is therefore worth considering what kinds of processing within these models could give rise to the Cole et al results, before claiming that the models are really fundamentally in conflict with this study. </p> <p>At the same time, it is also worth being very clear about what Cole et al found. In only a fairly restricted set of cases did they really see an asymmetric interaction between APFC and DLPFC. During instruction of a practiced task, APFC had significantly more influence on DLPFC than the converse only at one time point during the instruction phase; the other two time points were associated with no significant differences in directionality (and one of these time points seems to go in the <em>opposite</em> direction). Similarly, during instruction of a novel task, DLPFC had significantly more influence on APFC than the converse only at one out of three time points, and again one of the other two appears to go in the opposite direction. 
Finally, during performance of these tasks, there was no significant asymmetry in the directionality for either novel or practiced tasks, and these effects were only reported for the first (of three total) trials. </p> <p>There are other important caveats here as well. Some of the univariate results are descriptive of comparatively tiny swaths of cortex - in some cases just a few dozen voxels. This is really a double-edged sword, and so it's not quite fair to criticize Cole et al on these grounds; had they observed these patterns in large areas of cortex it would appear to be rather nonspecific and uninformative, but now that they've found these patterns in a very restricted area, one can argue that their results do not capture general principles of frontal function. </p> <p>In summary, the results are intriguing, but their representativeness and consistency are worthy of consideration. In addition, they sit uncomfortably with some hierarchical models. In principle, these results could prove to be the key to hierarchical frontal lobe function, and prompt a revision of extant computational models. On the other hand, a few replications and extensions of these results would probably be an important first step.</p> </div> <span><a title="View user profile." href="/author/developingintelligence">developinginte…</a></span> <span>Mon, 09/12/2011 - 04:41</span> Mon, 12 Sep 2011 08:41:59 +0000 developingintelligence 144073 at (Possibly) Dissociable Prefrontal Effects of Target and Target Class Probability <span>(Possibly) Dissociable Prefrontal Effects of Target and Target Class Probability</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>How do we detect important items in our environment? This crucial capacity has received less attention than one might think, and a number of extremely basic issues remain to be explored.
For example, while it has long been known that target probability has profound effects on the recruitment of the prefrontal cortex (such that lower-probability targets are associated with greater recruitment of both dorsolateral and ventrolateral prefrontal cortex), it has been unclear whether this pattern arises due to the general probability of the class of "targets" or whether it's more stimulus-specific.</p> <p>An elegant new Neuroimage paper by <a href="">Hon, Ong, Tan &amp; Yang </a>addresses this question across two experiments. As it turns out, they observe a suggestive difference between the dorsal and ventral subregions of lateral frontal cortex, such that the former area appears to be sensitive to the probability of individual targets whereas the latter area abstracts across individual targets and responds more clearly only to the probability of target items in general.</p> <!--more--><p>In their first experiment, Hon et al presented 17 subjects with a series of letters, one after the other; subjects were asked to press a button in response to only two letters in particular (e.g., A and B) but to ignore all others. In some blocks of trials, these target letters appeared with 25% frequency; in another set of blocks, these targets appeared with 50% frequency. Block order was counterbalanced, and a simple "manipulation check" confirmed that, as expected, there was no differential hemodynamic activation between the two target letters within either block.</p> <p>What Hon et al did observe was an increased hemodynamic response in both dorsolateral and ventrolateral prefrontal cortex during the blocks where target letters occurred less frequently. While it's possible that this reflects a change in the hemodynamic responses to distractors rather than targets, and while no low-level baseline was used to check this possibility, both the authors and prior research suggest that the changes observed here are more specific to neural recruitment to targets.
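The distinction the paper turns on - the overall probability that any target appears vs. the conditional probability of a particular target, given that a target appears - can be made concrete with a hypothetical stream generator (the letters and probabilities below are illustrative, not Hon et al's exact design):

```python
import random

def make_stream(n, p_target, p_first_given_target,
                targets=("A", "B"), distractors="CDEFGH"):
    """Each position is a target with probability p_target; given a
    target, it is targets[0] with probability p_first_given_target."""
    stream = []
    for _ in range(n):
        if random.random() < p_target:
            is_first = random.random() < p_first_given_target
            stream.append(targets[0] if is_first else targets[1])
        else:
            stream.append(random.choice(distractors))
    return stream

random.seed(1)
# Target probability fixed at .25, but "A" is four times as likely as
# "B" given that a target appears at all.
s = make_stream(10_000, p_target=0.25, p_first_given_target=0.8)
print(s.count("A") / len(s), s.count("B") / len(s))  # roughly 0.20 and 0.05
```

Holding `p_target` constant while varying `p_first_given_target` across blocks manipulates individual-target probability without touching the probability of the target class as a whole.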
</p> <p>(Note: it's also possible the increased target frequency effectively led to an increased <em>sustained</em> recruitment of PFC, which would thus appear to reduce transient responses to targets when more frequent by increasing the hemodynamic response during distractors as well. Once again, a low-level baseline would have been useful here).</p> <p>Hon et al's second experiment, however, is where things get much more interesting. As in the first experiment, 19 subjects were asked to press a button in response to one of two target letters embedded within a sequential stream of other letters. Unlike the first experiment, the probability of a target (i.e., <i>either</i> letter) appearing was constant across all blocks of trials; these blocks differed only in whether one of the two target stimuli was more probable, <strong>given that a target would occur</strong>.</p> <p>In this case, dorsolateral prefrontal cortex showed a stronger response to the less frequent target letter. By contrast, ventrolateral prefrontal cortex showed no such effect. Unfortunately, the critical test of this regional difference was not reported (see <a href="">Nieuwenhuis</a>'s recent paper for more on this). The authors made this mistake a second time, too: while individual differences in dorsolateral and ventrolateral prefrontal cortex (dlPFC and vlPFC respectively) recruitment were significantly correlated in the first experiment, they were not significantly correlated in the second, and yet the authors didn't run the crucial test for a <strong>difference</strong> between these correlations.
</p> <p>In the end we have a paper that is suggestive of a difference between ventrolateral and dorsolateral prefrontal cortex with respect to the abstract class to which a particular stimulus/response event belongs (to which ventrolateral prefrontal cortex is most sensitive) or to the probability of an individual target stimulus (to which the dorsolateral prefrontal cortex is most sensitive). Even if the crucial tests for regional differences had been presented, there would still be some unaddressed alternative possibilities for what's going on here. </p> <p>One possibility is that ventrolateral prefrontal cortex is sensitive to the frequency of a particular response; since the response was always the same for either of the two targets, manipulations of their frequency would fail to affect VLPFC activation. Note this alternative runs contrary to recent <a href="">theorizing</a> that VLPFC may be more intimately related to stimulus- rather than response-related processing (in contrast to DLPFC, which is thought to be more related to response processing or stimulus-response mappings). </p> <p>Similarly, it is possible that VLPFC more coarsely distinguishes the identity of these stimuli than DLPFC, in which case it would be less capable of showing a difference owing to their differing frequency. Once again, this possibility is perhaps unlikely given that VLPFC is the seat of Broca's area, and thus is historically more intimately related to linguistic processing than DLPFC; if anything, the letter stimuli used here would be expected to be more finely coded in VL than DLPFC. 
(Notably, the responses of VL and DLPFC were numerically stronger on the right hemisphere; this would seem to suggest greater specialization for the right hemisphere in this kind of target detection task - consistent with much prior work on the role of a <a href="">right-hemispheric vigilance system</a> - given that left hemisphere regions are normally more responsive to stimuli like letters and words.)</p> <p>In fact, there are reasons to doubt that Hon et al observed a significant difference between VL and DLPFC in this experiment at all. Clearly, it is difficult to interpret a null effect; but it's even harder to interpret data when the appropriate statistical test wasn't even run in the first place. The current study appears also to contradict previous work (by <a href="">Casey et al</a>) that showed VL, but not DL prefrontal cortex was positively correlated with target frequency, although that previous work is also difficult to interpret (and was really focusing on a distinct subregion of VLPFC - Brodmann area 47 vs. 44/45 in Hon's work).</p> <p>In conclusion, Hon et al present suggestive though far from definitive evidence for a possible dissociation between dorsal and ventral lateral prefrontal cortex in the responsiveness to the frequency of individual target stimuli and/or specific stimulus-response mappings (for which DLPFC is most responsive) or to the frequency of the abstract class to which particular stimuli belong, independent of their individual features (for which VLPFC is more selectively responsive). There are alternative possibilities that seem to contradict recent theorizing, and indeed the results can be seen to contradict much earlier work; at the same time, many of the critical tests weren't run, so it's impossible to say whether these contradictions are real.</p> <p>My $0.02? I'd speculate that they're right about VLPFC - particularly right VLPFC - but I don't think there's yet any study with definitive proof of this idea.
Currently there are stronger computational and theoretical reasons to believe Hon et al's results about VLPFC - particularly with respect to concepts like policy and context/state abstraction - than there is empirical data. More on the computational/theoretical underpinnings in the next few posts...</p> </div> <span><a title="View user profile." href="/author/developingintelligence">developinginte…</a></span> <span>Mon, 09/12/2011 - 04:23</span> Mon, 12 Sep 2011 08:23:44 +0000 developingintelligence 144072 at The surprising cognitive abilities of crows <span>The surprising cognitive abilities of crows</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>A really excellent <strike>PBS</strike> CBC (thanks m5) documentary on the surprising cognitive abilities of crows:</p> <object width="512" height="328"> <param name="movie" value="" /><param name="flashvars" value="video=1621910826&amp;player=viral&amp;chapter=1" /><param name="allowFullScreen" value="true" /><param name="allowscriptaccess" value="always" /><param name="wmode" value="transparent" /><embed src="" flashvars="video=1621910826&amp;player=viral&amp;chapter=1" type="application/x-shockwave-flash" allowscriptaccess="always" wmode="transparent" allowfullscreen="true" width="512" height="328" bgcolor="#000000"></embed></object><p style="font-size:11px; font-family:Arial, Helvetica, sans-serif; color: #808080; margin-top: 5px; background: transparent; text-align: center; width: 512px;">Watch the <a style="text-decoration:none !important; font-weight:normal !important; height: 13px; color:#4eb2fe !important;" href="" target="_blank">full episode</a>. 
See more <a style="text-decoration:none !important; font-weight:normal !important; height: 13px; color:#4eb2fe !important;" href="" target="_blank">Nature.</a></p> <p>See also how crows might be trained to do something a little more lucrative:</p> <!--more--><!--copy and paste--><object width="446" height="326"><param name="movie" value="" /><param name="allowFullScreen" value="true" /><param name="allowScriptAccess" value="always" /><param name="wmode" value="transparent" /><param name="bgColor" value="#ffffff" /><param name="flashvars" value="vu=;su=;vw=432&amp;vh=240&amp;ap=0&amp;ti=261&amp;introDuration=15330&amp;adDuration=4000&amp;postAdDuration=830&amp;adKeys=talk=joshua_klein_on_the_intelligence_of_crows;year=2008;theme=animals_that_amaze;theme=tales_of_invention;theme=what_s_next_in_tech;theme=evolution_s_genius;event=TED2008;&amp;preAdTag=tconf.ted/embed;tile=1;sz=512x288;" /><embed src="" pluginspace="" type="application/x-shockwave-flash" wmode="transparent" bgcolor="#ffffff" width="446" height="326" allowfullscreen="true" allowscriptaccess="always" flashvars="vu=;su=;vw=432&amp;vh=240&amp;ap=0&amp;ti=261&amp;introDuration=15330&amp;adDuration=4000&amp;postAdDuration=830&amp;adKeys=talk=joshua_klein_on_the_intelligence_of_crows;year=2008;theme=animals_that_amaze;theme=tales_of_invention;theme=what_s_next_in_tech;theme=evolution_s_genius;event=TED2008;"></embed></object><p> This latter video tells a great story, but as <a href="">John Hawkes has pointed out</a>, there is perhaps some confusion about how successful Josh Klein's more lucrative experiments actually were.</p> </div> <span><a title="View user profile." 
href="/author/developingintelligence">developinginte…</a></span> <span>Wed, 10/27/2010 - 15:12</span> Wed, 27 Oct 2010 19:12:39 +0000 developingintelligence 144071 at Why The N2 Indexes Conflict Monitoring, Not Response Inhibition <span>Why The N2 Indexes Conflict Monitoring, Not Response Inhibition</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>Sometimes, ground-breaking studies don't get the attention they deserve - even from experts in the field. One great example of this is an elegant study by <a href="">Nieuwenhuis et al.</a> from CABN in 2003; in it, they conclusively demonstrate why a particular event-related potential - the negative-going frontocentral deflection at around 200ms following stimulus onset, aka the "N2" - reflects the detection of response conflict, and not the demand to inhibit a response.</p> <!--more--><p>This would seem to be a tough distinction to demonstrate - after all, the demand to inhibit something would be expected to strongly covary with response conflict. This is true of the canonical N2 paradigm, the Go/NoGo task, in which an N2 is elicited on infrequent "NoGo" trials in the context of far more-frequent trials that require a response ("Go" trials). Nieuwenhuis et al simply reversed the probabilities - that is, they made the "Go" trials infrequent and "NoGo" trials much more common. In this case, "Go" trials involve conflict between responding and the dominant behavior (which is the act of <strong>not responding!</strong>), but no response inhibition.</p> <p>The results demonstrated that the N2 was elicited by the infrequent trial type, regardless of whether it involved stopping a dominant response or merely initiating a response. This is consistent with the idea that the N2 reflects the detection of conflict, independent of whether a dominant response must be inhibited.
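The conflict-monitoring account can be caricatured with the energy-style measure used in computational models of the anterior cingulate, in which conflict is the product of the activations of incompatible response channels; all numbers below are purely illustrative, not fitted to any data:

```python
def conflict(act_go, act_withhold):
    """Conflict as the coactivation (product) of two incompatible response
    channels, in the spirit of conflict-monitoring models of the ACC."""
    return act_go * act_withhold

# Rare NoGo trial in a frequent-Go block: the prepotent "go" tendency
# remains partially active while "withhold" is instructed.
rare_nogo = conflict(0.6, 0.8)
# Rare Go trial in a frequent-NoGo block: the mirror image -- withholding
# is now prepotent and competes with the instructed "go" response.
rare_go = conflict(0.8, 0.6)
# Any frequent trial: the dominant channel faces little competition.
frequent = conflict(0.9, 0.1)
print(rare_nogo, rare_go, frequent)
```

By symmetry this measure predicts an N2 on whichever trial type is infrequent - exactly the pattern Nieuwenhuis et al report - without any appeal to inhibition.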
Moreover, source localization techniques identified that the neural generator of the N2 was in the anterior cingulate cortex, regardless of whether it was elicited by infrequent Go or infrequent NoGo trials. The anterior cingulate has been modeled computationally as a conflict monitor, which is conceptually consistent with these observations. This contrasts with other areas, primarily those in the lateral frontal cortex, which are currently thought to be more crucial for response inhibition.</p> <p>The authors did notice that infrequent NoGo trials elicited a larger N2 than infrequent Go trials - an asymmetry that might suggest the N2 reflects a combination of conflict monitoring and inhibition-related functions. To the contrary, the authors argue that the task instruction to respond as quickly as possible made conflict greater on the infrequent NoGo trials than on the infrequent Go trials. This interpretation seems all the more plausible given that the source of these two N2s was essentially identical, arguing against some kind of mechanistically distinct function for the two event-related potentials.</p> <p>Where the paper really shines is in a discussion of how darn sensible this conflict-monitoring account of the N2 really is. Just a few examples:</p> <p>1) If the N2 were responsible for conflict monitoring but not response inhibition, an enhanced N2 would be expected for infrequent trials that involve no overt behavior - and indeed, that's been <a href="">previously observed</a>.</p> <p>2) If the N2 were responsible for conflict monitoring but not response inhibition, then a reduced N2 should be observed on trials in which the previous trial was of the same type (repetition weakens the prepotency that generates conflict).
This was observed in the current study as well as in <a href="">one published previously</a>.</p> <p>3) If the N2 were responsible for response inhibition, it would be expected to have a source in the lateral prefrontal cortex, rather than the anterior cingulate (which has been associated primarily with evaluative functions in a <a href="">variety</a> of <a href="">previous</a> <a href="">research</a>).</p> <p>Case closed? Not quite, for unclear reasons. Many of those who cite this study in fact do so while arguing for an inhibitory explanation of the N2 - seemingly unaware that the study they've cited refutes their position. Nor is this a fluke study: the conclusion that the N2 is not specific to response inhibition has also been reached independently, including in a 2004 study by <a href="">Donkers &amp; van Boxtel</a> and a 2006 study by <a href="">Azizian et al.</a> Unless one redefines response inhibition to include all situations in which capacities like conflict monitoring might be necessary, including those where there is no response to inhibit (and what is response inhibition when there's no response to inhibit?)... it seems pretty inaccurate to consider the N2 as reflecting response inhibition.</p> </div> <span><a title="View user profile." href="/author/developingintelligence">developinginte…</a></span> <span>Fri, 09/24/2010 - 08:35</span> Fri, 24 Sep 2010 12:35:20 +0000 developingintelligence 144070 at Machines Learn How Brains Change <span>Machines Learn How Brains Change</span> <div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"><p>In last week's Science, <a href="">Dosenbach et al.</a> describe a set of sophisticated machine learning techniques they've used to predict age from the way that hemodynamics correlate both within and across various functional networks in the brain.
As described over at the <a href="">BungeLab Blog</a> and at <a href="">Neuroskeptic</a>, the classification is amazingly accurate, generalizes easily to two independent data sets with different acquisition parameters, and has some real potential for future use in the diagnosis of developmental disorders - made all the easier since the underlying resting-state functional connectivity data takes only about 5 minutes to acquire from a given subject. </p> <p>Somehow, their statistical techniques learned the characteristic features of functional change between the ages of 7 and 30 years. How exactly did they manage this?</p> <!--more--><p>First, they started with three data sets of resting-state <a href="">BOLD</a> activity. The first consisted of 238 resting-state scans, acquired on a 3T scanner, from 192 individuals between 7 and 30 years of age. The second comprised 195 scans from 183 subjects aged 7-31 years; each scan was a concatenation of "rest" blocks extracted from blocked fMRI designs, originally acquired on a 1.5T scanner with a different pulse sequence than the first dataset. The third consisted of 186 scans of 143 subjects aged 6-35 performing linguistic tasks, with task-related activity regressed out, using the same pulse sequence as the second dataset.</p> <p>All the data was transformed to a single atlas and sent through a standard artifact-removal pipeline; next, activity in each of 160 10-mm spherical ROIs was calculated for each image in each scan, with the ROIs determined by a series of five meta-analyses the authors undertook on data of their own (wow!).
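Since each scan reduces to the pairwise correlations among 160 ROI timeseries, the feature count per scan is 160 × 159 / 2 = 12,720. Here's a minimal numpy sketch of that reduction - synthetic timeseries stand in for the real BOLD data, and the number of volumes is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for one scan: 160 ROI timeseries (the study used mean BOLD
# signal from 160 10-mm spheres); 150 volumes is an arbitrary choice.
n_rois, n_vols = 160, 150
timeseries = rng.standard_normal((n_rois, n_vols))

# Correlate every ROI with every other, keep each pair once (upper
# triangle, diagonal excluded), and apply the Fisher z-transform.
r = np.corrcoef(timeseries)
iu = np.triu_indices(n_rois, k=1)
features = np.arctanh(r[iu])

print(features.shape)  # (12720,) -> one feature vector per scan
```

The Fisher z-transform (arctanh) is the standard way to make correlation coefficients roughly normally distributed before feeding them to downstream statistics.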
The full matrix of cross-correlations among the ROI timeseries was then calculated (yielding 12,720 correlations <em>for each scan</em> - one per pair of the 160 ROIs) and z-transformed.</p> <p>Next they took this massive correlation matrix and used a <a href="">support vector machine</a> (SVM with soft margin, including a radial basis function "kernel trick") to classify each timeseries as belonging to a child (7-11 years old) or an adult (24-30 years old), tested with leave-one-out cross-validation. They kept only the highest-ranked 200 features of the trained SVMs for further analyses (a process of recursive feature elimination didn't really help, so they just stuck with 200). Across all validations, the same set of 156 features consistently ended up in the top 200, and these were used for visualization of the feature weights. In this step they could classify adults vs. children at 91% accuracy.</p> <p>They next used support vector regression to predict, based on the retained 200 features, the age of the subject in the scanner. Predicted ages were converted into a "functional connectivity maturation index" which had a mean of 1.0 for ages 18 to 30 (we'll come back to this), and revealed beautiful curves you've no doubt seen elsewhere by this point:</p> <p><img src="" alt="i-045d6e5ac9e006d6a6e5cc2cf7d463ab-DosenbachCurve.jpg" /></p> <p>The best-fitting line here is actually either the <a href="">Pearl-Reed</a> (gray line - used in other contexts to model the growth of human populations in settings with limited resources) or the <a href="">Von Bertalanffy</a> (black line - used to model the growth of animals). The same basic effects were replicated on all three data sets.</p> <p>The rest of the paper is mostly dedicated to visualizing <strong>what</strong> exactly it was that the SVMs were basing their surprisingly accurate predictions on.
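The two-stage pipeline - an RBF-kernel soft-margin SVM for child/adult classification under leave-one-out cross-validation, then support vector regression on a reduced feature set - can be sketched with scikit-learn. This is a toy reconstruction on synthetic data, not the authors' code: the univariate F-test ranking (`SelectKBest`) stands in for their SVM-weight-based feature ranking, and all sample sizes and hyperparameters are invented:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif, f_regression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(1)

# Toy data: 60 "scans" x 12,720 Fisher-z connectivity features, with
# age weakly encoded in the first five features.
n_scans, n_features = 60, 12720
X = rng.standard_normal((n_scans, n_features))
age = rng.uniform(7, 30, n_scans)
X[:, :5] += 0.3 * age[:, None]

# Stage 1: soft-margin SVM with an RBF kernel, classifying child vs.
# adult, scored by leave-one-out cross-validation. Feature selection
# sits inside the pipeline so it is re-fit on every training fold.
is_adult = (age >= 18).astype(int)
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=200),
                    SVC(kernel="rbf", C=1.0))
pred = cross_val_predict(clf, X, is_adult, cv=LeaveOneOut())
accuracy = (pred == is_adult).mean()

# Stage 2: support vector regression on 200 retained features to
# predict age as a continuous variable.
reg = make_pipeline(StandardScaler(),
                    SelectKBest(f_regression, k=200),
                    SVR(kernel="rbf", C=1.0))
age_hat = cross_val_predict(reg, X, age, cv=LeaveOneOut())
```

Putting the feature selection inside the cross-validated pipeline matters: selecting features on the full dataset before splitting would leak test information into training and inflate the accuracy estimate.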
It turns out that twice as much of the predicted age-related variance was explained by functional connectivity that decreased with advancing age as by that which increased with age. Moreover, decreasing connectivity was more common among nearby regions, whereas increasing functional connectivity tended to occur among more far-flung regions (similar to the local-to-distributed shift <a href="">discussed previously</a>). Functional connections that increased with age were more aligned in the anterior-posterior dimension than those that decreased with age; the single most age-discriminative set of ROIs was the "cingulo-opercular" network (also <a href="">discussed previously</a>), and the most age-discriminative individual ROI was the right anterior prefrontal cortex. </p> <p>If all that wasn't complicated enough, here's a glimpse of the paper's money shot:</p> <p><img src="" alt="i-1fbbc1298fb8bfb211db1014c19fd132-DosenbachMoney.jpg" /></p> <p>Obviously, this is an incredibly impressive set of results with real-world value. But what are some of the potential pitfalls here?</p> <p>One is that the classification actually took place in higher-dimensional space (&gt;200 dimensions, as I understand it), meaning that the results are dependent on interactions of changes in functional connectivity among and within the 156 features visualized above. This kind of thing is not easily captured in the way the results have been visualized.</p> <p>A second thing to be wary of is the conversion of chronological age to the predicted brain maturity index. I'm not following why exactly this conversion was necessary, but I assume it was due to a fall-off in the classifier's accuracy for predicting the age of subjects who are, in reality, between the ages of 18 and 30. This likely indicates that the measure's sensitivity to age-related change in functional connectivity asymptotes around that time.
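That plateau is exactly what the fitted growth curves encode: the Von Bertalanffy form rises quickly in childhood and then flattens toward an asymptote. A small scipy sketch of fitting that curve to a synthetic maturation index - this uses the standard Von Bertalanffy parameterization, and all parameter values are invented, since the paper's exact fitting procedure isn't reproduced here:

```python
import numpy as np
from scipy.optimize import curve_fit

# Von Bertalanffy growth curve: y rises toward the asymptote y_inf
# at rate k; t0 is the nominal age at which y would equal zero.
def von_bertalanffy(t, y_inf, k, t0):
    return y_inf * (1.0 - np.exp(-k * (t - t0)))

rng = np.random.default_rng(2)
ages = rng.uniform(7, 30, 200)

# Synthetic "maturation index" that plateaus near 1.0 in adulthood,
# mimicking the mean-1.0-for-ages-18-30 normalization.
index = von_bertalanffy(ages, 1.0, 0.25, 4.0) + rng.normal(0, 0.05, 200)

(y_inf, k, t0), _ = curve_fit(von_bertalanffy, ages, index,
                              p0=[1.0, 0.2, 5.0])
```

Once the curve is near its asymptote, large changes in age produce tiny changes in the index - which is precisely why the predictions should get noisy for 18-to-30-year-olds.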
In other words, it's likely not capturing whatever "wisdom" a 30-year-old might have that differentiates them from an 18-year-old. </p> <p>(Assuming such a thing actually exists, it seems like it's not "in" the functional connectivity data. On the other hand, some of their data sets may have under-sampled the older part of the age distribution - perhaps wisdom just takes statistical mega-power to detect.)</p> <p>These caveats aside, it's really beautiful work, and I believe it will really help real people really soon (TM). That's far more than can be said about most of the work being done in this area, which is much more theoretical in nature.</p> </div> <span><a title="View user profile." href="/author/developingintelligence">developinginte…</a></span> <span>Wed, 09/15/2010 - 03:20</span> Wed, 15 Sep 2010 07:20:16 +0000 developingintelligence 144069 at