What if we got the organization of prefrontal cortex all wrong, maybe even backwards? That seems to be a conclusion one might draw from a 2010 NeuroImage paper by Yoshida, Funakoshi, & Ishii. But it would be the wrong conclusion: thanks to an ingenious mistake, Yoshida et al have apparently managed to "reverse" the functional organization of prefrontal cortex.
First things first: the task performed by subjects was very tricky. Yoshida et al asked subjects to sort three simultaneously presented stimuli, which differed from one another in three ways: number of vertices (triangle, square, pentagon), brightness (dark, medium, high), and size (small, medium, large). After subjects sorted these stimuli, the experimenters gave them another three stimuli to sort. Only then did the experimenters inform subjects whether they were right or wrong. Clearly, you could spend a long time searching for the magic conjunction of two sorting rules, and if you ever found it, you might not even realize it, because Yoshida et al's feedback was probabilistic and the rules were transient: feedback was invalid 10% of the time, and the experimenters switched the rule once you had sorted according to it three times in a row (even if you got 3 consecutive trials of [invalid] negative feedback!). I'm not sure I've ever encountered a more diabolical design.
Yoshida et al made things slightly easier for subjects, if not for readers: subjects were rewarded for sorting according to only one of six rules total, selected from the universe of possible rules given this stimulus set. Critically, these six rules were arranged hierarchically into two groups, such that EITHER the ascending vs. descending character of the sorts was relevant (both sorts ascending, both sorts descending, or one of each) OR a particular combination of features was relevant (one sort by brightness and the other by vertices; one sort by brightness and the other by size; or one sort by vertices and the other by size). And the order of the two sorts never mattered.
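To make that rule space concrete, here's a minimal sketch in Python of how the six rules and the probabilistic feedback might be encoded. This is my own reconstruction for illustration, not the authors' code; the names (RULES, feedback, and so on) and the assumption that the two sorts always use two different features are mine.

```python
import random
from itertools import combinations, product

FEATURES = ("vertices", "brightness", "size")
DIRECTIONS = ("asc", "desc")

# One trial's choice: two sorts on two *different* features.
# The order of the two sorts never matters, so a frozenset works.
ALL_CHOICES = [
    frozenset([(f1, d1), (f2, d2)])
    for f1, f2 in combinations(FEATURES, 2)
    for d1, d2 in product(DIRECTIONS, repeat=2)
]  # 3 feature pairs x 4 direction combos = 12 distinct choices

def directions_of(choice):
    return sorted(d for _, d in choice)

def features_of(choice):
    return frozenset(f for f, _ in choice)

# The six rules, in their two hierarchical groups: "order" rules
# care only about directions; "feature" rules only about features.
RULES = {
    "both-ascending":      lambda c: directions_of(c) == ["asc", "asc"],
    "both-descending":     lambda c: directions_of(c) == ["desc", "desc"],
    "one-of-each":         lambda c: directions_of(c) == ["asc", "desc"],
    "brightness+vertices": lambda c: features_of(c) == {"brightness", "vertices"},
    "brightness+size":     lambda c: features_of(c) == {"brightness", "size"},
    "vertices+size":       lambda c: features_of(c) == {"vertices", "size"},
}

def feedback(choice, active_rule, invalid_rate=0.10):
    """Right/wrong feedback, inverted on 10% of trials."""
    correct = RULES[active_rule](choice)
    return (not correct) if random.random() < invalid_rate else correct
```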
This can get baffling, so a concrete example will help communicate what Yoshida et al were looking for. Ignoring the 10% of trials where feedback was invalid, suppose you're told that you were wrong on your first trial, where you sorted by brightness (ascending) & vertices (descending). You can't know whether you were wrong because the way you sorted the first three was right and the way you sorted the second three was wrong, or whether it was the other way around. So what do you do? You might think that the current rule is defined by features, so you'd next try brightness (ascending) & size (descending). (This is called a lower-order rule switch, because you're behaving as if brightness is partially right but you should have sorted by size instead of vertices.) Unfortunately, if that sort turns out to be wrong too, you've got one more feature combination to try: vertices & size. But what if that's wrong too?
At this point, if you understand the hierarchical rules, you should give up on sorting by features, because you've now logically excluded every feature rule in the experiment. (Thus you'd do what's called a meta-rule switch.) If you can remember that you've only tried one-ascending-and-one-descending sorts so far, you have two more rules to try: sorting ascending twice (on any two features), and then descending twice. And if you still haven't found the answer, you have to start all over again: you must have gotten invalid feedback somewhere along the way.
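Continuing the sketch above (and reusing ALL_CHOICES, RULES, and feedback from it), a strictly hierarchical searcher with perfect memory might look like this. Again, this is my reconstruction of the kind of serial search the authors assume, not their code:

```python
FEATURE_RULES = ["brightness+vertices", "brightness+size", "vertices+size"]
ORDER_RULES = ["both-ascending", "both-descending", "one-of-each"]

def choice_satisfying(rule_name):
    """Any one choice consistent with the hypothesized rule."""
    return next(c for c in ALL_CHOICES if RULES[rule_name](c))

def hierarchical_search(active_rule, meta_order=(FEATURE_RULES, ORDER_RULES)):
    """Serial search with perfect memory and (here) fully valid feedback.

    Trying the next hypothesis within a group is a lower-order switch;
    abandoning one group for the other is a meta-rule switch. If all
    six hypotheses fail, some feedback must have been invalid.
    """
    history = []
    for group in meta_order:
        for hypothesis in group:  # lower-order switches within a group
            history.append(hypothesis)
            if feedback(choice_satisfying(hypothesis), active_rule,
                        invalid_rate=0.0):
                # Note: a choice can be rewarded for the "wrong" reason,
                # e.g. a feature guess whose directions happen to match
                # the active order rule.
                return history
        # exhausted this group: meta-rule switch to the next
    return history  # all six failed: restart the search
```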
The data are analyzed as though this "hierarchical rule search" process is what subjects do to learn the task; we'll get into that later. But it turns out there's an optimal strategy that requires no hierarchical rule search at all, and doesn't require the assumption of perfect memory: vary both the features you attend to and the directions you sort them in simultaneously. Since only one or the other is relevant at any given time, manipulating both lets you "cover twice the ground in half the time." In fact, with valid feedback, only three sorts are necessary to find a rewarded way of sorting: brightness-ascending & vertices-descending; brightness-ascending & size-ascending; vertices-descending & size-descending.
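This claim is easy to verify against the rule definitions in the earlier sketch (reusing RULES from it): each of the six rules rewards exactly one of these three probes, so under valid feedback at most three sorts land you on a rewarded sort.

```python
PROBES = [
    frozenset([("brightness", "asc"), ("vertices", "desc")]),
    frozenset([("brightness", "asc"), ("size", "asc")]),
    frozenset([("vertices", "desc"), ("size", "desc")]),
]

for name, rule in RULES.items():
    rewarded = [i for i, probe in enumerate(PROBES) if rule(probe)]
    assert len(rewarded) == 1  # each rule rewards exactly one probe
    print(f"{name}: rewarded by probe {rewarded[0]}")
```

Note that each probe is rewarded by two different rules (one feature rule and one order rule), so this strategy finds a rewarded sort without ever identifying which rule produced the reward. Keep that ambiguity in mind for what follows.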
One might assume that with enough training and familiarity with the task, subjects would adopt this simple, optimal strategy. Indeed, Yoshida et al claim that subjects were trained to perform well in a separate session before the fMRI scanning. But how can we tell which strategy subjects adopted: the optimal one, or the hierarchical method with its meta-rule and lower-order switches?
The authors inferred these "latent states" using a Bayesian model with a number of implausible but interesting assumptions (e.g., infinite memory and strictly serial search). They are interesting because, as far as I can tell, they actually preclude the model from discovering the optimal strategy I describe above! That is, the configuration of assumptions effectively bakes in the claim that subjects search hierarchically. It's interesting to speculate on how this might affect the conclusions of the work: if subjects were testing both rules simultaneously, what would the model say they were doing? I have an answer, but let's see the results first.
The model is used to maximize the marginal-likelihood fit of three parameters to behavior: the subject's estimate of the reliability of feedback, the subject's estimate of the probability that a meta-rule switch would occur given positive feedback, and the probability that a lower-order rule switch would occur given positive feedback. Through the magic of Bayes' rule, the model reproduces 81% of the trial-by-trial choices made by subjects. (Keep in mind that this measure of model fit reflects explicit and automated parameter tuning; further, it reflects fit on the training set, much like traditional statistics and very unlike typical neural network model fits.)
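Schematically, that fitting step amounts to searching over the three parameters for the values that maximize the likelihood of each subject's observed choices. The sketch below is purely illustrative and is not the authors' model: `choice_probability` is a hypothetical stand-in for their latent-state machinery, and the grid ranges are my own assumptions.

```python
import math

def fit_subject(choices, feedbacks, choice_probability, grid_size=11):
    """Grid-search three free parameters for the best log-likelihood.

    `choice_probability(t, choices, feedbacks, reliability, p_meta,
    p_lower)` stands in for the latent-state model: it must return a
    positive probability of the choice the subject actually made on
    trial t, given the trial history and the three parameters.
    """
    def grid(lo, hi):
        return [lo + (hi - lo) * i / (grid_size - 1) for i in range(grid_size)]

    best_ll, best_params = -math.inf, None
    for reliability in grid(0.5, 1.0):      # believed feedback validity
        for p_meta in grid(0.0, 0.5):       # believed P(meta-rule switch)
            for p_lower in grid(0.0, 0.5):  # believed P(lower-order switch)
                ll = sum(
                    math.log(choice_probability(t, choices, feedbacks,
                                                reliability, p_meta, p_lower))
                    for t in range(len(choices))
                )
                if ll > best_ll:
                    best_ll, best_params = ll, (reliability, p_meta, p_lower)
    return best_params, best_ll
```

A model tuned this way and then scored on the very same trials is exactly the kind of fit the 81% figure reflects.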
If the model's assumptions are valid, it can tell us exactly what hypothesis is being tested on each trial: a meta-rule switch or a lower-order rule switch. So Yoshida et al plop these trial-by-trial, subject-by-subject estimates into an fMRI analysis. Lower-order rule switches activated right frontopolar cortex, ACC, and insula, whereas meta-rule switches activated left frontopolar cortex, ACC, and left DLPFC. The only difference between lower-order and meta-rule switches emerged in right BA 10, which was more strongly activated for lower-order than for meta-rule switches.
Wait, what? BA 10? So ... doesn't this conflict with just about everything we know about hierarchical tasks?
Indeed, this anterior area, which should be activated by high-level hierarchical processing, is substantially less active for meta-rule switches than for all other conditions. Activity is lowest approximately when the hemodynamic response should peak (around -2% signal change at about 4 seconds in). And it's not just a fluke; Yoshida et al observe other signatures we'd expect of lower-level hierarchical processing: more posterior areas are more active for meta-rule than for lower-order rule switches (including PPC [BA 7], posterior DLPFC [BA 9/46], and superior IFG [BA 45]). And none of this depended on negative vs. positive feedback. So what on earth is going on here?
This is where we come back to the optimal strategy that their model apparently can't discover. Say you're performing the Yoshida task; given your learning over hundreds of trials, you've begun to suspect that simultaneously varying both dimensions is the best strategy. So you do this most of the time, until perhaps, after lots of negative feedback, you decide you can't reliably track both dimensions at once and resort to a more methodical strategy of varying only one dimension at a time. By this reasonable (though admittedly introspective) account, simultaneously varying both dimensions should be pretty common, since it leads to positive feedback relatively rapidly; resorting to a single dimension should therefore be rare. The model predicts exactly the opposite: it estimates that subjects believe a lower-order rule switch is 5 times as probable as a meta-rule switch, and that subjects are more likely to perform a meta-rule switch than a lower-order switch after lots of negative feedback.
Thus, it seems like the model is reliably miscategorizing a relatively advanced strategy (simultaneous variation along both dimensions) as lower-order rule switching, and vice versa, which would produce exactly the "reversed" results observed via fMRI. Any Bayesians want to take a crack at determining whether their model would indeed categorize these strategies at levels *below* chance, given the existence of a third, unmodeled strategy?
Worst ever description of a psychological experiment and the significance of its results.
Pfft. I think this experiment's a toughie... and you're just not up for it :)
It's hard to describe how an ingenious mistake leads to the complete reversal of expected results. But if you can do better, I'm sure many readers would appreciate a "tl;dr".