Decisions can be hard: the conflict in any decision increases when option A is not much better than option B, or when option A is newly worse than option B. And then there are just bad decisions, perhaps hard only in retrospect. As illustrated by a 2009 J Neurosci article from Mitchell, Luo, Avny et al., dorsal areas of the prefrontal cortex may help guide us through tough decisions, whereas a ventrolateral prefrontal area may alert us only after a bad decision has been made.
To show this, they administered a reinforcement learning task to subjects undergoing functional magnetic resonance imaging. On every trial, subjects saw a pair of fractals; after selecting one of the two, they were told how much money that choice had won them. There were 4 distinct pairs: two of the pairs underwent a reduction in reward differential across the experiment (e.g., fractals A and B were initially worth 95 and 5 points respectively, but only 55 and 45 points later), whereas the other two underwent an increase in reward differential (from 55 vs 45 to 95 vs 5). In the final block of trials, one of the pairs undergoing a reduction in reward differential reversed (the fractal that had been worth 55 points was now worth 45, and vice versa), as did one of the pairs undergoing an increase (the one that had been worth 95 was now worth 5, and vice versa).
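To keep the four conditions straight, here's a minimal sketch of the reward schedule as I understand it. The condition names and the division into "early"/"late"/"final" phases are my own shorthand, and the exact block counts are not specified here; only the point values come from the paper as described above.

```python
# Hypothetical reconstruction of the four pair types (points for fractal A, fractal B).
# Phase labels ("early", "late", "final") and condition names are my shorthand,
# not the authors' terminology.
schedule = {
    "decreasing":          {"early": (95, 5),  "late": (55, 45), "final": (55, 45)},
    "decreasing_reversed": {"early": (95, 5),  "late": (55, 45), "final": (45, 55)},
    "increasing":          {"early": (55, 45), "late": (95, 5),  "final": (95, 5)},
    "increasing_reversed": {"early": (55, 45), "late": (95, 5),  "final": (5, 95)},
}
```

The key structural point is that the two "reversed" pairs swap their late-phase values in the final block, while the other two pairs stay put.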
Behavior in this task showed what you'd expect: subjects made lots of errors in the first block (when they had no idea which fractal to choose); across subsequent blocks, errors increased for the pairs whose reward differential decreased (subjects perhaps became less certain of the right answer) and decreased for the pairs whose reward differential increased (subjects became more certain of the right answer). Behavior was a little more interesting when half the pairs reversed: subjects actually made more errors when the high reward differential pair reversed, relative to the high-differential pair that didn't, with a trend in the opposite direction for the low reward differential pairs. It's as though subjects who knew a pair's reward differential was increasing across blocks gained confidence in their choice, and needed several errors before they could overcome it.
dmPFC and dlPFC activation were negatively correlated with the changes in reward differential, but IFG activation wasn't (even at a reduced threshold...). Only dmPFC and dlPFC showed increased activity when large reward differentials reversed and responses were correct, relative to when small differentials reversed and responses were correct. In contrast, all three regions showed increased activation during reversal errors relative to selections that were correct under the reversed rules. A conjunction analysis showed that the same regions of dlPFC and dmPFC were responsive to both reversal errors and changes in reward differential.
The authors suggest that dmPFC and dlPFC may act as guides in the process of deciding among conflicting options, even when the optimal decision hasn't changed, but certainly when it does. In contrast, the IFG acts more like the kind of backseat driver we all hate: it tells us we responded suboptimally, but gives little warning leading up to that point.
Personally, I think the IFG's role in this task is a little different from what I outlined above. But I just submitted a paper on that - it took 3 years to produce, and is responsible for most of the intervening silence on this blog. When it gets accepted, I'll have a series of posts describing the more helpful and proactive role the IFG probably plays in guiding behavior.