It could be argued that any single level of scientific analysis is at once too simple (since there are always important emergent phenomena at higher levels) and too complex (poorly understood phenomena inevitably lurk at lower levels). If I wanted to kick the sacred cow of science again, as I did yesterday, I’d suggest that parsimony can be a misleading principle here too: the data used to evaluate a theory may not include phenomena at other levels of analysis that are nonetheless relevant to it. So the simplest theory (i.e., the one with the fewest assumptions) that fits the available data is generally an oversimplification of the true state of reality, which is itself always more complex than the available data.
These dual shortcomings between the too-simple and the too-complex are very apparent in the cognitive neuroscience of higher-level cognition, which compounds these problems across multiple levels of analysis. On the other hand, when viewed from an integrative or reconstructionist perspective, findings at each level can usefully constrain hypotheses at intermediate levels. Below I describe these conflicting approaches in a sample domain of higher-level cognition – planning – and discuss ways that theories at multiple levels of analysis might mutually inform and constrain one another.
The Cognitive Neuroscience of Planning
The coordination of thought and behavior to achieve goals is a complex and heterogeneous process, involving intentions, foresight, strategy formation, cognitive branching, error-related processing, and other high-level constructs. However, a broad understanding of these constructs is still missing: they’ve been only narrowly operationalized.
For example, it is clear that planning involves the pursuit of a goal state, and goals are indeed a prominent feature of many theories applicable to planning. But while successful and unsuccessful plans often differ in terms of how well-formed or feasible the goals actually are (a plan cannot succeed if the goal is impossible or underdetermined), this level of complexity is completely absent from typical cognitive planning tasks, which tend to have predefined goal states (e.g., Tower of Hanoi).
Similarly, strategy-use is an important aspect of planning, but some laboratory tasks involve explicit strategy training, possibly because of a theoretical tradition that views differences in strategy-use as a “nuisance variable” rather than a topic of intrinsic interest. Finally, both strategy-use and goal-formation require foresight, a capacity rarely discussed in the human planning literature (a full-text PsycInfo search for both terms returns fewer than 20 peer-reviewed journal articles from the last 10 years).
Thus there are two central but opposing demands for the cognitive neuroscience of planning:
1) From a clinical perspective, these investigations are uselessly over-simplified: they fail to address precisely those capacities that are impaired by frontal damage, including the process of goal formation and the generation of problem-solving strategies.
2) In contrast, at the neuroscientific level of analysis, these cognitive constructs are hopelessly complex: theories that invoke them fail to specify whether and how such subcomponents of planning emerge from the brain.
This “double bind” between the overly complex and the overly simple might result from the opposing drives toward scientific reductionism on one hand and real-world applications on the other. This dilemma might be avoided with the complementary approach of reconstructionism, which seeks to reconstruct large-scale phenomena from simpler underlying components. Indeed, computational modeling (a central method of reconstructionism) has already provided rudimentary accounts of planning, such as that required in water maze tasks.
However, future work will need to establish whether similar models can account for more complex and sequential planning processes, including those involved in the formation of goals and “subgoaling” strategies that are so important for the clinical applications of cognitive neuroscience. For example, reinforcement learning (RL) could be viewed as a kind of “foresight mechanism” for predicting future rewards and identifying adaptive behaviors, but this requires some previous coupling between the current features of the environment and the delivery of reward. In contrast, the situations requiring plans are precisely those where reward delivery is contingent on the performance of a novel action sequence (otherwise, habit would suffice).
This poses two problems for RL accounts of planning: Q1) How does an organism recognize the opportunity for reward in a circumstance where that reward has never been delivered, either now or in the past? Q2) How does the organism identify the sequence of actions, goals, and subgoals required to trigger the delivery of reward?
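The first of these problems can be made concrete with a minimal sketch of tabular Q-learning on a hypothetical toy task (a five-state chain with reward only at the final state; the task, state space, and parameters here are illustrative assumptions, not drawn from the planning literature). Before any episode in which reward is actually delivered, every value estimate sits at its initialization, so the agent has no basis for “foreseeing” a reward it has never experienced – precisely the gap Q1 points to.

```python
import random

# Hypothetical toy task: a 5-state chain (states 0..4).
# Actions: 0 = move left, 1 = move right. Reward 1.0 only on reaching state 4.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA = 0.5, 0.9  # learning rate and discount (illustrative values)

def step(state, action):
    """Deterministic chain dynamics with reward at the goal state."""
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0)

def train(episodes, seed=0):
    """Tabular Q-learning under a random exploration policy."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            a = rng.randrange(2)
            s2, r = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# With zero episodes of experience, all Q-values remain at initialization:
# the agent cannot anticipate a reward it has never received (problem Q1).
Q_untrained = train(0)

# Only after episodes that include reward delivery does value propagate
# backward along the chain, yielding something like "foresight".
Q_trained = train(200)
```

The sketch illustrates why standard RL presupposes prior coupling between environmental features and reward: value only propagates backward from rewards that have already been experienced, which is exactly the coupling that novel planning situations lack.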
Obviously, a merely verbal psychological theory is unconvincing without a computational implementation, and even then such models can easily be derided for their “representation problem”: the charge that the hard work is actually done by the modeler in labeling the input and output units and in designing the training set.
A novel solution to this “representation problem” – and possibly a novel approach for reconstructionism in general – is to provide children with the same inputs and outputs that are used in the training of a particular network, but in the form of instruction, training, or toys. If children benefit from this in the same way a particular network does, this provides fascinating (and importantly, applicable) evidence that the so-called “representation problem” has been appropriately solved by that network.
In summary, the cognitive neuroscience of planning is perforated by explanatory gaps between the neurobiological, cognitive and clinical/behavioral levels of analysis. Two complementary reconstructionist approaches may help to provide an integrated theory of planning across these levels of analysis: first, new biologically-plausible models of planning should focus on sequential processes, like the origin of the impetus to form a plan as well as subsequent subgoaling processes; second, converging evidence for these models might be acquired in developmental work by “training” children in the same ways as computational models.
The case of “planning” in higher-level cognitive neuroscience is just one illustration of the difficulties related to the reduction of complex natural phenomena to simpler underlying components. Those individual components may fail to reflect emergent phenomena at higher levels, and may yet themselves emerge from multitudes of constituent elements or processes occurring at yet lower levels of analysis. So explanatory power and parsimony are at least partially in the eyes of the beholder, yielding yet another problem for using them to evaluate various theories.