Developing Intelligence

Originally posted on 12/16/2006:

The term “executive function” is frequently used but infrequently defined. In attempting to define executive functions experimentally, in terms of their relationship to age, reasoning, and perceptual speed, Timothy Salthouse reviewed the variety of verbal definitions given to the construct of “executive function.” Although these definitions differ in terminology and emphasis, they clearly address a similar concept:

“Executive functions cover a variety of skills that allow one to organize behavior in a purposeful, coordinated manner, and to reflect on or analyze the success of the strategies employed.” (from this book)

“Executive functions are those involved in complex cognitions, such as solving novel problems, modifying behavior in the light of new information, generating strategies or sequencing complex actions” (Elliott, 2003).

“Executive functions include processes such as goal selection, planning, monitoring, sequencing, and other supervisory processes which permit the individual to impose organization and structure upon his or her environment” (Foster, Black, Buck, & Bronskill, 1997, p. 117).

“The executive functions consist of those capacities that enable a person to engage successfully in independent, purposive, self-serving behavior” (Lezak, 1995).

Given such a wide variety of definitions, Salthouse notes that it is not surprising to see a correlation between executive function (EF) and intelligence (g). But as with any measure, its correlations depend on how it is measured, and executive function (due in part to its overly broad definitions) is measured in many different ways. In fact, a measure is often considered to index executive function simply because it has subjective “face validity.”

Salthouse argues that psychometric techniques for establishing validity – i.e., a detailed investigation of EF’s correlations with other measures – could remedy this sad state of affairs. If there are in fact distinct sources of variance underlying performance on complex tasks that are not accounted for by variation in age and “non-executive” processes (such as visual skill, speed of processing, etc.), then executive function may be a valid construct.

Specifically, if the tasks thought to measure EF have unique predictive value for a participant’s age, above and beyond the predictive value conveyed by non-EF measures, then EF appears to have good construct validity. Likewise, if the EF measures do not share variation with non-EF measures, then EF also appears to be a distinct construct. In Salthouse’s own words:

“The rationale was that if the target variables represent something different from the cognitive abilities included in the model, then the variables not only should have relatively weak relations to those abilities but also should have significant unique (direct) relations with an individual-difference variable such as age if they are reliably influenced by another construct, such as executive functioning, that is related to age.”
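To make this rationale concrete, here is a small simulation (not Salthouse’s actual analysis; all variable names and parameters are hypothetical) in which a putative EF score is nothing but a noisy copy of a non-executive measure. Adding it to a regression predicting age then yields essentially no incremental variance explained – exactly the pattern that would argue against EF as a distinct construct:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical simulated data: age drives a "non-executive" composite
# (e.g., processing speed), and the putative EF score is just that
# composite plus noise -- i.e., it has no unique age-related variance.
age = rng.uniform(20, 80, n)
speed = -0.5 * age + rng.normal(0, 5, n)   # non-EF measure, age-related
ef = 0.8 * speed + rng.normal(0, 5, n)     # EF score with no direct age link

def r_squared(y, *predictors):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(age, speed)       # non-EF predictor only
r2_full = r_squared(age, speed, ef)   # add the putative EF measure
delta_r2 = r2_full - r2_base          # EF's unique age-related variance

print(f"delta R^2 from adding EF: {delta_r2:.4f}")
```

By construction, `delta_r2` comes out near zero; a genuinely distinct, age-related EF construct would instead produce a reliably positive increment.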

In pursuit of this goal, Salthouse analyzes data from over 7,000 adults on a variety of tasks. The most important findings from the study are reported next, with the methodological details of this study included at the end of the post in italicized text.

The results showed that many putative measures of “executive function” are strongly related to reasoning ability (as measured through Raven’s Progressive Matrices) and processing speed (as measured through extremely simple tasks, such as replacing number words with digits). The vast majority of putative executive function measures shared no variance with age that was not also present in these simpler tasks. What does Salthouse conclude from these results?

Salthouse’s First Conclusion: These findings are “inconsistent with the interpretation that [Executive Function] represents a distinct construct” from the other non-executive measures.

This conclusion is problematic for several reasons. First of all, it is arguable that every task involves some amount of executive function, whether coordination, planning, strategizing, inhibition, or any of the other processes mentioned in the definitions of executive function reviewed at the beginning of this post. It is therefore unreasonable to expect to find a task that has no relationship with executive function (except, perhaps, simple reaction time measures, which were not included here).

Second, performance on any given task includes variance that is incidental to the construct thought to be measured by that task. Salthouse clearly appreciates this fact in the case of the non-executive measures (reasoning, processing speed, etc.), which is why latent variables were constructed from those measures. The same holds for measures of executive function, and yet no latent variables were constructed for them. The result is a decrement in statistical power to detect unique age-related variance in the executive function measures.

Salthouse’s Second Conclusion: EF measures may be of little use for the measurement of individual differences, since many nonexecutive tasks seem to measure the same things and have superior reliability/sensitivity.

In contrast to the conclusion above, this conclusion may indeed be accurate insofar as executive measures are often difficult to administer and have relatively low retest reliability. However, the issue of sensitivity – how well EF measures can detect things like brain damage, functional outcome, age, or other individual differences – is not clearly addressed by this paper (although this paper would suggest that EF measures may be more sensitive than many traditional psychometric tests). It is true that lower reliability may result in lower sensitivity, but this is not necessarily the case.

Tests of executive function may also have lower specificity than other tests – i.e., low performance on an EF test may reflect either poor executive functioning or impairments in the processes on which executive function acts. Although this is generally a disadvantage, the fact that a single test might detect deficits in a variety of processes may be advantageous in situations where cognitive function needs to be rapidly assessed (e.g., at the scene of an accident).

Salthouse’s second conclusion is reminiscent of Arthur Jensen’s claims in “Clocking the Mind” about the high correlations of simple and choice reaction time measures with IQ. Reaction time measures have several advantages compared to EF measures, in particular their relative simplicity and the fact that they do not rely on task novelty. However, it remains to be seen whether executive functions mediate the relationship of simple reaction time to IQ, or whether these represent distinct contributions to intelligence.

Related Posts:

Intelligence and Executive Function
Under The Rug: Executive Functioning
The Rules in the Brain
Localizing Executive Functions in Prefrontal Cortex
Clinical Neuropsychology and Executive Function
Factor Analyses of Executive Function Impairment Due to Brain Injury
Theory of Mind, Working Memory and Inhibition

Below are the construct variables used in Salthouse’s structural equation modeling analysis on a group of 300 adults, along with the tasks used to measure them:

  • Measures of Executive Functioning
    • Wisconsin Card Sorting Test
    • Letter, Category and Alternating Fluency Tasks
    • Connections Test (a variant of Trail Making)
  • Measures of Vocabulary
    • Synonym Vocab
    • Antonym Vocab
    • Wechsler Adult Intelligence Vocab Subscale
    • Woodcock-Johnson Psycho-Educational Battery–Revised Picture Vocab
  • Measures of Reasoning Ability
    • Raven’s Progressive Matrices Set II
    • Shipley Institute of Living Scale – Abstraction Subscale
    • Letter Sets
  • Measures of Spatial Processing
    • Spatial Relations
    • Paper Folding
    • Form Boards
  • Measures of Memory Performance
    • Free Recall
    • Paired Associates
    • Logical Memory
  • Measures of Processing Speed
    • Letter Comparison
    • Pattern Comparison
    • Digit Symbol

In the structural equation models, each of the non-executive construct variables was permitted to correlate with the others, as well as with age, while none of the underlying measures was permitted to correlate with anything except the construct it was purported to measure. Each executive measure was then examined to see (a) whether it shared unique variance with age relative to the non-executive constructs, and (b) whether it had significant loadings on the non-executive constructs. Every putative EF measure did load significantly on a non-executive construct, most often reasoning or perceptual speed.
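For concreteness, the model structure described above can be sketched in lavaan/semopy-style syntax. The construct and indicator names below are hypothetical shorthand for a subset of the measures listed earlier, and this is a simplified sketch rather than Salthouse’s exact specification:

```python
# A sketch (lavaan/semopy-style syntax) of the kind of model described
# above. Variable names are hypothetical stand-ins for the actual tasks.
model_desc = """
# Latent non-executive constructs, each defined only by its own indicators
Reasoning =~ raven + shipley_abstraction + letter_sets
Speed     =~ letter_comparison + pattern_comparison + digit_symbol
Memory    =~ free_recall + paired_associates + logical_memory

# Non-executive constructs correlate with one another and with age
Reasoning ~~ Speed + Memory + age
Speed     ~~ Memory + age
Memory    ~~ age

# The two key questions for a putative EF measure (here, card sorting):
# (a) does it retain a direct relation with age, and
# (b) does it load on the non-executive constructs?
wcst ~ age + Reasoning + Speed + Memory
"""
print(model_desc)
```

A specification like this could be fit with lavaan in R or the semopy package in Python; the finding reported above corresponds to the direct age path being nonsignificant while the loadings on reasoning or speed are substantial.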

This analysis was then repeated with a variety of different measures collected from over 7,000 adults. The factor loadings from this much larger sample were very similar to those in the smaller sample reported above. Leaving aside for the moment the particular patterns of correlations discovered, the general finding was that no putative measure of executive functioning showed unique variation with age that could not be predicted by variation in “nonexecutive” tasks (with the sole exception of “Anti Cue,” a type of anti-saccade task). The executive measures included in this larger analysis were Ruff Figural Fluency, Tower of Hanoi, Sort Recognition and Proverb Interpretation from the Delis-Kaplan Executive Function System, Trail Making, Stroop Color-Word, switch costs from a task involving either “odd/even” or “greater/less than 5” judgments, RT and accuracy from the “Reading with Distraction” task, Anti Cue, computation span, listening span, N-back, the Keeping Track task, Matrix Monitoring, and the Running Memory task.

Comments

  1. #1 zadeh79
    January 3, 2008

    Whatever’s going on in the PFC that regulates attention and inhibition has to be understood. Whether you consider these factors involved with ‘executive function’ is something else.