The intelligence test is badly named. We should really be talking about intelligence tests, in the plural, with the IQ test as merely one of many measures used to assess our innate mental skills.
Unfortunately, despite the best efforts of Howard Gardner, Robert Sternberg and others, the IQ test remains the singular test of individual cognitive ability. The mysterious entity that it measures – g, for general intelligence factor – is still seen as the dominant variable in determining the intellectual performance of our brain. (The term was coined in 1904 by the psychometrician Charles Spearman, who noticed that the grades of young kids were correlated across seemingly unrelated subjects.) The first thing to say about g is that it’s an incredibly robust statistical phenomenon: the same person will get a similar score on an IQ test at ages 12, 20 and 50. Furthermore, his score will correlate nicely with his academic performance, at least in certain subjects. For instance, a 2007 study by psychologists at the University of Edinburgh found that general intelligence accounted for 58.6% of the individual variance in math performance, 48% of the variance in English, and 18.1% of the variance in art and design.
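For readers curious about what “variance accounted for” means in practice: it’s the squared correlation (r²) between two measures. Here’s a toy sketch in Python using invented numbers – not the Edinburgh data – just to show the arithmetic:

```python
import numpy as np

# Toy illustration (made-up numbers, NOT the Edinburgh study's data):
# "variance accounted for" is the squared Pearson correlation (r^2)
# between a general-intelligence score and an academic outcome.
rng = np.random.default_rng(0)

g = rng.normal(100, 15, size=500)        # simulated IQ-like scores
noise = rng.normal(0, 10, size=500)      # everything g doesn't capture
math_grade = 0.6 * (g - 100) + noise     # grades partly driven by g

r = np.corrcoef(g, math_grade)[0, 1]     # Pearson correlation
variance_explained = r ** 2              # fraction of grade variance
                                         # "accounted for" by g
print(f"r = {r:.2f}, variance explained = {variance_explained:.1%}")
```

Note that even a strong correlation leaves a large slice of variance unexplained, which is the pattern the Edinburgh numbers show.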
Of course, that still leaves a lot of variance unaccounted for, even in those academic subjects, like math, that are supposed to depend on the very mental skills measured by IQ tests. This helps explain why Lewis Terman, the inventor of the Stanford-Binet IQ test, eventually became frustrated with his measurement. Terman spent decades following a large sample of “gifted” students, searching for evidence that his test of intelligence was linked to real-world success. While the most accomplished men did have slightly higher scores, Terman eventually concluded that other factors played an even more important role. He argued that one of the most fundamental tasks of modern psychology was to figure out why general intelligence is not a more important part of achievement: “Why this is so, and what circumstances affect the fruition of human talent, are questions of such transcendent importance that they should be investigated by every method that promises the slightest reduction of our present ignorance.”
In order to understand the limitations of general intelligence, at least as presently defined, it’s important to delve into one of the great themes of modern psychology, which is the essential role of the unconscious. While Freud associated the unconscious with the unspeakable urges of the id, we now know that our mental underworld is actually a remarkable information processing device, which helps us make sense of reality.
This has led to the dual process model of cognition, in which the mind is divided into two general modes. There is Type 1 thinking, which is largely unconscious, automatic, contextual, emotional and speedy; it turns out that most of our behavior is shaped by these inarticulate thoughts. (Consider, for instance, what happens when you brake for a yellow light, or order a dish on a menu as soon as you see it, or have an “intuition” about how to approach a problem.) And then there is Type 2 thinking, which is deliberate, explicit, effortful and intentional. (Imagine an amateur chess player, contemplating the implications of each potential move.) Needless to say, intelligence tests excel at measuring Type 2 thought processes, which is why the standard IQ test largely relies on abstract puzzles and math problems, and correlates with working memory performance.
The end result is a growing contradiction between how we define intelligence – it’s all about explicit thought and g – and how we conceptualize cognition, which is inextricably bound up with Type 1 processes. (In other words, we currently measure intelligence by excluding the vast majority of the information processing taking place inside our head.)
Furthermore, this obsession with individual variation as measured by g has meant there’s been virtually no investigation of individual variation when it comes to the output of the unconscious, or the speed/efficiency of Type 1 thinking. We’ve assumed that the subterranean brain – this primal, Pleistocene supercomputer – is virtually uniform and universal, and runs the same stupid software programs in everyone. Here’s a sample excerpt from a recent review on Type 1 thinking: “Continuous individual differences [in unconscious mental processes] are few. The individual differences that do exist largely reflect damage to cognitive modules that result in very discontinuous cognitive dysfunction such as autism or the agnosias and alexias.” In other words, the variance that matters exists in Type 2 thinking.
In recent years, however, this assumption has begun to fall apart. There’s a growing body of evidence that reliable differences exist in Type 1 thinking, and that these differences have consequences. This helps explain why even the most mundane features of Type 1 thinking – such as serial reaction time – significantly correlate with math and verbal scores on the ACT. Other studies have found that performance on a variety of implicit learning tasks – the kind of learning that takes place in Type 1 – was significantly associated with academic performance, even when “psychometric intelligence,” or g, was controlled for. In other words, not every unconscious works the same way.
This view of Type 1 thinking as an individual “ability” with meaningful individual differences is the subject of an important new study, “Implicit learning as an ability,” in Cognition led by Scott Barry Kaufman, a psychologist at NYU. The scientists began by measuring implicit learning performance in 153 adolescent students. Sure enough, they found reliable differences between subjects, so that some students were consistently better at “automatically and implicitly detecting complex and noisy regularities in the environment”.
Here’s where the data gets really interesting: These individual differences in unconscious processing correlated with academic performance on a wide range of subjects, from foreign language to math. In other words, students who did better on the seemingly mindless implicit learning task were also better at conjugating French verbs, even when controlling for the effect of “psychometric intelligence”. This clearly demonstrates that much of our intellectual variation has nothing to do with the intellectual skills we measure and valorize. Instead, our intelligence is deeply influenced by all sorts of subliminal talents that we can’t control, influence or directly access. Here’s the conclusion of the Kaufman paper:
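What does “controlling for psychometric intelligence” actually involve? One standard tool is the partial correlation, which measures the association between two variables after the linear effect of a third has been removed. Here’s a minimal sketch in Python with simulated data – the numbers are invented, not the Kaufman study’s:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear
    effect of z from both (i.e., 'controlling for' z)."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

# Toy data (invented, NOT the study's measurements):
rng = np.random.default_rng(1)
n = 200
g = rng.normal(size=n)              # psychometric intelligence
implicit = rng.normal(size=n)       # implicit-learning ability,
                                    # simulated as independent of g
grades = 0.5 * g + 0.4 * implicit + rng.normal(scale=0.8, size=n)

r_raw = np.corrcoef(implicit, grades)[0, 1]
r_ctrl = partial_corr(implicit, grades, g)   # association that survives
                                             # the control for g
print(f"raw r = {r_raw:.2f}, controlling for g: r = {r_ctrl:.2f}")
```

Because the simulated implicit-learning ability contributes to grades independently of g, the correlation persists even after g is partialled out – which is the shape of the Kaufman result.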
The pattern of variables that are and are not related to implicit learning is suggestive of conclusions about the structure of human information processing, consistent with the idea that there are two relatively independent systems by which individuals analyze and learn about regularities in their experience. Further, these results suggest that the investigation of individual differences in implicit cognition can increase our understanding of human intelligence, personality, skill acquisition, and language acquisition specifically, as well as human complex cognition more generally.
Needless to say, these results raise plenty of important questions. The one I’m most interested in is whether or not these Type 1/implicit learning skills can be improved over time. While numerous studies have demonstrated that experience can improve the performance of the unconscious in extremely specific contexts – a Nascar driver will have better driving reflexes – there has been little research into the malleability of the overarching Type 1 system. If these unconscious/implicit learning skills turn out to be teachable, then we suddenly have an entirely new way of educating the brain and improving cognitive performance.