I’m not sure whether it was prompted by James Watson’s little outburst (for which he has apologized “unreservedly”) or just serendipity, but Cosma Shalizi offers an exhaustive demolition of the idea of a single general intelligence factor:

Anyone who wanders into the bleak and monotonous desert of IQ and the nature-vs-nurture dispute eventually gets trapped in the especially arid question of what, if anything, *g*, the supposed general factor of intelligence, tells us about these matters. By calling *g* a "statistical myth" before, I made clear my conclusion, but none of my reasoning. This topic being what it is, I hardly expect this will *change* anyone's mind, but I feel a duty to explain myself.

To summarize what follows below ("shorter sloth", as it were), the case for *g* rests on a statistical technique, factor analysis, which works solely on correlations between tests. Factor analysis is handy for summarizing data, but can't tell us where the correlations came from; it *always* says that there is a general factor whenever there are only positive correlations. The appearance of *g* is a trivial reflection of that correlation structure. A clear example, known since 1916, shows that factor analysis can give the appearance of a general factor when there are actually many thousands of *completely independent* and *equally strong* causes at work. Heritability doesn't distinguish these alternatives either. Exploratory factor analysis being no good at discovering causal structure, it provides no support for the reality of *g*.
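The 1916 example mentioned above is Godfrey Thomson's "sampling" model, and it is easy to see the effect in a quick simulation. The sketch below is my own illustration, not code from the essay; all the names and parameters (5,000 abilities, 10 tests, 1,000 abilities sampled per test) are arbitrary choices. Each person gets thousands of completely independent, equally strong abilities, and each test score is just the sum over a random subset of them. The overlapping subsets alone produce uniformly positive correlations and a dominant first factor:

```python
# Sketch of Thomson's 1916 sampling model: thousands of independent,
# equally strong "abilities" still yield a dominant general factor.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests, per_test = 2000, 5000, 10, 1000

# Each person has thousands of completely independent abilities.
abilities = rng.standard_normal((n_people, n_abilities))

# Each test score sums a random subset of those abilities; overlap
# between subsets is the only source of correlation between tests.
scores = np.column_stack([
    abilities[:, rng.choice(n_abilities, per_test, replace=False)].sum(axis=1)
    for _ in range(n_tests)
])

corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_tests, dtype=bool)]
print("all correlations positive:", bool((off_diag > 0).all()))

# Share of variance captured by the first principal axis of the
# correlation matrix -- the would-be "general factor".
eigvals = np.linalg.eigvalsh(corr)[::-1]
print("first factor's share of variance: %.2f" % (eigvals[0] / eigvals.sum()))
```

Despite every ability being independent of every other, the first factor soaks up a large share of the variance, which is exactly the point: factor analysis cannot distinguish one common cause from many overlapping small ones.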

It’s long, and comprehensive, and involves math, so it’s not for the faint of heart. It is, however, an excellent explanation of how statistical analysis can lead smart people astray.
