I am not afraid of admitting my own areas of ignorance. The body of human knowledge is enormous; no one can possess all of it, or even be moderately familiar with all of it. The only shame is in pretending otherwise.
This admission fundamentally shapes my personal approach to the whole Hockeystick/Dendrochronology/Michael Mann brouhaha, which continues to this day despite MBH98 having receded into the rather distant past, in scientific research terms. I have to rely on networks of trust, take a more removed view of it all, and generally park that paper and its famous graph in the "pending full acceptance" bucket.
Well, here comes an opportunity for me to refine my “who to believe” assessments of competing claims of statistical expertise.
Further to the continuing and still amusing “conspiracy to call them conspiracy theorists” episode involving Stephan Lewandowsky, he has a new post responding to some more “auditing” from Steve McIntyre. His response, after an overview of what exploratory factor analysis (EFA) is, includes this passage:
Applied to the five “climate science” items, the first factor had an eigenvalue of 4.3, representing 86% of the variance. The second factor had an eigenvalue of only .30, representing a mere 6% of the variance. Factors are ordered by their eigenvalues, so all further factors represent even less variance.
Our EFA of the climate items thus provides clear evidence that a single factor is sufficient to represent the largest part of the variance in the five “climate science” items. Moreover, adding further factors with eigenvalues < 1 is counterproductive because they represent less information than the original individual items. (Remember that all acknowledged standard criteria yield the same conclusions.)
Practically, this means that people’s responses to the five questions regarding climate science were so highly correlated that they reflect, to the largest part, variability on a single dimension, namely the acceptance or rejection of climate science. The remaining variance in individual items is most likely mere measurement error.
How could Mr. McIntyre fail to reproduce our EFA?
Simple: In contravention of normal practice, he forced the analysis to extract two factors. This is obvious in his R command line:
In this and all other EFAs posted on Mr. McIntyre’s blog, the number of factors to be extracted was chosen by fiat and without justification.
Remember, the second factor in our EFA for the climate items had an eigenvalue much below 1, and hence its extraction is nonsensical. (As it is by all other criteria as well.)
But that’s not everything.
When more than one factor is extracted, researchers can rotate factors so that each factor represents a substantial, and approximately equal, part of the variance. In R, the default rotation method, which Mr. McIntyre did not overrule, is to use Varimax rotation, which forces the factors to be uncorrelated. As a result of rotation, the variance is split about evenly among the factors extracted.
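For readers (like me) who want to see the eigenvalue logic concretely: here is a minimal sketch, not Lewandowsky's actual data or code, using a hypothetical five-item correlation matrix with a uniform inter-item correlation of 0.85. It computes the first two eigenvalues by power iteration with deflation and applies the eigenvalue-greater-than-1 (Kaiser) retention criterion the quoted passage invokes.

```python
# Illustrative only: a hypothetical correlation matrix for five highly
# inter-correlated survey items, not the actual Lewandowsky data.

def mat_vec(M, v):
    """Multiply matrix M by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def power_iteration(M, iters=1000):
    """Largest eigenvalue and eigenvector of a symmetric matrix."""
    v = [float(i + 1) for i in range(len(M))]  # start off-axis to avoid a zero projection
    for _ in range(iters):
        w = mat_vec(M, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    lam = sum(v[i] * w_i for i, w_i in enumerate(mat_vec(M, v)))  # Rayleigh quotient
    return lam, v

def deflate(M, lam, v):
    """Subtract the found component so the next power iteration finds the next eigenvalue."""
    n = len(M)
    return [[M[i][j] - lam * v[i] * v[j] for j in range(n)] for i in range(n)]

n, r = 5, 0.85  # five items, uniform correlation (hypothetical)
R = [[1.0 if i == j else r for j in range(n)] for i in range(n)]

lam1, v1 = power_iteration(R)
lam2, _ = power_iteration(deflate(R, lam1, v1))

print(f"first eigenvalue:  {lam1:.2f} ({lam1 / n:.0%} of variance)")   # 4.40 (88%)
print(f"second eigenvalue: {lam2:.2f} ({lam2 / n:.0%} of variance)")   # 0.15 (3%)
print("Kaiser criterion retains:", sum(lam > 1 for lam in (lam1, lam2)), "factor(s)")
```

With correlations of that size, the first eigenvalue dwarfs the rest (much like the 4.3 vs. 0.30 reported above), and the Kaiser criterion retains exactly one factor, which is the crux of Lewandowsky's objection to forcing a two-factor extraction.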
This seems a pretty straightforward, black-or-white, right-or-wrong issue. Is it? Is Stephan correct? Is there any possible constructive response for McIntyre? Statisticians, please respond!