Prompted by a fight at wikipedia over whether PC is an astrophysicist, a meteorologist, a meteorological consultant, or something else, I looked at “Weather Action”’s website for his proofs of success (ah, for those who don’t know, PC claims to predict UK weather a year or so in advance via a “solar weather” technique whose details are obscure, since he won’t publish them. The solar-weather link is unclear, as indeed is his method of predicting solar activity a year in advance. It makes him inclined to disbelieve CO2-GW (since it’s all solar, guv) and he got into TGGWS confusing weather and climate).
And I find: Forecasts with proven skill. The betting I can’t verify; the Wheeler paper (JASTP, 2001, p28-34) I can read (according to WoK, it’s never been cited). It is full of caveats: first, that forecasts are intrinsically hard to verify; second, that due to ambiguities the only thing verified was a yes/no to a gale anywhere in the lowland UK. So PC’s first point (predicting major storm, flood or freezing) is wrong: it was only gales that were looked at. OTOH the four most notable storms were predicted, and the 5th was missed by 48h (PC counts this as “predicted” and the paper includes it).
The main result of the paper is a table of probabilities for the chances of the forecasts being better than chance. This is complicated by the strong annual cycle in the data: there are very few gales in summer. So the measure of “skill” they use includes a credit for forecasting no gale when no gale occurred: but this is a trivial task for the summer months. The result is (to my eye) a slightly strange set of probabilities of achieving these results by chance: 0.0001 for all-year; 0.008 for excluding summer; and 0.19 for winter (October to March). Which is to say, not statistically significant (by the usual 5% test) for winter; but highly significant for all-year. I rather suspect that this may be an artefact of the statistics, but I can’t be sure (should the results really vary that much with the period considered?).
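To see why a seasonal base rate muddies this kind of yes/no verification, here’s a toy simulation (my own sketch, not the paper’s actual statistic, and all the rates are invented): a forecaster who simply always says “no gale” scores very well in summer, so a skill measure that credits correct “no gale” forecasts can look much more impressive over the whole year than over winter alone, with no real forecasting skill involved.

```python
# Toy illustration (invented numbers, not the Wheeler paper's method):
# with gales common in winter and rare in summer, the trivial forecast
# "no gale, ever" scores higher accuracy in summer, and including summer
# inflates the all-year score.
import random

random.seed(0)

def simulate_gales(n_years=5):
    """Daily gale occurrence with a crude annual cycle (30-day months)."""
    days = []
    for d in range(n_years * 360):
        month = (d // 30) % 12  # 0 = Jan, ..., 11 = Dec
        # Assumed rates: gales likely Oct-Mar, rare Apr-Sep.
        p_gale = 0.15 if month in (9, 10, 11, 0, 1, 2) else 0.01
        days.append((month, random.random() < p_gale))
    return days

days = simulate_gales()

# The trivial forecaster: always predict "no gale".
accuracy_all = sum(1 for _, gale in days if not gale) / len(days)

summer = [(m, g) for m, g in days if m in (3, 4, 5, 6, 7, 8)]
accuracy_summer = sum(1 for _, g in summer if not g) / len(summer)

print(f"all-year accuracy: {accuracy_all:.3f}")
print(f"summer-only accuracy: {accuracy_summer:.3f}")
```

The point is only that a yes/no score with a “correct negative” credit needs a chance baseline computed per-season, which is presumably why the paper’s probabilities change so much with the period considered.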
The paper concludes “The results provide little evidence to dismiss the observed success rates as being attributable to mere chance…”. But of course that’s exactly what people continue to do, for a variety of reasons: mostly mistrust of the technique (as being largely unknown, and implausible in the bits that are known); partly reluctance to accept an outsider; partly (I guess) suspicion of the statistics used (in the paper, not by PC). It’s a bit of a shame more verification isn’t done; there may be reluctance on both sides.