There is a paper by Roe and Baker out in
Science arguing that "Both models and observations yield broad probability distributions for long-term increases in global mean temperature expected from the doubling of atmospheric carbon dioxide, with small but finite probabilities of very large increases. We show that the shape of these probability distributions is an inevitable and general consequence of the nature of the climate system." Predictably enough it will get misinterpreted, and indeed Nature itself leads the field in doing so. See-also Grauniad.
Ah, but before I go further, let me point out that this isn't my thing. I'm guessing, and may be wrong and/or confused.
OK, so what's going on? Well, we can start by saying that

dT = l.dR (1)

where dT is the change in temperature, dR is the change in radiative forcing, and l is a constant feedback factor. R+B assert, and I'm going to believe them, that in the absence of feedbacks l is well known to be about 0.3 K/(W/m2) (they note in the SOM that it's 0.26 just considering Stefan-Boltzmann). The interesting point is including feedbacks. We'll call l-without-feedbacks l0. We then *assume* that the feedbacks are proportional to dT, and look like extra radiative forcing, so instead of dR we have dR + C.dT for some constant C. Plugging this into our equation (1) we get

dT = l0.(dR + C.dT) = l0.dR + f.dT (2)

where f = l0.C. f is going to be the thing we call the feedback factor, and is important. (2) can be solved for dT in terms of f as

dT = l0.dR/(1 - f) (3)

Or, more simply, dT is proportional to 1/(1 - f). Obviously, there is space to quibble with all these simple feedback models, particularly if the changes are large (as R+B say in the supporting online material (SOM), "Now let dRf be some specified, constant, radiative forcing (here, due to anthropogenic emissions of CO2). dRf is assumed small compared with F and S so that the changes it introduces in system dynamics are all small and linear in the forcing..." But of course dRf *isn't* small, nor is S, in the cases they consider). But put all that aside for now.
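To see how sensitive eq (3) is to f, here's a minimal sketch of the algebra. The value l0 = 0.3 K/(W/m2) is from the text; the 2xCO2 forcing of ~3.7 W/m2 is my assumption (a standard value), not something R+B's numbers above commit to.

```python
# Sketch of eq (3): dT = l0*dR / (1 - f).
# l0 = 0.3 K/(W/m2) is from the post; dR = 3.7 W/m2 for 2xCO2 is an
# assumed standard value, not taken from the text.
l0 = 0.3
dR = 3.7

def dT(f, l0=l0, dR=dR):
    """Equilibrium warming from eq (3)."""
    return l0 * dR / (1.0 - f)

print(dT(0.0))  # no feedbacks: ~1.1 K
print(dT(0.7))  # f at R+B's mean: ~3.7 K
print(dT(0.9))  # f near 1: ~11 K; the gain blows up as f -> 1
```

The last line is the whole story in miniature: equal-sized steps in f near 1 produce wildly unequal steps in dT.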
This, to a large extent, is the sum total ot the paper: since dT ~ 1/(1 – f), if you let f get close to 1 then you get large dT. As RC points out, R+B aren’t the first to explore this area, but they may be the first to put this quite so clearly.
And indeed R+B *do* let f get close to 1. Indeed, they let f be normally distributed, with mean about 0.7 and SD about 0.14. Which means (of course) that f has a non-zero probability of being greater than or equal to 1 (which is the unphysical state of infinite gain). For f > 1, dT becomes negative in eq (3), and the situation is totally unphysical (amusingly none of the RC commenters have noticed this).
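How much probability mass does that Gaussian actually put into the unphysical region? A quick stdlib sketch, sampling f ~ N(0.7, 0.14) as R+B do (the sample size and seed are arbitrary choices of mine):

```python
# Sample f from N(mean=0.7, sd=0.14), the distribution R+B assume,
# and count how often f >= 1 (the unphysical infinite-gain region).
import random
random.seed(0)  # arbitrary seed for reproducibility

N = 100_000
samples = [random.gauss(0.7, 0.14) for _ in range(N)]
frac_unphysical = sum(s >= 1.0 for s in samples) / N
print(f"fraction with f >= 1: {frac_unphysical:.3f}")  # ~0.016, i.e. ~1.6%
```

So a percent or two of the pdf sits beyond f = 1, which is the bit that has to be truncated away (and renormalised) as discussed next.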
How do R+B solve this little quibble? By ignoring it completely, at least in the main text, though they do discuss it in the SOM (last section). In fact, formally, I think you can: it amounts to truncating the pdf of f at 1 (and then renormalising, of course). But I think this should be taken as a little hint that a normal pdf on f is not appropriate.
But I'm not at all sure this is a quibble: it's actually the heart of the entire matter: the shape of the pdf you assume for f gives you the pdf for dT. If you choose as R+B do, you get a long tail in dT. If you chose a different pdf for f, you could get a short tail. Indeed, you can choose your pdf for dT, and derive the pdf on f. It's a two-way process and it's not at all clear (to me at least) why you should give primacy to f, nor is it clear that this is at all the "standard" way to do it. [Looking further, the SOM refs Allen et al for a Gaussian in f (though it gets the URL wrong). But it's not clear to me exactly what in A et al is being ref'd. And they follow that up with "A natural choice is that all feedbacks are equally likely", which is quite opaque to me.]
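The two-way transformation is just a change of variables. Since f = 1 - l0.dR/dT from eq (3), the pdf of dT is the pdf of f evaluated at that point, times the Jacobian l0.dR/dT^2. A sketch (again l0 = 0.3 is from the text and dR = 3.7 W/m2 is my assumed 2xCO2 value):

```python
# Change of variables from f-space to dT-space.
# From eq (3), f = 1 - l0*dR/dT, so p_dT(T) = p_f(1 - l0*dR/T) * l0*dR/T**2.
# l0 = 0.3 from the post; dR = 3.7 W/m2 is an assumed standard 2xCO2 value.
import math

l0, dR = 0.3, 3.7
mu, sd = 0.7, 0.14  # R+B's Gaussian for f

def p_f(f):
    """Gaussian pdf on f, as R+B assume."""
    return math.exp(-0.5 * ((f - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def p_dT(T):
    """Induced pdf on dT via the Jacobian |df/ddT| = l0*dR/T**2."""
    f = 1.0 - l0 * dR / T
    return p_f(f) * l0 * dR / T ** 2

# Symmetric pdf in f, skewed pdf in dT: density well above the peak
# falls off slowly, giving the long warm tail.
print(p_dT(2.0), p_dT(3.7), p_dT(8.0))
```

The point of the printout: the peak is near dT = 3.7 K (where f hits its mean of 0.7), and the density at 8 K, way out on the warm side, is still appreciable, whereas a symmetric pdf centred at 3.7 would be essentially zero there. Put a Gaussian on dT instead and run the same machinery backwards, and you'd get a skewed pdf on f; nothing in the algebra tells you which variable deserves the Gaussian.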
Continuing onwards, R+B is a "transforming pdfs from one space to another" paper. But we don't really have enough data points to constrain the pdfs to a particular form, at least as far as I can see. Thus we have to make a choice of pdf, given only a few data points, even if we were reasonably sure of the mean (also unclear). So asserting that this tells you anything about the real world is dubious. OTOH, as RC points out, if you believe Annan+Hargreaves 2006, we *do* have good grounds to believe that dT (for 2*CO2) is less than 6. Which would then be good grounds for believing that f is well away from 1. Which doesn't make R+B wrong, except in their choice of pdf for f. OTOH they may be saying something useful, which would be that going via f to calculate your dT is going to lead to a long tail (so don't do it?).
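The "f well away from 1" step is just eq (3) inverted. A one-liner sketch, with my usual assumed dR = 3.7 W/m2 for 2xCO2 (l0 = 0.3 is from the text):

```python
# Inverting eq (3): a bound dT < dT_max translates into f < 1 - l0*dR/dT_max.
# l0 = 0.3 from the post; dR = 3.7 W/m2 is an assumed 2xCO2 forcing.
l0, dR = 0.3, 3.7
dT_max = 6.0  # the Annan+Hargreaves-style upper bound discussed above
f_max = 1.0 - l0 * dR / dT_max
print(f_max)  # ~0.815, comfortably away from the f = 1 singularity
```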
Oh, and last off, Allen is still saying all-equally-likely stuff in the commentary, which is of course impossible.
Talking point: R+B address the skewness issue (ie, the long tail in dT) in the SOM. And end up saying that fixing this "…would require that f diminish by about 0.15 per degree of climate change". This requires you to believe the results of the previous section, which I haven't struggled through, but it also appears to run us into the "Bayesian problem", which is that these pdfs aren't properties of f, they are expressions of our ignorance about f. f has (if you believe these simplified models) a given value; we just don't know what it is. So I'm not sure how having f diminish can be a problem in any physical sense. To put it another way, I can't see why declaring that our pdf for f is going to be uniform on [0.4,0.7], say, is a problem. In which case, of course, there is no tail on dT.
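And the no-tail claim is easy to check: a uniform pdf on f over [0.4, 0.7] confines dT (via eq 3) to a finite interval, full stop. A sketch with the same assumed dR = 3.7 W/m2 (l0 = 0.3 from the text):

```python
# If f is uniform on [0.4, 0.7], eq (3) maps it into a bounded dT range:
# no long tail is possible. l0 = 0.3 from the post; dR = 3.7 W/m2 assumed.
l0, dR = 0.3, 3.7
f_lo, f_hi = 0.4, 0.7
dT_lo = l0 * dR / (1 - f_lo)
dT_hi = l0 * dR / (1 - f_hi)
print(dT_lo, dT_hi)  # dT confined to roughly [1.85, 3.7] K
```

The induced pdf on dT inside that interval is still skewed (the Jacobian sees to that), but "skewed on a bounded interval" is a very different beast from "long tail out to arbitrarily large warming".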