The uncertainty of uncertainty

There is a paper by Roe and Baker out in Science arguing that "Both models and observations yield broad probability distributions for long-term increases in global mean temperature expected from the doubling of atmospheric carbon dioxide, with small but finite probabilities of very large increases. We show that the shape of these probability distributions is an inevitable and general consequence of the nature of the climate system." Predictably enough it will get misinterpreted, and indeed Nature itself leads the field in doing so. See also the Grauniad.

For a general take, you'll want to read RealClimate (of course) before reading my quibbles. Hopefully JA will weigh in on this too. Because I am very dubious of it. [Update: he has]

Ah, but before I go further, let me point out that this isn't my thing. I'm guessing, and may be wrong and/or confused.

OK, so what's going on? Well, we can start by saying that dT = l.dR (1), where dT is the change in temperature, dR is the change in radiative forcing, and l is a constant of proportionality (the sensitivity parameter). R+B assert, and I'm going to believe them, that in the absence of feedbacks l is well known to be about 0.3 K/(W/m2) (they note in the SOM that it's 0.26 just considering Stefan-Boltzmann). The interesting point is including feedbacks. We'll call l-without-feedbacks l0. We then *assume* that the feedbacks are proportional to dT, and look like extra radiative forcing, so instead of dR we have dR + C.dT for some constant C. Plugging this into our equation (1) we get dT = l0.(dR + C.dT) = l0.dR + f.dT (2), where f = l0.C. f is going to be the thing we call the feedback factor, and is important. (2) can be solved for dT in terms of f as dT = l0.dR/(1 - f) (3). Or, more simply, dT is proportional to 1/(1 - f).

Obviously, there is space to quibble with all these simple feedback models, particularly if the changes are large (as R+B say in the supporting online material (SOM), "Now let dRf be some specified, constant, radiative forcing (here, due to anthropogenic emissions of CO2). dRf is assumed small compared with F and S so that the changes it introduces in system dynamics are all small and linear in the forcing..." But of course dRf *isn't* small, nor is S, in the cases they consider). But put all that aside for now.
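
For the concretely minded, here's a little Python sketch of that algebra: the feedback loop adds l0.C.dT of extra response each time round, which sums to the 1/(1 - f) gain. The numbers are only illustrative; l0 = 0.3 is from above, but dR = 3.7 W/m2 for 2*CO2 and the value of C are my assumptions, not taken from the paper.

```python
# Sketch of eqs (1)-(3): iterate the feedback loop dT <- l0*(dR + C*dT)
# and check that it converges to l0*dR/(1 - f), i.e. the 1/(1 - f) gain.
# l0 = 0.3 K/(W/m2) is the value quoted above; dR = 3.7 W/m2 (2*CO2)
# and C are assumed, illustrative numbers.
l0, dR, C = 0.3, 3.7, 2.0
f = l0 * C                      # feedback factor, here 0.6

dT = 0.0
for _ in range(200):            # geometric series 1 + f + f^2 + ... converges for f < 1
    dT = l0 * (dR + C * dT)

print(dT, l0 * dR / (1.0 - f))  # both ~2.78 K
```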

This, to a large extent, is the sum total of the paper: since dT ~ 1/(1 - f), if you let f get close to 1 then you get large dT. As RC points out, R+B aren't the first to explore this area, but they may be the first to put it quite so clearly.

And indeed R+B *do* let f get close to 1: they let f be normally distributed, with mean about 0.7 and SD about 0.14. Which means (of course) that f has a non-zero probability of being greater than or equal to 1 (which is the unphysical state of infinite gain). For f > 1, dT becomes negative in eq (3), and the situation is totally unphysical (amusingly, none of the RC commenters have noticed this).
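
A quick check of how much probability that Gaussian puts on the unphysical side (my own back-of-envelope, using the mean and SD quoted above; the paper's exact numbers may differ a little):

```python
# How much of N(0.7, 0.14) sits at f >= 1, the "infinite gain" region?
from scipy.stats import norm

p_unphysical = norm.sf(1.0, loc=0.7, scale=0.14)  # P(f >= 1)
print(round(p_unphysical, 3))                     # ~0.016, i.e. about 1.6%
```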

How do R+B solve this little quibble? By ignoring it completely, at least in the main text, though they do discuss it in the SOM (last section). In fact, formally, I think you can get away with this: it amounts to truncating the pdf of f at 1 (and then renormalising, of course). But I think this should be taken as a little hint that a normal pdf on f is not appropriate.
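
And a minimal sketch of what truncate-and-renormalise looks like, and of the long tail it still leaves in dT. Again l0 = 0.3 is from above; dR = 3.7 W/m2 for 2*CO2 is an assumed standard value, not something taken from the text:

```python
# Truncate-and-renormalise, by brute force: sample f ~ N(0.7, 0.14),
# drop the unphysical f >= 1 (keeping and re-using only the rest is the
# renormalisation), and push what's left through eq (3).
import numpy as np

rng = np.random.default_rng(0)
l0, dR = 0.3, 3.7                       # dR = 3.7 W/m2 is an assumption
f = rng.normal(0.7, 0.14, 1_000_000)
f = f[f < 1.0]                          # truncate at f = 1
dT = l0 * dR / (1.0 - f)

print(np.percentile(dT, [5, 50, 95]))   # median ~3.7 K, 95th percentile well above 10 K
```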

But I'm not at all sure this is a quibble: it's actually the heart of the entire matter: the shape of the pdf you assume for f gives you the pdf for dT. If you choose the pdf as R+B do, you get a long tail in dT. If you chose a different pdf for f, you could get a short tail. Indeed, you can choose your pdf for dT, and derive the pdf on f. It's a two-way process and it's not at all clear (to me at least) why you should give primacy to f, nor is it clear that this is at all the "standard" way to do it. [Looking further, the SOM refs Allen et al for a Gaussian in f (though it gets the URL wrong). But it's not clear to me exactly what in A et al is being ref'd. And they follow that up with "A natural choice is that all feedbacks are equally likely", which is quite opaque to me].
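
To make the two-way-ness concrete, here's a sketch of the change of variables in both directions, using dT = a/(1 - f) with a = l0.dR. The value of a (and the Gaussian I picked for dT in the second direction) are purely illustrative assumptions:

```python
# Change of variables both ways for dT = a/(1 - f), a = l0*dR.
import numpy as np
from scipy.stats import norm

a = 0.3 * 3.7   # ~1.1 K of no-feedback warming (the 3.7 W/m2 is assumed)

def pdf_dT_given_gaussian_f(T, mu=0.7, sd=0.14):
    """pdf of dT implied by f ~ N(mu, sd): p_T(T) = p_f(1 - a/T) * a / T**2."""
    return norm.pdf(1.0 - a / T, mu, sd) * a / T**2

def pdf_f_given_gaussian_dT(f, muT=3.0, sdT=1.0):
    """pdf of f implied by dT ~ N(muT, sdT): p_f(f) = p_T(a/(1-f)) * a / (1-f)**2."""
    return norm.pdf(a / (1.0 - f), muT, sdT) * a / (1.0 - f)**2

print(pdf_dT_given_gaussian_f(np.linspace(2.0, 12.0, 6)))  # skewed, long tail in dT
print(pdf_f_given_gaussian_dT(np.linspace(0.3, 0.9, 7)))   # and vice versa: a short-tailed dT gives a pdf on f instead
```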

Continuing onwards: R+B is, essentially, a paper about transforming pdfs from one space to another. But we don't really have enough data points to constrain the pdfs to a particular form, at least as far as I can see. Thus we have to make a choice of pdf, given only a few data points, even if we were reasonably sure of the mean (also unclear). So asserting that this tells you anything about the real world is dubious. OTOH, as RC points out, if you believe Annan+Hargreaves 2006, we *do* have good grounds to believe that dT (for 2*CO2) is less than 6 K. Which would then be good grounds for believing that f is well away from 1. Which doesn't make R+B wrong, except in their choice of pdf for f. OTOH they may be saying something useful, which would be that going via f to calculate your dT is going to lead to a long tail (so don't do it?).
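
Inverting eq (3) shows roughly what such a ceiling on dT buys you in f-space (my arithmetic, again assuming dR = 3.7 W/m2 for 2*CO2):

```python
# f = 1 - l0*dR/dT: what an upper bound on dT implies for f
l0, dR = 0.3, 3.7               # dR assumed, as before

def f_from_dT(dT):
    return 1.0 - l0 * dR / dT

print(f_from_dT(6.0))           # ~0.82: dT < 6 K keeps f below about 0.82
print(f_from_dT(3.0))           # ~0.63 for a middling sensitivity
```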

Oh, and last off, Allen is still saying all-equally-likely stuff in the commentary, which is of course impossible.

Talking point: R+B address the skewness issue (i.e., the long tail in dT) in the SOM, and end up saying that fixing this "...would require that f diminish by about 0.15 per degree of climate change". This requires you to believe the results of the previous section, which I haven't struggled through, but it also appears to run us into the "Bayesian problem", which is that these pdfs aren't properties of f, they are expressions of our ignorance about f. f has (if you believe these simplified models) a given value; we just don't know what it is. So I'm not sure how having f diminish can be a problem in any physical sense. Or to put it another way, I can't see why declaring that our pdf for f is going to be uniform on [0.4,0.7], say, is a problem. In which case, of course, there is no tail on dT.
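
In numbers (same assumed dR = 3.7 W/m2; purely illustrative):

```python
# Uniform f on [0.4, 0.7] maps to a bounded range of dT, with no tail
l0, dR = 0.3, 3.7               # dR assumed
for f in (0.4, 0.7):
    print(f, round(l0 * dR / (1.0 - f), 2))   # ~1.85 K and ~3.7 K
```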


"Ah, but before I should go further, let me point out that this isn't my thing. I'm guessing, and may be wrong and/or confused."

You certainly are: you've got a link to RealClimate with an empty href, meaning it points to the current page.

[Oops thanks, now fixed. If you're interested, it was because I wrote it before the RC post came out -W]

By Mark Hadfield (not verified) on 28 Oct 2007

James Annan has been quite quiet lately; I take that as a sign that he's working on something, maybe looking at this stuff specifically.

IEHO a lot of this relates to how ignorant you want your prior to be. The purists insist that nothing is known. In that sense, you can't object to non-physical choices, since if you know nothing, you don't know the boundaries either (see Annan vs. Frame death match). I prefer using theoretical estimates for the prior to be matched against the data. You could, if you wish, use the theory to estimate the bounds and then pick a uniform prior within the bounds.

"I can't see why declaring that our pdf for f is going to be uniform on [0.4,0.7], say, is a problem."

So you would be declaring that f might be 0.41 (at least, this value is as likely as any other between 0.4 & 0.7) but couldn't possibly be 0.39. Do you have any reason for believing this?

[No. It was only an illustration -W]

By Mark Hadfield (not verified) on 29 Oct 2007

William,

I asked Gerard Roe to provide some thoughts in response to your concerns. What he wrote is what I've been trying to explain. To my mind, one of the most important points he makes is that f does *not* have, as you put it, "a given value". It is inherently a probability distribution. Hopefully this will clear up some confusion on whether assuming f is Gaussian is useful (my view) or misleading (your view).

Best,

Eric

[Hi Eric. Thanks for the response, and from R. I'm on a course now, so can only reply soon if it's boring (hopefully not). The question about f having a value or a distribution is a good one; I still think it has a value, but an unknown one -W]

"I think Connoley is a little off base. Fundamentally we come from the physics side of things, and most studies are coming from the observation side of things. You are right - it would have to be really nonGaussian to remove the tail. And if it were, the problem of reducing uncertainty in the response gets even worse - you have piled up all the probabilities into some ungainly shape (that result is given in the text, and demonstrated in the SOM)

There are two contributions in the paper. The first is the equation, and the second is a test of the equation against different studies. Strictly, Gaussian must be wrong at some level of detail, but it is also almost certainly good enough to make the point. And given the result above, it is likely to be an optimistic estimate of how much progress can be made.

We actually test the 'climateprediction.net' results against the GCM results of Soden & Held, Colman. The GCM feedback studies show uncorrelated feedback parameters. The appropriate way to combine them then is to do a sum of squares. When we do that we get back the climateprediction.net results, a strong suggestion that that is also what the Hadley GCM is doing.

Another way of thinking about it is at the level of individual parameters in climate models. Those model parameters always have a distribution of values, and quite possibly for any given parameter not Gaussian. Each of those parameters can be cast as its own feedback parameter (though there will definitely be some correlations that come in and must be accounted for). But with enough random distributions, the Central Limit Theorem kicks in and you would expect that as the number of parameters increases you converge towards a Gaussian pdf in f.

To get rid of the tail, you could do something drastic with the pdf of f (which is unlikely to be true, and leaves you with a greater inability to make progress). You could also do something to how f changes with the system response. If feedbacks are not linear (for example the -ve feedbacks get stronger with T and the +ve feedbacks get weaker), then the transfer function is not 1/(1-x) any more. In the SOM we calculate what the transfer function would have to be to eliminate any of the stretching of the high side tail. It would take ridiculously huge changes in feedback strength as a function of climate state, far larger than suggested by models, and far larger than we have physical theories for. That is in the SOM.

It remains somewhat unclear whether uncertainty in f is real or a result of ignorance. It is the sum of all feedbacks, not just a single number (some of the work Myles Allen has done has unhelpfully thought about it as a number). If the climate system is chaotic (which it somewhat has to be) f is truly only meaningfully characterized as a probability distribution. The framework of climate sensitivity is also fundamentally a linear one, which is also unlikely to be true. Climate sensitivity does not exist - it is just a model of nature, and nature is always going to be a fuzzy version of that model."

Tried to comment yesterday and botched it -- just to say that the paper is in Science...

I see some inconsistency in the view that f is intrinsically a random variable and not a number. Consider a planet with a solid, perfectly emitting surface, without water: just a rock ball. The only feedback is the black-body radiation feedback. For some reason Roe and Baker do not consider this as a feedback, and treat this as the 'reference climate sensitivity', but formally this is a feedback of the same nature as the others; it is singled out just for formal reasons (see Bony et al or Gregory et al).
In this situation one can calculate the feedback parameter exactly, by taking the derivative of the black-body radiation law. Actually Roe and Baker give a number for this in their paper, without any uncertainties attached.
The fact that the atmosphere contains water and the Earth is a more complex system than a rock ball does not change the problem conceptually, it just makes the estimation of the feedbacks a more complicated thing. The probability distribution, or rather a likelihood, for f just describes this complexity.

Viento.

Fair enough. But the point is that f can very likely change for a lot of very non-linear reasons, making it essentially probabilistic. It may not be quite like the electron around the nucleus (which is truly probabilistic at a very fundamental level), but in practice it may amount to the same sort of thing.

[But the electron's *mass* isn't probabilistic. Isn't there a fundamental problem if we can't even agree whether f has a well-defined value or not? Perhaps we're at the early 20th C pre-QM stage? -W]

By Eric Steig (not verified) on 30 Oct 2007

Disclaimer: I haven't read Roe and Baker, I probably don't know what I am talking about, and I am probably unjustifiably over-interpreting things and should probably add a lot more disclaimers.

>"Oh, and last off, Allen is still saying all-equally-likely stuff in the commentary, which is of course impossible."
I don't know where you see this. What I see in 'Call off the Quest' is:
"There are even more fundamental problems. Roe and Baker equate observational uncertainty in f with the probability distribution for f. This means that they implicitly assume all values of f to be equally likely before they begin. If, instead, they initially assumed all values of S to be equally likely, they would obtain an even higher upper bound."

[Sorry, I'm baffled. Why isn't "This means that they implicitly assume all values of f to be equally likely before they begin" obvious nonsense? As a pdf it's impossible, and physically R+B don't believe in f > 1 -W]

What I see in this should warm James's heart rather than being still more of the same disagreement per James. I don't know to what extent Roe and Baker demonstrate Gaussian-like observational uncertainty in f. However saying this cannot be converted into a pdf without having a prior is exactly what I would expect James to say. There might be a slight quibble about whether it is possible to have a prior where all values of f are equally likely and still be able to apply Bayes' theorem, but this does seem to me to be Allen and Frame singing from James's hymn sheet.

It certainly seems Allen and Frame are inferring something rather than 'still saying'.

If you think that is the end of what Eli calls a 'death match' then I am afraid not quite. According to James the dispute is about this, and he may see this as a vindication of his view. However from Allen and Frame's view most of the problem is James's misreading of Frame et al 05, and the point James is trying to make is largely accepted. Accepting it in this Perspectives piece doesn't really end a 'death match' but does show that the argument is not about what James claims the argument is about.

[A+F may be doing their best to "frame" this in terms of them being sadly misunderstood, but does anyone believe that? -W]

Viento,

I think the idea is that climate sensitivity is relative to the reference state you choose. If we consider the reference state you describe, a rock ball, and treat the outgoing longwave radiation as a negative feedback, the reference state has an infinite climate sensitivity. That is, there is no way for the reference state to get rid of a radiative perturbation, so the reference climate state continues to increase in temperature indefinitely. The reference state is infinitely unstable, and it is thus difficult to perform any feedback analysis on it.

I think using the Earth radiating as a blackbody is a natural reference state for this study. It resolves the problem addressed above, as the reference state is now stable: it can get rid of radiative perturbations by increasing its temperature and, therefore, its outgoing longwave radiation. Furthermore, it represents the zeroth-order energy balance of the climate system (incoming solar radiation equals outgoing longwave radiation). Lastly, the physics is rock solid: Planck's law is undeniable.

>"[Sorry, I'm baffled. Why isn't "This means that they implicitly assume all values of f to be equally likely before they begin" obvious nonsense? As a pdf its impossible, and physically R+B don't believe in f > 1 -W]"

Do you accept that
1. Observational uncertainty in f does not imply this can be taken as a pdf because the *probability* also depends on the prior?
and
2. That Allen and Frame are saying this?

[I'm saying something rather more basic: that the quoted statement is wrong. It is wrong because "all values of f", from -inf to +inf, being equally likely, is impossible. Do you agree with that? And if so, what do you think the quoted statement is supposed to mean? -W]

If the observational uncertainty is known and the pdf is known then do you accept that there must be some prior that does the conversion?

If so, how would you describe the shape of prior required for there to be no difference in shape between observational uncertainty and probability (preferably using as little space as "all values of f to be equally likely before they begin")?

(Perhaps this is a stretch but I did put in an overinterpreting caveat as well as others.)

I would prefer to see the effects of at least a couple of different plausibly different priors to see the effect this has. However prefering and expecting to get may be rather different things.

>"[A+F may be doing their best to "frame" this in terms of them being sadly misunderstood, but does anyone believe that? -W]"

I think I have made it clear enough that I do believe that and I have had much more discussion with James than Myles and Dave. Presumably you mean who else?

Aaron,

I am still not convinced, but probably I will have to think about it longer. The concept of a reference state and reference climate sensitivity is just formally useful because one wants to derive an expression for the gain 1/(1 - f). But the whole argument could be expressed just in terms of the feedback parameter lambda and the climate sensitivity S, without any reference state. With several feedbacks, several lambdas, the sensitivity would be just the inverse of the sum of lambdas.
In this framework, I can think of a rock planet, with just the black-body feedback as I explained before. In this case I cannot talk about any pdf for lambda or for S (ok, glossing over the latitude dependence of the mean T). My point is whether or not S or lambda are described by a pdf just for formal reasons or there is indeed intrinsically a physical need to treat them as random variables.
I think, in a linear set-up, which is the only set-up in which S is meaningful, those pdfs just describe our lack of knowledge. In other words, if we had a collection of twin Earths, each one of them would have the same S as our Earth, exactly the same, and not a distribution of S's.

>"[I'm sayong something rather more basic: that the quoted statemnet is wrong. It is wrong because "all values of f", from -inf to +inf, being equally likely, is impossible. Do you agree with that? And if so, what do you think the quoted statement is supposed to mean? -W]"

*All* values of f from -inf to +inf are not equally likely, but that is due to what f is rather than the distribution. Is it possible to talk about any distribution where all values are equally likely? Well yes, I think it is possible to talk about it. Would that be a *probability* distribution? No, you cannot get probabilities for any particular range unless the answer is always 0. Can you use it as a prior and use Bayes' theorem? No, I have already said that. Does a prior have to be a *probability* distribution rather than just a vague wild guess at relative likelihoods? Thinking about the units, I would feel safer saying that strictly yes, the prior used does have to be a probability distribution. Which of these questions did you want answering when you asked "Do you agree with that?", or have I still failed to answer?

What is it supposed to mean? I think I would prefer them to have said:

In order to get such a similarly shaped pdf, they have to have (at least implicitly) assumed a prior before they began where all reasonable values of f are similarly likely (i.e. a flat-ish distribution).

That is a bit longer, but I should accept that they could have got most of it by adding "reasonable" and changing "equally" to "similarly". On this thread, it seems to me everyone is talking about distributions over reasonable values and only you are talking about unreasonable values of f. I think it is so obvious that it is meant to be over reasonable values that this shouldn't need to be explicitly stated. If you do insist it should be stated then it is a minor matter.

I tend to regard increased radiation from a hotter body to be part of the system rather than a feedback.

This is probably very silly but I hope you don't mind me asking.

Suppose you modelled a spherical rock without water, ice or atmosphere, and also a spherical rock without atmosphere but with ice and some weird process where increases in temperature cause ice on the surface to melt and disappear into the rock, while falls in temperature cause more ice to form on the surface.

Presumably the ice-albedo feedback will depend on the extent of ice coverage. Would it be possible to figure out some relationship for how the feedback changes with ice coverage? If this was possible, would this relationship mean anything for Earth's climate?

What I am trying to ask is would it be sensible to model climate sensitivity not as an unknown constant but as a variable that varied with ice coverage in a manner that would be expected from (a better version of) this virtual planet modelling?

There would still be uncertainty about the value of the variable, and I am guessing the uncertainty, if reduced at all, would only be reduced by a negligible amount. However if we are sure that at a time in the past the feedback was less than 1 and there was less ice coverage now than then, could this help us to be sure that the feedbacks are less than some smaller value, say 0.9, now?

A problem with Bayesian stuff is that you have to have a way of picking a prior. Pure ignorance is not a good place to start, because then you will overweight unlikely outcomes (see the Frame and Annan death motel: two ideas check in but only one checks out), but you have to be sure not to include any of the information you are going to test the prior against in the prior, and you have to make sure that the prior includes the entire range of all possible outcomes. In essence this means that the ideal prior will exclude all impossible outcomes. This is not trivial.

Some time ago, I pointed out that the prior might include negative climate sensitivities. James responded that observational data ruled this out. Fair enough, but he was using (I think) some of the same data to test the prior against which is a type of double dipping.

While I don't disagree much with what Eli says, I think I'd prefer to say that physical implausibility rules out negative sensitivities. Not that it actually matters if the prior assigns non-zero probability to negative values. But using the observation that we exist, to set a plausible prior, is not really equivalent to double counting the detailed measurements that we have made.

Besides, when the prior is taken from historical texts like the Charney report, it's a fair bet that it does not depend too heavily on the numerous observations that have been made subsequently :-) People who complain vaguely about double-counting the data somehow never quite manage to address this specific point (I don't mean you in particular, but rather one or two prominent climate scientists...).

The problem is that the new data itself may be biased by previous results. A pernicious example could be bias in the method of measurement; for example, borehole data appears to be biased a bit below that of dendro data (or vice versa). IEHO working towards optimal priors is a major part of any Bayesian analysis. I take your point about physical improbability, and that is a better way of restricting a uniform prior (which I dislike on other grounds), but even there you might find data analyses which use the same restriction to constrain their transformation of raw measurements to (in this case) temperatures.

William, you replied to me saying "[But the electron's *mass* isn't probabilistic. Isn't there a fundamental problem if we can't even agree whether f has a well-defined value or not? Perhaps we're at the early 20th C pre-QM stage? -W]"

I don't think this is a problem. Everyone will agree that f CANNOT have a well-defined value, in the sense of having a constant value. f can and will change as the boundary conditions change, obviously. For example, we all know sea-ice albedo is a big positive feedback. It goes away when sea ice is all gone. (And it must change in value as sea ice declines.) Every other contributor to f must behave like this. Modeling this precisely is impossible so there *has* to be some probability distribution. That doesn't mean what Roe and Baker assumed (Gaussian) is right, but I have yet to see a good argument that it is a bad start. The older work of Schlesinger actually reaches the same conclusion, effectively.

["Modeling this precisely is impossible so there *has* to be some probability distribution" - I don't think this is a correct argument. That there is some initial-condition uncertainty on f is true, I should think. But if we're thinking of this in terms of the variation provided year-to-year by our current climate, it is small. Quite how small I wouldn't know - maybe 0.1 oC. Almost all the pdf in f is reflecting our ignorance of f, not its true variation -W]

By Eric Steig (not verified) on 04 Nov 2007

I am sure that William is correct here.

Eric, consider the following: I have a copy of the MIROC model here. From all the published material, you could not possibly replicate its behaviour exactly, even if I told you the specific parameter values I used. Would you say that the sensitivity of this model version is a probability distribution?

Let's say I do a 100-year hindcast (and some paleo time-slices), give you the equivalent of the obs we have of the real climate system, and ask you to estimate the model sensitivity. In this case, it is surely clear that the uncertainty in your estimate is merely an expression of your ignorance, not some intrinsic uncertainty in the system (because there is none).

Knowing that I'm way out of my area... I can't help asking some questions.

I just can't see how Eric is wrong here: if the ice goes the CO2 reflection will change? Different outgoing radiation will affect the sensitivity? OK, ideally it will get a number, a new number, but at different stages different numbers... a distribution if you like. I can't however (as I showed at Annan's blog) think of anything that will make a huge difference. It feels like the CO2 (and other greenhouse gases) cycle is the big uncertainty?

[The initial state of the climate is bound to affect the CS a bit, but this isn't really part of its meaning, in that CS is supposed to be a useful concept because it doesn't change that much. At any event, if we're interested in the CS between *now* and 2*CO2, then the ice state isn't a problem: it's here now, and it won't be then, so why should it affect the CS, and why should it contribute to the uncertainty?

Stepping back a bit, how can there possibly be this level of ambiguity over the definition of so important and oft-studied a concept? Are we like the old philosophers, wasting words arguing over the meaning of infinity, when we haven't even defined it? -W]

James, and William,

You're getting way too esoteric on something that is quite simple! I'm not talking about "initial condition" uncertainty. Perhaps you think I am because I made a reference to Lorenz. The point is that the sensitivity must necessarily depend on the boundary conditions, and as the boundary conditions (e.g. sea ice) change (as part of the feedback) then so does the sensitivity. This is what Roe and Baker are talking about (mostly). An example point is that using e.g. the last 100 years to get at sensitivity won't necessarily give you the right number to use for the next 100 years. Of course this is what is really nice about James's work on using both ice age and modern data to constrain sensitivity. I readily concede that point, and would really like Roe and Baker to address it in future work. But again, I am NOT talking about initial conditions, and nor were Roe and Baker, so that part of James's original post on this is quite beside the point.

[We may all be missing each other's points. OK, so: as I understand it, CS can be considered in several ways: as a characteristic of the planet in general (in which case it's clearly subject to variation in the way you describe); as a "from here to 2*CO2" quantity (in which case it isn't); or as a "tangent to the curve", i.e. instantaneous, value (in which case again it isn't).

Clearly, the cp.net stuff is considering here-to-2*CO2, so problems with the sea ice state don't arise. And R+B explicitly state that they are considering preindustrial to 2*CO2, so the same is true.

So yes, your point can in certain circumstances be true, but not in this case -W]

By Eric Steig (not verified) on 05 Nov 2007

I don't know, I might be oversimplifying things here, but... at any given time, between two times, the CS is a number (OK, it might change a little depending on natural variations, in the several-Earths example); however from the same starting point, going for a longer or shorter time, the CS would be slightly different (e.g. sea ice)... a distribution. In some way I guess that is what is said with the 2-4.5 CS?

And this new discussion is about the possibility of larger CS due to possible variations in the far past. Thinking of CS as a distribution might be wrong since it seems as if at any given time it should not change much? Which I think could be interpreted quite wrongly in the media... So that would mean that, for all that we know now, it is a number most probably around 3, and we don't have any hard evidence for it to show big changes in the next, say, 100 years.

And now I just read W's comments above... so well... hmm... I'll be back...

"Modeling this precisely is impossible so there *has* to be some probability distribution."

This seems to me to indicate that Eric is splitting this up into ignorance that might be overcome and ignorance that cannot be overcome. James and William are writing off the ignorance that cannot be overcome as small, perhaps 0.1 C. I would have thought that whether that is reasonable depends on what you include in it. Precise timings of El Niño cycles seem likely to be in the cannot-be-overcome category.

What else will affect it? One thing that I suggest affects the CS from *now* to 2*CO2 is whether we are currently committed (or not quite committed) to losing the Arctic summer sea ice. It may be possible to know this with a perfect model, but is it reasonable to suggest a perfect model is something we could gain? Eric seems to think there could be lots of other things like that. If we don't know what they are, how can we assess whether they have large effects or not? James doesn't seem to think this is a problem. Getting a perfect model seems like a major problem to me, and until we get much closer to a much better model it almost seems like arguing over how many angels can dance on a pinhead.

If, rather than this intangible split, you classify everything as our ignorance, which could be avoided with a truly perfect model that can even cope with chaos, then the CS from an equilibrium state to a 2*CO2 equilibrium state is a number, not a pdf.

Is that anywhere close to the state of play in this discussion?

[I think so. Now the game is to reconcile that with R+B and other papers... -W]