An Inconvenient Truth

This post is just to get you to read James Annan's post: An Inconvenient Truth, which is his (& Jules') attempt to get their paper about climate sensitivity published. Since the paper is sound, sensible, very clear and readable, and of clear wide interest, the question is: why isn't it published?

The best available answer seems to be that some people don't like his rather clear demonstration that a lot of the talk about high climate sensitivity (hands up CP.net and Stern) is nonsense.

Another possibility is that he isn't well known enough: this is the sort of basic paper that someone eminent in the field (sorry J+J) gets to publish and everyone subsequently references. It reminds me of a wonderful cartoon of a business meeting, all men bar one woman, with the chair saying: "That's an excellent suggestion Miss Smith. Perhaps one of the men here would like to make it?"

[Some of the comments on this are getting a little heated; I've deleted one that was trolling. Please be polite]

Perhaps the most appropriate comment can be found over at Dean Dad's Blog by Doc...

"Statistical analysis can be fascinating, but it's just one way of answering specific types of questions. (One of the most interesting books I've read in the last 5 years is "The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century," by David Salsburg, and it certainly does not make, or try to make, statistical analysis into the only way to deal with questions.)

(And then you have the Bayesians, who are just batsh*t crazy.)"

Thanks.

Jules says that's no cartoon, that was the life she left behind in the UK (and I can confirm she is right - often enough, it was along the lines of "That's an excellent suggestion jules. Perhaps James would like to make it?").

Just trying to be a better amateur reader, may I ask some spectator questions here? I don't want to plop into James's blog thread directly, but I've been trying to understand this by searching on phrases from the discussion.

So -- I think I understand (from James, earlier) that climate sensitivity in degrees C is:
== assuming, at baseline, that CO2 is stable, global average temperature is stable, and the planet is in radiative equilibrium,
== assuming that CO2 then increases by, e.g., 2x, then stops increasing,
== after some time lag, temperature also stops increasing and the climate system returns to equilibrium, radiating as much heat from the atmosphere as the sun provides. At that time, a new average temperature is measured.

Climate sensitivity is the change in average temperature over that time.

Is that right?

And the "priors" argued about are the feedback number -- which is an expression of the range of possibilities -- at which point someone has to make a political decision, perhaps choosing from a list like:

=== nothing else happens, before equilibrium is reached, allowing enough time for us to actually control emissions and stabilize at 2x, or

=== nothing worse than past climate changes for 2x happens (rate of increase makes no difference), so with hard effort we could stabilize at 2x, or

=== current CO2 rate of increase causes extreme feedbacks, like the PETM event, so policymakers should declare plans for a Moonbase and Mars colony and make plans to evacuate high government officials and those who can pay, and prepare to lock down the rest of the country ....

Hmmmm.

Where have I gone wrong here?

[Sensitivity, yes. Priors, no. The point about Bayesian statistics is that you start from what you know (the prior), and you then form a new opinion based on some knowledge. In the case of climate, that knowledge is (almost always) something like "X is A, +/- B" (where +/- is really a probability density function). That knowledge then modifies the prior to form the posterior distribution (there is a toy numerical sketch of this just below). So... you want to start from a prior that encodes "don't really know, guv" and add your knowledge on. But what is "don't really know"? There turns out to be no such thing. And yet you have to choose something. But with vaguely realistic choices, exactly what you choose doesn't matter too much -W]

By Hank Roberts (not verified) on 09 Dec 2006 #permalink
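A minimal numerical sketch of the prior-to-posterior update described in the reply above, in Python. Everything here is invented for illustration - a Gaussian "X is A +/- B" constraint with A = 3, B = 1.5, and two candidate priors - not anyone's published numbers.

```python
import numpy as np

S = np.linspace(0.01, 20.0, 2000)   # candidate sensitivities (deg C)
dS = S[1] - S[0]

# Toy "X is A +/- B" knowledge: Gaussian likelihood with A=3, B=1.5.
likelihood = np.exp(-0.5 * ((S - 3.0) / 1.5) ** 2)

def update(prior):
    """Bayes: posterior is proportional to prior * likelihood."""
    post = prior * likelihood
    return post / (post.sum() * dS)   # normalise on the grid

priors = {
    "uniform U[0,20]": np.ones_like(S),
    "vague expert":    np.exp(-0.5 * ((S - 3.0) / 2.0) ** 2),
}

for name, prior in priors.items():
    post = update(prior)
    cdf = np.cumsum(post) * dS
    lo, hi = S[np.searchsorted(cdf, 0.05)], S[np.searchsorted(cdf, 0.95)]
    print(f"{name:16s} posterior 5-95%: {lo:.1f}-{hi:.1f} C")
```

With an informative likelihood like this one, both priors give similar answers; the arguments below are about what happens when the likelihood is less obliging.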

And the fact that Annan overhypes, and that you seem to miss, is this: the uniform prior is probably not that bad a guess, and the expert prior is often the spurious one, as we see when Annan facilely does away with those high sensitivities he hates. But when you get right down to it, the differences aren't really that huge; you may get a little "right-skewed lump" of higher sensitivities (still at very low probability). So Annan is basically overhyping things dramatically, which I suppose is why he gets rejected, not because there's some horrible big conspiracy to keep high sensitivities (which is silly, since the "RC conventional wisdom" seems to be pooh-poohing high sens, so that Crucifix & Allen & Hegerl & Frame & probably Forest too are really the "renegades" fighting for the cause of "real science").

[Carl - you don't help yourself by exaggerating. James points out problems with the uniform priors - firstly that there are many; secondly that they encode quite implausible information (U[0,20], for example, implies a 75% chance of the climate sensitivity being over 5).

More importantly, I think there *is* strong resistance to the idea of removing the high sens. A chance of high sens was the only thing that interested the media about CP.net; without that there would be little of interest from the project. Stern needs a high sens to get high damage estimates. So while I don't believe in a "horrible big conspiracy" I do believe... well, in something like the approach to intelligence before the stupid Iraq war: everyone knew the desired result, and it was delivered -W]
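That 75% is nothing deeper than interval length under a flat density; a two-line check, for the arithmetic-minded:

```python
# Under U[0,20] probability is proportional to interval length, so the
# prior asserts P(S > 5) = 15/20 and P(S < 5) = 5/20 before any data.
print((20 - 5) / 20, 5 / 20)   # 0.75 0.25 -- i.e. 3:1 odds that S > 5
```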

Back in the 70s, folks in molecular physics took the following approach, which I, with due deference (that is a somewhat ambiguous statement), might recommend.

They started with the best theory that they had (which was none too good, and in the many cases where there was nothing at all, one started with a random statistical result) and used that as the prior, with which they carried out the Bayesian analysis of the observational results. If the differences were small, that validated the theory and gave confidence in the experiments. On the other hand, it was possible in some cases to interpret the differences. This was abandoned when theory progressed to the point where it was not needed and simple, direct comparisons were appropriate (I know those are fighting words). What you get from this sort of approach is not only an improved pdf, but also a measure of your theoretical ignorance (the surprisal; one way to compute it is sketched after this comment).

It seems to me that a Bayesian analysis could start with an ensemble of the best climate models' pdfs for climate sensitivity to create a prior, and then carry out the kind of analysis that Annan and Hargreaves did on observations and measurements. Those who claim that the models have no validity would, of course, not be satisfied, but they can create priors based on their own ideas if they wish.
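One concrete reading of Eli's "surprisal" - an interpretive assumption here, not a definition taken from the molecular-physics literature he has in mind - is the Kullback-Leibler divergence of the posterior from the theory-based prior: near zero if the data merely confirm the theory, large if they force a big revision. A toy version, with invented numbers:

```python
import numpy as np

S = np.linspace(0.01, 20.0, 2000)
dS = S[1] - S[0]

def normalise(p):
    return p / (p.sum() * dS)

# Theory-derived prior and an observational likelihood (invented numbers).
prior = normalise(np.exp(-0.5 * ((S - 3.0) / 2.0) ** 2))
like = np.exp(-0.5 * ((S - 2.5) / 1.0) ** 2)
post = normalise(prior * like)

# Surprisal as KL(posterior || prior), in nats: how far the data moved
# us away from what the theory alone asserted.
kl = np.sum(post * np.log(post / prior)) * dS
print(f"{kl:.3f} nats of theoretical ignorance removed")
```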

> start from what you know (the prior), and you then
> form a new opinion based on some knowledge

Ok, what is the 'prior'? Maybe this explanation helps, best I found today:

"... the pragmatic approach: If we have, or can get, an appropriate, informative Bayesian prior, we will use it.
...
" Usually this prior is in the form of a probability density. It can be based on physics, on the results of other experiments, on expert opinion, or any other source of relevant information. Now, it is desirable to improve this state of knowledge ....."

from: http://www.statisticalengineering.com/bayes_thinking.htm

So is the 'prior' here a probability density curve for what climate sensitivity might be under today's conditions? And the disagreement is about the shape of the probability density chosen?

I'm trying to get this at the fifth grader level before getting into the who's right, first what they're looking at, then what they're seeing, then what they agree is there. After that, on to the disagreements.

[No no no. The "prior" is the information you start with. It's the PDF you have *before* you start to apply whatever knowledge your experiment has gained you. Ask again if that isn't enough -W]

By Hank Roberts (not verified) on 09 Dec 2006 #permalink

Well it seems that it's Annan (& WC?) who think only they are right; and any deviation is lambasted as heresy. The authors of the paper which Annan cites as the bane of his existence (i.e. uniform priors) just want to show an alternative to such Bayesian fundamentalism. Some people (e.g. Chris Forest) use both uniform & "expert" priors; and the world hasn't crashed to a halt yet for some reason...

William, thanks, I realize I'm asking for words and this may not be explainable without math.

The "prior" is the "information" before the "experiment" -- so for this?

The "experiment" is -- the current anthropogenic greenhouse gas increase? I understand we're trying to decide on the range of probabilities for that final result.

And the "information before the experiment" is -- in each author's publication on climate sensitivity --- the author's estimate of what the past climate sensitivity may have been, as a probability curve, a "likely in this range"?

The only public AGU event in SF is the Thompsons' lecture on correlations of many ice cores. Does that improve the "information before the experiment" by adding details on temperature and CO2 changes correlated from many sites?

Thanks for your patience; I don't expect a simple answer, I'm hoping the questions are useful to those trying to explain what they're doing.

By Hank Roberts (not verified) on 10 Dec 2006 #permalink

Am I correct in understanding that the basis of the logic requires CO2 rise to be followed by warming, but does not consider the alternative view expressed from time to time - that CO2 follows warming?

Indeed I am probably wrong to phrase the question in that way, since the consensus view seems to accept that the warming of things may well release more CO2 - though how much, and what effect it will have, is, as ever, open to opinion and perhaps even debate.

But can CO2 level increase be both the cause and the result of warming effects? If so what is the catalyst at the start of the process? Is it likely to re-occur (or re-apply if that is a more suitable way to think of it) during the process? If so what might the effects be?

I will admit that I don't necessarily believe that covering such matters would help to get the paper published. In fact probably the exact opposite.

Might score a few headlines though. And it seems that for all the world (though possibly with science excepted) any publicity is good publicity.

By Grant Perkins (not verified) on 10 Dec 2006 #permalink

Hank, I think that is why the approach that I suggested is useful, use the theoretical knowledge you have that is independent of the measurements to construct the prior, and apply that to the observational data.

As I understand it, the reason James objects to Frame is that by assuming a uniform probability of the result being between 0 and 20 C they are actually assuming things that we KNOW to be false. Moreover, why 0 to 20; why not -10 to +10? If I wanted to check Frame I would start by using different uniform priors, centered differently and having different widths, and then look at how the pdfs changed (a sketch of this check follows below). In that sense, a useful prior might be "no change for any change in the concentration of CO2"; it is ignorant in the naive sense. OTOH, I am a Rabett so what the hell do I know.
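Eli's robustness check is easy to try. A sketch under an assumed, well-behaved Gaussian likelihood (all numbers invented): when the data are informative everywhere the prior has support, shifting or widening the uniform prior barely moves the posterior. The interesting trouble starts when the likelihood goes flat in the tail, as in James's comment further down.

```python
import numpy as np

def posterior_range(lo, hi, obs=3.0, err=1.0):
    """5-95% posterior range for S under U[lo, hi], with a Gaussian
    likelihood centred on obs (all numbers illustrative)."""
    S = np.linspace(lo + 1e-3, hi, 20000)
    dS = S[1] - S[0]
    post = np.exp(-0.5 * ((S - obs) / err) ** 2)  # flat prior: post = like
    cdf = np.cumsum(post) * dS
    cdf /= cdf[-1]
    return S[np.searchsorted(cdf, 0.05)], S[np.searchsorted(cdf, 0.95)]

for lo, hi in [(-10, 10), (0, 10), (0, 20), (-5, 25)]:
    a, b = posterior_range(lo, hi)
    print(f"U[{lo},{hi}]: posterior 5-95% = {a:.2f}-{b:.2f} C")
```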

Thanks Eli and William. I'll follow on in the new topic on this -- clarifying that the Frame paper starts by assuming anything from 0 to 20 is equally likely is a big help.

Grant, I know your questions are among the basic ones, answered in considerable detail at RealClimate -- see the right sidebar for their highlight links.

Back to Eli and William, is it fair to say that the "prior" then isn't meant as "past history" so much as "what we thought up till yesterday"?

The 'prior' is based on, what, some combination of climate history work, theory about how climate has worked in the past, plus assumptions about how the current episode may proceed?

Does the 'prior' include the author's assumptions, and observations, about the ingredients of the "experimental treatment" --- the added anthropogenic factors? higher rate of change of CO2 increase, longterm continuing downward trend of stratospheric ozone, particulates from coal?

Or do those latter factors show up in data from the "experiment" -- the observations being made now --- to compare to the "prior"?

Another phrasing -- are Frame and James disagreeing based on exactly the same data sets and theories?

Or is Frame looking at things that might go wrong with anthropogenic contributions, more severe than what's happened in the deep past, and James looking at the longterm climate record, so they're disagreeing by choosing different original facts and assumptions?

Or am I banging my head on facts and is the real issue one of pure mathematical approach, and it doesn't matter which facts are assumed?

Feel free to take the reply to the new thread where it can stay focused, I appreciate your starting a focused topic. Just tying up loose thoughts here where they got started.

By Hank Roberts (not verified) on 10 Dec 2006 #permalink

Eli,

One of the things we did in the recent ms is to test out different upper limits, showing results from U[0,10], U[0,20] and U[0,50] (U[0,100] and U[0,500] are shown in this pdf). It is easy to show that, with the sort of (rather typical) data set we used there, the posterior is strongly dependent on the upper bound of the prior (when a uniform prior is used). In fact, in the limit of U[0,N] as N tends to infinity, the posterior is actually unchanged from the prior - e.g., the posterior 5-95% probability limits are 0.05N-0.95N! (There is a toy demonstration of the effect just below.)
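A toy version of the bound-dependence James describes, assuming - purely for the sketch, this is not the actual data model from the ms - that the observations constrain the feedback 1/S with Gaussian error, which leaves the likelihood flat at high S. The posterior's upper limit then chases the prior's upper bound:

```python
import numpy as np

def posterior_range(N, y0=1/3.0, sigma=0.15):
    """5-95% posterior range for S under U[0, N], with a toy likelihood:
    the data constrain y = 1/S as Gaussian(y0, sigma) (invented numbers)."""
    S = np.linspace(1e-3, N, 50000)
    dS = S[1] - S[0]
    like = np.exp(-0.5 * ((1.0 / S - y0) / sigma) ** 2)  # flat as S grows
    cdf = np.cumsum(like) * dS   # uniform prior, so posterior = likelihood
    cdf /= cdf[-1]
    return S[np.searchsorted(cdf, 0.05)], S[np.searchsorted(cdf, 0.95)]

for N in (10, 20, 50, 100):
    lo, hi = posterior_range(N)
    print(f"U[0,{N}]: posterior 5-95% = {lo:.1f}-{hi:.1f} C")
```

Contrast this with the Gaussian-likelihood sketch further up, where the choice of bounds scarcely mattered.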

Hank,

Thanks for the response. I have read of the various hypotheses about the relationship(s) between CO2 and temperature and it occurs to me that whichever way you look at it the result is much the same but the timing, and any attempted corrective action which humanity may come to agreement about, could be different.

For example, if the feedback effect leads the temperature change, and it is felt that the feedback can be managed in some way that makes sense (i.e. has some chance of being workable and acceptable globally), then - whether any action is effective or not - it is worth looking for a containment strategy approach.

On the other hand if the CO2 rise lags temperature change, but is perceived as the key measure, the effort could be unnecessary (though would obviously lead to claims of great success should temperatures stabilise or even fall back) and the economic effect perhaps disadvantageous in other ways.

For example, if temperature stopped rising at all but CO2 continued upwards, would the economic rationale be to continue spending for some considerable period of time, implementing all sorts of additional controls based on CO2 management, with a view to ensuring that the success was consolidated?

If the consolidation effort was focused on plans for ameliorating the effects of an increasingly heated world (to obliquely reference Lovelock) but the inhabitants experienced a couple of decades or more of rapid cooling, would the solutions be suitable? Could the economy afford to fund anew a different set of solutions?

In my mind I have an image of a house built on stilts to defend against sea-level rise, but it is covered by several feet of snow and the stilts are buckling under the weight ...

Presumably it would not be too difficult to assess the sensitivity for both scenarios. What I was suggesting was that to do this and include the results in the paper might be both more interesting for publication editors and more enlightening as a scientific exercise aimed at extending the basis of knowledge for political decisions.

It may not matter of course. Politicians are not really interested in the science, in my view, only the controls it allows them to introduce.

By Grant Perkins (not verified) on 10 Dec 2006 #permalink

Hank,
AGU is really easy to sneak into, so if you want to attend some non-public sessions, a bit of wile and brazenness should get you in the door - and if not, borrow a badge from someone who takes a day off to sightsee.

So, cynically, why have I received job offers from 5 modelling groups worldwide, 3 of whom want to do cp.net-style experiments? Is it just because they want to promote high sensitivities and they don't have the wisdom of James Annan & the blogosphere? ;-)

[Because this stuff is wildly fashionable and made Nature and the press? No wonder people want to do the same thing again... err... -W]

Anyway, I'd say that for anything over 3K sensitivity nobody really knows what the hell would happen, what sort of tipping points & "feedback loops" would come into play, etc. So James cavalierly chopping things off with his "not-quite-so-expert prior" is not really scientific. It just seems to be a few desperate "scientists" trying to make a name for themselves and/or trying not to look like Greenpeace hippies.

[If over 3K will do, then fine. But you're still not actually engaging with James's argument for over 4.5K -W]

So cp.net is wildly fashionable? Interesting, since nobody else is doing it! It seems the "wildly fashionable" thing is to CONTINUE wasting taxpayers' money on "grid computing" -- which has shown nothing but quirky prototypes. But thanks for thinking we're "wildly fashionable"; I'll use that at my next interview tomorrow!

My point with the over-3K is that from everything I've heard in my forays over the last 4 years, nobody really knows what to expect in a future climate that has risen a few degrees ("clathrate guns" may pop up etc). So to cavalierly "chop off" high sensitivities, for seemingly no other reason than not to sound "alarmist" or like a hippie, just doesn't cut it with me. But I can see how it would be an appealing hypothesis throughout the field, which makes me think Annan just hasn't done a good job, since it seems the blogo-consensus as well as the scientists would tend to be on his side in some zeal to play down what they perceive as "alarmism" (i.e. just talking about high sensitivities is "alarmism" these days).

[There is a logical disconnect in all this. "We don't know what >3 oC would be like" is a totally different question from "what is the chance of >3 oC". And all this cavalier stuff... you're *still* not actually engaging with the arguments. How about a nice simple one: U[0,20] encodes the idea that S is three times as likely to be greater than 5 as less than 5. It is therefore unreasonable (I don't believe it; neither do you; neither does anyone else). Do you agree or not? -W]

I think the burden of proof is on Annan to demonstrate why his concocted expert prior is the way to go. From the rejections & comments, it seems that he has not done this job adequately, no matter how much he smarmily claims "but it's so easy!" That is the simplest answer; not appeals to the blogosphere and claims of some sort of conspiracy against him. And the results either way aren't earth-shattering. So it really is much ado about nothing and speaks volumes about Annan's ego & careerist desperation rather than any contribution to climate science.

[Pointlessly offensive, but also avoiding the question, which I thought was fairly simple: what do you think of a U[0,20] prior? Is it defensible in terms of the information it encodes? -W]

I think a uniform prior is fine. In the face of the truly unknown, best to keep all options open. That's really the conservative approach oddly enough, isn't it? Well I haven't seen anything that demonstrates otherwise, and perhaps that's what the reviewers & editors are thinking?

And again, the differences aren't earth-shattering -- so you get a slightly fatter "tail" with a teensy-bit more higher sensitivities, whoopee! It sounds more like you guys are trying to reverse-engineer something that would sound more palatable to policy wonks & skeptics (just look how it fits in with Motl's dramatic 1.0 +/- .5 sensitivity ;-).

[OK, progress. "A" uniform prior is fine, you think. That still leaves you the problem of selecting which one you use. And I notice you've still evaded my (James's, really) question: do you think a prior that asserts that P(S>5) = 3 * P(S<5) is believable?

As to whoopee... it seems to me it's rather the other way round. Read the press coverage of the CP.net results: *you* need the possibility of high S to come out of the process -W]

Err, no, our modelling efforts have shown a fairly typical PDF, with a few interesting high sensitivity runs. We don't "need" anything -- we're just showing what tens of thousands of people have helped us produce [deletia -W]

[Let's try to keep the thing on track - your avoidance of the question (see several prev) is now becoming quite painful. If you have no answer, the honest thing would be to say so, and we can drop the issue -W]

A more meaningful question for you, rather than asking a computer guru such as myself, is: why hasn't Annan successfully demonstrated this "easy" thing (as you both seem to think it) to numerous reviewers? You can pick a sensible answer ("his reasoning is faulty", "his argument is unconvincing", "they just think he's a jerk") or you can leap to what you want it to be -- the far-fetched notion that the climate community is "keeping da Wikipedia-men down!"

[I think you're failing to read the reviews: some of the reviewers are saying "of course this is correct, we all know that". The things you think are so difficult are apparently too obvious to be worth publishing to some.

But your avoidance of the question about U[0,20] is now glaring. Is it a difficult question? An easy one? Come on, do you have an answer or not? -W]

Your U[0,20] red herring is pretty funny. As far as the "reviewers agree" -- we only have Annan's word that the reviewers really are agreeing with him. And he's so emotionally involved in knocking down his doppelganger, it's obvious he can't be trusted. Can you really trust someone who flames reviewers & editors of major science publications, and posts ersatz review summaries online?

Carl,

If you dare to ask Dave Frame, I'm sure he will confirm that 2 reviewers (out of a total of 4) explicitly called for publication of our Comment-and-Reply after some minor revisions. Of course I don't claim these as direct endorsements of our argument but the immediate issue is not whether one or two people openly take our side at a first glance but whether our comments are publishable and whether credible arguments against our claims are presented over time. If I'd had nothing but people telling me I was wrong (and explaining why) I'd hardly have kept trying for so long, and I certainly wouldn't be publicising it all over the internet if there was a refutation. Who knows, maybe a reviewer will read my blog...

I don't believe anything I have said about the reviewers can reasonably constitute a "flame". If you had any knowledge of the scientific process you would realise that disagreements are not rare, and as long as people address the issues, they generally stay fairly civil.

BTW pretty funny that yesterday you criticise me for supposedly posting reviews, and when I point out that I didn't, you decide that the problem now is that I've only posted ersatz summaries...

Quite the comedian, aren't you?

Some time ago, Eli pointed out that the type of analysis done by Annan and Hargreaves only applies to the parts of the parameter space explored by the data. This does NOT justify the kind of choice that Frame et al. make, but merely says that in addition to the troubles that are somewhere on the map, there may be Tigers out there. Since models are based on known situations, I am not quite sure that claiming they can give insight into situations that are not encompassed by them is a very good strategy.

I've often thought that a major problem with Bayesian analysis is deciding how much information should go into constructing the prior and how much into the data against which the prior is tested. Since all answers to this are arbitrary, the only thing that I can come up with is something similar to what A&H did: try several and see if the choice affects the final pdfs. Then you are left with analysing that trend (this is a Brit blog; as a kindness to the host, use s and ph here).

> re Lab Lemming's "Hank, AGU is really easy to sneak into..."

Perhaps it was then, but it's not any more - they were _assiduously_ checking badges yesterday.
Alas.