Bayesian stuff for the completely incompetent

Having had a couple of comments on this, I realise that some of the required background on Bayesian statistics is waaaay over some people's heads. This is probably no fault of theirs. Let me make some faint attempt at explanation, and James can correct me as needed, and doubtless Lubos will leap in if I leave him an opening.

The issue (at least in this context) is the updating of "prior" information in the light of new information. Prior information means (at least nominally) what you knew about, let us say, the climate sensitivity S before you tried to make any plausible estimates of it. If you want the maths (which is not hard) then James's post applies, as does the rejected paper. In fact let me copy it here, perhaps it will help: f(S|O) = f(O|S)f(S)/f(O). f(S) is the *prior* probability density function (PDF) for the climate sensitivity S. f(O|S) is the likelihood: how probable the observations O are, given a particular value of S. f(O) is just a normalising constant. f(S|O) is the *output*: the PDF of S, taking into account the observations you've just made, and the prior distribution.
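If a concrete toy version of that formula helps, here is a minimal sketch in Python (the numbers are invented purely for illustration, not taken from any real analysis): put the sensitivity on a grid, multiply a prior by a likelihood for some pretend observations, and normalise.

import numpy as np

# Grid of candidate sensitivities S (degrees C per doubling of CO2)
S = np.linspace(0.01, 20.0, 2000)
dS = S[1] - S[0]

# f(S): a prior, here the much-argued-about uniform U[0,20]
prior = np.ones_like(S) / 20.0

# f(O|S): a likelihood, pretending the observations say S = 3 +/- 1.5 (Gaussian)
likelihood = np.exp(-0.5 * ((S - 3.0) / 1.5) ** 2)

# Bayes: f(S|O) is proportional to f(O|S) * f(S); dividing by the sum is the f(O) step
posterior = prior * likelihood
posterior /= (posterior * dS).sum()

print("Posterior P(S > 6):", (posterior[S > 6] * dS).sum())

The same code with a different prior line is all it takes to see how much the choice of f(S) matters.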

So the output depends on your obs (so you would hope); and on what you knew before, the prior. The problem is that it is hard or impossible to construct a truly sensible "prior" (as James points out; but I don't think it's particularly contentious). You can construct a pretend "ignorant" prior by asserting that a uniform distribution between 0 and 20 is ignorant (U[0,20]). But (again, as James points out) this means that your prior thinks it is three times as likely that S is greater than 5 as that it is less than 5. No one believes that. Inevitably, you are using some of your knowledge in constructing the prior. But hopefully not the same knowledge as you use to sharpen it up afterwards.

So the only hope is that your obs are sufficiently well constrained that the prior doesn't matter too much. If your obs were S=3 (with absolute precision), this would be true. But the obs are S=3+/-1.5, where the +/-1.5 hides various different PDFs, so the prior matters. But if you apply enough obs then it still doesn't matter too much, as each one sharpens up the result more. And if you do this, you end up rejecting any reasonable chance of a high climate sensitivity.
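Here's a rough numerical sketch of that last point (Python again, with invented numbers): one constraint of quality 3+/-1.5 leaves a noticeable upper tail under the uniform prior, but three independent constraints of similar quality squash it.

import numpy as np

S = np.linspace(0.01, 20.0, 2000)
dS = S[1] - S[0]
prior = np.ones_like(S) / 20.0                    # the U[0,20] "ignorant" prior again

def gaussian_constraint(centre, sigma):
    # Likelihood for one observational estimate of S
    return np.exp(-0.5 * ((S - centre) / sigma) ** 2)

def tail_above(unnormalised, threshold):
    post = unnormalised / (unnormalised * dS).sum()
    return (post[S > threshold] * dS).sum()

one_obs = prior * gaussian_constraint(3.0, 1.5)
three_obs = prior * gaussian_constraint(3.0, 1.5) * gaussian_constraint(2.5, 1.5) * gaussian_constraint(3.5, 1.5)

print("P(S > 6) after one constraint:   ", tail_above(one_obs, 6.0))
print("P(S > 6) after three constraints:", tail_above(three_obs, 6.0))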

Did that help at all?


True, if you have a complete ensemble of observations, or if the observations you do have are noise-free; but if that is not the case, you need to start with the best possible prior.

Oh yeah, we animals of the fields prefer innocent to incompetent. Why waste time in statistics classes when you can munch on a carrot? But then you are one of those batsh*t Bayesians.

Let me more or less subscribe to this introduction to Bayesian priors by William.

On the other hand, I don't see anything wrong with using one of the measurements - the simplest one - to determine your priors, and the remaining observations to be used for Bayesian inference.

For example, take the naive extrapolation of the previous century or two to get a central value and a width, and start with a prior that is the Lorentz distribution with this central value and this width - because the normal distribution is arguably overly punishing for the extremes.

For example, with a central value of T=1C and a half-width of 0.5C, the function would be

p(T) = 1 / (2 pi [(T-1)^2 + 0.25])

Sorry if the normalization constant is wrong. With this function, the probability distribution is about 65 times smaller at T=5 than it is at the central value T=1. But the distribution doesn't die off too quickly, so if the truth happened to be much higher (or negative), the observations would have a chance to get it anyway.
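A quick numerical check of that prior, as a Python sketch with the same central value and half-width (the test point is arbitrary, purely illustrative):

from scipy.stats import cauchy
import numpy as np

# Lorentz (Cauchy) prior with centre 1 C and half-width 0.5 C
prior = cauchy(loc=1.0, scale=0.5)

T = 2.3   # any test point
by_hand = 1.0 / (2.0 * np.pi * ((T - 1.0) ** 2 + 0.25))
print("Hand-written formula vs scipy pdf:", by_hand, prior.pdf(T))       # they agree, so the constant is fine
print("Density ratio p(T=1)/p(T=5):", prior.pdf(1.0) / prior.pdf(5.0))   # roughly 65
print("Prior P(T > 4.5):", 1.0 - prior.cdf(4.5))                         # a few percent left in the long tail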

I got the central value as a qualified guess from the evolution in the last 100-150 years, and the error is comparable.

Best
Lubos

Ok. I actually think I understand what Lubos said -- the "previous century or two" would be the 1800s and 1900s?

"... the naive extrapolation of the previous century or two to get a central value and a width, and start with a prior that is the Lorentz distribution with this central value and this width"

Where do other authors go to get their starting points? Ice core records?

[Lubos's idea is reasonable in principle though wrong in detail: you'd want to use the past few decades not century for the trends; and you'd want to appreciate the difference between equilibrium and transient change -W]

By Hank Roberts (not verified) on 10 Dec 2006 #permalink

Hank,

Where do other authors go to get their starting points? Ice core records?

For the most part, they assert (incorrectly) that there is such a thing as an "ignorant" prior that contains no information itself and thus allows them to derive an answer from the observations alone. Further, they claim that "the uniform prior" is such an ignorant prior. As a result of these errors, they start out from a prior that asserts that S is likely to be very large, and (as importantly) unlikely to lie in the standard range of 1.5-4.5C. If one then analyses a small set of noisy observations to update this prior, one finds that - ta-dah - we are still left with a worryingly high probability of extraordinarily high sensitivity (although a much much lower probability than we started with). Cue Nature papers, headlines in the press, and politicians and activists saying we are past the runaway tipping point of no return...
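To put crude numbers on how loaded that supposedly "ignorant" prior is (a trivial Python sketch):

# Under a uniform prior on [0, 20] for the sensitivity S (degrees C):
print("Prior P(1.5 < S < 4.5):", (4.5 - 1.5) / 20.0)   # 0.15 -- the 'standard' range gets only 15%
print("Prior P(S > 6):", (20.0 - 6.0) / 20.0)          # 0.70 -- most of the prior weight sits above 6C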

I am really not making this up, it is exactly what has been going on in the field for the last several years, and the point of our manuscript is merely to point out the error. Of course, this is embarrassing and inconvenient for many, but that in itself doesn't make it wrong!

James

Dear Hank, I think that your question is a good one. Of course, the whole controversy in the Bayesian approach can be blamed on the priors. And if there is a controversy, and in this case there clearly is, it is hard to fix it.

There is no God-given prior. More precisely, there are many god-given priors but the different gods don't agree with each other and this disagreement is projected onto their religious warriors.

If you start with a prior that the temperature change will be any number between -200 and +500 Kelvin with a uniform distribution, which is essentially what some of the people do, a finite amount of Bayesian inference based on the observations can still leave the probability that the temperature change will exceed +100 Kelvin, or any ridiculous number you choose, too high. It is analogous to the vending machine example

http://motls.blogspot.com/2005/12/bayesian-probability.html

where the vending machine steals an infinite amount of money on average, if you use a certain natural-looking prior.

On the other hand, there is also a risk that if you choose the Bayesian prior to be too narrow or too quickly decreasing, you may just dogmatically abandon some possibilities that may still be correct. That's why I also feel that a Gaussian prior or too narrow an interval could be a bad starting point unless you're pretty much sure that the Gaussian interval (-3 sigma, +3 sigma) can't really be wrong.

I think that James already agrees with me today that in the ideal case, the results must become independent of the details of the prior - in which case the statistical reasoning may be interpreted as a frequentist one - otherwise the results remain controversial. Didn't they, James?

Any conventional measurement or indirect measurement or calculation that one uses to get a more accurate idea about the sensitivity - or any other observable or set of observables or probabilities - should in principle be able to create a Gaussian and eliminate the extreme possibilities with a rapidly decreasing probability distribution.

But such a decrease shouldn't be inserted into your prior if the prior - your expectation - is based on too vague arguments simply because you could create a dogma that is not satisfied by Nature.

Finally, I believe that it is still a minority in climate science who are using this elephant statistical method. Others are using other big methods like PCA that they usually don't fully understand, which is why more experienced statisticians can quite easily deconstruct the work of these climate scientists.

This is helping. I'd love to see someone compile what various authors are willing to say in print about which particular facts and assumptions they chose to start with.

Is there a place in this logic where an author says "now, we're not recreating a past natural experiment, because today's experiment adds:
--the rate of increase in CO2 'x' times faster ;
--chlorofluorocarbons keep catalyzing ozone, as the stratosphere cools;
--ozone creation is below average during the next solar cycle"

Does everyone involved assume, as the brokers say, "past results are not predictive of future performance" because new factors in the experiment have no precedent?

By Hank Roberts (not verified) on 10 Dec 2006 #permalink

I've moved over here now as the noise on the JEB post is a little high.

I think I'm starting to get the arguments now. I can't work out what priors James thinks should be applied - but maybe one that already applies probability to the outcomes?

So if the climate sensitivity was 2.5, would we need to bother trying to avoid it? I suppose that's a whole other can of worms.

Finally, has anyone opened a book on climate sensitivity?

[If it was 2.5 (I think most people would argue for a central value of 3, which is close) then we still have a problem to worry about. It's just that with higher values you can get more immediately exciting problems -W]

Let me repeat that I believe that most authors don't really have to talk about priors because they just make the old-fashioned measurements or simulations. You repeat an experiment or simulation to get a particular number N times, where N is large, make statistics of the data, and announce your result to be X plus minus Y, where Y includes statistical and (your estimated) systematic errors. Once science becomes quantitative, things are known within a normal distribution. Only when things are known very inaccurately is more detailed information than the normal distribution needed.

For example, in particle physics, we have supersymmetry, which has some parameters that can ultimately be determined but are unknown today, both theoretically and experimentally. So we can use existing experiments to rule out pieces of the parameter space at some confidence level. That's similar to the climate sensitivity high-end estimates. Once SUSY is really seen, if it ever is, we will be able to calculate and measure the parameters in the old-fashioned way, as numbers plus minus errors. Once this occurs, you don't need to talk about any priors. The only prior is that SUSY is true, which we will have a lot of evidence for.

What Lubos describes is the normal practice in the physical sciences. Unfortunately. In places where experiments are not repeatable (describe this universe, give two examples), or are noisy and complex - which describes climate, medical and social science sampling - you have to be a lot better at statistics.

I said unfortunately, because often a better statistical treatment can provide more information; for example, the differences between the theory (prior) and the experiment, as provided by a Bayesian analysis, are an important measure.

I think I follow that.

Okay, and stuff like this
http://www.nytimes.com/2006/12/11/science/11cnd-arctic.html?hp&ex=11658…
makes no difference in the calculation, because we assume that the rate of change is different but the changes are the same, only faster?

What puzzles me is if unexpected changes from the anthropogenic event are happening, can we say there's low likelihood of big surprises?

[If you're thinking of things like methane clathrate release, or Greenland melting, those are irrelevant to the present discussion, because we're talking about the climate sensitivity to a given GHG forcing -W]

It seems like arguing that in theory, we can overfill a bathtub and the water will just go out the drain, only faster, because it's deeper, so when we shut off the inflow, it will still all go down the drain --- past results predict future performance of the climate system -- but when the kid does it, the water overflows onto the floor.

Still trying to understand how the actual experimental data are added, and whether any new data could change the calculated climate sensitivity --- if so, what could change it?

[Each probability PDF overlays (multiplies) the others. If you don't know what a PDF is, you need to look it up, as it's important; you won't understand anything without that -W]

Or is it really that no matter what happens in the short run, by the time the planet's back at equilibrium, sensitivity will turn out to be about the same as it always has in the deep past, and human rate of change won't make any difference in the end?

Simmer vs. high heat -- does it change only the cooking rate, or also the cleanup required?

Far too far into words here, they're all I am competent with. Y'all bring the numbers back in please and try to tie them to something we can point to?

By Hank Roberts (not verified) on 11 Dec 2006 #permalink

Or perhaps I've just clambered up to understanding this point made long since, about what's included?

"Some time ago, Eli pointed out that the type of analysis done by Annan and Hargreaves only applies to the parts of the parameter space explored by the data."

By Hank Roberts (not verified) on 11 Dec 2006 #permalink

Re William's inline responses -- I do know what a probability distribution is.

Now -- is it "events that did happen during past CO2 doublings" [whatever happened] -- that are irrelevant to the present discussion?

I think that's what Eli said -- that this statistic assumes the range of things that happened in the past, the same way each time the 'experiment' happens?

I thought that in the deep past, the fastest doublings of GHG took several thousand years ---- and the ocean circulates efficiently during that time span, so the ocean heats up in parallel with the atmosphere. Is a doubling of GHG in two centuries assumed to behave the same way? Or is that not an issue?

I'm trying to get clear why melting Greenland or burping methane are irrelevant? Because the atmosphere and upper ocean can't possibly heat up enough, even without having the ocean circulation taking heat away from the atmosphere and upper ocean? So we can't possibly get a transient heat spike at the rate GHG is being doubled, as long as we stabilize at 2x?

[You must understand why burping methane is irrelevant. It's because we're talking about the climate sensitivity. Which is the equilibrium change expected from 2*CO2. We're ***not*** talking about the expected T change to 2100 or any other date. Don't go on until you've understood this point. CFCs are the same -W]

(Those weren't what I was wondering about - I was wondering about new factors like chlorofluorocarbons -- but since you point to Greenland melting and methane burps, I'm trying to focus on those first to understand why they are irrelevant to the experiment)

By Hank Roberts (not verified) on 12 Dec 2006 #permalink

What rules out something like a methane release happening during the period while GHG is increasing --- one that contributes toward the total that ends up with the doubling ---before GHG levels stabilize and then temperature stabilizes, and the experiment is then over?

The fact that it hasn't happened in the past so couldn't happen in the experiment?

[Hank, you're not listening. You don't understand the *definition* of cl sens. As I said, "It's because we're talking about the climate sensitivity. Which is the equilibrium change expected from 2*CO2". OK? Read it *carefully*. Cl sens is what happens if you double CO2 and let the system come into equilibrium, keeping that CO2 level -W]

By Hank Roberts (not verified) on 12 Dec 2006 #permalink

It's not that anything in the Bayesian calculation rules a clathrate release out, or in, it's just that what the calculation is trying to do is find the pdf for a doubling of CO2 and it does not say anything about how that happens.

To the extent that the functionality of CO2 doubling resembles that of greenhouse gas forcing in general, the calculation would be applicable, but since methane forcing grows roughly as the square root of concentration while CO2 forcing is logarithmic, the CO2-doubling Bayesian pdf would at best be approximate.
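For concreteness, the widely used simplified forcing expressions look roughly like this (Python sketch; the coefficients are the commonly quoted approximate values, quoted from memory, so treat them as illustrative):

import numpy as np

def forcing_co2(C, C0=280.0):
    # Approximate CO2 forcing in W/m^2, logarithmic in concentration (ppmv)
    return 5.35 * np.log(C / C0)

def forcing_ch4(M, M0=700.0):
    # Approximate CH4 forcing in W/m^2, roughly square-root in concentration (ppbv),
    # ignoring the N2O overlap correction for simplicity
    return 0.036 * (np.sqrt(M) - np.sqrt(M0))

print("Doubling CO2:", forcing_co2(560.0), "W/m^2")    # about 3.7
print("Doubling CH4:", forcing_ch4(1400.0), "W/m^2")   # about 0.4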

What I was pointing out is that the calculation is only valid for what it was designed to do, and there are other things out there which are not included, including clathrates.

Let me try to make this question really clear. Are there possible sources that are ruled out as contributing to the experimental doubling?

I realize I'm sounding dumber and dumber, that's OK, I know a lot of folks don't understand what you all are talking about here, and I'm trying to be the goat because I think it's important to somehow get this whole thing into simple words. You're answering at a level of abstraction and I'm trying to get to words that describe things.

We have the human experiment -- adding CO2 from fossil fuels --- and the experiment is to watch atmospheric CO2 increasing until it doubles, and halt human emissions, then wait until equilibrium, and that's the experiment ---- whatever else happens up until equilibrium?

I said 'temperature' and I meant 'the planet is back in radiative equilibrium at doubled GHG, and the temperature has stabilized, and we can tell how much of a change in temperature happened.' Is this what you're considering the end point of the experiment, when sensitivity is a simple difference in temperature?

Suppose that we stop adding CO2 once it's doubled. The treatment is over, and we're waiting for the result --- but, while we're waiting for equilibrium, somehow _other_ changes alter GHG levels again? Like a methane burp?

Would that mean the experiment is invalidated and has to be done over? Or would that be part of the experiment?

[Once again. We double CO2. GHGs are then fixed. They are not allowed to change. Methane burps are not permitted -W]

By Hank Roberts (not verified) on 12 Dec 2006 #permalink

If a methane burp started, the temperature increase would be increased, but that is due to extra GHG forcing, not because the sensitivity number is wrong. There could be hazards ahead, like melting permafrost releasing methane. This would mean more CO2 doublings compared to what might be expected for a given set of anthropogenic emissions. Sensitivity is more like the braking time when driving a juggernaut. So, we are not trying to measure the probability of a hazard appearing, just the response/braking time. It is therefore possible to cling to the precautionary principle even if sensitivity is found to be low.
>Far too far into words here, they're all I am competent with. Y'all bring the numbers back in please and try to tie them to something we can point to?
Not sure a probability is something 'we can point to' but...
If:

One author starts with a prior showing a 38% chance of sensitivity greater than 6.2C and ends up with a posterior distribution with a 5% chance of a sensitivity greater than 6.2C.

Another author starts with a prior showing a 5% chance of sensitivity greater than 10C and a 15% chance of sensitivity greater than 6C and ends up with a posterior distribution with a 5% chance of a sensitivity greater than 4.2C.

Only with perfect data can we rule out sensitivities above a certain level. Given our information is imperfect, does it seem like the data is doing its best to reduce the probability (38% ->5% and 15%->~1%) ? If so, does a 38% probability of sensitivity greater than 6.2C seem like an extraordinarily high level to assign in the prior? Note, if you are thinking it would be more appropriate to start with a prior that only assigned 5% to a sensitivity higher than 6C then you are probably double counting the data that can be obtained from the observations and you will end up being overconfident.
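A crude sketch of why the double counting bites (Python, toy Gaussian numbers only): use the data once and you get a sensible spread; feed the resulting posterior back in as a "prior" and use the same data again, and the spread shrinks for no good reason.

import numpy as np

def gaussian_update(prior_mean, prior_var, obs_mean, obs_var):
    # Standard Gaussian prior x Gaussian likelihood update (conjugate case)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var

prior = (3.0, 3.0 ** 2)    # a vague prior: 3 +/- 3
obs = (3.0, 1.5 ** 2)      # one set of observations: 3 +/- 1.5

once = gaussian_update(*prior, *obs)     # legitimate: prior times data
twice = gaussian_update(*once, *obs)     # double counting: posterior times the SAME data again

print("Data used once:  %.2f +/- %.2f" % (once[0], np.sqrt(once[1])))
print("Data used twice: %.2f +/- %.2f" % (twice[0], np.sqrt(twice[1])))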

Where did the second author's prior come from? He used what was thought at the time of the Charney Report (1979), and the data used is subsequent to this. Do you see any double counting in that?

Adam, thanks! I had read that but hadn't looked back.

James in his main article wrote:

"two groups have found some observational (historical) evidence of a significantly positive carbon cycle feedback, meaning that any particular emissions pathway will lead to a higher atmospheric CO2 concentration than was previously expected. I will assume for now that this science is solid enough ....
"... Climate sensitivity is traditionally defined as the equilibrium temperature rise associated with a doubling of atmospheric CO2. As such, it is completely independent of questions about the origins of that CO2 ...."

and

"... "If we were to consider an emissions pathway that would result in a steady-state doubling of CO2 under the assumption that the carbon cycle feedback does not exist

[That's what James does, as I understand it]

" then the resulting temperature change in the real world (accounting for this feedback) would be likely to be in the range 1.6-6.0C""

Is this a fair statement?

So I guess my question is --- does James's approach _require_ "the assumption that the carbon cycle feedback does not exist" or could it still be done, changing that assumption?

By Hank Roberts (not verified) on 12 Dec 2006 #permalink

I understand what the definition says. I'm asking why it's defined that way (not complaining, trying to understand).

Bear with me one more time? I promise, this is my last attempt to ask the question this go-round.

I read this:

[Hank, you're not listening. You don't understand the *definition* of cl sens. As I said, "It's because we're talking about the climate sensitivity. Which is the equilibrium change expected from 2*CO2". OK? Read it *carefully*. Cl sens is what happens if you double CO2 and let the system come into equilibrium, keeping that CO2 level -W]

[Once again. We double CO2. GHGs are then fixed. They are not allowed to change. Methane burps are not permitted -W]

I am listening. I'm asking if the definition is required by the mathematics.

Carbon cycle feedbacks were not permitted when "Climate Sensitivity" was defined, why?

--- Because carbon cycle feedbacks were not known (at the time the definition was written) to occur as part of an increase to 2x? But albedo and water vapor feedbacks are included (is that because they were then known to occur as part of a 2x change?)?

Or,

--- Because feedbacks mathematically are not allowed in the primary measured thing, the GHG/carbon? Does the math disallow specifically a limited, known carbon feedback as part of the sequence of events leading to attaining the 2x end level?

I'm done, best try. I'll watch and read and try to get clearer on this before asking about it again.

[OK, maybe I misunderestimated you. Sorry. So now we know *what* the cl sens is, the question is *why* is it useful? Because it's a basic standard measure. You can (at least in theory) calculate it for any GCM - so it allows you to compare different model estimates. Many observational estimates of climate change can also be translated into this measure -W]

By Hank Roberts (not verified) on 12 Dec 2006 #permalink

GHG are what you can measure. We don't know the feedbacks precisely so it is better to leave that to a different measure.

If it takes 600Gt of carbon to double the CO2 level, this can arise from emissions of 1200Gt with 600Gt absorption by the oceans and no feedback, or from 1000Gt of emissions, 500Gt of absorption by the ocean and 100Gt of carbon cycle feedbacks.

James isn't saying there is no carbon cycle feedback, he is saying it is a different department's problem.

For the juggernaut, the danger depends not only on the stopping distance but also on visibility, and the probability of hazards on the road.

Sensitivity = stopping distance
Speed = rate of increase of GHG
Visibility = our knowledge of the potential problems and ability to see them coming in advance
Probability of hazards on the road = probability of sleeping giants

So the answer is that feedbacks are specifically allowed and add to the danger, but that doesn't affect sensitivity as it is defined. Why is it defined like that? Because that is the obvious way to do it, given that we measure GHG.

Sensitivity is useful as a starting point for calculations. If GHG increase by 41% (square root of 2) then the equilibrium warming would be half the sensitivity.
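In code, the scaling that makes sensitivity convenient is just the logarithm (Python sketch; the sensitivity value is picked purely for illustration):

import numpy as np

def equilibrium_warming(C, C0=280.0, sensitivity=3.0):
    # Equilibrium warming for a CO2 change from C0 to C, for a given sensitivity per doubling
    return sensitivity * np.log(C / C0) / np.log(2.0)

print(equilibrium_warming(280.0 * np.sqrt(2.0)))   # a 41% increase gives half the sensitivity: 1.5
print(equilibrium_warming(560.0))                  # a full doubling gives the sensitivity itself: 3.0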

If instead you defined something else (say an equilibrium emission effect, eee) as the equilibrium forcing in response to 100Gt of emissions, then this would not be as easy to use, as you would have to apply conversion factors to take account of ocean absorption and carbon cycle feedbacks before you could properly scale the figure for a different size of forcing. Also, eee wouldn't be roughly constant, i.e. from 280ppmv eee would be one figure but from 380ppmv it would be a different figure. Sensitivity is roughly constant from different CO2 levels (within reason) because that is the way it works.

I would have thought it might be quite useful to define something like eee so that if you discover that the feedbacks are say twice as large as previously thought then the effect could be quoted in terms of its eee. However, I suspect that if the media were given a choice between quoting a doubling of the feedback effect or a 10% increase in eee then they are going to choose to report the doubling not the 10% increase.

"Adam, thanks! I had read that but hadn't looked back."

Yes, it's easy to forget what's been discussed before (even if you were part of the discussion). I'm glad I did look back, as it helped shape my questions. I had come to a conclusion, but reading the last few replies I'm not sure that it's correct, so I'll put it here and let people pull it to pieces:

Climate sensitivity (in this sense; I assume it can apply to any forcing, even if by another name and giving another result?) is a measure of the temperature change caused by doubling CO2, and the associated *climate* feedbacks. It was defined as such because this keeps the forcing mechanism constant - otherwise you're trying to hit a moving target. Also, the climate feedbacks can be measured and known, or at least that was what was thought when the concept was developed.

However, GHG burps or whatever are the same as digging up coal and burning it - not a climate feedback but part of the change in the forcing (there's nothing climate-wise to say that there's xGt of methane ready to bubble up; that's also dependent on other factors), so you need to disregard it.

This is also because GCMs don't/didn't model these carbon feedbacks either, so you don't need it to compare their results (and they didn't need to model it to get reasonable results for the past).

This means that if climate sensitivity is 3C, then it is quite possible that if we managed to run to an emissions scenario that got us to exactly 560ppm, the resultant *actual* temperature change could easily be higher or lower than that, dependent on whether any carbon cycle feedbacks were positive or negative (and thus the resultant CO2 level would change). That is, unless the emissions scenario was able to accurately predict the feedbacks and take them into account.

Apologies if the above reads like my version of a methane burp.

Lubos suggested using a Lorentzian (rather than a Gaussian) as a long tailed prior. I think he is on to something, much along the lines that James pointed to about using priors that assign equal probability to unlikely and likely limits.

That being said, both Lorentzians and uniform priors are symmetric, and the measured/calculated pdfs are skewed to higher temperatures. I am a bit worried that the pdf generated by Annan and Hargreaves is a lot more symmetric than all of the underlying data. Is this an artifact of the uniform (but better bounded than Frame) prior? Would it not be a better idea to use a skewed prior such as T exp(-T^2/sigma^2) + K (normalization left as an exercise)?
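As a sketch of what such a skewed prior looks like (Python; sigma and the bounds are chosen only for illustration, and the additive constant is dropped):

import numpy as np

S = np.linspace(0.0, 20.0, 2001)
dS = S[1] - S[0]
sigma = 3.0

# Rayleigh-shaped prior: zero at S = 0, peaks near sigma/sqrt(2), skewed towards higher values
prior = S * np.exp(-S ** 2 / sigma ** 2)
prior /= (prior * dS).sum()                # normalise numerically on [0, 20]

print("Mode of the prior:", S[np.argmax(prior)])
print("Prior P(S > 6):", (prior[S > 6] * dS).sum())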


"Its because we're talking about the climate sensitivity. Which is the equilibrium change expected from 2*CO2". OK? Read it *carefully*. Cl sens is what happens if you double CO2 and let the system come into equilibirum, keeping that CO2 level -W

"Equilibrum" in this context excludes all of the longer term responses, such as changes in deep ocean temperature and ice sheet growth or decay. LGM to preindustral CO2 change was less than half of 2X, and the temperature change was roughly 6C. This might seem to imply a climate sensitivity of more than 10C. Yet the best estimates of climate sensitivity over this period seem to be in the range of about 3C +- 1C, as much of the change is due to the slow feedback paths.

It is important to remember the climate sensitivity will underestimate the climate change over periods of time much longer than 50 to 100 years.

By Phil Hays (not verified) on 14 Dec 2006 #permalink

Ok. And Lubos says that climate sensitivity is around 1 degree C, because the absorption bands for CO2 are saturated.

Is that because, by the definition, he is not considering the feedback where the added CO2 moves into the upper atmosphere, where at lower pressure its absorption bands are no longer saturated?

[Lubos is probably getting his numbers from some dodgy technique. We all know about sat of the bands -W]

By Hank Roberts (not verified) on 14 Dec 2006 #permalink

>technique
He blogged:

http://motls.blogspot.com/2006/05/climate-sensitivity-and-editorial.html

"If you assume no feedback mechanisms and you just compute how much additional energy ... will be absorbed by the carbon dioxide .... 1 Celsius degree or so for the climate sensitivity."

Excluding all feedbacks, no clouds, vapor, band saturation, stratosphere temp?

Humor me? I'm trying to imagine a universe in which what he says is true for him as he experiences it. Is simply twinning each CO2 molecule, popping a duplicate out next to it by magic, the way to get to "1" -- nothing else changes?

[I'm a bit baffled why you are taking his hand-waving seriously. With no feedbacks the answer is something like 1.5 oC I think, though I don't have the numbers to hand - W]

By Hank Roberts (not verified) on 14 Dec 2006 #permalink

That seems like just what he did, Hank, apparently copying from Lindzen. Another method for getting low (although slightly higher) sensitivity is practiced by Pat Michaels, who does it basically by straightlining the current trend.

I wanted to add, based on some of the discussion above, that I think part of the confusion about sensitivity is because for quite a while it was thought that double CO2 was plausible as the level where we would end up. Due to a combination of foot-dragging by most and a substantial scaling up of planned emissions by some (mainly China and India), it's looking more and more like we will zoom right through doubling to something significantly higher. My completely unqualified guess is maybe 700ppm if we get serious within the next ten years. And of course anything along the lines of a methane burp could make that level seem like a distant dream.

A related subject that I'm very curious about is some sort of calculation of the real commitment to warming. Jim Hansen says we are presently committed to a further .5C of warming based on emissions to date, but of course keeping it to that level is an impossibility. It seems like it should be doable to come up with a more realistic "pipeline" figure based on maximum feasible steps being taken to reach a stable GHG level. This doesn't sound quite like any of the current IPCC scenarios, but perhaps someone will correct me on that. In any case, I wonder what that number would be. Jim implies that it would be low enough to prevent a major ice sheet melt, but that's starting to sound like extreme optimism.

By Steve Bloom (not verified) on 14 Dec 2006 #permalink

William, it's not that I'm paying more attention to Lubos's number than to yours, or James's. I'm asking whether it's possible for a non-expert to understand the assumptions made -- by asking what they were.

You mentioned earlier that some people's sensitivity numbers included water vapor and cloud feedback assumptions, assuming they were known. (As they were known at the time, whenever and whatever those were.)

Lubos says he includes no feedbacks at all.

From way, way up here in the cheap seats, far from the stage --- it all _looks_ like handwaving. Please, this is not a complaint, not an accusation, not a statement about anyone's professional competence.

Blogging is still an amateur sport.

I'm just asking --- is there any way to create something like a table, with each study's sensitivity result, and a list of the numbers attached to whatever feedbacks if any were included?

[I guess one could. I'm not sure that Lubos's are a "study" though - I don't see where his "1" comes from. There is some IPCC discussion at http://www.grida.no/climate/ipcc_tar/wg1/345.htm -W]

And is it safe to assume that if everyone in this field did exactly the same calculation, using the same numbers, they'd get the same result?

It's -- repeatability. Can this be described in such a way that someone else can take the same information and get the same result?

Using all three blackboards, for the poetry majors, without any "now it's obvious we can skip from here to here ..." jumps? By the numbers?

----> A "no" is a sufficient answer. I realize this might not be even possible.

[Its usually evaluated from GCMs. So no, you'd get different answers -W]

By Hank Roberts (not verified) on 15 Dec 2006 #permalink

AIUI if you only consider the radiation changes alone, you get about 1C. Including the water vapour (Clausius-Clapeyron) gives about 2C. The biggest unknown is due to clouds, which could in theory be positive or negative.
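The back-of-envelope version of that radiation-only figure, for anyone who wants to see where the roughly 1C comes from (Python sketch with the standard approximate numbers):

# No-feedback ("Planck-only") sensitivity: the warming needed to radiate away
# the extra forcing from doubled CO2, using the Stefan-Boltzmann law.
sigma_sb = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
T_eff = 255.0          # effective radiating temperature of the Earth, K
dF_2xCO2 = 3.7         # approximate forcing from doubling CO2, W m^-2

planck_response = 4.0 * sigma_sb * T_eff ** 3     # about 3.8 W m^-2 per K
print("No-feedback sensitivity: %.1f C" % (dF_2xCO2 / planck_response))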

Phil Hays mentions ice sheets melting adding to the warming in the longer term: although the growth of ice sheets could certainly add to any cooling (as at the LGM) they aren't big enough now for more melting to add much to the warming (although TBH I've not seen a calculation of this effect). Feedback from sea ice melting is definitely included in the standard calculation (and this will generate a nonlinearity in the response eventually - once the sea ice has gone, there can be no more albedo feedback from its ongoing reduction).