Oh dear, oh dear, oh dear: chaos, weather and climate confuses denialists

It's shooting fish in a barrel, of course, but you must go and read Another uncertainty for climate models – different results on different computers using the same code [WebCitation].

The issue here is a well-known one - it dates back to Lorenz's original stuff on chaos: that trivial differences in initial conditions, or in processing methods, will lead to divergences in weather forecasts. The (entirely harmless) paper that has sparked all this off is An Evaluation of the Software System Dependency of a Global Atmospheric Model by Song-You Hong et al. and sez

There exist differences in the results for different compilers, parallel libraries, and optimization levels, primarily due to the treatment of rounding errors by the different software systems.

This astonishes the Watties, as though it was a new idea. To them I suppose it is. But it's exactly what you'd expect, within a numerical weather prediction framework (though I'd expect you not to care within NWP. If differences in optimisation level have led to error growth large enough to see, I'd have expected uncertainties in initial conditions to have grown much more and made the whole output unreliable). I don't think you'd expect it within a climate projection framework, at least atmospheric-wise. You might expect more memory from the ocean. JA and I have a post on RC from 2005 that might help, originating from a post on old-stoat by me where I was playing with HadAM3.
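
For anyone who wants to see the effect rather than take it on trust, here's a toy illustration (a hypothetical Python sketch using the standard Lorenz-63 demo, not any real NWP code; the 1e-14 perturbation just stands in for a rounding difference):

# Toy sketch only: the Lorenz-63 system integrated twice from initial
# conditions differing by 1e-14, roughly the size of a double-precision
# rounding difference.  Forward Euler is crude, but fine for a qualitative picture.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

a = (1.0, 1.0, 1.0)          # reference run
b = (1.0 + 1e-14, 1.0, 1.0)  # perturbed run: one "rounding error" in x

for step in range(1, 6001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        # the separation grows roughly exponentially, then saturates
        print(step, abs(a[0] - b[0]))

Within a weather-forecast framework, a perturbation of that size does the same damage whether it comes from the initial conditions or from the compiler.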

In the comments at WUWT Nick Stokes has done his best to explain to the Watties their mistake - but AW has just rubbed out NS's comments, because they were too embarrassing.

There's an important distinction to make here, which is that climate modelling isn't an Initial Value Problem, as weather prediction is. It's more of a Boundary Value Problem, with things like GHGs being the "boundaries". Or at least, that's the assumption and that is how people are approaching it (RP Sr disagrees, and you could discuss it with him. Except you can't, because he doesn't allow comments at his blog. RP Sr is too wise to value anyone else's opinion). Potentially, there's an interesting debate to be had about whether climate modelling can indeed be considered largely free of its initial conditions. But you can't start such a debate from the level of incoherent rage displayed at WUWT.
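
To make the IVP/BVP distinction concrete with the same toy system (again, a hypothetical Python sketch, not a GCM): the trajectories depend sensitively on where you start, but the long-run statistics hardly do.

# Same toy Lorenz-63 system, but now looking at a long-run statistic (the
# toy model's "climate") instead of the trajectory.  Two very different
# starting points give almost the same time-mean of z.
def climate_mean_z(x, y, z, n_spinup=10000, n_sample=200000, dt=0.01,
                   sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    total = 0.0
    for step in range(n_spinup + n_sample):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if step >= n_spinup:
            total += z
    return total / n_sample

print(climate_mean_z(1.0, 1.0, 1.0))
print(climate_mean_z(-8.0, 7.0, 25.0))

Whether the real system, with an ocean and changing forcings attached, behaves this politely is exactly the debate worth having; the sketch only shows that "chaotic weather" and "chaotic climate" are different claims.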

Refs

* Initial value vs. boundary value problems - Serendipity
* Chaos, CFD and GCMs - Moyhu, 2016.

we global warming deniers are not confused. We still say your global warming crud is a ponzi scheme designed to fund the new world government and nothing less. The CIA has now been caught using $613,000 to help fund weather wars via HAARP and other methods, so technically man made weather is real, so if you wish to stop it, go after the CIA, the Air Farce and HAARP, not cars and coal.

By Kevin Sanders (not verified) on 27 Jul 2013 #permalink

@Sanders You are either a mental case or a troll.

By TheGoodLocust (not verified) on 27 Jul 2013 #permalink

I already felt like banging my head against the wall when I read the title. The mistake that was coming was obvious to anyone with a science background.

It is fully okay to be scientifically illiterate; there are also many topics I am not knowledgeable about.

What is already a little weird is that such people start making public claims about scientific topics; I would personally prefer to keep such discussions private.

What is beyond comprehension is that such people are so sure of their nonsensical ideas that they aggressively attack scientists and often claim those scientists to be conspiring against humanity.

At least I lack the vocabulary to describe this.

By Victor Venema (not verified) on 27 Jul 2013 #permalink

TGL says:

"@Sanders You are either a mental case or a troll."

Or, most likely, a poe. Think outside the box, TGL.

By metzomagic (not verified) on 27 Jul 2013 #permalink

I thought a "poe" was a type of troll.

By TheGoodLocust (not verified) on 27 Jul 2013 #permalink

'I thought a “poe” was a type of troll.'

Not necessarily. By definition, a poe is indistinguishable from a fundamentalist posting in earnest. It is often a regular poster just being extremely sarcastic/cynical.

If you put a smiley at the end of it, that ruins all the fun :-)

By metzomagic (not verified) on 27 Jul 2013 #permalink

Kevin is serious in what he says, as foolish as his comments may be. He has long displayed his conspiracy theories, paranoia, and racism, at Greg Laden's blog (others as well). The comment here is rather tame for him.

Towards the topic of this post: I thought one of the assertions of the denialists was that climate scientists were all in cahoots. How do they reconcile that with the 'discovery' mentioned here?

Victor Venema:

"I already felt like banging my head against the wall when I read the title. The mistake that was coming was obvious to anyone with a science background."

Or to anyone with a background in the theory and practice of implementing compilers used by people doing, among other things, scientific computing, which is how I spent my life in the 1970s and 1980s. "Oh, rounding differences?" Um, yep!

I'm going to go scan the paper; it's probably useful, in that measuring differences in results that come up on different platforms, optimization levels, etc. can help one understand how defensively the code has been written in order to minimize rounding problems.

I do hope all those Wattsonians are now so scared shitless that they'll refuse to step into a modern airliner ...

Hmm, comment in moderation, probably because I said "scared s*******", I bet.

Only the abstract is available free, too bad.

Considering that the effects of rounding on computer weather models were important to the development of chaos theory (back in 1961 or so...), how is the initial paper even surprising?

Worth revisiting this post, perhaps....

[I didn't see that at the time. After some thought I guessed model restarts, though I was dubious they would have made that much difference, but I can believe restarts-with-hardware changes perhaps -W]

By James Annan (not verified) on 27 Jul 2013 #permalink

Since the "butterfly effect" is negligible in real world climate because it is drowned out by macroscopic scalars (sorry, Lorentz), it follows that it should also be negligible in computer climate models. If not, it's a very basic error in the models -- add it to the pile.

Forgive me for not keeping up on the maths, but rather than trying to value the predictability of the climate model, wouldn't a Monte-Carlo simulation of the climate hindcast be more valuable for bounding it?

For example, take the model that was developed in 2000, which most likely had a fit with error in a Monte-Carlo, and compare it to the same model in 2005 running hindcast and 2010 hindcast.

Trying to map the model to fit observation is a good first guess, but there's no guarantee that the observation is "typical". This is a well-known phenomenon in engineering: you don't chase the observation, rather you evaluate the observation and the probability that it is within the estimated normal variance of expectation. Six-sigma concepts are based on this, and the rules of thumb are that a single observation > 3 sigma, or 2 out of 3 greater than 2 sigma, or 7 out of 8 greater than 1 sigma, would mean that the estimate of normal variance is incorrect. So, taking this to a climate model, running many Monte-Carlo simulations should give a variance and a mean. If observations fall outside the rule (3 sigma, 2 sigma, 1 sigma above), either the mean is wrong (if they are all on the same side) or the variance/sensitivity of the model is wrong. The boundary is the Monte-Carlo simulations with various ICs and inputs. I wouldn't particularly care about any single run in the Monte Carlo, but thousands of runs should yield a mean and std dev for the model. Errors can be made by assuming the measured previous values are "typical", but in reality the real climate should have about the same variance as the model, and the comparison would be to the orthogonal components (i.e. the std dev allowed is a sum-of-squares boundary of both the model and observation). Is this not a correct interpretation?
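
What tim B describes is essentially ensemble verification; the sigma-rule bookkeeping can be sketched in a few lines (hypothetical Python with entirely made-up numbers - toy_model_run stands in for a real model, and the "observations" are invented):

import random
import statistics

def toy_model_run(seed, n_years=30):
    # stand-in for one model run: a fixed trend plus internal variability
    rng = random.Random(seed)
    return [0.02 * year + rng.gauss(0.0, 0.1) for year in range(n_years)]

ensemble = [toy_model_run(seed) for seed in range(1000)]

# per-year ensemble mean and standard deviation
means = [statistics.mean(run[y] for run in ensemble) for y in range(30)]
stdevs = [statistics.stdev(run[y] for run in ensemble) for y in range(30)]

# made-up "observations"; flag any year falling outside the 2-sigma band
obs = [0.02 * year + random.gauss(0.0, 0.1) for year in range(30)]
for year, (o, mu, sd) in enumerate(zip(obs, means, stdevs)):
    if abs(o - mu) > 2.0 * sd:
        print("year", year, "outside 2 sigma:", round(o, 3),
              "vs", round(mu, 3), "+/-", round(2.0 * sd, 3))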

RE: WC and JA at RC vs Pielke Senior (PS)

Except you can't..

[So I shall have to post it here instead.]
It's quite an interesting topic, but a brief look at the link (above) to PS has left me none the wiser. Perhaps another link?

Consider the juxtaposition of his last sentence with the paragraph which precedes it. I admit that I have not read the earlier articles in the series, but I should have thought the issue of sensitivity to slight changes in something or other must be treated at a mathematical or numerical level*, not at the level of comparing the output with observations? Is that wrong?
----------------
* That means comparing models with models, which is an activity PS has just condemned.

By deconvoluter (not verified) on 28 Jul 2013 #permalink

Garbage story about garbage science. How stupid can you be?!

By Richard Misior (not verified) on 28 Jul 2013 #permalink

@tim B, There's a very large literature on ensemble forecasting, which is essentially what you're describing. (More in NWP than in climate.) The way you've expressed it suggests you don't quite understand how these models work but you're basically on target.

By American Idiot (not verified) on 28 Jul 2013 #permalink

@Dean: "Towards the topic of this post: I thought one of the assertions of the denialists was that climate scientists were all in cahoots. How do they reconcile that with the 'discovery' mentioned here?"

Both sides have their conspiracy theorists.

There are some skeptics who believe there is some sort of vast conspiracy among climate scientists and certain government officials (I don't). There are those on your side who believe in a vast conspiracy by oil companies to promote skepticism of global warming.

By TheGoodLocust (not verified) on 28 Jul 2013 #permalink

Of course, in a strict mathematical sense Pielke is right. A time dependent problem is an initial value problem. Climate has time varying "boundary values" or forcings which are not mathematically "boundary values" at all but "source terms". So this whole language is neither rigorous nor correct.

And it's misleading. The first boundary value problem most people encounter in university is the elliptic boundary value problem, such as is used to model a structural stress and buckling problem. These problems are well posed so that the response to a change in the boundary values is bounded. And they have unique solutions. They are also linear problems.

Of course Lorenz showed that for nonlinear systems, the initial value problem is ill-posed.

So by invoking the boundary value problem name, incorrectly, one implies something about climate and weather simulations that is not true, or at least cannot be shown in any conclusive sense.

The most charitable interpretation of the boundary value "analogy" is that in the limit, the attractor is so strong that everything gets sucked in and all those shorter term variations don't matter. Except, the attractor can be very complex and which part you get sucked into could depend on where you started, etc. The problem here is this is really just "communication" and not really mathematics, science, or anything rigorously defensible.

[I think you've misunderstood.

If you take an atmosphere-only climate model, and try to simulate its climate, then you end up with a statistically stable simulation, that nonetheless has "chaotic" weather. But the climate of the model isn't chaotic.

If you're trying to simulate the future evolution of climate in response to an external perturbation (such as increasing CO2) then it isn't at all clear that there is sensitive dependence on initial conditions - i.e., chaos (despite what people may try to assert about tipping points). Indeed, there may well be the reverse - that given a wide range of different initial conditions, the model future climate will converge on the same thing. "A time dependent problem is an initial value problem" may only be trivially true. Consider, say, the evolution of the Sun. This is a time-dependent problem. Will you tell me that it's an initial value problem, and therefore insoluble? I hope not. More from me:

* Climate is stable in the absence of external perturbation
* Repeatability of GCMs

-W]

By David Young (not verified) on 28 Jul 2013 #permalink

I never said climate simulation was impossible, merely that the arguments for its being stable are neither rigorous nor scientific.

Have you ever heard of bifurcations in a nonlinear system?

[Weirdly enough, yes I have -W]

There is no basis for asserting that climate is not chaotic.

[Wrong.

There is such a basis, and I've pointed you at it. I did not claim it was completely rigorous, but it does exist, and I really don't know what your purpose is in coming here to talk if you're not going to listen -W]

You see, the problem here is that if climate models are indeed very stable, that is probably due to unphysical numerical dissipation, present in all Navier-Stokes simulations. It is well known that this dissipation can strongly damp the dynamics and make things seem stable when they are not. I assume your background is not strong in these subjects, but the effect of dissipation really is mathematically provable.

[Nah, I did Navier Stokes for my thesis and then more whilst starting up in climate modelling.

What is it with you folk who know almost nothing relevant to the subject and then assume that the people who do know, know nothing? The dynamics of the GCMs aren't "damped" in any way that is important to this discussion -W]

By David Young (not verified) on 28 Jul 2013 #permalink

Yawn. Model-bashing by fake sceptics is both tedious and demonstrates how badly they understand "the science". Here is well-known model sceptic James Hansen on the way it really is:

TH: A lot of these metrics that we develop come from computer models. How should people treat the kind of info that comes from computer climate models?

Hansen: I think you would have to treat it with a great deal of skepticism. Because if computer models were in fact the principal basis for our concern, then you have to admit that there are still substantial uncertainties as to whether we have all the physics in there, and how accurate we have it. But, in fact, that's not the principal basis for our concern. It's the Earth's history - how the Earth responded in the past to changes in boundary conditions, such as atmospheric composition. Climate models are helpful in interpreting that data, but they're not the primary source of our understanding.

TH: Do you think that gets misinterpreted in the media?

Hansen: Oh, yeah, that's intentional. The contrarians, the deniers who prefer to continue business as usual, easily recognize that the computer models are our weak point. So they jump all over them and they try to make the people, the public, believe that that's the source of our knowledge. But, in fact, it's supplementary. It's not the basic source of knowledge. We know, for example, from looking at the Earth's history, that the last time the planet was two degrees Celsius warmer, sea level was 25 meters higher.

And we have a lot of different examples in the Earth's history of how climate has changed as the atmospheric composition has changed. So it's misleading to claim that the climate models are the primary basis of understanding.

Well then BBD, aside from the name calling, I'm glad to see Hansen owning up to the criticisms of his climate modeling proposals from a long time ago. Apparently, the reviewers of his proposals said the same thing. However, if this is so then why are we spending so much on modeling and running models?

1. We should be investing in the "stronger points" that will give better information.

2. Why does the IPCC consider the computer models to be one of its lines of evidence for sensitivity and its only line of evidence for projected future temperature rises?

3. Why the resistance to this obvious fact about models from people such as yourself elsewhere and

By David Young (not verified) on 28 Jul 2013 #permalink

Sorry, there is a "feature" of the software for this blog that if you expand the comment box too far, you can no longer edit it.

...As I was saying. If Hansen is right and I know he is (and really agrees with the reviewers of his proposals who were critical of them), what's with the billions of bytes and words over the years defending climate models and the billions of dollars spent on them?

This fact itself makes one distrust this field of science. Which version of Hansen is right? They cannot both be.

By David Young (not verified) on 28 Jul 2013 #permalink

So, David Young, it sounds like your proposal would be to stop attempting to understand the problem.

By Rob Honeycutt (not verified) on 28 Jul 2013 #permalink

"David Young", you are a fake sceptic. Do not play the whining victim when you are correctly categorised. And here, you are doing *exactly* what Hansen describes.

As for the rest of your stuff, just re-read what Hansen said. Notably this bit:

Climate models are helpful in interpreting that data, but they’re not the primary source of our understanding.

Also re-read Rob Honeycutt # 24.

David... I think you'll find that every climate modeler out there holds the same position as Hansen does in the comments presented here. Does that mean climate modeling is pointless and a waste of money? Far from it.

I would put forth that weather modeling is even worse than climate modeling. Why do we rely so much on weather models today when the predictive powers of any given model is anything short of abysmal? Because collectively, over a range of models and over multiple model runs, they give us a great deal of information.

Do you remember how weather models predicted the abrupt dogleg to the west hurricane Sandy would take, straight into the NY area, nearly a week in advance?

By Rob Honeycutt (not verified) on 28 Jul 2013 #permalink

tgl, the bit about oil companies is not fictional. The items you promote, however, are far from reality.

Rob, What I propose is stated above. Shouldn't we spend money on the lines of evidence that can give us the clearest answer? Seems like GCM's are admitted to be the weakest by Hansen. In fact, I've had trouble in the last year finding anyone who disagrees, except the uninformed or mathematically naive.

Weather models are useful, given their limitations. Climate models are mostly weather models just run on very coarse spatial grids and for a long period of time. So, once the weather model diverges from reality, what evidence is there that the "statistics" will be meaningful?

"The skeptics ... easily recognize that the computer models are our weak point." I believe Hansen now says that paleoclimate is the evidence we should trust. If so, then we should be working hard to reduce uncertainties there. Annan and Hargreaves seem to be doing some new and interesting work in this area, which is also showing lower temperature differences, but you can read about that on James' Empty Blog.

By David Young (not verified) on 28 Jul 2013 #permalink

TGL, have you ever read or heard of the 1998 API strategy document?

http://www.euronet.nl/users/e_wesker/ew@shell/API-prop.html

Here is a quote from that document:

Victory Will Be Achieved When:

* Average citizens "understand" (recognize) uncertainties in climate science; recognition of uncertainties becomes part of the "conventional wisdom"
* Media "understands" (recognizes) uncertainties in climate science
* Media coverage reflects balance on climate science and recognition of the validity of viewpoints that challenge the current "conventional wisdom"
* Industry senior leadership understands uncertainties in climate science, making them stronger ambassadors to those who shape climate policy
* Those promoting the Kyoto treaty on the basis of extant science appear to be out of touch with reality.

con·spir·a·cy
/kənˈspirəsē/
Noun

A secret plan by a group to do something unlawful or harmful.
The action of plotting or conspiring.

By Ian Forrester (not verified) on 28 Jul 2013 #permalink

Ian beat me to the punch, but it is not a conspiracy theory when you have ironclad proof of the conspiracy.

By Rattus Norvegicus (not verified) on 28 Jul 2013 #permalink

#28

Rohling et al. (2013):

Many palaeoclimate studies have quantified pre-anthropogenic climate change to calculate climate sensitivity (equilibrium temperature change in response to radiative forcing change), but a lack of consistent methodologies produces a wide range of estimates and hinders comparability of results. Here we present a stricter approach, to improve intercomparison of palaeoclimate sensitivity estimates in a manner compatible with equilibrium projections for future climate change. Over the past 65 million years, this reveals a climate sensitivity (in K W⁻¹ m²) of 0.3–1.9 or 0.6–1.3 at 95% or 68% probability, respectively. The latter implies a warming of 2.2–4.8 K per doubling of atmospheric CO2, which agrees with IPCC estimates.

#28

I believe Hansen now says that paleoclimate is the evidence we should trust.

When did he *not* argue this?

I think you are engaging in dishonest framing here.

@Ian The Protocols of the Elders of the American Petroleum Institute?

I'm half-joking of course, but environmentalists do not exactly have a great track record with "leaked" documents. After all, just recently Peter Gleick almost certainly forged a similar document.

Assuming your document is true (a big assumption), it is weak sauce. I would fully expect everyone on both sides to advocate for their position - that is not a conspiracy.

What is missing from the "big oil is funding denialism" theory is the money or the signs of money being spent in large amounts. There isn't some great media campaign by oil companies. There are a few blogs and a few journalists - that's about it.

If they were funding a campaign then I'd see constant commercials and advertisements on the subject. I DO see advertisements about various "renewables" and "low carbon" technologies from oil companies - they are constantly on TV with both implicit and explicit endorsements of global warming orthodoxy.

By TheGoodLocust (not verified) on 28 Jul 2013 #permalink

@Rattus Norvegicus: "Ian beat me to the punch, but it is not a conspiracy theory when you have ironclad proof of the conspiracy."

You think an obscure 15-year-old document of indeterminate origin and authenticity is "ironclad proof of a conspiracy"?

By TheGoodLocust (not verified) on 28 Jul 2013 #permalink

TGL

The funding of contrarian messaging by the fossil fuel industry is well documented and not in dispute. So why dispute it? It's not worth the effort.

Anyway..
We already have 'weather chaos'. Why should the possibility of 'climate chaos' as well, e.g. at high values of the forcing, be an argument for ramping up the forcing?

Does the lack of determinism in seismological models of earthquakes mean that builders need not bother about earthquake protection?

By deconvoluter (not verified) on 28 Jul 2013 #permalink

Have you ever heard of bifurcations in a nonlinear system?

Yep. Last time that was relevant for the climate was in the Precambrian during the Snowball Earth episodes. And it may be again in the future, if we trip one of those tipping points, wherever they may be.

There is no basis for asserting that climate is not chaotic.

Precisely the opposite is true. When external forcings are stable, so is the climate, as was the case last during the Holocene - up to the time when we started messing with things. Globally, locally, for any climatic parameter you care to mention. Just point me to any place on Earth, any time, where the same external forcing situation was associated with two or more different climatic situations.

You're out of your depth with climate, in an unusually embarrassing way. As WMC insists on politeness, I'll stop here.

By Martin Vermeer (not verified) on 28 Jul 2013 #permalink

David Young:

"Rob, What I propose is stated above. Shouldn’t we spend money on the lines of evidence that can give us the clearest answer? Seems like GCM’s are admitted to be the weakest by Hansen."

Weakest for what? Hansen (and the mainstream climate science world) points to a long list of evidence ranging from basic physics to paleoclimate to demonstrate that adding CO2 to the atmosphere is going to cause warming, and that it's going to be in excess of the 1C or so per doubling of CO2 caused directly by the radiative properties of CO2 (due to feedbacks, in particular the large water vapor feedback).

GCMs aren't needed for this. They're the "weakest" evidence not necessarily because they're particularly weak as models go, but because the mountains of other evidence are so strong.

This doesn't mean that GCMs are the weakest tool for trying to figure out what current rates of CO2 increase in today's world mean for us on, say, a century timeframe. The historical evidence isn't going to be of much help here. I challenge you, for instance, to find a period in the past where

1. CO2 is doubled on a century timescale
2. human civilization is based on an agricultural technology developed in the relatively stable climate of the last few millennia
3. continents in their current geographical locations

etc etc etc. GCMs are useful for probing the future, but are unnecessary for the basic underlying scientific understanding of CO2's role in climate, which, after all, predates the earliest GCMs of the 1970s ...

David Young:

"Weather models are useful, given their limitations. Climate models are mostly weather models just run on very course spatial grids and for a long period of time. So, once the weather model diverges from reality, what evidence is there that the “statistics” will be meaningful."

You overstate your case. Weather models have a notoriously difficult time predicting the weather in the US's Pacific Northwest during winter. But they get the big picture right - low after low forming, front after front moving onshore and giving us our notorious rain. What they get wrong are details - will the next low form in one, two or three days, and will zero, one or two dry days separate two fronts moving in succession towards Portland? Will the current high with its 20-degree clear skies, when hit by the next Pacific front, cool the front to below freezing giving us snow, lead to an inversion giving us freezing rain, or just lead to grey cold rain? Getting details of this kind of thing right further out than a few hours is really difficult.

However, the weather models never diverge from reality to the point of, say, predicting 105F weather in Portland in January ... they're still bounded by the climate reality of the region.

Martin, With respect, I don't think you understood what I said. There is no evidence that climate is a unique function of forcings, there are a lot of them and each has a different effect. And there are tipping points too, some of them result in small changes.

1. The topic of rapid climate change is a topic of a lot of research and a lot of people are interested in it. There is a growing literature.

2. These things are common in the Navier-Stokes equations even with constant forcing. Swinney from Univ. of Texas did some really good work on the Taylor column about 25 years ago, which is experimental. Careful numerical experiments confirm his work.

3. In short after a bifurcation point there can be many solutions that are stable. Accidental details can determine which one you get, including unmeasurable noise.

4. With climate, there are so many factors, it's unlikely that anyone can point you to 2 different times in history when the "same" forcings resulted in different climates. There are never 2 situations where the forcing is the same. In the meantime, simple systems that are subsystems of the climate system can tell us something. In science and engineering, those are the best analyses.

I already sent you some material from the published literature on these topics. Do you believe there are things we are not considering or missed? I mentioned some recent work on James' blog. I think the evidence is pretty persuasive and consistent with experiments.

You know, I would really like to see evidence and clear arguments on this.

By David Young (not verified) on 28 Jul 2013 #permalink

Dhog, Gerry Browning says that weather models have to be reinitialized with high altitude winds frequently to be stable. Any system can be stabilized by adding dissipation. A coarse computational grid introduces a lot of dissipation.
The problem here I suspect is that the real viscosity is quite small compared to the planetary scale. Thus for practical purposes, you have the Euler equations, which are notoriously unstable when discretized.

The weather in the Pacific Northwest is rather stable during the summer time. But there are wide swings in winter. Heavy snow and cold spells sometimes but infrequently occur. One thing I have noticed is that weather forecasters are shrewd and often say "the models diverge after a couple days and I slanted the forecast toward climatology". Such a forecast is not of much value, but it's the best that can be done.

By David Young (not verified) on 28 Jul 2013 #permalink

# 41

There is no evidence that climate is a unique function of forcings, there are a lot of them and each has a different effect. And there are tipping points too, some of them result in small changes.

See Rohling et al. (2013).

Or let's just take the LGM and Holocene. About 4.5C GAT change between two quasi-equilibrium climate states sustained by ~6W/m^2 change in forcing.

Back of envelope gives you 0.75C per W/m^2 change in forcing.

Good enough for government work.

:-)

The real issue with the models is just how wrong they are vs actual observations. Every single one projects higher temperatures than are currently being recorded. Why is that?

[You've been spending too much time on the Dark Side. What you've just said doesn't make sense, though I'm sure that if you said it over at WUWT everyone would cheer you on. Would you like to try rephrasing your question into one that does make sense? -W]

David said, "Rob, What I propose is stated above. Shouldn’t we spend money on the lines of evidence that can give us the clearest answer?"

You seem to deliberately avoid my point. How else do you propose that we come up with projections for future climate without models? The reality of what you're proposing would be to just wait until we drive off the cliff to test to see if there is a cliff or not.

Climate models are clearly not worthless. They give us a better picture of future scenarios than having no information at all. Given fundamental radiative physics and paleo data we know we have a potentially critical issue on our hands. That is why we spend what we do on climate models.

By Rob Honeycutt (not verified) on 28 Jul 2013 #permalink

@BBD: "The funding of contrarian messaging by the fossil fuel industry is well documented and not in dispute. So why dispute it? It's not worth the effort."

Ah "the debate is over" - where have I heard that one before?

@Ned: "ExxonMobil provides millions in funding for 'skeptic' groups"

The Union of Concerned Scientists? That's the impeccable source we are talking about here?

Give me a break.

Exxon gives tens of millions to higher education every single year in the US. The UCS has a lot of misrepresentations in there - like taking small chunks of that money, combining the sums over many years, and then declaring "millions" spent to promote skepticism.

They also do the whole "employees of this company gave money to..." which is also flagrantly dishonest when you have a company as big as Exxon.

I mean, you do know that "Big Oil" funds plenty of scientists on your side right? Or do you think that none of the millions Exxon sends to Pennsylvania every year make it to Michael Mann?

I guess their funding is only a big deal when it gets sent to skeptics right?

By TheGoodLocust (not verified) on 28 Jul 2013 #permalink

Rob, It seems to me that observationally constrained estimates of sensitivity will give a better constraint on the future than GCMs. Of course, we as a species are notoriously poor at predicting the future.

By David Young (not verified) on 28 Jul 2013 #permalink

@TGL ... The Heartland Institute, The Global Warming Policy Foundation, CATO, Heritage, AEI, ALEC, etc are funded by right-wing ideologues. They do not single out climate science; they're generally equal opportunity anti-science types. This isn't really a point of contention, is it? Now, considering that they spend tens/hundreds of millions more supporting politicians to give their captive scientists a gloss of respectability, shouldn't we include that money as well? How else, for example, does a Wegman get his views into the Congressional Record and grist for the 'See, there's scientific doubt...' crowd? If you can't see the funding connection, then you choose not to see. And why do you think conservatives in the US fight so hard to keep these funders legally secret?

By Kevin O'Neill (not verified) on 28 Jul 2013 #permalink

Maybe TheGoodLocust could give us a list of all the published research that has been the result of Exxon's funding for all these "sceptics"?

Or can he only find a long list of press releases and other propaganda?

Because that is the first obvious difference between funding for science and funding for "sceptics": the first results in an increase to the sum total of human knowledge, while the latter is aimed at detracting from it, as admitted in the leaked Heartland documents which revealed that Heartland knows it is lying and aims to spread more lies through the deliberate targeting of education.

By Craig Thomas (not verified) on 28 Jul 2013 #permalink

All that said, it is a misfeature of the way we compute that very few large computations are bitwise repeatable across platforms. There is nothing particular to climate in this problem, nor is chaos relevant. It's just that we give too much power to the compiler-writers.

There are good scientific reasons for bit-for-bit reproducibility, and the high performance platform manufacturers should not foreclose our having it at least as an option. All published results should be bit-for-bit reproducible and all tools required for such reproduction should be published to a repository at the time of publication of the results. Computation is a perfect tool for reproducibility, and the fact that the manufacturers make us throw it away is due to their idiotic obsessive performance benchmarking at the expense of actual scientific productivity.

Again, this has nothing whatsoever to do with climate or with chaos. It's a flaw in high performance computing.

http://planet3.org/2013/07/16/repeatability-of-large-computations/

[Commented there. I largely disagree -W]

http://insidehpc.com/2013/07/15/good-science-is-repeatable-the-recomput…

By Michael Tobis (not verified) on 28 Jul 2013 #permalink

David Young:

"The weather in the Pacific Northwest is rather stable during the summer time. But there are wide swings in winter. Heavy snow and cold spells sometimes but infrequently occur. One thing I have noticed is that weather forecasters are shrewd and often say “the models diverge after a couple days and I slanted the forcast toward climatology. Such a forecast is not of much value, but its the best that can be done."

I've lived here my whole life and have never heard a weather forecaster say any such thing.

"...observationally constrained estimates of sensitivity will give a better constraint on the future than GCM’s"

If I'm reading you right here, you're still suggesting we ignore most of the available data.

I would suggest that "we, as a species" are notoriously poor at heeding clear warning signs in favor of what we prefer to be true... sometimes until too late.

By Rob Honeycutt (not verified) on 28 Jul 2013 #permalink

Michael Tobis:

"It’s just that we give too much power to the compiler-writers."

As someone who spent much of his life in this realm, I'd say that compiler-writers just do what customers want, and what is wanted is performance. If customers don't want performance, compiler-writers will oblige as long as they're paid for it (optimization is hard, after all).

Still, pretty much any optimizing compiler system you want to mention provides compile-time switches to turn off those optimizations that cause the most problems. The paper in question points to this in that they talk about models compiled with "-O3", etc, i.e. aggressive optimization documented as very likely to increase rounding error problems, order-of-execution artifacts, etc.

Of course, pinning the blame on compiler-writers presumes that the authors of code sensitive to such optimization are writing their code in a way that minimizes such problems in the first place, i.e. that if their code isn't re-ordered etc the problem wouldn't occur. This simply isn't true. The subset of scientific programmers smart enough to carefully craft their floating-point expressions to minimize such issues are exactly the subset that will turn off re-ordering and similar optimizations in the first place, therefore - no problem.

Something not addressed by the paper, apparently, are questions like "what level of optimization is actually asked of compilers when compiling model code such as GISS Model-E?"

The paper assumes (AFAICT, it's paywalled) that high level optimization is the norm.

Do we know this is true?

So, Tobis, let's play a game ... why don't you provide us some examples of good/bad code that, unoptimized, faithfully followed by a compiler, minimizes/maximizes rounding errors?

Just to test your apparent hypothesis that if compiler writers didn't mess with optimizing scientific code, that those writing such code would produce code free of such problems ...

Let's see how deeply you understand the problem...

David #41,

give me one example of a small difference in external forcing producing a large, persistent difference in climate, for a climate close to late Holocene. Reality or (realistic) model.

Just what dhogaza #40 said

However, the weather models never diverge from reality to the point of, say, predicting 105F weather in Portland in January … they’re still bounded by the climate reality of the region.

Toy models don't count.

By Martin Vermeer (not verified) on 28 Jul 2013 #permalink

MT, I don't necessarily agree that it is the fault of the compiler-writers or hardware manufacturers. I suspect that users (including, but certainly not limited to climate scientists) will vote with their wallets for speed over reproducibility.

Incidentally, the UK Met Office (/ Hadley Centre?) do invest a lot of time and effort into bitwise reproducibility. I wouldn't like to speculate as to whether this is money well spent or not.

By James Annan (not verified) on 29 Jul 2013 #permalink

Stoat's comment:

"There’s an important distinction to make here, which is that climate modelling isn’t an Initial Value Problem, as weather prediction is. Its more of a Boundary Value Problem, with things like GHGs being the “boundary”s. "

....is worth extending slightly since this gets to the heart of the Hong et al paper and its observations and conclusions.

Climate simulations, like all computational simulations that assess time-dependent trajectories, are both Initial Value Problems (IVP) and Boundary Value Problems (BVP). The IVP relates to the particular trajectories of individual runs. The (more important) BVP relates to the ensemble of trajectories that defines the likely range of behaviour within a set of parameterizations. Much as the climate in the real world is bounded (by total thermal energy in the system, forcings and so on), so the simulations are bounded by their parameterizations.

What Hong et al showed is that the results of single runs with identical parameterization and initialization are platform/software-dependent, but the ensemble (which is what we're interested in for a climate simulation) is broadly independent of the platform/software [*].
---------
As Hong et al conclude:

As shown in Fig. 3, the both ensemble results produce a tropical rainfall pattern comparable to the observation, with the main rain-belt along the intertropical convergence zone (ITCZ) (cf. Figs. 3a, b, and c). It confirms the resemblance that three-month averaged daily precipitations simulated from the 10-member initial condition ensemble runs are within a narrow range between 3.387 mm d⁻¹ and 3.395 mm d⁻¹. Those from the 10-member software system ensemble runs are within a similar range between 3.388 mm d⁻¹ and 3.397 mm d⁻¹.

[Thank you; that's nicely expressed -W]

There is nothing inherently wrong with oil companies funding research or public information about climate change. It is, after all, a subject which is relevant to their business and in which they have a legitimate interest.
The problem comes when such funding is done with the deliberate intention of giving the public a misleading impression of the state of climate science, in order to reduce the likelihood of legislation which might damage their commercial interests.
The fact that this has happened is so well documented as to surely be beyond dispute. It doesn't mean though that oil companies haven't also provided funding for research or other climate change related projects which are perfectly legitimate.

By andrew adams (not verified) on 29 Jul 2013 #permalink

The fact that this has happened is so well documented as to surely be beyond dispute.

Exactly my point to TGL (my # 35). Predictably ignored by TGL at #47.

My response would be the same as yours above:

The problem comes when such funding is done with the deliberate intention of giving the public a misleading impression of the state of climate science, in order to reduce the likelihood of legislation which might damage their commercial interests.

Why TGL feels it necessary to dispute this and obfuscate the point is puzzling.

dhog, Have you ever read the National Weather Service forecast discussions? The forecasts themselves of course don't say this. The discussion for Seattle says this rather often.

By David Young (not verified) on 29 Jul 2013 #permalink

Martin, Simple wing at angle of attack. It's called hysteresis and has been known for at least 70 years. If you start from attached flow you will stay attached until you reach stall, when the lift drops precipitously. If you start post stall, you stay with the low lift answer as alpha decreases until you get reattachment. Recent work shows there may actually be several intermediate separated flow solutions too that are physical. Thus, there are at least 2 different solutions for the same forcing and probably more than 2. You can easily find lots of other ones in the fluid dynamics literature.

Yeah, weather models don't "blow up." Browning says that's because of unphysical dissipation and reinitializing the upper winds frequently. I vote for the dissipation. You can find in the code documentation references to hyperviscosity.

By David Young (not verified) on 29 Jul 2013 #permalink

Chris, The boundary value problem part is simply not technically correct. Climate is an initial value problem with time varying forcings. A forcing is not a boundary condition. In this case, the forcing is expressed throughout the atmosphere.

The idea that the attractor is a function of the forcings is correct. But that's not a very useful observation. The attractor can have very high dimension.

I'm not sure about this bitwise compatibility issue. If in fact, the problem is sensitive to numerical differences at the 10^-14 level, that is something that is interesting in its own right, viz., the problem is not mathematically stable and you need to investigate that carefully. I would say rounding error differences are inevitable and "fixing the problem" merely masks important information and gives a false sense of security.

[Do you realise that you're just babbling words and concepts that you don't understand? I'm afraid that you're going to have to present something meaningful at some point if you don't want to be merely trolling. "I would say" type comments only mean something if the person doing the saying has some reputation or credibility. Without such, "I would say" becomes equivalent to "You can ignore the rest of this sentence as it is nothing but personal opinion" -W]

By David Young (not verified) on 29 Jul 2013 #permalink

"Climate is an initial value problem..." isn't really a meaningful statement David Young. “Weather is an initial value problem….” is more meaningful. Clearly a climate state at equilibrium is bounded by the energy in the climate system as well as other factors like ocean depth, amount and location of land and so on. The bounds constrain possible trajectories of any particular temporal progression within the climate state.

I think you may be mixing up the climate state (a Boundary Value Problem) with the particular trajectory (an Initial Value Problem) of the system within the climate state that evolves according to internal variability. Computationally this is the difference between individual realizations of a model under a set of parameterizations and the ensemble of model runs that should encompass the variability within the climate state. It's the latter that is of interest in climate modelling. Hong et al. in the paper we're discussing show that the platform/software dependence has little effect on the latter (as we might well expect).

It may be that one would wish to assess the effects of forcings (e.g. changes in solar output; volcanic eruptions; massive release of greenhouse gases) but these are external to the climate system and provide new boundary conditions within which the climate system evolves. In a simulation (e.g. a GCM) these would be “added in” to assess the response of the system being simulated. They’re not part of the internal variability nor are they a consideration with respect to initialization of the model and so are not a consideration with respect to the Initial Value Problem….

Another indicator of trollery - opinionated, serial, repetitive wrongness aside - is ignoring inconvenient comments, e.g. #31, #32, #43, #44.

"David Young" is not demonstrating good faith.

"So, Tobis, let’s play a game … why don’t you provide us some examples of good/bad code that, unoptimized, faithfully followed by a compiler, minimizes/maximizes rounding errors?"

Hmm, let me see if I still have my old Hamming textbook. Yes, I understand roundoff at a postgrad level. The simplest example:

a = 10 ^ 12
b = 10 ^ 12 + pi
c = pi + 0.01

In practice (a - b) + c != a + (c - b) .

By Michael Tobis (not verified) on 29 Jul 2013 #permalink
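
Here is that example as runnable Python, for anyone who wants to try it (Python floats are IEEE-754 doubles, so the effect is the same as in Fortran or C):

import math

a = 1.0e12
b = 1.0e12 + math.pi
c = math.pi + 0.01

# exact arithmetic gives 0.01 either way; in floating point the two
# orderings visibly disagree
print((a - b) + c)
print(a + (c - b))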

A useful compiler allowing reproducibility would have repeatability across platforms and across decompositions.

The climate models with which I am familiar are repeatable across decompositions (at some optimization level) but not across platforms or compiler versions.

I am most interested in multi-model ensembles. If you seek to extend an ensemble but the platform changes out from under you, you want to ensure that you are running the same dynamics. It is quite conceivable that you aren't. There's a notorious example of a version of the Intel Fortran compiler that makes a version of CCM produce an ice age, perhaps apocryphal, but the issue is serious enough to worry about.

It really matters in a lot of cases that one runs the "same" model ensemble. With bit-for-bit reproducibility this can be verified. Without it, you have a vast and fruitless problem just verifying that you're in the same problem space.

By Michael Tobis (not verified) on 29 Jul 2013 #permalink

I agree that the customers are not demanding cross-platform repeatability. Most of the customers haven't got the remotest clue that this is possible, never mind desirable. But it is both.

By Michael Tobis (not verified) on 29 Jul 2013 #permalink

"Hmm, let me see if I still have my old Hamming textbook. Yes, I understand roundoff at a postgrad level. The simplest example:

a = 10 ^ 12
b = 10 ^ 12 + pi
c = pi + 0.01

In practice (a – b) + c != a + (c – b) ."

Excellent. Now, if you understand that the values you're working with are sensitive to the order of execution you will, of course, turn off compiler optimizations that can lead to the reordering of such equations, right? You understand the problem, you understand that you need to turn off such optimizations.

And, of course, providers of compilers state the same.

For instance:

http://www.nersc.gov/users/computational-systems/hopper/programming/com…

"The PGI compiler does not provide any optimization level that will guarantee that no floating point operations will be reordered and by default it is not strictly IEEE compliant, occasionally using "slightly less accurate methods" to improve performance.

For accurate, precise code, PGI recommends turning off all optimization and compelling strict IEEE compliance with these arguments:

% ftn -O0 -Kieee MyCode.F90"

Portland Group tells you: if you want the most accuracy and IEEE compliance, turn off optimization.

Don't blame compiler writers if users don't RTFM and proceed accordingly ...

MT:

"I agree that the customers are not demanding cross-platform repeatability. Most of the customers haven’t got the remotest clue that this is possible, never mind desireable. But it is both."

The problem was a heck of a lot worse before IEEE floating point specs were adopted. This at least removed a lot of the potential cross-platform issues and, in addition, incorporates well-thought out rounding options that were often ignored in earlier, manufacturer-specific FP implementations (Cray-1, for instance, the supercomputer of choice for large-scale modeling in the late 1970s).

You're right that most customers don't understand that cross-platform repeatability is possible. On the other hand, most aren't running applications that require maximum precision and minimum sensitivity to rounding errors ...

TGL's whirligig dance at #47 is hilarious.

I pointed to a lengthy report by the Union of Concerned Scientists that documents ExxonMobil's provision of millions of dollars in funding to "climate skeptics". TGL's response includes:

(1) Complaining about the messenger (he/she doesn't like UCS).

(2) Complaining about the fact that the report identifies many smaller donations that add up to something like $16 million. It's not really clear what TGL's problem with "addition" is.

(3) Complaining about the fact that UCS documents both contributions from ExxonMobil itself and its employees.

(4) Insisting that it's OK for ExxonMobil to subsidize "climate skeptic" propaganda because the company may also make donations to universities.

Why so defensive, TGL? Unless you're working for ExxonMobil or one of the "skeptic" think-tanks they support, wouldn't it be simpler to just say "Yeah, OK, I guess fossil fuel companies have been funding 'skeptic' groups. The comparison I made back in #18 was inappropriate."

And there's nothing particularly surprising about this. As others have noted, the fossil fuel industry is just following the same playbook used by tobacco companies.

"On the other hand, most aren’t running applications that require maximum precision and minimum sensitivity to rounding errors" is true but not remotely the issue.

A capacity to verify that you are running "the same model" - for purposes of 1) refactoring, or 2) extending an ensemble, or just 3) verifying that published results are what they claim to be and not completely made up - all of this is comparably important to optimum raw performance.

The end user, the application developer and the reviewers should have a hand in setting the tradeoff, not be forced to cede it completely to the compiler writers to do their benchmark contests.

This is not to defend Watts et al's confusion and ignorance about sensitivity to initial conditions and what a climate model actually is and what it is good for. But confusion aside, repeatability, like transparency, is a legitimate goal and one we should strive for, not argue against.

By Michael Tobis (not verified) on 29 Jul 2013 #permalink

#69; Perhaps a bit technical, but actually I do not understand why repeatability goes away when I allow the compiler to reorder the computations.

I have the same (highly optimized) calculation with the same initial conditions. I run the SAME exact executable twice with the SAME input files.

Results disagree and of course due to sensitivity to initial conditions they diverge and are not comparable.

Why doesn't the more optimized calculation deliver the same results AS ITSELF every time? I don't really get that.

[On a single-processor, I don't think this should happen. On a multi-processor system, it could be because of the order of execution on different nodes. I address that in my shiny new post -W]

By Michael Tobis (not verified) on 29 Jul 2013 #permalink
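
A toy illustration of the order-of-execution point (hypothetical Python; re-summing the same numbers in a different order, standing in for different node schedules or domain decompositions):

import random

rng = random.Random(0)
values = [rng.uniform(-1.0, 1.0) * 10.0 ** rng.randint(-8, 8)
          for _ in range(100000)]

# same numbers, accumulated in two different orders
total_forward = sum(values)
total_reverse = sum(reversed(values))

print(total_forward)
print(total_reverse)
print(total_forward == total_reverse)  # usually False: the last bits differ

On one node the accumulation order is fixed and you get the same bits every time; split the sum across nodes and the order depends on the decomposition and scheduling, which is the point W makes above.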

"“On the other hand, most aren’t running applications that require maximum precision and minimum sensitivity to rounding errors” is true but not remotely the issue."

It is because it reduces market pressure on hardware designers, application library developers, and compiler writers to minimize problems.

Ultimately, it is the market that pays our salaries, after all ...

"I have the same (highly optimized) calculation with the same initial conditions. I run the SAME exact executable twice with the SAME input files.

Results disagree and of course due to sensitivity to initial conditions they diverge and are not comparable.

Why doesn’t the more optimized calculation deliver the same results AS ITSELF every time? I don’t really get that.

[On a single-processor, I don't think this should happen...-W]"

Mostly, yes. Any non-multiprocessing program run on a single processor should be deterministic, optimized or not. If I run into a situation like this, I first suspect uninitialized variables and the like. As a compiler writer you certainly hope you (or support engineers) can find such problems.

There is, of course, also the possibility of a bug in the compiler itself. A memory allocation error in an environment where, say, the OS does asynch I/O in user memory space is the kind of situation that could cause hard-to-reproduce results.

When multiprocessing is involved, even on a single processor system, execution order for separate threads/processes is rarely going to be exactly the same when the program's run multiple times, because the O/S's scheduler is impacted by other programs running on the system, etc. Writers of real-time operating systems go to great pains to be able to guarantee some level of maximum time to respond to events, etc, and are more deterministic in general than general purpose O/S environments like linux but the pitfalls are still many.

multi-process and multi-threaded programs are far more difficult to get right, of course ...

64 CPUs on 6 12-CPU nodes typically, so that explains it. Thanks.

By Michael Tobis (not verified) on 30 Jul 2013 #permalink

The sensitivity of a model to hardware or optimisation changes *can* be indicative of bugs or inappropriate code in the system. The move from NEC SX8 to IBM Power 7 highlighted some inappropriate tests in UM convection code that were triggered far more often on the IBM due to the way small numbers were stored. It wasn't a significant error, but the divergences were big enough to make it hard to validate optimisation changes.

By Steve Milesworthy (not verified) on 31 Jul 2013 #permalink

What is the optimal temperature of the planet?

[The people I talk to are more concerned about rate of change than absolute temps.

Using questions as rhetorical weapons instead of opportunities for thought is bad - W]

By Global Warming… (not verified) on 04 Aug 2013 #permalink