I recently attended the TEDxVancouver event, which was wonderfully done and also a great chance to network with a lot of interesting people. There was, however, one thing that irked me – nothing to do with the conference logistics, but rather a statement or two issued by one of the speakers, Patrick Moore.
Just a little background on Patrick: he’s one of the founders of Greenpeace, and he played a major role in the evolution of the organization in its earlier days. These days, however, he’s better known for his climate change skepticism, and particularly for his advocacy of nuclear power.
Anyway, at the beginning of his talk, he essentially outlined a few points to suggest that anthropogenic climate change is all a ruse – a sentiment he backed up with what can be summarized (and since I just signed up for Twitter the other day, I’ll keep it under 140 characters) as follows:
“Earth has not actually been getting any warmer in the last 10 years. Oh yeah, the ice in Antarctica doesn’t even seem to be melting.”
It’s a compelling statement and certainly easy to digest, but I thought I’d take a minute or two to weigh in a little here. I’d like to explain why I think that kind of statement (which happens to be classic climate change denialist prose) is a great example of spinning things to meet your own particular agenda.
Or as Jane Austen might say, “Badly done… Badly done…”
So let’s start with a few take home points – two actually:
1. Predictions of how things will be in the future are largely determined by the formulation, evaluation, and ultimately validation of climate models.
2. That in the world of climatology, drawing conclusions from observations over 10-year, or even 20-year, spans is statistically weak.
So, what do these points mean exactly?
Well, the first is to understand that when climatologists attempt to predict climate trends, they do so with the help of computers. They do this because this scientific puzzle is not like a conventional science experiment, where you can test your ideas by comparing something against a control sample. We can’t say:
“Let’s take Earth here, and then compare it to this other Earth over here. On my Earth, I’ll raise the CO2 by so much, and on this other Earth, I won’t raise it at all. Of course, then I’ll have to wait 50 to 100 years or so, and then we can all get together with our calculators and compare graphs and stuff to try to figure out what’s going on.”
Obviously, you can’t really do that sort of thing in these circumstances, which is why you resort to computers attempting to model how things “will go.” Here, it’s a little bit like the Matrix movie, where overarching algorithms and equations try to explain the physical world. It works because physical laws, by and large, are entirely dependable. A classic example is the first law of thermodynamics, which some of us might remember as fancy talk for the idea that energy can’t be created or destroyed – it just moves around. And if you think about it, a law like that (which can be eloquently stated in mathematical terms) is really important, because it sets real boundaries on how things should behave in the physical world.
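To make that concrete, here’s a toy sketch (in Python, entirely my own illustration – not any real climate model) of how a single physical law, energy in must equal energy out, is enough to pin down a planet’s temperature:

```python
# A toy zero-dimensional energy-balance calculation: once the planet settles
# into equilibrium, the sunlight it absorbs must equal the thermal energy
# it radiates away. The numbers are standard textbook values.

SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight at the top of the atmosphere
ALBEDO = 0.3              # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temperature(solar=SOLAR_CONSTANT, albedo=ALBEDO):
    """Balance absorbed sunlight against emitted thermal radiation."""
    absorbed = solar * (1 - albedo) / 4   # averaged over the whole sphere
    # Energy in = energy out:  absorbed = SIGMA * T^4, so solve for T
    return (absorbed / SIGMA) ** 0.25    # Kelvin

temp_c = equilibrium_temperature() - 273.15
print(f"bare-rock Earth: roughly {temp_c:.0f} C")
```

The answer comes out well below freezing – far colder than the real Earth – and that gap is the greenhouse effect. Real climate models layer many more such laws on top of this one balance, but the principle is the same: the physics constrains what the numbers can do.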
Now, take that thermodynamic law and kick it up a notch by applying other scientific principles and laws – some of them very nuanced, relating specifically to atmospheric conditions, water, and so on (the list goes on) – and hopefully you can see that you have an opportunity to develop a reasonable representation of what climate reality might be.
And to be honest, these models have been getting better all the time. Why? Because scientists keep learning more about the physical laws behind climate trends, and because computers have simply gotten better and more powerful, at a freakishly fast pace.
At this point, however, you could still say that a fancy climate model is just that – a fancy climate model. Which is why all models, to pass the scrutiny of the scientific community, have to be validated. This just means there needs to be some way of ensuring quality control on the predictions.
How does one do this? Well, there are a number of ways, but a common example is running the model backwards in time: since it is a mother algorithm, it should also be able to correctly represent things that have already happened – things we do have good data on (the concept of the weather channel is not a new thing!). For example, let’s start the model at 1958 (when CO2 was first carefully measured at Mauna Loa), enter the CO2 levels for the next 50 years, and then run the model to see if the warming trends it produces match up with observed records.
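As a rough illustration of that idea – and I stress that both the “model” and the numbers below are made up for the sketch, not real Mauna Loa or temperature data – a hindcast check might look like:

```python
# A sketch of hindcast validation: feed a model the known CO2 record,
# run it over years we already have measurements for, and score the fit.
# The "model" here is a hypothetical stand-in (a crude linear response
# to CO2); the data values are illustrative only.

def toy_model(co2_ppm):
    # Hypothetical: pretend temperature anomaly scales with CO2 above 315 ppm
    return [(c - 315.0) * 0.01 for c in co2_ppm]

co2_record = [315, 325, 340, 355, 370, 385]              # stand-in CO2 values
observed_anomaly = [0.0, 0.12, 0.22, 0.42, 0.52, 0.68]   # stand-in observations

predicted = toy_model(co2_record)

# Score the hindcast with mean absolute error: a small error means the model
# reproduces a past it was never tuned on, which is what builds confidence.
mae = sum(abs(p - o) for p, o in zip(predicted, observed_anomaly)) / len(predicted)
print(f"mean absolute error: {mae:.3f} degrees")
```

Real validation is of course far more involved (many variables, many models, formal statistics), but the shape of the test is just this: predict what you already know, and measure how far off you are.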
Another example is taking these same models and seeing how they perform under special circumstances where notable weather-related variations occurred. These might be big things like El Niño, or the year when such and such a huge volcano spewed a ton of stuff into the air.
I guess the point is that these models are our window to the future, and scientists, as a whole, are pretty convinced by them because (1) they’ve been earnestly picked over and validated, and (2) they continue to be validated by the weather we see year to year.
Which brings me to the second point: that not seeing a temperature rise in the last ten years or so doesn’t really mean too much.
The reason is that this trend still fits within the predictions of existing climate models (the same ones that say our current pace of CO2 production is bad news down the road). More importantly, a 10-year trend is simply too short and narrow for climate timescales.
Let’s use an analogy. Say you’re planning a wedding, or a BBQ, or anything where you hope to be outside, and you want to pick the day of the year with the best chance of sunshine. Chances are, you would not base your choice only on what happened the year before – that would be statistically risky. You might not even base it on two years’ worth of data; really, if you want to hedge your bets, you’d want to look at as many records of that day as possible. All through this, you can actually calculate probabilities along the way, and at some point even make a call on how many years of data you need to feel pretty good about your chances.
Now, in our climate prediction case, we’ve got a much meatier scenario – one that already looks at much larger timespans in a number of different ways (yearly average temperatures, projections decades down the road, and so on). Again, reflecting on the last ten years of climate data is probably better than reflecting on the last two, but the long story short is that folks have done the statistical analysis on this sort of thing, and it turns out that focusing on something like a 10-year trend is just not a reliable way to overturn the long-term predictions. This, by the way, is also why climate models aren’t about predicting “weather,” which is specific to day-to-day considerations and exact locations.
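If you want to see that statistical point for yourself, here’s a small simulation (my own illustration, with made-up numbers rather than real temperature records): build in a steady warming trend, add year-to-year noise, and then fit straight lines to every 10-year window.

```python
# Why a 10-year window is statistically weak: simulate a steady 0.02 C/yr
# warming trend buried in year-to-year noise, then fit a trend line to each
# 10-year window. Short windows can come out flat or even negative, even
# though the long-term warming is there by construction.
import random

random.seed(42)
TREND = 0.02   # C per year: the signal we deliberately build in
NOISE = 0.15   # C: a plausible year-to-year wiggle (illustrative value)
YEARS = 50

temps = [TREND * y + random.gauss(0, NOISE) for y in range(YEARS)]

def slope(series):
    """Ordinary least-squares slope of a series against its index."""
    n = len(series)
    xbar = (n - 1) / 2
    ybar = sum(series) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(series))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

ten_year_slopes = [slope(temps[i:i + 10]) for i in range(YEARS - 10)]

print(f"50-year trend: {slope(temps):.3f} C/yr")
print(f"10-year trends range from {min(ten_year_slopes):.3f} "
      f"to {max(ten_year_slopes):.3f} C/yr")
```

The long window recovers the built-in trend quite closely, while the 10-year estimates scatter all over the place – which is exactly why a single decade of “no warming” can’t overturn the long-term picture.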
So where are the references for all of this? Well, lucky for us non-climatologists, there’s the UN’s IPCC report (in my opinion, the Summary for Policymakers is required reading for everyone who cares about the environment). And despite the flaws in how the UN operates, it’s generally agreed that its mission is to look into things of global consequence. In other words: poverty is bad, conflict is bad, inequity is bad – so what can we do about it? The IPCC report, then, is essentially a global recognition that the Earth’s climate needs attention, that there’s something going on there.
And how does the IPCC work? Well, it’s an attempt to draw in experts – often the best in their fields – from all sorts of relevant areas, lots of them, and to get them to weigh in on the climate issue. Not only that, and this is the tricky part: they have to come up with a consensus statement. Not an easy task, because many of the scientists and academics behind the report do hold opposing views.
However, at the end of the day, it works. This is because the robustness and elegance of experimental design are universal: there are empirical ways to determine whether an experiment was done well or shoddily, and that is something scientists in their respective fields are trained to judge. Oh, and with the latest version of the IPCC report (the Fourth Assessment Report, released in 2007), there are quite a few of them as well:
People from over 130 countries contributed to the IPCC Fourth Assessment Report over the previous 6 years. These people included more than 2500 scientific expert reviewers, more than 800 contributing authors, and more than 450 lead authors.
At the end of the day (and I may be naive here), that’s a huge collective of expertise, and you would presume that most, if not almost all, of them do this work to live up to Karl Popper’s ideals – i.e. trying, in a rational, objective, and testable manner, to get as close to the “truth” as possible.
All this to say that Patrick, I suspect, is not on this report – partly because he is likely no longer a practicing ecologist, and partly because the statements he’s making here really aren’t about the field of ecology. Even so, I’m not saying the IPCC is perfect – it’s far from it. But right now, it’s the best we’ve got. And the “best we’ve got” (as opposed to careless spin) is exactly the sort of thing we need for a global challenge of this magnitude.
(Sidenote: I didn’t get a chance to go into the Antarctic ice thing, but I will… it’s a great example of how nuanced these things can be. I can send a notice via Twitter, @dnghub, if folks are interested and don’t want to miss it.)