Today’s Australian has a piece by Bob Carter predicting global cooling:
Global atmospheric temperature reached a peak in 1998, has not warmed since 1995, and has been cooling since 2002. Some people, still under the thrall of the Intergovernmental Panel on Climate Change’s disproved projections of warming, seem surprised by this cooling trend, even to the point of denying it. But why?
Well, look at this graph from my previous post. When you want to talk about climate trends, you need to use, at a bare minimum, ten years of data, and you should not cherry-pick your starting point.
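Here’s a minimal sketch of that point, using purely synthetic data (a steady warming trend plus noise, not any actual temperature record): the trend you recover depends strongly on where you start, and a short, cherry-picked window can even point the wrong way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "temperature anomaly" series: a steady 0.02 C/yr warming trend
# plus year-to-year noise, standing in for the real record.
years = np.arange(1970, 2009)
anomaly = 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

def trend_per_decade(start_year):
    """Least-squares trend (degrees C per decade) from start_year to the end."""
    mask = years >= start_year
    slope = np.polyfit(years[mask], anomaly[mask], 1)[0]
    return 10 * slope

# A short window starting near a warm outlier can easily show "cooling",
# while any window of a decade or more recovers the underlying warming.
for start in (2002, 1998, 1988, 1970):
    print(f"trend from {start}: {trend_per_decade(start):+.2f} C/decade")
```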
There are two fundamentally different ways in which computers can be used to project climate. The first is used by the modelling groups that provide climate projections to the IPCC. These groups deploy general circulation models, which use complex partial differential equations to describe the ocean-atmosphere climate system mathematically. When fed with appropriate initial data, these models can calculate possible future climate states. The models presume (wrongly) that we have a complete understanding of the climate system.
Two things are wrong here. Global Climate Models (as opposed to weather models) don’t depend on initial data: they solve a Boundary Value Problem (climate) rather than an Initial Value Problem (weather). And they don’t presume that we have a complete understanding of the climate system; if we did, scientists would just use that one model instead of ensembles of models.
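A toy illustration of the initial value / boundary value distinction, using the Lorenz-63 system as a stand-in for a chaotic climate (this is not how any actual GCM works, just the standard textbook toy): trajectories started from nearly identical initial conditions diverge quickly, which is why weather forecasting is an initial value problem, yet the long-run statistics of the system hardly care about the initial state, which is the sense in which climate projection is a boundary value problem.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def run(initial, n_steps=100_000):
    """Integrate from a given initial condition and keep the whole trajectory."""
    state = np.array(initial, dtype=float)
    traj = np.empty((n_steps, 3))
    for i in range(n_steps):
        state = lorenz_step(state)
        traj[i] = state
    return traj

a = run([1.0, 1.0, 1.0])
b = run([1.0, 1.0, 1.000001])   # tiny perturbation to the initial condition

# "Weather": after a short time the two trajectories bear no resemblance.
print("state after 20 time units:", a[2000], "vs", b[2000])

# "Climate": the long-run statistics (here, the mean of z over the attractor)
# come out essentially the same regardless of the initial condition.
print("long-run mean z:", a[:, 2].mean(), "vs", b[:, 2].mean())
```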
GCMs are subject to the well-known computer phenomenon of GIGO, which translates as “garbage in, God’s-truth out”.
Alternative computer projections of climate can be constructed using data on past climate change, by identifying mathematical (often rhythmic) patterns within them and projecting these patterns into the future. Such models are statistical and empirical, and make no presumptions about complete understanding; instead, they seek to recognise and project into the future the climate patterns that exist in real world data.
Because our understanding of atmospheric physics is incomplete, Carter thinks we can get better results if we throw out everything we do know. Fitting curves to the data without any modelling of the processes involved is not a good way to predict the future. See that sixth-degree polynomial fit again.
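The curve-fitting point is easy to demonstrate with synthetic data (a modest warming trend plus noise, standing in for the real record): a sixth-degree polynomial can track the observations beautifully within the fitted period and still produce nonsense the moment you step outside it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic record standing in for the temperature data: modest warming plus noise.
years = np.arange(1950, 2009)
anomaly = 0.015 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

# Centre the predictor so the high-degree fit is numerically well behaved.
t = years - years.mean()
poly6 = np.polyfit(t, anomaly, deg=6)
linear = np.polyfit(t, anomaly, deg=1)

# Inside the fitted period the sixth-degree curve hugs the data at least as
# closely as the straight line does...
x_2008 = 2008 - years.mean()
print("fit at 2008: poly6 =", np.polyval(poly6, x_2008),
      " linear =", np.polyval(linear, x_2008))

# ...but outside it the polynomial is dominated by its noise-fitted high-order
# terms, and the "projection" can swing to physically absurd values.
for year in (2020, 2035, 2050):
    x = year - years.mean()
    print(f"{year}: poly6 = {np.polyval(poly6, x):+8.2f}   "
          f"linear = {np.polyval(linear, x):+5.2f}")
```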
Carter then gives several examples of folks making such dubious predictions. I’ll look at one of them as an example.
In 2007, the 60-year climate cycle was identified again, by Chinese scientists Lin Zhen-Shan and Sun Xian, who used a novel multi-variate analysis of the 1881-2002 temperature records for China. They showed that temperature variation in China leads parallel variation in global temperature by five to ten years, and has been falling since 2001. They conclude “we see clearly that global and northern hemisphere temperature will drop on century scale in the next 20 years”.
Basically what they did was identify a cooling trend starting in 1881 and another one starting in 1941, and conclude that there would be cooling every 60 years. This is like rolling a die and getting a five and concluding that you will get a five every time you roll it.
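You can make the die-rolling analogy quantitative with a quick Monte Carlo sketch. It uses pure trendless noise and 20-year windows chosen purely for illustration (nothing to do with the actual Chinese record): “cooling” starting in both 1881 and 1941 turns up by chance in a sizeable fraction of noise-only series, so finding it in one record is weak evidence for a real 60-year cycle.

```python
import numpy as np

rng = np.random.default_rng(3)

def has_cooling_in_both_windows():
    """One simulated 1881-2002 record of trendless noise (a random walk).

    Returns True if the 20-year trend starting in "1881" AND the 20-year
    trend starting in "1941" both happen to be negative.
    """
    series = np.cumsum(rng.normal(size=122))      # no cycle, no trend built in
    trend_1881 = np.polyfit(np.arange(20), series[0:20], 1)[0]
    trend_1941 = np.polyfit(np.arange(20), series[60:80], 1)[0]
    return trend_1881 < 0 and trend_1941 < 0

hits = sum(has_cooling_in_both_windows() for _ in range(10_000))
print(f"pure noise shows 'cooling in 1881 and 1941' {hits / 10_000:.0%} of the time")
```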
Which leads me to the main problem with Empirical Mode Decomposition (EMD) here, which isn’t with the algorithm itself, but with how Lin and Sun use it. They try to extrapolate the intrinsic mode functions (IMFs) in order to predict future temperatures. But here’s the problem: the IMFs are empirically derived functions which aren’t known to correspond to any neat formulae, so they can’t simply be extrapolated; at least, not without some additional assumptions.
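To make that concrete, here is a numpy-only sketch. It uses a crude moving-average band-pass as a stand-in for a real IMF, since the point doesn’t depend on the EMD algorithm itself: the extracted component is just a list of numbers defined over the observed years, so “extrapolating” it means bolting on an extra model, and two equally defensible choices give quite different forecasts.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1881-2002 "temperature" series: a small trend plus a random walk.
# By construction there is no genuine 60-year cycle in it.
years = np.arange(1881, 2003)
n = years.size
noise = np.cumsum(rng.normal(0.0, 0.05, size=n))
series = 0.005 * (years - years[0]) + noise

def smooth(x, window):
    """Centered moving average as a crude low-pass filter."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# A crude "multidecadal component": low-pass minus very-low-pass.  Like an
# IMF, it is a stand-in defined only where we have data, with no formula
# behind it.
component = smooth(series, 11) - smooth(series, 61)

# Option A: assume the last apparent "cycle" simply repeats (roughly what
# Lin and Sun in effect assume).  Option B: assume the component damps back
# to zero.  Both are extra assumptions imposed on the data.
forecast_repeat = component[-60:]                            # copy the last 60 years
forecast_damp = component[-1] * np.exp(-np.arange(60) / 20.0)

# Forecast years run 2003-2062, so index 17 corresponds to 2020.
print("2020 'prediction' if the cycle repeats:", forecast_repeat[17])
print("2020 'prediction' if it damps out:     ", forecast_damp[17])
```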
Conclusion: Lambert is right: this is indeed “just a rubbish paper that should not have been published”.