There is a new paper out (Transient Climate Simulations with the HadGEM1 Climate Model: Causes of Past Warming and Future Climate Change) in J Climate on climate change – past century and next – as simulated by HadGEM1. This reaches the standard conclusion: "whereas the effects of a combination of anthropogenic and natural forcings on the climate system could explain the evolution of global mean temperatures over the twentieth century, natural factors failed to explain the warming observed over the second half of the century".

However, they also appear to have taken some other proposed mechanisms seriously, because they also say the new model "can demonstrate that black carbon and land use changes are relatively unimportant for explaining global mean near-surface temperature changes". Land use changes are RP Sr's thing; black carbon I've seen pushed elsewhere, but I forget by whom. Which brings up "Many open questions remain, for example, the role of forcings not yet fully included in CGCM simulations, such as land use change or forcing by black carbon and nonsulfate aerosols", which is from Detecting and Attributing External Influences on the Climate System: A Review of Recent Advances by THE INTERNATIONAL AD HOC DETECTION AND ATTRIBUTION GROUP. So that's one solved, then :-)

The 1940s–70s "cooling" (fig 4) works moderately well in the natural-forcings version (though not as well as in the anthro-forced run), so presumably there is a natural component to this, as well as the more familiar sulphate aerosol (which they can't spell :-( ). And it still doesn't seem to be clear whether the early 20th century warming is natural or anthro or some (what?) mixture.

Comments

  1. #1 John Fleck
    2006/07/03

    When I went hunting for black carbon/albedo papers a while back, the most significant I could find was Hansen and Nazarenko, PNAS 2004, arguing that BC albedo changes could explain a quarter of the 20th century warming:

    http://www.pnas.org/cgi/content/abstract/101/2/423

    [Interesting. In the HadGEM paper, they are talking about black carbon, which I take to be soot, but it's not clear whether they mean the airborne effects, or the albedo effects on deposition, which the Hansen paper is about. If they include snow-albedo effects, then they would appear to disagree with H - W]

  2. #2 Eli Rabett
    2006/07/03

    Mark Jacobson at Stanford

  3. #3 Steve Bloom
    2006/07/04

    Thanks for that, Eli. The main relevant page seems to be http://www.stanford.edu/group/efmh/fossil/fossil.html ; follow the link at the bottom for an interesting (and new to me) tale about Bush.

  4. #4 Martin Lewitt
    2006/07/15

    It is interesting that the abstract is explicitly dismissive of discrepancies with the observations, noting that they "are largest where observational uncertainty is greatest in the Tropics and high latitudes." The high latitudes and the tropics are also where nearly all the models show a positive albedo bias, which would of course also bias the models against solar forcing. Although the above-cited work is recent, so are the IPCC diagnostic studies that found this albedo bias, and there is no reason to think that this model has been corrected before any of the others. There is more than just the dismissed observations documenting discrepancies in the models at high latitudes and in the tropics. Roesch (2006) found specific problems in model parameterizations of the spring snow melt, leading to albedo errors resulting from errors in snow water equivalent, snow cover error and snow cover fraction. In addition, albedos were too high in forested areas, where lower-albedo limbs and stems shade the high-albedo snow. He also found a high albedo bias in the tropical deserts.

    Until peer reviewers start requiring model validation against the albedo and snow cover observations, there is no reason to put much credibility in attribution studies such as the one above.

    The presentation by Royer (2006) shows that modelers are still making questionable assumptions.

    “Three approaches to tackle model uncertainty:

    Multi-model: 7 coupled GCMs, each 9 IC ensemble members
    Perturbed physics: 2 coupled GCMs, each 9 IC ens. members
    Stochastic physics: 1 coupled GCM, 9 ensemble members”

    In light of demonstrated widespread shared biases, merely combining models into ensembles can no longer be assumed to reduce model uncertainty.

    The Royer presentation also demonstrated other risky methodology: “No significant difference was found between the results obtained with the European models and those obtained with the full set of IPCC AR4 models (i.e.,16 models)”

    The danger of validating models against other models is that you may propagate the errors they share in common. Models can no longer be treated as if they are statistically independent.
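    [The statistical point here can be illustrated with a small numerical sketch (mine, not from any of the cited papers): for n ensemble members with unit error variance and pairwise error correlation ρ, the variance of the ensemble-mean error is (1 + (n-1)ρ)/n, which does not shrink toward zero as n grows unless ρ = 0. The ρ = 0.5 value below is purely illustrative - W]

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_models, n_trials = 16, 20000

    # Independent model errors: ensemble-mean error shrinks like 1/sqrt(n).
    indep = rng.normal(0.0, 1.0, size=(n_trials, n_models))

    # Correlated errors: each model's error contains a component shared by
    # all models (pairwise correlation rho), standing in for a common bias
    # such as a shared albedo parameterization error.
    rho = 0.5
    shared = rng.normal(0.0, 1.0, size=(n_trials, 1))
    own = rng.normal(0.0, 1.0, size=(n_trials, n_models))
    corr = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * own

    sd_indep = indep.mean(axis=1).std()
    sd_corr = corr.mean(axis=1).std()

    # Theory: sd of the mean is sqrt((1 + (n-1)*rho)/n) for unit-variance members.
    print(sd_indep)  # close to sqrt(1/16) = 0.25
    print(sd_corr)   # close to sqrt((1 + 15*0.5)/16), about 0.73
    ```

    [With the shared component, going from 1 model to 16 barely reduces the spread of the ensemble mean: the common bias survives averaging - W]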

    Yes, some “observations”, such as satellite, have a checkered history and are themselves essentially the product of models of a different kind. All must be validated against each other. It appears however, that the models have now been shown to have gone too far astray from the observations.

    Bender, et al (2006) http://www.blackwell-synergy.com/doi/abs/10.1111/j.1600-0870.2006.00181.x

    Roesch (2006) http://www-pcmdi.llnl.gov/ipcc/abstract.php?ipcc_publication_id=36

    Royer (2006) http://www.ecmwf.int/research/EU_projects/ENSEMBLES/documents/presentations_June2006/Jean-FrancoisRoyer_RT2A_progress.pdf
