Comparing models and empirical estimates Part II: interview with Brown

I recently posted an overview of a new climate study, Comparing the model-simulated global warming signal to observations using empirical estimates of unforced noise, by Patrick T. Brown, Wenhong Li, Eugene C. Cordero & Steven A. Mauget. That study is potentially important because of what it says about how to interpret the available data on global warming caused by human-generated greenhouse gas pollution. Also, since publication the study has been rather abused by climate contrarians who chose to interpret it very inaccurately. This is addressed in this item by Media Matters.

My post on the paper describes the basic findings, but at the time I wrote that, I had a number of questions for the study authors. I sent the questions off, noting that there was no big hurry to get back to me, since the climate wasn't going anywhere any time soon. Lead author Patrick Brown, in the meantime, underwent something of a trial by fire when the denialosphere went nuts in the effort to misinterpret the study's results. I guess I can't blame them. There is no actual science to grab on to in the effort to deny the reality or importance of anthropogenic climate change, so why not just make stuff up?

Anyway, Patrick Brown addressed all the questions I sent him, and I thought the best way to present this information is as a straightforward interview. As follows.

Amidst the reactions I've seen on social media, blogs, etc. to your paper, I see the idea that your study suggests a smaller increase in global mean temperature (GMT) resulting from greenhouse gas pollution than what was previously thought. However, I don't think your paper actually says that. Can you comment?

Reply: You are correct, our paper does not say that. How much warming you get for a given change in greenhouse gasses is termed ‘climate sensitivity’ and our study does not address climate sensitivity at all. In fact, the words ‘climate sensitivity’ do not even appear in the study so we are a little frustrated with this interpretation.

It seems to me that between RCP 4.5, 6.0, and 8.5, you are suggesting that they differ in their ability to predict, with 6.0 being the best, 4.5 not as good (but well within the range), and 8.5 being least good: possible but, depending on conditions, perhaps rejectable.

Reply: Yes, and this is just over the recent couple of decades. Our study does not address how likely these scenarios will be next year, or in 2050 or 2100.

At this point I think the following characterizes your work: taking a somewhat novel look at models and data, what we were thinking before seems by and large confirmed. The central trend of warming with increased greenhouse gas is confirmed, in that models and data are by and large aligned in both central tendency (the trend line) and variation. Is this correct?

Reply: We found that models largely get the 'big picture' correct when it comes to how large the natural chaotic variability is. We already knew the multi-model mean was not getting the trend correct over the past decade or so, but we knew that this could have been due to random natural variation. Our study just quantified how large the underlying global warming progression could be, given that we saw little warming over the recent past.

Your paper seems to confirm that the more likely scenarios are more likely and the less likely scenarios are as previously thought, possible but less likely. More extreme scenarios have not been taken off the table, though there may be refinement in how we view them. Is that a fair characterization?

Reply: Yes.

The amount of noise in the climate system (empirical unforced noise, or EUN) is sufficiently high that much of the observed squiggling around a central trend line is accounted for by that noise and does not require questioning the models. (That is my rewrite of “We find that the empirical EUN is wide enough so that the interdecadal variability in the rate of global warming over the 20th century does not necessarily require corresponding variability in the rate-of-increase of the forced signal” in your paper.) Is it correct to say that unforced squiggling/EUN/noise would naturally go away with longer sampling intervals (going from years to decades, for example), but these results suggest that even interdecadal variability is likely a result of noise, not forcing?

Reply: Yes. We do not rule out that forcing may be responsible, but we are saying that this interdecadal variability doesn't necessarily require forcing.

Would it be accurate to say that your paper speaks mainly to the nature of variation in observed temperature over time, the squiggling of the signal up and down along a trend line, in relation to the variation that is seen in models?

Reply: Just to clarify, in the paper we refer to the component of GMT change that is due to external radiative forcings (e.g., greenhouse gasses) as the ‘signal’ and the component due to chaotic unforced variability as ‘noise’. We don’t necessarily expect either of these to be linear or to follow a trend line. We estimated how large the noise was and used this estimate to see what we might be able to infer regarding the underlying signal, given recent observations.

Noise in this signal is presumably dampened by averaging the numbers over time (widening the sampling interval, if you will), so as we go from years to decades we get a straighter line that should be more in accord with the correct model. Your paper seems to be suggesting that natural/internal variation (EUN, noise) often operates at a time scale too large to be dampened by looking at the data at the decade-long scale. Is that correct? If so, is it the case that an excursion (such as the so-called pause/hiatus) that is 10–20 years long does not fall outside the range of expectations (of noise effects) according to your work?

Reply: Yes. No recent trend was completely outside of the range of possibility, even for RCP8.5. However, it's naturally the case that a steeper signal (like RCP8.5) is less likely than a slower-progressing signal (like RCP6.0) over a time period of no warming.
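Brown's point here, that persistent ("red") noise can mask a steadily rising forced signal for a decade or more, can be illustrated with a toy simulation. This is not the paper's method; the trend rate, noise persistence, and amplitude below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): a steady forced
# signal of 0.02 C/yr plus AR(1) "red" noise.
years = 150
signal = 0.02 * np.arange(years)      # underlying forced trend
phi, sigma = 0.6, 0.1                 # noise persistence and amplitude
noise = np.zeros(years)
for t in range(1, years):
    noise[t] = phi * noise[t - 1] + rng.normal(0, sigma)
gmt = signal + noise

# Least-squares trend (C/yr) over every 11-year window.
window = 11
slopes = [np.polyfit(np.arange(window), gmt[i:i + window], 1)[0]
          for i in range(years - window)]

# Even though the true signal rises steadily at 0.02 C/yr, some
# 11-year windows show much weaker (even negative) trends purely
# because of the autocorrelated noise.
print(f"true trend: 0.020, min 11-yr trend: {min(slopes):.3f}")
```

The point of the sketch is simply that decade-long "excursions" from the underlying trend are an expected property of autocorrelated noise, not evidence against the trend itself.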

There seems to be some confusion about your conclusions regarding RCP8.5. Does this paper suggest that RCP8.5 should be rejected? Or does it suggest that it is less likely than previous work suggests?

Reply: First it must be said that we were not looking at how likely RCP8.5 is in the long run. We are simply asking the question “if it hasn't warmed in 11 years (2001–2013), how likely is it that we have been on RCP8.5 during that time?” We find that it is not very likely but still possible.
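The logic of this reply can be sketched numerically. In the toy calculation below (my own illustration with made-up numbers, not the paper's actual analysis), a steeper underlying signal produces an 11-year flat trend less often than a moderate one, which is why observing a flat stretch makes the steeper scenario relatively less likely without ruling it out:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spread of 11-year noise trends (C/yr); the value
# 0.02 is invented for illustration, not taken from the paper.
n_draws = 100_000
noise_trends = rng.normal(0, 0.02, n_draws)

# How often does each signal rate, plus noise, yield an observed
# 11-year trend at or below zero?
probs = {}
for name, signal_rate in [("steep", 0.03), ("moderate", 0.02)]:
    probs[name] = float(np.mean(signal_rate + noise_trends <= 0.0))

print(probs)  # flat trends are rarer under the steeper signal
```

Under these assumed numbers both probabilities are nonzero, matching the reply: a flat 11-year stretch is "not very likely but still possible" even under the steep scenario.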

Asking this a slightly different way (to address the confusion that is out there): does your paper confirm that RCP8.5 is less likely than RCP6.0, as previously thought? If RCP8.5 is less likely than previously thought, does this mean that the entire probability distribution estimate for climate sensitivity needs to be shifted downward, or, alternatively, does it only mean that the upper tail is less fat than previously thought? And if so, how much less fat?

Reply: We may have seen less warming than RCP8.5 because the forcings have been overestimated in RCP8.5 relative to reality over the past decade. If forcings have been overestimated, then we expect less warming, even with high climate sensitivity. Because of this possibility, our study cannot make conclusions about the climate sensitivity distribution.

Schurer et al. did something similar to what you've done here a couple of years ago. Comparing their work and yours, the question arises: can you get adequate constraint on the forced and internal variability separately from the paleodata and paleo-forced simulations? Or is there so much noise in the two systems that differencing between two noisy data sets suffers from too much noise amplification? In other words, you have partitioned the problem into model outputs vs. empirical estimates, while Schurer et al. separate forced from internal variability. Does your (relatively orthogonal) approach take on an additional risk?

Reply: It is certainly a challenge to know how much can be inferred from the paleo record. Our goal, however, was simply to use the paleo record in a sensible way to estimate the magnitude of unforced variability. We feel that we adequately account for uncertainty in this estimation, as we came up with over 15,000 different estimates which sampled uncertainty in different parameters.
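The idea of sampling many parameter combinations to build a family of noise estimates can be sketched as a small Monte Carlo loop. The parameters below (proxy amplitude scaling, noise persistence, record length) and their ranges are invented stand-ins for whatever the study actually varied; only the overall strategy, producing thousands of estimates that jointly span the parameter uncertainty, is what this illustrates:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameter ranges (stand-ins, not from the paper).
scalings = np.linspace(0.8, 1.2, 25)      # amplitude scaling of proxies
persistences = np.linspace(0.3, 0.9, 25)  # assumed AR(1) persistence
record_lengths = [100, 200, 500]          # years of record used

estimates = []
for s, phi, n in itertools.product(scalings, persistences, record_lengths):
    # One toy "unforced noise" estimate: the standard deviation of
    # a simulated AR(1) unforced record under these settings.
    eps = rng.normal(0, 0.1 * s, n)
    series = np.zeros(n)
    for t in range(1, n):
        series[t] = phi * series[t - 1] + eps[t]
    estimates.append(series.std())

# The spread across all combinations is the uncertainty carried
# forward into any inference about the underlying signal.
print(len(estimates), np.percentile(estimates, [5, 95]).round(3))
```

The design choice here mirrors the reply: rather than committing to one "best" noise estimate, the full distribution of estimates is retained, so the downstream probability statements inherit the parameter uncertainty.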

Finally, your study goes up to 2013. The year 2013 (or thereabouts) may be considered part of a sequence of years with little increase in surface temperature. However, starting in March 2014 we have seen only very warm months (starting earlier than that, but excluding February). Predictions on the table suggest 2015 will be warm, and actually, 2016 as well. If it turns out that 2014, 2015, and 2016 are each warmer than the previous year, and your entire study was redone to go to the end of 2016, would your results change? If so, how? (I'm thinking not, because the time scale of your work is so large, but I need to ask!)

Reply: The study was submitted before the 2014 data point was added to the record, which is why it stops there. If by 2016 we are back in the middle of the distribution for RCP8.5, then it would imply that we might be back on the RCP8.5 scenario. This wouldn't actually change the results of the study, since the study was only concerned with what had already occurred. New data will not change the fact that it did not warm from 2002–2013, so our probability calculations of how likely it was that we were on RCP8.5 over those 11 years would not change.
