At the CMIP5 workshop earlier this week, one of Ed Hawkins’ charts caught my eye, because he changed how we look at model runs. We’re used to seeing climate models used to explore the range of likely global temperature responses under different future emissions scenarios, and the results presented as a graph of changing temperature over time. For example, this iconic figure from the last IPCC assessment report (click for the original figure and caption at the IPCC site):

These graphs tend to focus too much on the mean temperature response in each scenario (where ‘mean’ means ‘the multi-model mean’). I tend to think the variance is more interesting – both within each scenario (showing differences among the various CMIP3 models on the same scenario), and across the different scenarios (showing how our future is likely to be affected by the energy choices implicit in each scenario). A few months ago, I blogged about the analysis that Hawkins and Sutton did on these variabilities, to explore how the different sources of uncertainty change as you move from the near term to the long term. The analysis shows that in the first few decades, the differences between the models dominate (which doesn’t bode well for decadal forecasting – the models are all over the place). But by the end of the century, the differences between the emissions scenarios dominate (i.e. the spread of projections across the different scenarios is significantly bigger than the disagreements between models). Ed presented an update of this analysis for the CMIP5 models this week, which looks very similar.
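The flavour of that partition – model spread versus scenario spread, and how their relative sizes change with lead time – can be sketched in a few lines. Everything below (scenario names, warming rates, model biases) is invented for illustration; it is not CMIP data or the Hawkins & Sutton code, just a toy version of the same variance decomposition:

```python
import numpy as np

# Toy partition of projection spread into model uncertainty vs scenario
# uncertainty, in the spirit of Hawkins & Sutton. All numbers are invented.
rng = np.random.default_rng(42)
years = np.arange(2010, 2101)
t = (years - years[0]) / (years[-1] - years[0])   # 0 at 2010, 1 at 2100

# Three hypothetical scenarios with different end-of-century warming,
# ten hypothetical models each with a fixed baseline bias.
scenario_warming = np.array([1.8, 2.8, 4.0])      # degC above today by 2100
model_bias = rng.normal(0.0, 0.3, size=10)        # per-model offset, degC

# temps[s, m, y]: warming in scenario s, model m, year y
temps = (scenario_warming[:, None, None] * t[None, None, :]
         + model_bias[None, :, None])

# Model uncertainty: variance across models, averaged over scenarios.
model_var = temps.var(axis=1).mean(axis=0)
# Scenario uncertainty: variance across scenarios of the multi-model mean.
scenario_var = temps.mean(axis=1).var(axis=0)

# Fraction of the total spread attributable to scenario choice, per year.
frac_scenario = scenario_var / (scenario_var + model_var)
print(f"scenario fraction of variance in 2020: {frac_scenario[10]:.2f}")
print(f"scenario fraction of variance in 2100: {frac_scenario[-1]:.2f}")
```

Even in this toy setup, the scenario fraction is small in the first decade or two and dominates by 2100, which is the qualitative shape of the real analysis.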

But here’s the new thing that caught my eye: Ed included a graph of temperature responses tipped on its side, to answer a different question: how soon will the global temperature exceed the policymakers’ adopted “dangerous” threshold of 2°C, under each emissions scenario? And, again, how big is the uncertainty? This idea was used in a paper last year by Joshi et al., entitled Projections of when temperature change will exceed 2 °C above pre-industrial levels. Here’s their figure 1:

Figure 1 from Joshi et al, 2011

By putting the dates on the Y-axis and temperatures on the X-axis, and cutting off the graph at 2°C, we get a whole new perspective on what the model runs are telling us. For example, it’s now easy to see that in all these scenarios, we pass the 2°C threshold well before the end of the century (whereas the IPCC graph above completely obscures this point), and under the higher emissions scenarios, we get to 3°C by the end of the century.
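Rotating the axes like this amounts to asking, for each trajectory, when it first crosses the threshold. A minimal sketch of that computation, with invented warming paths (the numbers below are hypothetical stand-ins, not output from the CMIP runs or the Joshi et al. paper):

```python
import numpy as np

# Ask the "rotated" question: not how warm it gets by a given year, but in
# which year each scenario first crosses 2 degC. Synthetic trajectories only.
years = np.arange(2000, 2101)
t = (years - 2000) / 100.0

# Hypothetical multi-model-mean warming (degC above pre-industrial) for a
# low, medium and high emissions pathway, each starting at 1 degC today.
scenarios = {
    "low":    1.0 + 1.2 * t,
    "medium": 1.0 + 2.0 * t,
    "high":   1.0 + 3.2 * t,
}

THRESHOLD = 2.0   # the policy threshold discussed in the post
crossing_year = {}
for name, temp in scenarios.items():
    above = np.flatnonzero(temp >= THRESHOLD)
    crossing_year[name] = int(years[above[0]]) if above.size else None
    print(f"{name}: first exceeds {THRESHOLD} degC in {crossing_year[name]}")
```

In the real figure the interesting part is the spread: you would repeat this crossing calculation for every model run in each scenario and plot the range of crossing years, which is exactly what putting dates on the Y-axis lets the eye read off directly.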

A wonderful example of how much difference the choice of presentation makes. I guess I should mention, however, that the idea of a 2°C threshold is completely arbitrary. I’ve asked many different scientists where the idea came from, and they all suggest it’s something the policymakers dreamt up, rather than anything arising out of scientific analysis. The full story is available in Randalls, 2011, “History of the 2°C climate target”.


  1. Off topic, I know, but I have a suggestion that I want to run by you.

    There are various archives of IPCC/CMIP model runs. But there isn’t a similar archive of the models themselves. I would think that archiving the models would be useful for future modeling studies. For instance, it would be possible to run the archived models X years in the future with the previous X years of observations.
    Just cast your mind back to the SAR models. I doubt that you could find the actual code run in that effort for most of the models. Yet it would be interesting to do post-mortems, running the models with 15-20 years of additional obs and comparing them to the actual evolving climate.
    But without the code, it’s not possible.

  2. Pingback: Rotating the Question | Planet3.0

  3. That’s awesome. Related to your point about the arbitrariness of 2 degrees: how do we visualise impacts as things change, and their uncertainty? Can that all be combined in some informative way?

  4. Isn’t even B1 looking a little optimistic right now?

    Ron, it seems like that should be doable with at least several of the major models (e.g. GISS, NCAR, Hadley) and would be interesting to see. I vaguely recall a recent discussion by Hansen along these lines, although just for GISS and probably not for SAR specifically. Regarding the whole ensemble, note that nobody has gotten kicked off the island for having a crap model. That’s a bit of a scandal IMHO, but I understand national pride gets in the way. In any case, if, as Steve states above, current models are “all over the place” in the short term, I suppose we would expect the SAR models to be worse. As ever, we would all like more accuracy than we’re going to be able to get.

    Good point, Dan, and I don’t recall seeing such a thing. The Joshi graphic is useful and effective for the climate science-literate, but not so much for anyone else. Of course just putting things in terms of temp increases that are small relative to seasonal swings tends to make the problem seem innocuous, whereas adding things like season length, drought and heat waves might be effective with the general public. It sounds like a graphic challenge, though.
