Over the next few years, you’re likely to see a lot of graphs like this:

This one is from a forthcoming paper by Meehl et al, and was shown by Jerry Meehl in his talk at the Annecy workshop this week. It shows the results for just a single model, CCSM4, so it shouldn’t be taken as representative yet. The IPCC assessment will use graphs taken from ensembles of many models, as model ensembles have been shown to be consistently more reliable than any single model (the models tend to compensate for each other’s idiosyncrasies).

But as a first glimpse of the results going into IPCC AR5, I find this graph fascinating:

  • The extension of a higher emissions scenario out to three centuries shows much more dramatically how the choices we make in the next few decades can profoundly change the planet for centuries to come. For IPCC AR4, only the lower scenarios were run beyond 2100. Here, we see that a scenario that gives us 5 degrees of warming by the end of the century is likely to give us that much again (well over 9 degrees) over the next three centuries. In the past, people talked too much about temperature change at the end of this century, without considering that the warming is likely to continue well beyond that.
  • The explicit inclusion of two mitigation scenarios (RCP2.6 and RCP4.5) gives good reason for optimism about what can be achieved through a concerted global strategy to reduce emissions. It is still possible to keep warming below 2 degrees. But, as I discuss below, the optimism is bounded by some hard truths about how much adaptation will still be necessary – even in this wildly optimistic case, the temperature drops only slowly over the three centuries, and still ends up warmer than today, even at the year 2300.

As the approach to these model runs has changed so much since AR4, a few words of explanation might be needed.

First, note that the zero point on the temperature scale is the global average temperature for 1986-2005. That’s different from the baseline used in the previous IPCC assessment, so you have to be careful with comparisons. I’d much prefer they used a pre-industrial baseline – to get that, you have to add 1 (roughly!) to the numbers on the y-axis on this graph. I’ll do that throughout this discussion.

I introduced the RCPs (“Representative Concentration Pathways”) a little in my previous post. Remember, these RCPs were carefully selected from the work of the integrated assessment modelling community, who analyze interactions between socio-economic conditions, climate policy, and energy use. They are representative in the sense that they were selected to span the range of plausible emissions paths discussed in the literature, both with and without a coordinated global emissions policy. They are pathways, as they specify in detail how emissions of greenhouse gases and other pollutants would change, year by year, under each set of assumptions. The pathways matter a lot, because it is cumulative emissions (and the relative amounts of different types of emissions) that determine how much warming we get, rather than the actual emissions level in any given year. (See this graph for details on the emissions and concentrations in each RCP).
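The point about cumulative emissions can be made concrete with a back-of-the-envelope sketch: to a first approximation, warming scales linearly with the total carbon emitted (the relationship known as TCRE). This sketch is mine, not from the paper; the TCRE value and the two pathways are invented for illustration, and real TCRE estimates carry a wide uncertainty range.

```python
# Warming scales roughly linearly with cumulative carbon emitted.
# The TCRE value below is an illustrative assumption (~1.6 degC per
# 1000 PgC of carbon), not a number from the paper.
TCRE = 1.6 / 1000.0  # degC per PgC, assumed for illustration

def warming_from_path(annual_emissions_pgc):
    """Approximate warming (degC) from a list of annual emissions (PgC/yr)."""
    return TCRE * sum(annual_emissions_pgc)

# Two hypothetical 100-year pathways: holding emissions constant vs
# cutting them linearly to zero. The cumulative totals, and hence the
# warming, differ by almost a factor of two:
constant = [10.0] * 100                                  # 1000 PgC total
declining = [10.0 * (1 - y / 100) for y in range(100)]   # ~500 PgC total
print(warming_from_path(constant), warming_from_path(declining))
```

The point is that the shape of the whole pathway, not the emissions level in any single year, sets the warming.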

By the way, you can safely ignore the meaning of the numbers used to label the RCPs – they’re really just there to remind the scientists which pathway is which. (Briefly, the numbers represent the approximate anthropogenic forcing, in W/m², at the year 2100.)

RCP8.5 and RCP6 represent two different pathways for a world with no explicit climate policy. RCP8.5 is at about the 90th percentile of the full set of non-mitigation scenarios described in the literature. So it’s not quite a worst-case scenario, but emissions much higher than this are unlikely. One scenario that follows this path is a world in which renewable power supply grows only slowly (to about 20% of the global power mix by 2070) while most of a growing demand for energy is still met from fossil fuels. Emissions continue to grow strongly, and don’t peak before the end of the century. Incidentally, RCP8.5 ends up in the year 2100 with a similar atmospheric concentration to the old A1FI scenario in AR4, at around 900ppm CO2.
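As a rough sanity check on that 900ppm figure, the standard simplified fit for CO2 radiative forcing (Myhre et al. 1998) is F = 5.35·ln(C/C0) W/m². CO2 alone gets you most of the way to the 8.5 W/m² label; the remainder comes from other greenhouse gases and pollutants. The 278ppm pre-industrial baseline here is my assumption:

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0):
    """Simplified CO2 radiative forcing in W/m2 (Myhre et al. 1998 fit)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(round(co2_forcing(900.0), 1))  # roughly 6.3 W/m2 from CO2 alone
```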

RCP6 (which is only shown to the year 2100 in this graph) is in the lower quartile of likely non-mitigation scenarios. Here, emissions peak by mid-century and then stabilize at a little below double current annual emissions. This is possible without an explicit climate policy because under some socio-economic conditions, the world still shifts (slowly) towards cleaner energy sources, presumably because the price of renewables continues to fall while oil starts to run out.

The two mitigation pathways, RCP2.6 and RCP4.5, bracket a range of likely scenarios for a concerted global carbon emissions policy. RCP2.6 was explicitly picked as one of the most optimistic possible pathways – note that it’s outside the 90% confidence interval for mitigation scenarios. The expert group were cautious about selecting it, and spent extra time testing its assumptions before including it. But it was picked because there was interest in whether, in the most optimistic case, it’s possible to stay below 2°C of warming.

Most importantly, note that one of the assumptions in RCP2.6 is that the world goes carbon-negative by around 2070. Wait, what? Yes, that’s right – the pathway depends on our ability to find a way to remove more carbon from the atmosphere than we produce, and to be able to do this consistently on a global scale by 2070. So, the green line in the graph above is certainly possible, but it’s well outside the set of emissions targets currently under discussion in any international negotiations.

RCP4.5 represents a more mainstream view of global attempts to negotiate emissions reductions. On this pathway, emissions peak before mid-century, and fall to well below today’s levels by the end of the century. Of course, this is not enough to stabilize atmospheric concentrations before the end of the century.

The committee that selected the RCPs warns against over-interpretation. They deliberately selected an even number of pathways, to avoid any implication that a “middle” one is the most likely. Each pathway is the result of a different set of assumptions about how the world will develop over the coming century, either with or without climate policies. Also:

  • The RCPs should not be treated as forecasts, nor bounds on forecasts. No RCP represents a “best guess”. The high and low scenarios were picked as representative of the upper and lower ends of the range described in the literature.
  • The RCPs should not be treated as policy prescriptions. They were picked to help answer scientific questions, not to offer specific policy choices.
  • There isn’t a unique socio-economic scenario driving each RCP – there are multiple sets of conditions that might be consistent with a particular pathway. Identifying these sets of conditions in more detail is an open question to be studied over the next few years.
  • There’s no consistent logic to the four RCPs, as each was derived from a different assessment model. So you can’t, for example, adjust individual assumptions to get from one RCP to another.
  • The translation from emissions profiles (which the RCPs specify) into atmospheric concentrations and radiative forcings is uncertain, and hence is also an open research question. The intent is to study these uncertainties explicitly through the modeling process.

So, we have a set of emissions pathways chosen because they represent “interesting” points in the space of likely global socio-economic scenarios covered in the literature. These are the starting point for multiple lines of research by different research communities. The climate modeling community will use them as inputs to climate simulations, to explore temperature response, regional variations, precipitation, extreme weather, glaciers, sea ice, and so on. The impacts and adaptation community will use them to explore the different effects on human life and infrastructure, and how much adaptation will be needed under each scenario. The mitigation community will use them to study the impacts of possible policy choices, and will continue to investigate the socio-economic assumptions underlying these pathways, to give us a clearer account of how each might come about, and to produce an updated set of scenarios for future assessments.

Okay, back to the graph. This represents one of the first available sets of temperature outputs from a Global Climate Model for the four RCPs. Over the next two years, other modeling groups will produce data from their own runs of these RCPs, to give us a more robust set of multi-model ensemble runs.

So the results in this graph are very preliminary, but if the results from other groups are consistent with them, here’s what I think it means. The upper path, RCP8.5, offers a glimpse of what happens if economic development and fossil fuel use continue to grow the way they have over the last few decades. It’s hard to imagine much of the human race surviving the next few centuries under this scenario. The lowest path, RCP2.6, keeps us below the symbolically important threshold of 2 degrees of warming, but then doesn’t bring us down much from that throughout the coming centuries. And that’s a pretty stark result: even if we do find a way to go carbon-negative by the latter part of this century, the following two centuries still end up hotter than it is now. All the while that we’re re-inventing the entire world’s industrial basis to make it carbon-negative, we also have to be adapting to a global climate that is warmer than any experienced since the human species evolved.

[By the way: the 2 degree threshold is probably more symbolic than it is scientific, although there's some evidence that this is the point above which many scientists believe positive feedbacks would start to kick in. For a history of the 2 degree limit, see Randalls 2010].

19 Comments

  1. How do the RCP scenarios compare to stabilisation of CO2e at 550, 450, or 350 ppm? Also, do these scenarios include emissions beyond 2100 or do they only show the continuing response to emissions before 2100?

  2. From the viewpoint of physical climate science, 2 degrees Celsius is nothing special, just one point on a continuum. I understand that the paper by Randalls also says so, and that the threshold is given as a level which seems intolerable for human society. Maybe “many scientists believe positive feedbacks would start to kick in”, but that is a belief and not a scientific consensus.

  3. Outstanding post, Steve.

    I’m particularly happy to see the IPCC focus on what happens after 2100; I’ve long contended that implicitly acting as if 2100 were a finish line — stay under 2C until then and you’re home free — was one of the most destructive and misleading things we were doing.

    As I see it, there are two points in this causality chain (co2 –> impacts) that we need to be, at a bare minimum, concerned about:

    The first is the mapping of emissions to temp increases, which I think we have a pretty good handle on, although it certainly deserves much more study.

    The second is the mapping of warming to knock-on effects, including sea level rise, droughts, floods, and those ever terrifying feedbacks like methane hydrates and permafrost carbon. My understanding is that this area is less well understood than is the co2 –> temp mapping, and it also contains the possibilities for some truly hair raising consequences.

    (I don’t mean to imply this is the end of the study-worthy topics, of course. Adaptation, mitigation, the complex of issues involving CC, food production, and failed states, etc. are all critical areas begging for attention.)

    My reading of relevant material suggests that 2C is indeed too high a limit, and I keep going back to the 1972(!) book by a UN-assembled team of 152 experts from 58 countries, published as “Only One Earth”, which says:

    “Clearly man has had nothing to do with these vast climatic changes [moving in and out of ice ages] in the past. And from the scale of the energy systems involved, it would seem rational to suppose that he is not likely to affect them in the future. But here we encounter another fact about our planetary life: the fragility of the balances through which the natural world that we know survives. In the field of climate, the sun’s radiations, the earth’s emissions, the universal influence of the oceans, and the impact of the ice are unquestionably vast and beyond any direct influence on the part of man. But the balance between incoming and outgoing radiation, the interplay of forces which preserves the average global level of temperature appear to be so even, so precise, that only the slightest shift in the energy balance could disrupt the whole system. It takes only the smallest movement at its fulcrum to swing a seesaw out of the horizontal. It may require only a very small percentage of change in the planet’s balance of energy to modify average temperatures by 2°C. Downward, this is another ice age; upward, a return to an ice-free age. In either case, the effects are global and catastrophic.”

  4. Hello,
    I’m new here, and to make sure I REALLY make friends from the off, I’m afraid I’m going to play devil’s advocate here. 

    I’m hazy on the actual merit of this graph. You stated that the models are not directly comparable given that there is “no consistent logic to the four RCPs”, so one would have to assume that the graph is for illustrative purposes only – though one can certainly question the way this has been presented.

    On a practical note, why is there no consistent logic? Can you expand on that? What DO the models share (I’m thinking critical aspects here like an abstract value for climate sensitivity or a base assumption that doubling of CO2 leads to a temp rise of X degrees)?

    I struggle to understand the ensemble methodology when an iterative reduction process is not utilised – am I missing something obvious??
    Incidentally, I posted a few questions in the V+V thread along these lines:
    http://www.easterbrook.ca/steve/?p=2032&cpage=1#comment-6181

  5. @LM: Looks like you need a primer on what these models are. This paper might help:
    http://www.ncbi.nlm.nih.gov/pubmed/20148028
    especially Box 1, which distinguishes Integrated Assessment Models (IAMs) from General Circulation Models (GCMs).

    The graph is the output from a GCM, given four possible year-by-year emissions paths. GCMs don’t have any assumptions about climate sensitivity built in – they calculate it by simulating atmospheric and ocean circulation, radiative properties of the atmosphere, etc. It’s really very basic physics, with the simulation based on a set of thermodynamic equations. So putting them on the same graph makes complete sense – it shows how the model projects global temperature change for the four different emissions scenarios.
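To make the point that sensitivity is an output rather than an input, here is a zero-dimensional energy balance model – a drastic toy, nothing like a real GCM, but the same shape of calculation: prescribe a forcing pathway, integrate the physics forward, and read off the temperature response. All parameter values are illustrative assumptions:

```python
# Toy zero-dimensional energy balance model:
#   C * dT/dt = F(t) - lam * T
# C = effective heat capacity, F = radiative forcing (W/m2),
# lam = climate feedback parameter. All values are illustrative.
def run_ebm(forcings, lam=1.2, heat_cap=8.0, dt=1.0):
    """Integrate the global-mean temperature anomaly (degC) forward."""
    temps, t = [], 0.0
    for f in forcings:
        t += dt * (f - lam * t) / heat_cap  # simple Euler step
        temps.append(t)
    return temps

# Hold a forcing of 3.7 W/m2 (roughly a CO2 doubling) for 200 years:
temps = run_ebm([3.7] * 200)
# The temperature relaxes toward the equilibrium F/lam (~3.1 degC here):
# the "sensitivity" falls out of the physics; it was never an input.
print(round(temps[-1], 2))
```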

    The “no consistent logic” comment refers to the socio-economic models that were used to produce the sample emissions pathways in the first place, using IAMs. Projecting the rate at which the world will burn fossil fuels (and hence produce emissions) over the rest of the century is a very complex problem, requiring analysis of many different socio-economic factors. The four representative pathways were originally developed from four different models that analyze such socio-economic conditions.

    The process therefore offers some separation of concerns:
    – Figuring out the likely emissions paths under different assumptions about development, economic growth, population, energy mix, etc, is done by one research community, using many different models (IAMs).
    – From this process, four representative pathways have been picked to cover the full range of what’s considered likely by that community.
    – These are then used as input to GCMs, which analyze the climate system response to the emissions pathways.
    – The results from the GCMs are then used to guide analysis of impacts and adaptation needs. And also to feed back into the first community’s analysis, as human policy responses to unfolding climate change will certainly change the socio-economic conditions they originally built into their models.

    Hmmm – I can see a clearer explanation of all this might be useful. I’ll try writing a primer on this as a separate blog post soon….

  6. Thanks for the response!

    My issue, I think, is that in the GCMs using ‘very basic physics’ you’re not actually simulating the climate, but an abstract version OF it. Therefore, given the inherent complexity of the climate, and the still unknown mechanisms and drivers/feedbacks/forcings, how can you know that what you’re simulating bears any relation to the real world?

    What may work on the benchtop in line with theoretical principles may behave completely differently in a complex, non-linear system.

    From my personal experience, I know it is exceptionally dangerous to extrapolate from in vitro to in vivo testing; I think this analogy would hold here for the climate too.

    Now, given that validation against real-world data doesn’t seem to occur, how do we know that we’re not in fact only exploring OUR interpretation of the climate and not the climate itself?

    That’s my worry.

    I’m looking at this from an ‘industry’ point of view.

  7. Oh, and thanks for the paper – I’ll grab it.

  8. @LM: It’s strange that you would profess so little knowledge of what these models are, and how they work, yet be so convinced that they’re not validated against real world data. What do you think the scientists who develop these models do all day? Sit on their hands?

    Here’s a clue. Go look at all the different CMIP5 experiments I listed in my previous post. Why do you think there are so many experiments focussed on the past, rather than the future?

    Steve, please don’t think I’m being difficult because of a preconceived viewpoint or anything like that – I’m genuinely trying to understand this process better.

    Most of my information on models has been gathered from reading the IPCC work and associated material on that.

    When I say ‘validated against real world data’ I mean that the ensemble runs are set up, run, and compared to real data. The models that do not work are discarded – following a typical iterative design process.

    Now my understanding is (and as I’ve said, I’m happy to be shown to be wrong on this) that this does not happen; rather, the models are adjusted to take into account the real world observations rather than being designed, specifically, to replicate them accurately.

    I.e. inputs are assigned, parameters decided, and the model is run. The model is then tweaked to match observational results (depending on the application, of course).

    The experiments focussed on the past are hindcasts, where they are attempting to use past data to ‘calibrate’ the models to run predictive tests. That makes sense, but that emphatically is NOT validation against real world data – that is an in-process qualification/optimisation run(s).

    For the models to have been validated, they must run predictive tests and be proven to be accurate – with no post-run adjustment. I haven’t seen any evidence of this sort of validation in climate models (again, happy to say I may have missed something).

    This is how engineering models work, so I’m wondering why it isn’t applied to climate models.

    I appreciate your patience on this and your responses.

  10. @LM: No, you’re not “genuinely trying to understand this process better”. You have strong preconceptions about what’s wrong with the models – that’s very clear from what you say and how you say it. And it’s also clear that you have no evidential basis for these preconceptions because you obviously know very little about the science that the models support.

    Let me point out a couple of obvious problems with what you say:
    (1) “models that do not work are discarded…”. This is quite obviously a strawman. Nobody in any part of the software industry builds software by developing lots of different software solutions and discarding the ones that don’t work. No software is ever built like this – it’s too expensive. There’s usually nowhere near enough iteration in industrial software processes, but when there is, it’s iteration over a single codebase. Occasionally a major line of development of the codebase has to be discarded, but this is rare, and everyone tries to avoid this because of the wasted effort. So insisting on this for climate models is, in effect, holding climate modellers up to impossible standards.

    (2) “…the model is then tweaked to match observational results…”. This is a gross mischaracterization of how models are built. Comparison with observational data is used, day-in and day-out, to understand weaknesses in the models and seek ways to improve them. Now, of course the models contain many parameterizations, where processes that cannot be resolved are represented instead via empirically derived parameters. And these parameters are frequently adjusted to improve model skill. But such adjustments are always validated against different observational datasets to assess how they affect model skill for different climate processes, and the impact of changing some parameters is assessed by their impact on the other parameters that weren’t adjusted. There would be no point doing it in the way you describe – you can’t do science like that.

    So, the reason you “haven’t seen any evidence” is because you haven’t done your homework. Go study a specific climate model, read the scientific papers that describe model results, and get educated.

    There are many weaknesses in climate models, and limitations on their ability to project the impacts of climate change into the future. But lack of validation against observational data is not one of them.
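The tuning-and-validation discipline described in point (2) is essentially the held-out-data idea familiar from other modelling fields: adjust a parameter against one dataset, then measure skill against data the adjustment never saw. A minimal sketch, with invented data and a hypothetical one-parameter scheme (nothing here comes from a real climate model):

```python
# Sketch of held-out validation: tune a parameter on one dataset,
# then measure skill on a *different* dataset. All data invented,
# and "simulate" is a stand-in for a real parameterization scheme.
def rmse(model, obs):
    """Root-mean-square error between two equal-length sequences."""
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5

def simulate(forcing, alpha):
    """Hypothetical one-parameter process: response = alpha * forcing."""
    return [alpha * f for f in forcing]

# Tuning data and held-out data (both invented):
tune_forcing, tune_obs = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
held_forcing, held_obs = [1.5, 2.5], [3.1, 4.8]

# Pick the alpha that minimizes error on the tuning set...
alpha = min((a / 100 for a in range(100, 300)),
            key=lambda a: rmse(simulate(tune_forcing, a), tune_obs))

# ...then report skill on the held-out set, which alpha never saw.
print(alpha, round(rmse(simulate(held_forcing, alpha), held_obs), 3))
```

The held-out error, not the tuning error, is the honest measure of skill, which is the discipline the comment above describes.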

    Steve, my word you have a combative attitude; which is a damned shame, as I’m learning a hell of a lot just reading your posts.

    I FULLY accept that I’m coming at this issue from a position of (relative) ignorance, hence my repeated assertions that I’m happy to be proven or pointed out to be wrong.

    Do not take questions or statements based on (described) assumptions as an attack against yourself or the modellers themselves.

    My position currently is based off what I have read to date; if that’s wrong, so be it – it’s no hair off my nose. I AM genuinely trying to learn on this subject, and often the best way TO learn is to speak to an expert, hence my loitering here.

    Re your #1:
    Of course the code isn’t ACTUALLY discarded, I was speaking generally. Think of an iterative design process: sequential ’rounds’ of models, with the best sets being moved forward and expanded, and the obvious no-hopers being left (until a time they can be repurposed etc).

    The reason I ask about this sort of process (which you’ve already intimated doesn’t happen or apply here) is that it prevents issues with false assumptions persisting within a test set.

    Though, if the model sets (as you seem to suggest) are constantly updated against new information, then that could probably work just as well.

    Re your #2

    Ok, that makes sense. Are the parameterisations performed individually or in groups?

    Finally – is the empirical-data ‘validation’ performed on past data, i.e. hindcasting, or on future data as a predictive test? Also, what is the spread of the adjustments required on hindcast vs predictive testing?

    Thanks,
    LM

  12. @LM: I wouldn’t call it combative. I just have zero tolerance for BS and concern trolling. YMMV.

    Your first point is still making a distinction that I don’t think reflects reality. “the obvious no-hopers being left” – you’re still describing what sounds like evolutionary programming, which is an interesting research topic and has been applied to some simple problems especially in artificial intelligence, but isn’t practical for any reasonably complex software engineering problem. In iterative development you repeatedly look for weaknesses and replace sections of the code with better designs. This is precisely what happens in climate modeling, exactly as it would in any iterative software development process.

    #2 – the design of parameterizations is a complex research problem, and there is a subcommunity devoted to just about every interesting parameterization scheme, working independently to come up with better and better ways of doing it. These are folded into the coupled models once they’ve been validated by the relevant research community. The twist is that a parameterization that has been shown to be better when validated in isolation can sometimes reduce overall skill of the coupled model, due to the complexity of interactions in the system (and yes, over-tuning in other components). Hence, to some extent, it’s a game of whack-a-mole: each new improvement to the models raises a whole bunch more research questions.

    As for your last question, most of the validation is done through hindcasting. There’s too much short term variability in the weather to test claims about climate model prediction on any reasonable timescale. The shortest timescale on which climate models do anything useful is the decade (although some modellers will tell you even decadal projections are still a work in progress), and that’s way too long a wait to be scientifically useful for validation. There’s not even much to be learned by looking at how well the models of 10 or 20 years ago did, as today’s models are already so much better. There are some exceptions to this, of course. For example, a couple of groups in the UK (UKMO and ECMWF) draw on the experiences of operational weather forecasting to test how various model schemes work in short term forecasting, and use the same techniques as the NWP community for verification. See the chapter by Cath Senior et al in this book: The Development of Atmospheric Circulation Models.
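For readers wondering how “how well a hindcast did” gets quantified at all: a standard approach is a skill score against a naive baseline such as climatology. This sketch shows the generic idea with invented numbers; it is not the specific verification machinery those groups use:

```python
# Mean-squared-error skill score: 1 = perfect, 0 = no better than
# the baseline, negative = worse than the baseline. Data invented.
def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(model_pred, obs, baseline_pred):
    return 1.0 - mse(model_pred, obs) / mse(baseline_pred, obs)

obs      = [0.1, 0.3, 0.2, 0.5, 0.4]   # observed anomalies (invented)
hindcast = [0.2, 0.2, 0.3, 0.4, 0.4]   # model hindcast (invented)
climo    = [0.3] * 5                    # climatological baseline

print(round(skill_score(hindcast, obs, climo), 2))  # -> 0.6
```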

    The other thing that you could read if you are genuinely interested in this is the paper by Jakob, where he calls for a closer connection between process studies and climate model development. He’s right of course, although there are some tricky organizational and funding questions in the way of doing this better.

  13. Steve-

    Thanks for the response. There’s a difference between trolling, BS, and genuine questions/misconceptions. I had thought I’d (quite clearly) fallen into the latter.

    Thanks, I’ll have a look at that paper.

    The problem I have is that I’m from an industry background, so I ‘expect’ a higher level of validation than is shown here – hindcasting is good, but simply saying we can’t validate because the time scales are too short is BS in itself.

    You could quite easily set a validation run up and then continue on as normal anyway with that running in the background. You’re of course right that the constant evolution of these models would make the (finally) validated and current-gen models different – but it would at least allow you to check some of the more important parameters.

    I think the problem here is that the policy makers are taking the models and using them out of their designed roles, but let’s be honest, it’d be easier to model the whole climate on an iPhone than to rein a policy maker in!

    That aside, I don’t envy the modeller. I hate it when I’m trying to deal with more than one variable (I have an experiment at the moment that’s got 3 independent variables, and it’s a royal pain to constrain all of them), so I have genuine empathy for those trying to model the climate.

  14. @LM: Phew – that’s a whole lot of misconceptions to untangle. Before I try to untangle them, I must question your assumption that the level of validation you’re used to in industry is necessarily higher than that applied to climate models. In my experience the opposite is true. Climate models are subjected to a huge number of different validation tests, because a large group of scientists are continually experimenting with every aspect of the model. This means their productivity (lines of code per person per day) is several orders of magnitude lower than commercial software development – they spend so long doing scientific validation, they don’t actually get much time to develop more code.

    So on to the misconceptions:
    – you don’t seem to understand what climate means. The reason you can’t do short term predictive capability evaluation is that on the scale of years to decades, internal variability dominates the uncertainty, while forcings only start to dominate on the multi-decade scale (see Hawkins and Sutton 2009 for details). In other words, there is no climate on the short scale – climate is by definition an emergent phenomenon.
    – On the other hand, many of the modeling techniques are the same as used in NWP, so these get tested for predictive quality every day, and advances in NWP are folded back into climate models (and, as I said above, in many cases it’s the same model code, so it’s validated by multiple communities at multiple scales).
    – Policymakers don’t make any use of GCMs at all. They’re only ever run by climate scientists, for experiments designed by the climate scientists. All policy makers get is assessment reports, like the IPCC reports, in which model experiment results are presented in the context of other sources of knowledge (process studies, paleoclimate, direct observations, etc). So, no, there’s no such thing as a policymaker using them “out of their designed roles”.
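The Hawkins and Sutton point can be illustrated with a crude signal-to-noise calculation: how long before an assumed forced trend pokes above an assumed level of year-to-year internal variability? Both numbers below are invented for illustration, not taken from their paper:

```python
# Years until a forced trend exceeds 2 standard deviations of
# internal (year-to-year) variability. Both inputs are assumptions.
def years_to_emerge(trend_per_year, noise_sd, threshold_sds=2.0):
    years = 0
    while trend_per_year * years < threshold_sds * noise_sd:
        years += 1
    return years

# Assumed: 0.02 degC/yr forced trend, 0.175 degC internal variability.
print(years_to_emerge(0.02, 0.175))  # -> 18, i.e. about two decades
```

Even with these generous assumptions, the forced signal takes decades to emerge, which is why short-term predictive tests say so little about climate skill.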

    I suggest you read the IPCC AR4 WG1, chapter 8. It’s nowhere near a complete treatment, but you might at least start to get some idea of the sophistication of these models, and the amount of effort that’s put into validating them. Then there are stacks of textbooks on this. You really ought to read them *before* you form an opinion on the extent of model validation.

  15. Steve.

    Again thanks.

    I understand that the term climate is not applicable on shorter timescales, but correct me if I’m wrong; we’ve had 20+ years of modelling to look at. Surely that’s a long enough timescale for at least a partial validation?

    Secondly – what you are referring to is NOT validation (in the industry sense), so your comparison breaks down there. You’re referring to in-process qualification and suitability runs. Hindcasting, data checking, and inter- and intra-model comparative runs are not validation.

    On policy makers – if that’s the case, then that’s good. You often hear the models referred to out of context by the policy makers (especially in the UK); hopefully they are just out-of-context sound bites…

    On AR4, I’ve read it. Steve, don’t get me wrong, I appreciate the sophistication of these models. They’re phenomenal work – seriously impressive stuff in places.

    My opinion on validation comes from my industry background, where it is defined (loosely) as a ‘real’ operational test on the item to be validated.

    If a model is to simulate the climate in response to forcings, it cannot be said to be validated, in the industry and engineering sense, until it has been subjected to predictive test verification and process run accuracy tests.

    I think THIS is where I’m getting hung up – you have one definition of V+V, I have another (as I said, I do not question the work, or the skill involved in making these models; they’re impressive).

    Steve- again, thanks for taking time to chat.

  16. Pingback: An Illustrated Guide to the Science of Global Warming Impacts: How We Know Inaction Is the Gravest Threat Humanity Faces

  17. Are methane releases in the north going to be taken into account in the next IPCC report?

  18. @Marie: No idea – not my area. But I’ll ask around at the Open Science conference in Denver next month.

  19. @Labmunkey Ah, I see where you’re stuck. You think that the purpose of a climate model is to forecast future climate change, and therefore you insist that the only validation that counts is a demonstration of correct forecasts of future climate change.

    There are two problems with this:
    (1) The main purpose of a climate model is *not* to forecast future climate change. Many scientists would much prefer they not be used in this way, and have expressed concern about the lack of understanding of the uncertainties inherent in this (e.g. see my post on a long AGU session on this). Climate models are built as a tool for scientists to understand how the climate system works. They're used in a huge variety of experiments to test out theories of climatic processes, using data from the past and present (for more, see my post on what validation means in this context).

    Unfortunately, every 5-6 years the IPCC comes along and asks the scientists for up-to-date information on their prognosis for the future. And that's the only bit of climate science that most people ever see. Should scientists refuse to run the models for future scenarios? That would be a bit unfortunate, as they are one of the best tools for understanding how the climate system responds to forcings. Experimentation with simulation of past climates (especially paleoclimate) gives us a sense of how well they do (and the many areas of uncertainty). Without the model projections, we can still make pretty good projections of future temperature response, because we know a lot about climate sensitivity from many other parts of the science that don't depend on models at all. But we can't say much about questions like the geographical distribution of impacts, effects on regional weather patterns, and so on. So the model projections are the best way we currently have of putting everything we currently know (or think we know) together and testing the consequences of it. The model projections could be wrong in either direction, but they do represent a good summary of the current state of the science (which is exactly what the IPCC is tasked with providing).

    (2) The other problem is that asking for demonstrations of future forecast accuracy is an impossible standard. We already know that the models are very poor at short term (several year) forecasts, because of chaos theory (see here). So we would have to wait 15-20 years for any decent analysis of forecast quality. But there's another problem. Future projections of climate change are based on scenarios for how anthropogenic emissions will change over time, and the world never does exactly what the scenarios projected for year-to-year emissions (can you anticipate periods of world recession or industrial expansion?). Twenty years ago, we didn't have the computing power to run projections for a wide range of different scenarios. And twenty years ago, we under-estimated just how dramatically emissions would grow. So the kind of validation you're asking for is doubly complicated, because you're comparing a "what-if" scenario with what actually happened.

    Unfortunately, we already know, from many other sources than just the models, that we can’t wait 15-20 years to get serious about climate policies. You don’t need a complicated model to determine that continued fossil fuel emissions at the levels (and trend) seen in the last few decades will dramatically change the climate in ways that are likely to have disastrous impacts on humanity. So it’s no good saying we can’t base climate policy on models until they’re validated in this way. That’s the same as a smoker saying I’ll wait until smoking kills me before I decide whether the doctor’s advice to quit is accurate.

    (3) Oh, did I mention there's also a (3)? Turns out, of course, that scientists, being curious people, occasionally do exactly the kind of validation you ask for on the models of 20 years ago. Those models bear up pretty well (and the analysis confirms things we had already discovered about where those models were weak - e.g. in Hansen's case, a slight over-estimate of sensitivity, as we didn't do such a good job of capturing the effect of aerosols in the models back then). Today's models are dramatically different (think of how much computing technology has changed in 20 years), so the results of such retrospectives don't tell us much about current models. But we do know that today's models do a much better job at simulating past climates, so it seems reasonable (but not certain) to infer that their forecast ability has also improved dramatically.

    I suggest you read Reichler and Kim to get a sense of what we can do for model validation.

  20. Steve- a great response, thanks.

    On #1 - I think you've probably hit the nail on the head there, especially wrt the IPCC. It certainly gives me more confidence in the models and the scientists.

    On #2 - I disagree with this; although models are not primarily to be used for future state predictions, that IS the ultimate goal: a good understanding of the climate. So I think these tests do have a place.

    On #3 - I'd already seen that actually - my understanding was that the lowest range prediction was still significantly (wrt temperature rises) above the observed trends, though it IS close. The interesting point on that is that the prediction used static CO2 emissions, while of course they have been growing exponentially.

    Incidentally, as background - I agree that CO2 is a GHG, and I agree that, all things being equal, its increase will raise temps. I'm just hung up on the feedbacks, so at present I'd disagree with your assessment that action needs taking now (at least at the scale proposed by the IPCC; normal, sensible across-the-board reductions in all emissions should be encouraged).

  21. Pingback: An Illustrated Guide to the Science of Global Warming Impacts: How We Know Inaction Is the Gravest Threat Humanity Faces » Global Activist Network

  22. Pingback: One Model to Rule them All? | Serendipity

  23. Pingback: Crisis: Climate Change | SEEK

  24. Pingback: James Hansen Is Correct About Catastrophic Projections For U.S. Drought If We Don’t Act Now

  25. Pingback: As Exxon CEO Calls Global Warming’s Impacts ‘Manageable’, Colorado Wildfires Shutter Climate Lab | Lawsonry

  26. Pingback: As Exxon CEO Calls Global Warming’s Impacts ‘Manageable’, Colorado Wildfires Shutter Climate Lab

  27. Pingback: Yes, Deniers And Confusionists, The IEA And Others Warn Of Some 11°F Warming by 2100 If We Keep Listening To You

  28. Pingback: The CMIP5 Climate Experiments | Serendipity

Join the discussion: