This week I’ve been attending a workshop at NCAR in Boulder, to provide input to the US National Academies committee on “A National Strategy for Advancing Climate Modeling”. The overall goal is to:

“…develop a strategy for improving the nation’s capability to accurately simulate climate and related Earth system changes on decadal to centennial timescales. The committee’s report is envisioned as a high level analysis, providing a strategic framework to guide progress in the nation’s climate modeling enterprise over the next 10-20 years”

The workshop has been fascinating, addressing many of the issues I encountered on my visits to various modelling labs last year – how labs are organised, what goes into the models, how they are tested and validated, etc. I now have a stack of notes from the talks and discussions, so I’ll divide this into several blog posts, corresponding to the main themes of the workshop.

The full agenda is available at the NRC website, and they’ll be posting meeting summaries soon.

The first session of the workshop kicked off with the big picture questions: what earth system processes should (and shouldn’t) be included in future earth system models, and what relationship should there be with the models used for weather forecasting? To get the discussion started, we had two talks, from Andrew Weaver (University of Victoria) and Tim Palmer (U Oxford and ECMWF).

Andrew Weaver’s talk drew on lessons learned from climate modelling in Canada to argue that we need flexibility to build different types of model for different kinds of questions. Central to his talk was a typology of modelling needs, based on the kinds of questions that need addressing:

  • Curiosity-driven research (mainly done in the universities). For example:
    • Paleo-climate (e.g. what are the effects of a Heinrich event on African climate, and how does this help our understanding of the migration of early humans from Africa?)
    • Polar climates (e.g. what is the future of the Greenland ice sheet?)
  • Policy and decision making questions (mainly done by the government agencies). These can be separated into:
    • Mitigation questions (e.g. how much can we emit and still stabilize at various temperature levels?)
    • Adaptation questions (e.g. infrastructure planning over things like water supplies, energy supply and demand)

These diverse types of question place different challenges on climate modelling. From the policy-making point of view, there is a need for higher resolution and downscaling for regional assessments, along with challenges of modelling sub-grid scale processes (Dave Randall talked more about this in a later session). On the other hand, for paleo-climate studies, we need to bring additional things into the models, such as representing isotopes, ecosystems, and human activity (e.g. the effect on climate of the switch from hunter-gatherer to agrarian society).

This means we need a range of different models for different purposes. Andrew argues that we should design a model to respond to a specific research question, rather than having one big model that we apply to any problem. He made a strong case for a hierarchy of models, describing his work with EMICs like the UVic model, which uses a simple energy/moisture balance model for the atmosphere, coupled with components from other labs for sea ice, ocean, vegetation, etc. He raised a chuckle when he pointed out that neither EMICs nor GCMs are able to get the Greenland ice sheet right, and therefore EMICs are superior because they get the wrong answer faster. There is a serious point here – for some types of question, all models are poor approximations, and hence their role is to probe what we don’t know, rather than to accurately simulate what we do know.

Some of the problems faced in the current generation of models will never go away: clouds and aerosols, ocean mixing, precipitation. And there are some paths we don’t want to take, particularly the idea of coupling general equilibrium economic models with earth system models. The latter may be very sexy, but it’s not clear what we’ll learn. The uncertainties in the economic models are so large that you can get any answer you like, which means this will soak up resources for very little knowledge gain. We should also resist calls to consolidate modelling activities at just one or two international centres.

In summary, Andrew argued we are at a critical juncture for US and international modelling efforts. Currently the field is dominated by a race to build the biggest model with the most subcomponents for the SRES scenarios. But a future priority must be on providing information needed for adaptation planning – this has been neglected in the past in favour of mitigation studies.

Tim Palmer’s talk covered some of the ideas on probabilistic modeling that he presented at the AGU meeting in December. He began with an exploration of why better prediction capability is important, adding a third category to Andrew’s set of policy-related questions:

  • mitigation – while the risk of catastrophic climate change is unequivocal, we must reduce the uncertainties if we are ever to tackle the indifference to significant emissions cuts; otherwise humanity is heading for utter disaster.
  • adaptation – spending wisely on new infrastructure depends on our ability to do reliable regional prediction.
  • geo-engineering – could we ever take this seriously without reliable bias-free models?

Tim suggested two over-riding goals for the climate modeling community. We should aspire to develop ab initio models (based on first principles) whose outputs do not differ significantly from observations; and we should aspire to improve probabilistic forecasts by reducing uncertainties to the absolute minimum, but without making the probabilistic forecasts over-confident.

The main barriers to progress are limited human and computational resources, and limited observational data. Earth System Models are complex – there is no more complex problem in computational science – and great amounts of ingenuity and creativity are needed. All of the directions we wish to pursue – greater complexity, higher resolution, bigger ensembles, longer paleo runs, data assimilation – demand more computational resources. But just as much, we’re hampered by an inability to get the maximum value from observations. So how do we use observations to inform modelling in a reliable way?

Are we constrained by our history? Ab initio climate models originally grew out of numerical weather prediction, but rapidly diverged. Few climate models today are viable for NWP. Models are developed at the institutional level, so we have diversity of models worldwide, and this is often seen as a virtue – the diversity permits the creation of model ensembles, and rivalry between labs ought to be good for progress. But do we want to maintain this? Are the benefits as great as they are often portrayed? This history, and the institutional politics of labs, universities and funding agencies provide an unspoken constraint on our thinking. We must be able to identify and separate these constraints if we’re going to give taxpayers best value for money.

Uncertainty is a key issue. While the diversity of models is often cited as a measure of confidence in ensemble forecasts, this is very ad hoc. Models are not designed to span the uncertainty in the representation of physical processes in any systematic way, so the ensembles are ensembles of opportunity. So if the models agree, how can we be sure this is a measure of confidence? An obvious counter-example is economics, where the financial crisis happened despite all the economic models agreeing.

So could we reduce uncertainty if we had better models? Tim argues that seamless prediction is important. He defines it as “bringing the constraints and insights of NWP into the climate arena”. The key idea is to be able to use the same modeling system to move seamlessly between forecasts at different scales, both temporally and spatially. In essence, it means unifying climate and weather prediction models. This unification brings a number of advantages:

  • It accelerates model development and reduces model biases, by exploring model skill on shorter, verifiable timescales.
  • It helps to bridge between observations and testing strategies, using techniques such as data assimilation.
  • It brings the weather and climate communities together.
  • It allows for cross-over of best practices.

Many key climate amplifiers are associated with fast timescale processes, which are best explored in NWP mode. This approach also allows us to test the reliability of probabilistic forecasts, on weekly, monthly and seasonal timescales. For example, reliability diagrams can be used to explore the reliability of a whole season of forecasts. This is done by subsampling the forecasts, taking, for example, all the forecasts that said it would rain with 40% probability, and checking that it did rain 40% of the time for this subsample.
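
To make the reliability-diagram idea concrete, here is a minimal sketch in Python (with hypothetical array names; not code from any forecasting centre) of the subsampling described above: group forecasts by their stated probability, then check how often the event actually occurred within each group.

```python
import numpy as np

def reliability_curve(forecast_probs, observed, n_bins=10):
    """Observed frequency of an event, grouped by forecast probability.

    forecast_probs: predicted probabilities of rain (0..1), one per forecast.
    observed:       0/1 outcomes (did it actually rain?).
    Returns bin centres, observed frequencies, and sample counts,
    ready for plotting as a reliability diagram.
    """
    forecast_probs = np.asarray(forecast_probs, dtype=float)
    observed = np.asarray(observed, dtype=float)

    edges = np.linspace(0.0, 1.0, n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    freq = np.full(n_bins, np.nan)
    counts = np.zeros(n_bins, dtype=int)

    for i in range(n_bins):
        # Subsample: all forecasts whose stated probability falls in this bin,
        # e.g. everything that said "rain with ~40% probability".
        if i < n_bins - 1:
            in_bin = (forecast_probs >= edges[i]) & (forecast_probs < edges[i + 1])
        else:
            in_bin = (forecast_probs >= edges[i]) & (forecast_probs <= edges[i + 1])
        counts[i] = in_bin.sum()
        if counts[i] > 0:
            # Did it actually rain ~40% of the time for that subsample?
            freq[i] = observed[in_bin].mean()

    return centres, freq, counts
```

A perfectly reliable forecast system sits on the diagonal of the resulting plot: forecasts of 40% verify about 40% of the time, forecasts of 70% about 70% of the time, and so on.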

Such a unification also allows a pooling of research activity in universities, labs, and NWP centres, to make progress on important ideas such as stochastic parameterizations and data assimilation. Stochastic parameterizations are proving more skilful in NWP than multi-model ensembles on monthly timescales, although the ideas are still in their infancy. Data assimilation provides a bridge between the modelling and observational worlds (as an example, Rodwell and Palmer were able to rule out high climate sensitivities in the climateprediction.net data by assimilating the 6-hour ECMWF observational data).
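
As a rough illustration of what a stochastic parameterization looks like (a sketch only, not Tim Palmer’s or ECMWF’s actual scheme, and with hypothetical function names, grid sizes and parameter values), the idea is to multiply the tendencies from a deterministic parameterization by a temporally correlated random pattern, in the spirit of stochastically perturbed parameterization tendencies, so that each ensemble member samples the sub-grid uncertainty rather than reusing the same closure:

```python
import numpy as np

def evolve_pattern(pattern, decorrelation_steps=50, rng=None):
    """AR(1) evolution of a noise pattern, so the perturbations are
    correlated in time rather than being white noise at every step."""
    rng = np.random.default_rng() if rng is None else rng
    phi = np.exp(-1.0 / decorrelation_steps)   # temporal correlation coefficient
    return phi * pattern + np.sqrt(1.0 - phi**2) * rng.standard_normal(pattern.shape)

def perturb_tendency(deterministic_tendency, pattern, amplitude=0.5):
    """Scale a parameterized tendency by (1 + amplitude * noise)."""
    return deterministic_tendency * (1.0 + amplitude * pattern)

# Each ensemble member carries its own evolving pattern on the model grid.
pattern = np.zeros((64, 128))                  # hypothetical lat-lon grid
for step in range(10):
    pattern = evolve_pattern(pattern)
    # Inside the model's time loop one would then do something like:
    # tendency = perturb_tendency(physics_tendency(state), pattern)
```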

In contrast to Andrew’s argument to avoid consolidation, Tim argues that consolidation of effort at the continental (rather than national) level would bring many benefits. He cites the example of Airbus, where European countries pooled their efforts because aircraft design had become too complex for any individual nation to go it alone. The Airbus approach involves a consortium, allowing for specialization within each country.

Tim closed with a recap of an argument he made in a recent Physics World article. If a group of nations can come together to fund the Large Hadron Collider, then why can’t we come together to fund a computing infrastructure for climate computing? If climate is the number one problem facing the world, then why don’t we have the number one supercomputer, rather than sitting at around number 50 in the TOP500?

Following these talks, we broke off into discussion groups to discuss a set of questions posed by the committee. I managed to capture some of the key points that emerged from these discussions – the committee report will present a much fuller account.

  • As we expand earth system models to include ever more processes, we should not lose sight of the old unsolved problems, such as clouds, ice sheets, etc.
  • We should be very cautious about introducing things into the models that are not tied back to basic physics. Many people in the discussions commented that they would resist human activities being incorporated into the models, as this breaks the ab initio approach. However, others argued that including social and ecosystem processes is inevitable, as some research questions demand it, and so we should focus on how it should be done. For example, we should start with the places where the two-way interaction is largest, e.g. land use, energy feedbacks, renewable energy.
  • We should be able to develop an organized hierarchy of models, with traceable connections to one another.
  • Seamless prediction means working within the same framework/system, but not necessarily using the same model.
  • As models become more complex, we need to be able to disentangle different kinds of complexity – for example, the complexity that derives from emergent behaviour vs. the number of processes vs. the number of degrees of freedom.
  • The scale gap between models and observations is disappearing. Model resolution is increasing exponentially, while observational capability is increasing only polynomially. This will improve our ability to test models against observations.
  • There is a great ambiguity over what counts as an Earth System Model. Do they include regional models? What level of coupling defines an ESM?
  • There are huge challenges in bringing communities together to consolidate efforts, even on seemingly simple things like agreeing terminology.
  • There’s a complementary challenge to the improvement of ESMs: how can we improve the representation of climate processes in other kinds of model?
  • We should pay attention to user-generated questions. In particular, the general public doesn’t feel that climate modelling is relevant to them, which partly explains the lack of funding.

Next up: challenges arising from hardware, software and human resource limitations…

7 Comments

  1. A trip to NCAR and a speech by Andrew Weaver? I am jealous!

    I am wondering whether, in the next few decades, climate modeling will become very important for geoengineering plans – figuring out the safest way to introduce negative forcings. What do you think?

  2. Hi Steve,
    I notice that a few people participated using a video link. I am curious to know how that worked out?

    D.

  3. Pingback: Teaching with simulations | Serendipity

  4. I don’t understand why everyone took a soft definition of Seamless Prediction. I’m pretty sure Tim Palmer would define that as using the same model, not just the same framework.

  5. @ClimateSpin: The problem is, “same model” is an arbitrary concept. It’s not possible, for example, to move seamlessly across scales from days to centuries, and metres to 100s of km, using the same set of parameterizations. So even at places like the UK Met Office where they have unified their weather and climate models, they build different models from the same codebase for working at different scales.

  6. By “same model” I mean same code and all they change is the grid spacing, timestep and the values of some scalars in the various parameter schemes (but the same schemes). I thought UK was that “seamless”. -Rob

  7. The UK Met Office are nowhere near that point. They set up a seamless assessment research team last year, and one of their challenges is to come up with a better understanding of the consequences of changing schemes when they move between scales. In the absence of using the same schemes, they would at least have a clearer idea about the rationale for when to switch, and how the switch affects the climatology in the model.
    A stronger notion of “seamless” is purely aspirational at the moment. I think a lot of the parameterizations would have to be re-thought completely to achieve this.

  8. Pingback: Workshop on Advancing Climate Modeling (3) | Serendipity

  9. A stronger notion of “seamless” is purely aspirational at the moment. I think a lot of the parameterizations would have to be re-thought completely to achieve this.

    In fact, there would be changes necessary at the governing equation level as well.
