One of the questions I’ve been chatting to people about at the WCRP Open Science Conference this week is whether climate modelling needs to be reorganized as an operational service, rather than as a scientific activity. The two serve quite different goals, and hence would be organized very differently:

  • An operational modelling centre would prioritize stability and robustness of the code base, and focus on supporting the needs of (non-scientist) end-users who want models and model results.
  • A scientific modelling centre focusses on supporting scientists themselves as users. The key priority here is to support the scientists’ need to get their latest ideas into the code, to run experiments and get data ready to support publication of new results. (This is what most climate modeling centres do right now).

Both need good software practices, but those practices would look very different depending on whether the scientists are building code for their own experiments, or serving the needs of other communities. There are also very different resource implications: an operational centre that serves the needs of a much more diverse set of stakeholders would need a much larger engineering support team relative to its scientific team.

The question seems very relevant to this conference, as one of the running themes has been what “climate services” might look like. Many of the speakers call for “actionable science”, and there has been a lot of discussion of how scientists should work with the various communities who need knowledge about climate to inform their decision-making.

And there’s clearly a gap here, with lots of criticism of how it works at the moment. For example, here’s a great quote from Bruce Hewitson on the current state of climate information:

“A proliferation of portals and data sets, developed with mixed motivations, with poorly articulated uncertainties and weakly explained assumptions and dependencies, the data implied as information, displayed through confusing materials, hard to find or access, written in opaque language, and communicated by interface organizations only semi‐aware of the nuances, to a user community poorly equipped to understand the information limitations”

I can’t argue with any of that. But it raises the question of whether solving this problem requires a reconceptualization of climate modeling activities, to make them much more like operational weather forecasting centres.

Most of the people I spoke to this week think that’s the wrong paradigm. In weather forecasting, the numerical models play a central role, and become the workhorse for service provision. The models are run every day, to supply all sorts of different types of forecasts to a variety of stakeholders. Sure, a weather forecasting service also needs to provide expertise to interpret model runs (and of course, also needs a vast data collection infrastructure to feed the models with observations). But in all of this, the models are absolutely central.

In contrast, for climate services, the models are unlikely to play such a central role. Take, for example, the century-long runs, such as those used in the IPCC assessments. One might think that these model runs represent an “operational service” provided to the IPCC as an external customer. But this is a fundamentally mistaken view of what the IPCC is and what it does. The IPCC is really just the scientific community itself, reviewing and assessing the current state of the science. The CMIP5 model runs currently being done in preparation for the next IPCC assessment report, AR5, are conducted by, and for, the science community itself. Hence, these runs have to come from science labs working at the cutting edge of earth system modelling. An operational centre one step removed from the leading science would not be able to provide what the IPCC needs.

One can criticize the IPCC for not doing enough to translate the scientific knowledge into something that’s “actionable” for the different communities that need such knowledge. But that criticism isn’t really about the modeling effort (e.g. the CMIP5 runs) that contributes to the Working Group 1 reports. It’s about how the implications of the Working Group 1 science translate into useful information in Working Groups 2 and 3.

The stakeholders who need climate services won’t be interested in century-long runs. At most, they’re interested in decadal forecasts (a task that is itself still in its infancy, and a long way from being ready for operational forecasting). More often, they will want help interpreting observational data and trends, and assessing impacts on health, infrastructure, ecosystems, agriculture, water, etc. While such services might make use of data from climate model runs, they generally don’t involve running the models regularly in an operational mode. Instead, the need would be more focussed on downscaling the outputs from existing model run datasets (see the sketch below). And sitting somewhere between current weather forecasting and long term climate projections is the need for seasonal forecasts and regional analysis of trends, attribution of extreme events, and so on.
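To make “downscaling” a little more concrete: the simplest statistical flavour is the “delta” (or change-factor) method, which takes the change signal from the coarse model and adds it to a finer-scale observed baseline. Here’s a minimal sketch in Python; the numbers, array shapes, and function name are made up for illustration, and real downscaling works on full gridded datasets, usually with far more sophisticated bias-correction:

    import numpy as np

    def delta_downscale(obs_baseline, gcm_hist, gcm_future):
        # The "delta method": compute the model's projected change signal
        # (future minus historical climatology), then add it to the observed
        # local baseline, rather than using the model values directly.
        delta = gcm_future - gcm_hist
        return obs_baseline + delta

    # Illustrative monthly mean temperatures (degC) for one site -- not real data.
    obs = np.array([-5.0, -4.0, 1.0, 8.0, 14.0, 19.0,
                    22.0, 21.0, 16.0, 9.0, 3.0, -2.0])
    gcm_hist = obs - 1.2          # stand-in: coarse GCM grid cell, biased at this site
    gcm_future = gcm_hist + 2.5   # stand-in: same grid cell under a warming scenario

    print(delta_downscale(obs, gcm_hist, gcm_future))   # = obs shifted by +2.5 degC

Note that any constant model bias cancels in the subtraction, which is much of the method’s appeal. The point of the sketch is that the service work here is post-processing of existing model output, not running the model itself.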

So I don’t think it makes sense for climate modelling labs to move towards an operational modelling capability. Climate modeling centres will continue to focus primarily on developing models for use within the scientific community itself. Organizations that provide climate services might need to develop their own modelling capability, focussed more on high resolution, short term (decadal or shorter) regional modelling, and of course, on assessment models that explore the interaction of socio-economic factors and policy choices. Such assessment models would make use of basic climate data from global circulation models (for example, calculations of climate sensitivity, and spatial distributions of temperature change), but wouldn’t connect directly with the climate models themselves.
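To illustrate that one-way dependence: an assessment model might take just a single GCM-derived number, the equilibrium climate sensitivity, and use the standard logarithmic CO2-forcing relation to turn concentration scenarios into warming estimates, without ever touching the GCM. A minimal sketch in Python (the sensitivity value and concentrations are illustrative, not endorsed numbers):

    import math

    def equilibrium_warming(co2_ppm, co2_preindustrial=280.0, sensitivity=3.0):
        # Equilibrium warming (degC) under the standard logarithmic relation:
        # warming scales with the number of CO2 doublings. `sensitivity`
        # (degC per doubling) is the kind of number an assessment model would
        # take from GCM experiments; 3.0 here is purely illustrative.
        doublings = math.log2(co2_ppm / co2_preindustrial)
        return sensitivity * doublings

    for ppm in (400, 560, 700):
        print(f"{ppm} ppm -> {equilibrium_warming(ppm):.1f} degC at equilibrium")

The GCM’s role ends once it has supplied the sensitivity estimate; everything downstream is the assessment model’s own machinery.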

4 Comments

  1. “The CMIP5 model runs currently being done in preparation for the next IPCC assessment report, AR5, are conducted by, and for, the science community itself. Hence, these runs have to come from science labs working at the cutting edge of earth system modelling. ”

    And they have to be completed on time for AR5. Much climate modelling is already operational: science-led for sure, but working to external deadlines and for external customers. I suspect that most of the Hadley Centre supercomputing resource is devoted to “production” runs rather than experimental ones – but it’s no longer my personal responsibility, so I can’t say for sure.

  2. As for climate services, there are some long standing customers for climate data – the sorts of folks who care about long term means and return-periods of extreme events for agriculture, construction, water supplies, energy, etc. These ought to be the obvious customers for climate services since you’d have to be a very strange engineer indeed to want to design a dam or the like to cope with the last 100 years, rather than the next time-to-construct+100 yrs. But if your training has all been about using historical data rather than projections, that’s what you’ll do. Education and training trumps intuition for most folks – that’s how we’re trained 🙂

    Personally I’m more interested in developing services for 21st-century infrastructure rather than 19th-century: mobile phones and an internet that cope with everything that we, and our wonderful planet, can throw at them. http://earthquake-report.com/2011/09/16/mobile-communications-and-earthquakes-a-very-disturbing-marriage/ covers one type of disaster; climate change brings several more.

  3. In practice, the distinction between the two types of modelling centre is quite vague. Most of the code in numerical weather prediction models is also written by scientists, not by software engineers. In fact, often a single code base is used both as a numerical weather prediction model and as a climate model, with only differences in the settings.

    Germany has just started a large project on seamless decadal climate prediction, MiKlip.
    http://www2.meteo.uni-bonn.de/venema/themes/adaptive_parameterisations/MiKlip.html
    The model components will be developed and validated by scientists at universities and research institutes, the full system will be put together by programmers at a climate research institute, and once ready it will be run operationally at the German weather service.

  4. Slightly off topic: you may be interested in this demo of augmented reality: http://youtu.be/frGEzlrhve0 . Kind of a breakthrough. And it occurred to me that this might be fun to use with climate modeling… if only for dynamic flowcharting. But it is too neat to ignore.

    “…video shows handheld projector systems have the potential to enable users to dynamically augment environments with digital graphics. In this research project we explore new parts of the design space for interacting using handheld projection in indoor spaces, in particular those that are ‘aware’ of the environment in which they are used.”
