The highlight of the whole conference for me was the Wednesday afternoon session on Methodologies of Climate Model Confirmation and Interpretation, and the poster session the following morning on the same topic, at which we presented Jon’s poster. Here are my notes from the Wednesday session.
Before I dive in, I will offer a preamble for people unfamiliar with recent advances in climate models (or more specifically, GCMs) and how they are used in climate science. Essentially, these are massive chunks of software that simulate the flow of mass and energy in the atmosphere and oceans (using a small set of physical equations), and then couple these to simulations of biological and chemical processes, as well as human activity. The climate modellers I’ve spoken to are generally very reluctant to have their models used to generate predictions of future climate – the models are built to help improve our understanding of climate processes, rather than to make forecasts for planning purposes. I was rather struck by the attitude of the modellers at the Hadley Centre at the meetings I sat in on last summer in the early planning stages for the next IPCC reports – basically, it was “how can we get the requested runs out of the way quickly so that we can get back to doing our science”. Fundamentally, there is a significant gap between the needs of planners and policymakers for detailed climate forecasts (preferably with the uncertainties quantified), and the kinds of science that the climate models support.
Climate models do play a major role in climate science, but sometimes that role is over-emphasized. Hansen lists climate models third in his sources of understanding of climate change, after (1) paleoclimate and (2) observations of changes in the present and recent past. This seems about right – the models help to refine our understanding and ask “what if…” questions, but are certainly only one of many sources of evidence for AGW.
Two trends in climate modeling over the past decade or so are particularly interesting: the push towards higher and higher resolution models (which thrash the hell out of supercomputers), and the use of ensembles:
- Higher resolution models (i.e. resolving the physical processes over a finer grid) offer the potential for more detailed analysis of impacts on particular regions (whereas older models focussed on global averages). The difficulty is that higher resolution requires much more computing power, and the higher resolution doesn’t necessarily lead to better models, as we shall see…
- Ensembles (i.e. many runs of either a single model, or of a collection of different models) allow us to do probabilistic analysis, for example to explore the range of probabilities of future projections. The difficulty, which came up a number of times in this session, is that such probabilities have to be interpreted very carefully, and don’t necessarily mean what they appear to mean.
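To make the ensemble idea concrete, here is a minimal sketch of the kind of probabilistic analysis described above. Everything here is hypothetical: `fake_model_run` is a stand-in for an actual GCM integration (a random draw, not real physics), and the numbers are invented for illustration. The point is only the shape of the analysis: many runs in, a percentile range out.

```python
import random

# Hypothetical stand-in for one GCM simulation: each "run" returns a
# projected warming (deg C by 2100). The random spread mimics the effect
# of varying initial conditions and model parameters across the ensemble.
random.seed(42)

def fake_model_run():
    return random.gauss(3.0, 0.8)  # invented mean/spread, for illustration only

# An ensemble is just a collection of runs.
ensemble = [fake_model_run() for _ in range(1000)]

def percentile(values, p):
    """p-th percentile by linear interpolation over the sorted values."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

# Summarize the ensemble as a probabilistic projection.
low, mid, high = (percentile(ensemble, p) for p in (5, 50, 95))
print(f"5th-95th percentile range: {low:.1f} to {high:.1f} deg C (median {mid:.1f})")
```

Note that the resulting range only quantifies the spread *within* the ensemble; as the session discussion made clear, it says nothing about processes the models share but all get wrong, which is exactly why such probabilities have to be interpreted carefully.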
Much of the concern is over the potential for “big surprises” – the chance that actual changes in the future will lie well outside the confidence intervals of these probabilistic forecasts (to understand why this is likely, you’ll have to read on to the detailed notes). And much of the concern is with the potential for surprises where the models dramatically under-estimate climate change and its impacts. Climate models work well at simulating 20th Century climate. But the more the climate changes in the future, the less certain we can be that the models capture the relevant processes accurately. Which is ironic, really: if the climate wasn’t changing so dramatically, climate models could give very confident predictions of 21st century climate. It’s at the upper end of projected climate changes where the most uncertainty lies, and this is the scary stuff. It worries the heck out of many climatologists.
Much of the question is to do with adequacy for answering particular questions about climate change. Climate models are very detailed hypotheses about climate processes. They don’t reproduce past climate precisely (because of many simplifications). But they do simulate past climate reasonably well, and hence are scientifically useful. It turns out that investigating areas of divergence (either from observations, or from other models) leads to interesting new insights (and potential model improvements).
Okay, with that as an introduction, on to my detailed notes from the session (be warned: it’s a long post).