On Thursday, Tim Palmer of the University of Oxford and the European Centre for Medium-Range Weather Forecasts (ECMWF) gave the Bjerknes lecture, with a talk entitled “Towards a Community-Wide Prototype Probabilistic Earth-System Model”. For me, it was definitely the best talk of this year’s AGU meeting. [Update: the video of the talk is now up at the AGU webcasts page]

I should note, of course, that this year’s Bjerknes lecture was originally supposed to have been given by Stephen Schneider, who sadly died this summer. Stephen’s ghost seems to hover over the entire conference, with many sessions beginning and ending with tributes to him. His photo was on the screens as we filed into the room, and the session began with a moment of silence for him. I’m disappointed that I never had a chance to see one of Steve’s talks, but I’m delighted they chose Tim Palmer as a replacement. And of course, he’s eminently qualified. As the introduction put it: “Tim is a fellow of pretty much everything worth being a fellow of”, and he is one of the few people to have won both the Rossby and the Charney awards.

Tim’s main theme was the development of climate and weather forecasting models, especially the issue of probability and uncertainty. He began by reminding us that the name Bjerknes is iconic here. Vilhelm Bjerknes set weather prediction on its current scientific course by posing it as a problem in mathematical physics. His son, Jacob Bjerknes, pioneered our understanding of the mechanisms that underpin seasonal forecasting, particularly air-sea coupling.

If there’s one fly in the ointment, though, it’s the issue of determinism. Lorenz put a stake through the heart of determinism with his description of the butterfly effect. As an example, Tim showed the weather forecast for the UK for 13 Oct 1987, shortly before the “great storm” that turned the town of Sevenoaks [where I used to live!] into “No-oaks”. The forecast models pointed to a ridge moving in, whereas what actually developed was a very strong vortex that caused a serious storm.

Nowadays the forecast models are run many hundreds of times per day, to capture the inherent uncertainty in the initial conditions. A (retrospective) ensemble forecast for 13 Oct 1987 shows this was an inherently unpredictable set of circumstances. The approach now taken is to convert a large number of runs into a probabilistic forecast. This gives a decision-making tool, across a range of sectors, that takes the uncertainty into account. And then, if you know your cost function, you can use the probabilities from the weather forecast to decide what to do. For example, if you were setting out to sail in the English Channel on 15 October 1987, you’d need both the probabilistic forecast *and* some measure of the cost/benefit of your voyage.
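
(An aside from me, not from the talk: the textbook way to combine the two is the cost-loss rule – it’s worth taking protective action whenever the forecast probability of the event exceeds the ratio of the cost of acting to the loss you’d suffer if you don’t. A minimal Python sketch, with the ensemble and all the numbers entirely invented for illustration:)

```python
import numpy as np

def should_act(p_event: float, cost_protect: float, loss_if_hit: float) -> bool:
    """Classic cost-loss decision rule: protect when the forecast
    probability of the adverse event exceeds the cost/loss ratio."""
    return p_event > cost_protect / loss_if_hit

# Hypothetical ensemble: 50 forecast runs, 1 = storm in the Channel, 0 = no storm
rng = np.random.default_rng(42)
ensemble = rng.integers(0, 2, size=50)      # placeholder for real model output
p_storm = ensemble.mean()                   # probabilistic forecast from the ensemble

# Invented costs: delaying the voyage costs 500, losing the boat costs 50,000
print(p_storm, should_act(p_storm, cost_protect=500.0, loss_if_hit=50_000.0))
```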

The same probabilistic approach is used in seasonal forecasting, for example for the current forecasts of the progress of El Niño.

Moving on to the climate arena, what are the key uncertainties in climate predictions? The three key sources are: initial uncertainty, future emissions, and model uncertainty. As we go to longer and longer timescales, model uncertainty dominates – it becomes the paramount issue in assessing the reliability of predictions.

Back in the 1970s, life was simple. Since then, the models have grown dramatically in complexity as new earth system processes have been added. But at the heart of the models, the essential paradigm hasn’t changed. We believe we know the basic equations of fluid motion, expressed as differential equations. It’s quite amazing that 23 mathematical symbols are sufficient to express virtually all aspects of motion in the air and oceans. But the problem comes in how to solve them. The traditional approach is to project them (e.g. onto a grid) to convert them into a large number of ordinary differential equations. And then the other physical processes have to be represented in a computationally tractable way. Some of this is empirical, based on observations, along with plausible assumptions about how these processes work.
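
(To give a feel for what “projecting onto a grid” means – a toy of my own, nothing like ECMWF’s actual numerics – here is one-dimensional advection turned into a set of ordinary differential equations by finite differences and stepped forward in time:)

```python
import numpy as np

# Toy 1-D advection: du/dt + c * du/dx = 0 on a periodic domain.
# Projecting onto a grid of N points turns the PDE into N coupled ODEs,
# here stepped with simple forward Euler + upwind differencing.
N, c, L = 100, 1.0, 1.0
dx = L / N
dt = 0.4 * dx / c                      # CFL-limited time step
x = np.linspace(0.0, L, N, endpoint=False)
u = np.exp(-100 * (x - 0.5) ** 2)      # initial condition: a Gaussian bump

def tendency(u):
    """Right-hand side of the N ODEs: -c * du/dx with upwind differences."""
    return -c * (u - np.roll(u, 1)) / dx

for _ in range(200):
    u = u + dt * tendency(u)           # forward Euler step

print("max after advection:", u.max())
```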

These deterministic, bulk-parameter parameterizations are based on the presumption of a large ensemble of subgrid processes (e.g. deep convective cloud systems) within each grid box, which then means we can represent them by their overall statistics. Deterministic closures have a venerable history in fluid dynamics, and we can incorporate these subgrid closures into the climate models.
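
(In caricature – my own illustration, not any operational scheme – a deterministic bulk parameterization is just a fixed function from the grid-box mean state to a subgrid tendency: the same mean state always produces the same tendency, however the convection inside the box is actually organised:)

```python
def convective_tendency(cape_gridbox_mean: float, tau: float = 3600.0) -> float:
    """Cartoon deterministic closure: the heating tendency attributed to deep
    convection is a fixed function of the grid-box mean CAPE, relaxing it
    towards zero on a timescale tau. Real schemes are far more elaborate;
    the point is that identical mean states give identical tendencies."""
    return -max(cape_gridbox_mean, 0.0) / tau

# Two grid boxes with the same mean CAPE get exactly the same tendency,
# regardless of how many convective cells they actually contain.
print(convective_tendency(1500.0), convective_tendency(1500.0))
```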

But there’s a problem. Observations indicate a shallow power law for atmospheric energy wavenumber spectra. In other words, there’s no scale separation between the resolved and unresolved scales in weather and climate. The power law is consistent with what one would deduce from the scaling symmetries of the Navier-Stokes equations, but it’s violated by conventional deterministic parameterizations.
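
(A back-of-envelope way to see why the shallow slope matters – my calculation, not Tim’s: integrate an assumed power-law spectrum and ask how much variance lies beyond a typical truncation wavenumber. With a k^-5/3 spectrum the unresolved tail carries orders of magnitude more energy than it would under a steep k^-3 spectrum, so there is no clean spectral gap to hide the parameterization behind. The wavenumber range below is arbitrary:)

```python
def unresolved_fraction(p: float, k_min: float = 1.0,
                        k_trunc: float = 100.0, k_max: float = 1.0e5) -> float:
    """Fraction of the variance of a k**(-p) energy spectrum that lies
    beyond the truncation wavenumber k_trunc (analytic integral, p > 1)."""
    integral = lambda a, b: (a ** (1 - p) - b ** (1 - p)) / (p - 1)
    return integral(k_trunc, k_max) / integral(k_min, k_max)

# Shallow mesoscale-like spectrum (k^-5/3) vs. a steep one (k^-3):
print("k^-5/3:", unresolved_fraction(5 / 3))
print("k^-3  :", unresolved_fraction(3.0))
```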

But does it matter? Surely if we can do a half-decent job on the subgrid scales, it will be okay? Tim showed a lovely cartoon from Schertzer and Lovejoy (1993) to make the point.

As pointed out in Chapter 8 of the IPCC AR4 WG1 report:

“Nevertheless, models still show significant errors. Although these are generally greater at smaller scales, important large-scale problems also remain. For example, deficiencies remain in the simulation of tropical precipitation, the El Niño-Southern Oscillation and the Madden-Julian Oscillation (an observed variation in tropical winds and rainfall with a time scale of 30 to 90 days). The ultimate source of most such errors is that many important small-scale processes cannot be represented explicitly in models, and so must be included in approximate form as they interact with larger-scale features.”

The figures from the IPCC report show the models doing a good job over the 20th century. But what’s not made clear is that each model has had its bias subtracted out before this was plotted, so you’re looking at anomalies relative to the model’s own climatology. In fact, there is an enormous spread of the models against reality.
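
(To spell out what subtracting the bias does – a generic illustration, not the actual IPCC processing: each model’s time series is plotted as departures from its own baseline climatology, so two models that disagree by a couple of degrees in absolute temperature can still sit right on top of each other as anomalies:)

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
trend = 0.008 * (years - 1900)                 # same warming trend for both

# Two hypothetical models with very different absolute climatologies
model_a = 13.1 + trend + 0.1 * rng.standard_normal(years.size)
model_b = 15.4 + trend + 0.1 * rng.standard_normal(years.size)

def anomalies(ts, baseline=slice(0, 30)):
    """Departures from the model's own 1900-1929 mean."""
    return ts - ts[baseline].mean()

# The absolute difference is large; the anomaly difference is tiny.
print("absolute offset:", abs(model_a.mean() - model_b.mean()))
print("anomaly offset :", abs(anomalies(model_a).mean() - anomalies(model_b).mean()))
```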

At present, we don’t know how to close these equations properly, and a major part of the model uncertainty lies in these closures. So, a missing box on the diagram of the processes in Earth System Models is “UNCERTAINTY”.

What does the community do to estimate model uncertainty? The state of the art is the multi-model ensemble (e.g. CMIP5). The idea is to poll across the models to assess how broad the distribution is. But as everyone involved in the process understands, there are problems that are common to all of the models, because they are all based on the same basic approach to the underlying equations. And they also typically have similar resolutions.

Another pragmatic approach, which overcomes the limited number of available models, is to use perturbed physics ensembles – take a single model and perturb its parameters systematically. But this approach is blind to structural errors, because only one model is used as the basis.
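
(The recipe is easy to sketch – everything below is invented for illustration: re-run one model many times with its uncertain parameters drawn from plausible ranges, and look at the spread of the results:)

```python
import numpy as np

def run_model(entrainment: float, cloud_albedo: float) -> float:
    """Stand-in for a full climate model run: returns a made-up
    'climate sensitivity' that depends on two uncertain parameters."""
    return 3.0 + 1.5 * (cloud_albedo - 0.5) - 0.8 * (entrainment - 1.0)

rng = np.random.default_rng(1)
ensemble = [run_model(entrainment=rng.uniform(0.5, 1.5),
                      cloud_albedo=rng.uniform(0.3, 0.7))
            for _ in range(100)]

# The spread reflects parameter uncertainty only -- structural errors shared
# by every member of the ensemble are invisible to this approach.
print("mean:", np.mean(ensemble), "spread:", np.std(ensemble))
```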

A third approach is to use stochastic closure schemes for climate models. You replace the deterministic formulae with stochastic formulae. Potentially, we have a range of scales at which we can try this. For example, Tim has experimented with cellular automata to capture missing processes, which is attractive because it can also capture how the subgrid processes move from one grid box to another. These ideas have been implemented in the ECMWF models (and are described in the book Stochastic Physics and Climate Modelling).
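
(Tim didn’t go into implementation details, but the flavour of a cellular-automaton closure can be conveyed by a toy probabilistic CA – not his actual scheme – in which “convectively active” cells spread to neighbouring grid boxes and decay at random. The rules and numbers here are entirely made up:)

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
active = rng.random((N, N)) < 0.02        # toy "convectively active" cells

def step(active, forcing=0.05):
    """One update of a toy probabilistic cellular automaton: cells switch on
    with a probability that grows with the number of active neighbours,
    letting subgrid activity propagate between grid boxes, and die at random."""
    # count the 4 nearest neighbours, with periodic wrap-around
    nbrs = (np.roll(active, 1, 0) + np.roll(active, -1, 0) +
            np.roll(active, 1, 1) + np.roll(active, -1, 1))
    p_on = forcing + 0.2 * nbrs            # birth probability
    born = rng.random(active.shape) < p_on
    dies = rng.random(active.shape) < 0.5  # random decay
    return (active & ~dies) | born

for _ in range(20):
    active = step(active)

print("fraction of active cells:", active.mean())
```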

So where do we go from here? Tim identified a number of reasons he’s convinced stochastic-dynamic parameterizations make sense:

1) More accurate accounts of uncertainty. For example, Weisheimer et al (2009) assessed the skill of seasonal forecasts using several different types of ensemble, scoring each according to how well it captured the uncertainty – stochastic physics ensembles did slightly better than the other types of ensemble.
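
(Weisheimer et al have their own scoring methodology; as a generic stand-in, the Brier score is one standard way of rewarding a forecast system for getting its probabilities right rather than just its best guess. The forecasts and outcomes below are hypothetical:)

```python
import numpy as np

def brier_score(p_forecast: np.ndarray, outcome: np.ndarray) -> float:
    """Mean squared difference between forecast probability and what
    happened (1/0). Lower is better; a reliable probabilistic forecast
    beats an overconfident deterministic one on this score."""
    return float(np.mean((p_forecast - outcome) ** 2))

# Hypothetical verification of 5 seasonal forecasts of an El Nino event
p_stochastic = np.array([0.7, 0.2, 0.9, 0.4, 0.6])   # stochastic-physics ensemble
p_determin   = np.array([1.0, 0.0, 1.0, 0.0, 1.0])   # overconfident single model
observed     = np.array([1,   0,   1,   1,   0  ])

print("stochastic ensemble:", brier_score(p_stochastic, observed))
print("deterministic      :", brier_score(p_determin, observed))
```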

2) Stochastic closures could be more accurate. For example, Berner et al (2009) experimented with adding stochastic backscatter up the spectrum, imposed on the resolved scales. To evaluate it, they looked at model bias. Using the ECMWF model, they increased the resolution by a factor of 5, which is computationally very expensive but reduces the bias in the model. They then showed that the backscatter scheme reduces the bias of the model in a way that’s not dissimilar to the increased-resolution run. It’s like adding symmetric noise, but it means that the model on average does the right thing.
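
(Very roughly – this is a toy of mine, not the Berner et al scheme – backscatter means injecting a random, zero-mean increment confined to the largest resolved scales, representing energy transferred up the spectrum from the unresolved scales that a purely dissipative deterministic scheme would miss:)

```python
import numpy as np

rng = np.random.default_rng(3)

def backscatter_increment(N=64, k_force=4, amplitude=0.1):
    """Toy stochastic backscatter: build a random field in spectral space
    confined to the largest resolved scales (|k| <= k_force), i.e. energy is
    injected onto the resolved flow rather than dissipated at the grid scale."""
    kx = np.fft.fftfreq(N) * N
    ky = np.fft.fftfreq(N) * N
    K = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    phases = np.exp(2j * np.pi * rng.random((N, N)))
    spectrum = np.where((K > 0) & (K <= k_force), 1.0, 0.0) * phases
    return amplitude * np.real(np.fft.ifft2(spectrum))

# Add the zero-mean increment to a resolved-scale field each time step
u_resolved = np.zeros((64, 64))
u_resolved += backscatter_increment()
print("rms of injected perturbation:", np.sqrt((u_resolved ** 2).mean()))
```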

3) Taking advantage of exascale computing. Tim recently attended a talk by Don Grice, IBM Chief Engineer, about getting ready for exascale computing. Grice said “There will be a tension between energy efficiency and error detection”. What he meant was that if you insist on bit-reproducibility you will pay an enormous premium in energy use. So the end of bit-reproducibility might be in sight for High Performance Computing.

This is music to Tim’s ears, as he thinks stochastic approaches will be the solution. He gave the example of Lyric Semiconductor, who are launching a new type of computer with 1000 times the performance, but at the cost of some accuracy – in other words, probabilistic computing.

4) More efficient use of human resources. The additional complexity in earth system models comes at a price – huge demands on human resources. For many climate modelling labs, the demands are too great. So perhaps we should pool our development teams, so that we’re not all busy trying to replicate each other’s codes.

Could we move to a more community-wide approach? It happened to the aerospace industry in Europe, when the various countries got together to form Airbus. Is it a good idea for climate modelling? Institutional directors take a dogmatic view that it’s a bad idea. The argument is that we need model diversity to have good estimates of uncertainty. Tim doesn’t want to argue against this, but points out that once we have a probabilistic modelling capability, we can test this statement objectively – in other words, we can test whether, in different modes, the multi-model ensemble really does better than a stochastic approach.

When we talk about modelling, it covers a large spectrum, from idealized mathematically tractable models through to comprehensive mathematical models. But this has led to a separation of the communities. The academic community develops the idealized models, while the software engineering groups in the met offices build the brute-force models.

Which brings Tim to the grand challenge: the academic community should help develop prototype probabilistic Earth System Models, based on innovative and physically robust stochastic-dynamics models. The effort has started already, at the Isaac Newton Institute. They are engaging mathematicians and climate modellers, looking at stochastic approaches to climate modelling. They have already set up a network, and Tim encouraged people who are interested to subscribe.

Finally, Tim commented on the issue of how to communicate the science in this post-Cancun, post-Climategate world. He went to a talk about how climate scientists should become much more emotional about communicating climate [presumably the authors’ session the previous day]. Tim wanted to give his own read on this. There is a wide body of opinion that the cost of major emissions cuts is not justified given current levels of uncertainty in climate predictions (and this body of opinion has strong political traction). Repeatedly appealing to the precautionary principle and to our grandchildren is not an effective approach: those who hold this view can bring out pictures of their own grandchildren, saying they don’t want them to grow up in a country bankrupted by bad climate policies.

We might not be able to move forward from the current stalemate without improving the accuracy of climate predictions. And are we (as scientists and government) doing all we possibly can to assess whether climate change will be disastrous, or something we can adapt to? Tim gives us 7/10 at present.

One thing we could do is to integrate NWP and seasonal-to-interannual prediction into this idea of seamless prediction. NWP and climate modelling diverged in the 1960s, and need to come together again. If he had more time, he would have talked about how data assimilation can be used as a powerful tool to test and improve the models. NWP models run at much finer resolution than climate models, but are enormously computationally expensive. So are governments giving the scientists all the tools they need? In Europe, they’re not getting enough computing resources to put onto this problem. So why aren’t we doing all we possibly can to reduce these uncertainties?

Update: John Baez has a great in-depth interview with Tim over at Azimuth.

5 Comments

  1. But there’s a problem. Observations indicate a shallow power law for atmospheric energy wavenumber spectra. In other words, there’s no scale separation between the resolved and unresolved scales in weather and climate. The power law is consistent with what one would deduce from the scaling symmetries of the Navier-Stokes equations, but it’s violated by conventional deterministic parameterizations.

    Wow, that’s a really succinct description of turbulence modeling, and it’s not too garbled by the telephone game either.

  2. Sorry about the double quote boxes (still getting used to the iPad); Steve said the “But there’s a problem…” part and I’m saying the “Wow, that’s a really…” part.

    [Fixed that for ya. The words aren’t really mine though – it’s almost verbatim from Tim Palmer – Steve]

  3. Pingback: Tweets that mention AGU talk: Tim Palmer on Building Probabilistic Climate Models | Serendipity -- Topsy.com

  4. Good stuff. I’m still a little unclear on how the parameterizations are done, but the stochastic scatter stuff to cushion the parameterizations seems interesting. I wonder how this relates to Bayesian statistics?

    And a good supplement to this talk seems to be:
    Palmer 2000. A nonlinear dynamical perspective on model error: A proposal for non-local stochastic-dynamic parametrization in weather and climate prediction models. Quarterly Journal of the Royal Meteorological Society.

    Btw the vid is up.

  5. Steve wrote: “A third approach is to use stochastic closure schemes for climate models.”

    What is a “closure scheme” for a climate model?

    Steve wrote: “Could we move to a more community-wide approach? It happened to the aerospace industry in Europe, when the various countries got together to form Airbus. Is it a good idea for climate modelling?”

    My first naive observation is that several industries seem to have settled into a situation where there are two major players competing for the market. Airbus and Boeing is one example; others are

    – Java and .NET
    – Windows and Unix (derivatives)
    – Oracle databases and IBM DB2 databases.

    I’m sure there are more. This seems to work pretty well: You need at least two players to have any competition at all (and you need some kind of competition, in science too!), and if there are no more than two players, they are able to concentrate the available resources. Given the current state of available resources, I’d say that two or three climate modeling centers would be a good idea (one centered in the EU, one in eastern Eurasia, one in the USA). But that’s just me 🙂

  6. Pingback: Workshop on Advancing Climate Modeling (1) | Serendipity

  7. A dozen years later & I still think back to Tim Palmer’s Bjerknes Lecture in December 2010. His comments on communicating climate science were particularly thought-provoking. I often refer others to both his scientific work and social commentary. Nice summary!
