I missed out on liveblogging Tuesday's last session, on seamless approaches in weather and climate, because the room had no power outlets at all and my battery died. Which is a shame, as it was very interesting. The aim of seamless assessment is to be able to move back and forth between weather forecast models and climate models.
The last speaker in the session, Randall Dole, gave a good explanation of why this is an emerging priority, with his three challenges:
- Understanding and modeling organized tropical convection and its global impacts. This is a key problem in predictability of weather beyond about a week, and a major factor in the regional differences in climate variations within the overall climate change trends.
- Predicting weather and climate extremes in a changing climate (e.g. tropical cyclones, floods, droughts, coastal inundation).
- Integrating earth system models and observations. Or: how to build a scientifically based, internally consistent record of how the earth system is evolving over time.
Randall also identified an opportunity to provide better information for energy and climate policy, for example to assess the likely unintended consequences of major new energy projects, geo-engineering proposals, etc.
David Williamson from NCAR described the Transpose-AMIP project, which takes a set of models (both numerical weather prediction (NWP) and climate models) and runs them against a benchmark of two months' worth of real weather observations. The aim is to analyze the primary errors; this is especially useful for comparing parameterization schemes against the field data, to track down which parameterizations cause which forecast errors. The NWP models did dramatically better than the climate models, but probably only because they are highly tuned to give high-quality forecasts of things like precipitation.
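To make the idea concrete, here's a minimal sketch of the kind of model-versus-observations scoring an exercise like Transpose-AMIP involves. This is not the project's actual diagnostic code; the model names, array shapes, and synthetic data are all invented for illustration.

```python
import numpy as np

def rmse_by_lead_time(forecasts, observations):
    """Root-mean-square forecast error at each lead time.

    forecasts, observations: arrays of shape (n_starts, n_leads, n_points)
    Returns an (n_leads,) array: error growth as the forecast runs out.
    """
    err = forecasts - observations
    return np.sqrt((err ** 2).mean(axis=(0, 2)))

# Invented stand-ins: 60 forecast start dates (two months, daily starts),
# verified every 6 hours out to 5 days, on a small grid of 500 points.
rng = np.random.default_rng(0)
truth = rng.standard_normal((60, 20, 500))
models = {
    "nwp_model": truth + 0.1 * rng.standard_normal(truth.shape),
    "climate_model": truth + 0.5 * rng.standard_normal(truth.shape),
}
for name, fcst in models.items():
    print(f"{name}: {rmse_by_lead_time(fcst, truth)[:3].round(2)}")
```

Scoring every model against the same short window of real observations is what lets you attribute a particular forecast error to a particular parameterization, rather than to differences in long-run climatology.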
Keith Williams from the UK Met Office Hadley Centre talked about progress on a new seamless assessment initiative there. The aim is to get all of the Met Office models to use the same physics schemes, from the 2-day weather forecast model all the way up to the centennial and regional climate models, except in cases where it is scientifically justifiable to use an alternative scheme. Work by Rodwell and Palmer paved the way for this. One of the big challenges is predicting extreme events (e.g. heavy storms and flash floods) under climate change. Keith demonstrated why this is hard with the example of a particular flood in North Cornwall, which is predicted only by high-resolution weather forecast models on a 1.5km grid, and not by climate models working on a 20km grid. The problem is that we can't say anything about the frequency of such events under future climate change scenarios if the models don't capture them at all.
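As a toy illustration of why resolution matters here (the numbers are invented, not from the talk): a storm cell only a few kilometres across effectively disappears when its rainfall is averaged over a 20km grid box.

```python
import numpy as np

# Synthetic 1-D rainfall field on a 1km grid: a narrow, intense storm cell
# embedded in light background rain (all values invented for illustration).
fine = np.full(100, 1.0)   # 1 mm/h background across 100 km
fine[48:52] = 50.0         # 4 km-wide storm core at 50 mm/h

# Coarse-grain to 20 km cells, roughly a climate-model grid box.
coarse = fine.reshape(-1, 20).mean(axis=1)

print(fine.max())    # 50.0  -> flash-flood intensity is resolved
print(coarse.max())  # 10.8  -> the extreme is averaged away
```

A model whose grid-box rainfall never exceeds ~11 mm/h can't, by construction, tell us how often 50 mm/h events will occur.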
Frank Selten gave a talk on EC-Earth, a project aimed at extending the ECMWF weather forecast model, currently rated as the best in the world for medium-range weather forecasts, and creating a longer-range climate model. Interestingly, the plan is to synchronize this effort with the ECMWF's seasonal model, rather than forking the code. [Note: In conversations after the talk, we speculated on what software engineering problems they might encounter with this, given that the two will be developed at different sites in different countries. My work at the Met Office suggested that a major factor in their success at keeping the weather and climate models integrated is that everything is done in a single building at a single site. EC-Earth might make a good case study for me]. Oh, and they'll be using the climate prediction index introduced by Murphy et al. to assess progress.
Finally, Prashant Sardeshmukh blew my mind with his description of the twentieth century reanalysis project. The aim of this project is to recreate a complete 6-hourly record of near-surface and tropospheric temperatures, extending back to the start of the 20th century, using all the available observational data and a 56-member ensemble of climate model runs. Once they've done that, they plan to go all the way back to 1871. They do this by iteratively improving the estimates until the models and the available field data converge. I amused myself by speculating whether it would be easier to invent a time machine and send a satellite back in time to take the measurements instead…
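That reconciliation step is essentially ensemble data assimilation. As a rough sketch of the flavour of computation involved (this is a generic ensemble Kalman filter update with invented numbers, not the project's actual scheme):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, H, rng):
    """One ensemble Kalman filter analysis step (perturbed-observations form).

    ensemble:    (n_members, n_state) array of model states
    obs:         (n_obs,) observed values
    obs_err_var: observation error variance (scalar, for simplicity)
    H:           (n_obs, n_state) operator mapping model state to obs space
    """
    n_members, n_obs = ensemble.shape[0], len(obs)
    X = ensemble - ensemble.mean(axis=0)       # ensemble anomalies
    Y = X @ H.T                                # anomalies in observation space
    # Covariances estimated from the ensemble itself
    Pyy = (Y.T @ Y) / (n_members - 1) + obs_err_var * np.eye(n_obs)
    Pxy = (X.T @ Y) / (n_members - 1)
    K = Pxy @ np.linalg.inv(Pyy)               # Kalman gain
    # Each member is nudged toward its own noisy copy of the observations
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var), (n_members, n_obs))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Toy example: 56 members, a 10-variable state, 2 observed locations
rng = np.random.default_rng(0)
prior = rng.standard_normal((56, 10))
H = np.zeros((2, 10))
H[0, 0] = H[1, 5] = 1.0                        # only variables 0 and 5 observed
posterior = enkf_update(prior, np.array([1.2, -0.3]), 0.25, H, rng)
print(posterior[:, [0, 5]].mean(axis=0).round(2))  # mean pulled toward the obs
```

Applied step by step forward in time, this kind of update is what lets sparse historical observations constrain a full model state.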