I was particularly looking forward to two AGU keynote talks on Monday – John Holdren (Science and technology advisor to the President) and Julia Slingo (Chief Scientist at the UK Met Office). Holdren’s talk was a waste of time, while Slingo’s was fabulous. I might post later about what I disliked about Holdren’s talk (James Annan has some hints), and you can see both talks online:
- John Holdren “Scientists, Science Advice, and Science Policy in the Obama Administration”
- Julia Slingo “Society’s Growing Vulnerability to Natural Hazards and Implications for Geophysics Research”
Here are my notes from Julia’s talk, for those who want a shorter version than the video.
Julia started with the observation that 2010 was an unprecedented year of geophysical hazards, which presents serious challenges for how we discuss and communicate these events, and especially how to convey risk in a way that’s meaningful. And as most geophysical hazards either start with the weather or are mediated through their impact on the weather, forecasting services like the UK Met Office have to struggle with this on a daily basis.
Julia was asked originally to come and talk about Eyjafjallajökull, as she was in the thick of the response to this emergency at the Met Office. But in putting together the talk, she decided to broaden things to draw lessons from several other major events this year:
- Eyjafjallajökull’s eruptions and their impact on European Air Traffic.
- Pakistan experienced the worst flooding since 1929, with huge loss of life and loss of crops, devastating an area the size of England.
- The Russian heatwave and the forest fires, which was part of the worst drought in Russia since records began.
- The Chinese summer floods and landslides, which was probably tied up with the same weather pattern, and caused the Three Gorges Dam, only just completed, to reach near capacity.
- The first significant space weather storm of the new solar cycle as we head into a solar maximum (and, looking forward, the likelihood that major solar storms will have an impact on global telecommunications, electricity supply and global trading systems).
- And now, in the past week, another dose of severe winter weather in the UK, along with the traffic chaos it always brings.
The big picture is that we are increasingly vulnerable to these geophysical events in an inter-dependent environment: hydro-meteorological events and their impact on marine and coastal infrastructures; space weather events and their impact on satellite communications, aviation, and electricity supply; geological hazards such as earthquakes and volcanoes; and climate disruption and its impact on food and water security, health, and infrastructure resilience.
What people really want to know is “what does it mean to me?” and “what action should I take?”. Which means we need to be able to quantify exposure and vulnerability, and to assess socio-economic impact, so that we can then quantify and reduce the risk. But it’s a complex landscape, with different physical scales (local, regional, global), temporal scales (today, next year, next decade, next century), and responses (preparedness, resilience, adaptation). And it all exists within the bigger picture on climate change (mitigation, policy, economics).
Part of the issue is the shifting context: changing exposure (for example, more people live on the coast and along rivers) and changing vulnerability (for example, our growing dependency on communication infrastructure, power grids, etc.).
And forecasting is hard. Lorenz’s work on chaotic systems has become deeply embedded in meteorological science, with ensemble prediction systems now the main weapon for handling the various sources of uncertainty: initial condition uncertainty, model uncertainty (arising from stochastic unresolved processes and parameter uncertainty), and forecast uncertainty. And we can’t use past forecast assessments to validate future forecasts under conditions of changing climate. The only way to build confidence in a forecast system is to do the best possible underpinning science, and go back to the fundamentals, which means we need to collect the best observational data we can, and think about the theoretical principles.
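Julia didn’t show any code, but the basic idea of an ensemble prediction system is easy to sketch: perturb the initial conditions (to sample observation error) and the model itself (to sample parameter and unresolved-process uncertainty), run each member forward, and treat the spread as a measure of forecast uncertainty. Here’s a toy illustration, with an invented one-variable “atmosphere” and made-up numbers:

```python
import random

def toy_model(state, drift, noise, steps):
    """Advance a scalar 'atmosphere': deterministic drift plus a
    stochastic term standing in for unresolved processes."""
    for _ in range(steps):
        state += drift + random.gauss(0.0, noise)
    return state

def run_ensemble(initial_obs, n_members=50, obs_error=0.5,
                 drift=0.1, noise=0.2, steps=10):
    """Each member perturbs both the initial condition and a model parameter."""
    forecasts = []
    for _ in range(n_members):
        # initial condition uncertainty: perturb the analysis
        state = initial_obs + random.gauss(0.0, obs_error)
        # model uncertainty: perturb a parameter; noise adds stochastic physics
        member_drift = drift * random.uniform(0.8, 1.2)
        forecasts.append(toy_model(state, member_drift, noise, steps))
    return forecasts

random.seed(42)
members = run_ensemble(initial_obs=15.0)
mean = sum(members) / len(members)
spread = (sum((m - mean) ** 2 for m in members) / len(members)) ** 0.5
print(f"ensemble mean: {mean:.2f}, spread: {spread:.2f}")
```

Real systems like the Met Office’s MOGREPS do this with full atmospheric models and carefully designed perturbations, but the structure is the same: many runs, one distribution.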
Eyjafjallajökull
This shouldn’t have been unusual – there are 30 active volcanoes in Iceland, but they’ve been unusually quiet during the period in which aviation travel has developed. Eyjafjallajökull began to erupt in March. But in April it erupted through the glacier, causing a rapid transfer of heat from magma to water. A small volume of water produces a large volume of steam and very fine ash. The eruption then interacted with unfortunate meteorological conditions, which circulated the ash around a high pressure system over the North Atlantic. The North Atlantic Oscillation (NAO) was in a strong negative phase, which causes the jet stream to make a detour north, and then back down over the UK and Western Europe. These negative blocking NAO patterns were unusually frequent from February through March, and then again from April through June.
Normally, ash from volcanoes is just blown away, and normally it’s not as fine. The Volcanic Ash Advisory Centres (VAACs) are responsible for managing the risks. London handles a small region (which includes the UK and Iceland), but if ash originates in your area, it’s considered to be yours to manage, no matter where it then goes. So, as the ash spread over other regions, the UK couldn’t get rid of responsibility!
To assess the risk, you take what you know and feed it into a dispersion model, which is then used to generate a VAAC advisory. These advisories usually don’t say anything about how much ash there is; they just define a boundary of the affected area, and advise not to fly through it. As this eruption unfolded, it became clear there were no-fly zones all over the place. Then the question arose of how much ash there was – people needed to know how much ash, and at what level, to make finer grained decisions about flying risk. The UK VAAC had to do more science very rapidly (within a five day period) to generate more detailed data for planning.
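The Met Office’s operational dispersion model (NAME) is a sophisticated Lagrangian particle model; as a rough intuition for what such a model does, here’s a toy sketch in which ash particles are released at a source, advected by a mean wind, spread by random turbulent steps, and removed as they settle out – all parameter values invented for illustration:

```python
import random

def disperse(n_particles=2000, steps=48, wind=(1.0, 0.3),
             diffusivity=0.5, settle_prob=0.02):
    """Toy Lagrangian dispersion: particles released at the origin are
    advected by a mean wind, spread by random turbulent displacements,
    and removed when they 'fall out' of the air."""
    airborne = [(0.0, 0.0)] * n_particles
    for _ in range(steps):
        next_airborne = []
        for (x, y) in airborne:
            if random.random() < settle_prob:
                continue  # particle deposits and leaves the cloud
            x += wind[0] + random.gauss(0.0, diffusivity)
            y += wind[1] + random.gauss(0.0, diffusivity)
            next_airborne.append((x, y))
        airborne = next_airborne
    return airborne

random.seed(1)
cloud = disperse()
# a crude "advisory boundary": the bounding box of the remaining airborne ash
xs = [p[0] for p in cloud]
ys = [p[1] for p in cloud]
print(f"{len(cloud)} particles still airborne, "
      f"x in [{min(xs):.1f}, {max(xs):.1f}], y in [{min(ys):.1f}, {max(ys):.1f}]")
```

The real advisory problem is much harder: 3D winds, vertical stratification, particle size distributions, and a time-varying source – which is exactly where the uncertainties listed below come in.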
And there are many sources of uncertainty:
- Data on ash clouds is hard to collect, because you cannot fly the normal meteorological aircraft into the zone, as they have jet engines.
- Dispersion patterns. While the dispersion model gave very accurate descriptions of ash gradients, it did poorly on the longer term dispersion. Normally, ash drops out of the air after a couple of days. In this case, ash as old as five days was still relevant, and needed to be captured in the model. Also, the ash became very stratified vertically, making it particularly challenging for advising the aviation industry.
- Emissions characteristics. This rapidly became a multidisciplinary science operation (lots of different experts brought together in a few days). The current models represent the release as a vertical column with no vertical variation. But the plume changed shape dramatically over the course of the eruption. It was important to figure out what was exiting the area downwind, as well as the nature of the plume. Understanding dynamics of plumes is central to the problem, and it’s a hard computational fluid dynamics problem.
- Particle size, as dispersion patterns depend on this.
- Engineering tolerances. For risk based assessment, we need to work with aircraft engine manufacturers to figure out what kinds of ash concentration are dangerous. Needed to provide detailed risk assessment for exceeding thresholds for engine safety.
Some parts of the process are more uncertain than others. For example the formation of the suspended ash plume was a major source of uncertainty, and the ash cloud properties led to some uncertainty. The meteorology, dispersion forecasts, and engineering data on aircraft engines are smaller sources of uncertainty.
The Pakistan Floods
This is more a story of changing vulnerability than changing exposure. It wasn’t unprecedented, but it was very serious. There’s now a much larger population in Pakistan, and particularly more people living along river banks. So it had a very different impact from the last similar flooding in the 1920s.
The floods were caused by a conjunction of two weather systems – the active phase of the summer monsoon, in conjunction with large amplitude waves in the mid-latitudes. The sub-tropical jet, which usually sits well to the north of the Tibetan Plateau, made a huge turn south, down over Pakistan. It caused exceptional cloudbursts over the mountains of western Pakistan.
Could these storms have been predicted? Days ahead, the weather forecast models showed unusually large accumulations – for example, 9 days ahead, the ECMWF ensemble showed a substantial probability of exceeding 100mm of rain over four days. These figures could have been fed into hydrological models to assess the impact on river systems (but weren’t).
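A probability like that comes straight out of the ensemble: it’s simply the fraction of members whose forecast accumulation exceeds the threshold. As a sketch (the numbers below are invented, not the actual ECMWF forecast, though 51 is the size of the ECMWF ensemble):

```python
import random

random.seed(7)
# hypothetical 4-day rainfall accumulations (mm) from a 51-member ensemble
accumulations = [max(0.0, random.gauss(85.0, 40.0)) for _ in range(51)]

threshold_mm = 100.0
n_exceed = sum(1 for a in accumulations if a > threshold_mm)
p_exceed = n_exceed / len(accumulations)
print(f"P(4-day rainfall > {threshold_mm:.0f} mm) = {p_exceed:.0%}")
```

Julia’s point was that this kind of exceedance probability existed days in advance; the missing step was coupling it to hydrological models of the river systems.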
The Russian heatwave
Whereas Eyjafjallajökull was a story of changing exposure, and Pakistan was a story of changing vulnerability, it’s likely that the Russian heatwave was a story of changing climate.
There were seasonal forecasts, and the heatwaves were within the range of the ensemble runs, but nowhere near the ensemble mean. For example, the May 2010 seasonal forecast for July showed a strong warm signal over Russia in the ensemble mean. The two warmest forecasts in the ensemble captured the observed warm pattern and intensity very well. It’s possible that the story here is that the use of past data to validate seasonal forecasts is increasingly problematic under conditions of changing climate, as it gives a probability density function that is too conservative.
More importantly, extreme heat events like this are associated with blocking and a downstream trough. But we don’t yet have enough resolution in the models to capture these well – the capability is just emerging.
We could also have taken these seasonal forecasts and pushed them through to analyze impact on air quality (but didn’t).
And the attribution? It was a blocking event (Martin Hoerling at NOAA has a more detailed analysis), with the same cause as the European heatwave of 2003: part of a normal blocking pattern, but amplified by global warming.
Cumbrian floods
From 17-20 November 2009, there was unprecedented flooding (at least going back two centuries) in Cumbria, in the north of England. The UK Met Office was able to put out a red alert warning two days in advance for severe flooding in the region. It was quite a bold forecast, and they couldn’t have done this a couple of years ago. The forecast was possible thanks to the high resolution 1.5km UK model, which became quasi-operational in May 2009. Now these forecasts are on a scale that is meaningful and useful to hydrologists.
Conclusions
We have made considerable progress on our ability to predict weather and climate extremes, and geophysical hazards. We have made some progress on assessing vulnerability, exposure and socio-economic impact, but these remain a major limiting factor in our ability to provide useful advice. And there is still major uncertainty in quantifying and reducing risk.
The modelling and forecasting needs to be done in a probabilistic framework. Geophysical hazards cross many disciplines and many scales in space and time. We’re moving towards a seamless forecasting system that attempts to bridge the gap between weather and climate forecasting, but there are still problems in bridging the gaps and the scales. Progress depends on observation and monitoring, analysis and modelling, prediction and impacts assessment, and handling and communicating uncertainty. And dialogue with end users is essential – it’s very stimulating, as they challenge the science, and they bring fresh thinking.