Last week I was at the 2012 AGU Fall Meeting. I plan to blog about many of the talks, but let me start with the Tyndall lecture given by Ray Pierrehumbert, on “Successful Predictions”. You can see the whole talk on youtube, so here I’ll try and give a shorter summary.

Ray’s talk spanned 120 years of research on climate change. The key message is that science is a long, slow process of discovery, in which theories (and their predictions) tend to emerge long before they can be tested. We often learn just as much from the predictions that turned out to be wrong as we do from those that were right. But successful predictions eventually form the body of knowledge that we can be sure about, not just because they were successful, but because they build up into a coherent explanation of multiple lines of evidence.

Here are the successful predictions:

1896: Svante Arrhenius correctly predicts that increases in fossil fuel emissions would cause the earth to warm. At that time, much of the theory of how atmospheric heat transfer works was missing, but nevertheless, he got a lot of the process right. He was right that surface temperature is determined by the balance between incoming solar energy and outgoing infrared radiation, and that the balance that matters is the radiation budget at the top of the atmosphere. He knew that the absorption of infrared radiation was due to CO2 and water vapour, and he also knew that CO2 is a forcing while water vapour is a feedback. He understood the logarithmic relationship between CO2 concentrations in the atmosphere and surface temperature. However, he got a few things wrong too. His attempt to quantify the enhanced greenhouse effect was incorrect, because he worked with a 1-layer model of the atmosphere, which cannot capture the competition between water vapour and CO2, and doesn’t account for the role of convection in determining air temperatures. His calculations were incorrect because he had the wrong absorption characteristics of greenhouse gases. And he thought the problem would be centuries away, because he didn’t imagine an exponential growth in use of fossil fuels.
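The logarithmic relationship Arrhenius identified is still the standard first-order rule of thumb: each doubling of CO2 adds roughly the same increment of warming. Here's a minimal sketch of that arithmetic; the function name and the 3°C-per-doubling sensitivity are illustrative assumptions (Arrhenius's own estimate was higher, and modern estimates span roughly 2-4.5°C):

```python
import math

def equilibrium_warming(co2_ppm, co2_ref_ppm=280.0, sensitivity_per_doubling=3.0):
    """Warming (deg C) implied by the logarithmic CO2-temperature relationship.

    sensitivity_per_doubling is an assumed, illustrative value, not
    Arrhenius's own figure; 280 ppm is the usual pre-industrial baseline.
    """
    return sensitivity_per_doubling * math.log2(co2_ppm / co2_ref_ppm)

# One doubling (280 -> 560 ppm) gives exactly one "sensitivity" of warming...
print(equilibrium_warming(560))   # 3.0
# ...and each further doubling adds the same increment again.
print(equilibrium_warming(1120))  # 6.0
```

The key property: going from 280 to 560 ppm warms the planet as much as going from 560 to 1120 ppm, which is why climate sensitivity is always quoted "per doubling".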

Arrhenius, as we now know, was way ahead of his time. Nobody really considered his work again for nearly 50 years, a period we might think of as the dark ages of climate science. The story perfectly illustrates Paul Hoffman’s tongue-in-cheek depiction of how scientific discoveries work: someone formulates the theory, other scientists then reject it, ignore it for years, eventually rediscover it, and finally accept it. These “dark ages” weren’t really dark, of course – much good work was done in this period. For example:

  • 1900: Frank Very worked out the radiation balance, and hence the temperature, of the moon. His results were confirmed by Pettit and Nicholson in 1930.
  • 1902-14: Arthur Schuster and Karl Schwarzschild used a 2-layer radiative-convective model to explain the structure of the sun.
  • 1907: Robert Emden realized that a similar radiative-convective model could be applied to planets, and Gerard Kuiper and others applied this to astronomical observations of planetary atmospheres.

This work established the standard radiative-convective model of atmospheric heat transfer. This treats the atmosphere as two layers; in the lower layer, convection is the main heat transport, while in the upper layer, it is radiation. A planet’s outgoing radiation comes from this upper layer. However, up until the early 1930s, there was no discussion in the literature of the role of carbon dioxide, despite occasional discussion of climate cycles. In 1928, George Simpson published a memoir on atmospheric radiation, which assumed water vapour was the only greenhouse gas, even though, as Richardson pointed out in a comment, there was evidence that even dry air absorbed infrared radiation.

1938: Guy Callendar is the first to link observed rises in CO2 concentrations with observed rises in surface temperatures. But Callendar failed to revive interest in Arrhenius’s work, and made a number of mistakes in things that Arrhenius had gotten right. Callendar’s calculations focused on the radiation balance at the surface, whereas Arrhenius had (correctly) focussed on the balance at the top of the atmosphere. Also, he neglected convective processes, which astrophysicists had already resolved using the radiative-convective model. In the end, Callendar’s work was ignored for another two decades.

1956: Gilbert Plass correctly predicts a depletion of outgoing radiation in the 15 micron band, due to CO2 absorption. This depletion was eventually confirmed by satellite measurements. Plass was one of the first since Callendar to revisit Arrhenius’s work; however, his calculations of climate sensitivity to CO2 were also wrong, because, like Callendar, he focussed on the surface radiation budget, rather than the top of the atmosphere.

1961-2: Carl Sagan correctly predicts very thick greenhouse gases in the atmosphere of Venus, as the only way to explain the very high observed temperatures. His calculations showed that greenhouse gases must absorb around 99.5% of the outgoing surface radiation. The composition of Venus’s atmosphere was confirmed by NASA’s Venus probes in 1967-70.

1959: Bert Bolin and Erik Eriksson correctly predict the exponential increase in CO2 concentrations in the atmosphere as a result of rising fossil fuel use. At that time they did not have good data for atmospheric concentrations prior to 1958, hence their hindcast back to 1900 was wrong, but despite this, their projection for changes forward to 2000 was remarkably good.

1967: Suki Manabe and Dick Wetherald correctly predict that warming in the lower atmosphere would be accompanied by stratospheric cooling. They had built the first completely correct radiative-convective implementation of the standard model applied to Earth, and used it to calculate a +2C equilibrium warming for doubling CO2, including the water vapour feedback, assuming constant relative humidity. The stratospheric cooling was confirmed in 2011 by Gillett et al.

1975: Suki Manabe and Dick Wetherald correctly predict that the surface warming would be much greater in the polar regions, and that there would be some upper troposphere amplification in the tropics. This was the first coupled general circulation model (GCM), with an idealized geography. This model computed changes in humidity, rather than assuming them, as had been the case in earlier models. It showed polar amplification, and some vertical amplification in the tropics. The polar amplification was measured, and confirmed by Serreze et al. in 2009. However, the height gradient in the tropics hasn’t yet been confirmed (nor has it yet been falsified – see Thorne 2008 for an analysis).

1989: Ron Stouffer et al. correctly predict that the land surface will warm more than the ocean surface, and that the southern ocean warming would be temporarily suppressed due to the slower ocean heat uptake. These predictions are correct, although these models failed to predict the strong warming we’ve seen over the Antarctic Peninsula.

Of course, scientists often get it wrong:

1900: Knut Ångström incorrectly predicts that increasing levels of CO2 would have no effect on climate, because he thought the effect was already saturated. His laboratory experiments weren’t accurate enough to detect the actual absorption properties, and even if they had been, the vertical structure of the atmosphere would still allow the greenhouse effect to grow as CO2 is added.

1971: Rasool and Schneider incorrectly predict that atmospheric cooling due to aerosols would outweigh the warming from CO2. However, their model had some important weaknesses, and was shown to be wrong by 1975. Rasool and Schneider fixed their model and moved on. Good scientists acknowledge their mistakes.

1993: Richard Lindzen incorrectly predicts that warming will dry the troposphere, according to his theory that a negative water vapour feedback keeps climate sensitivity to CO2 really low. Lindzen’s work attempted to resolve a long-standing conundrum in climate science. In 1981, the CLIMAP project reconstructed temperatures at the last glacial maximum, and showed very little tropical cooling. This was inconsistent with the general circulation models (GCMs), which predicted substantial cooling in the tropics (e.g. see Broccoli & Manabe 1987). So everyone thought the models must be wrong. Lindzen attempted to explain the CLIMAP results via a negative water vapour feedback. But then the CLIMAP results started to unravel, and newer proxies demonstrated that the models had been right all along: it was the CLIMAP data, and Lindzen’s theory, that were wrong. Unfortunately, bad scientists don’t acknowledge their mistakes; Lindzen keeps inventing ever more arcane theories to avoid admitting he was wrong.

1995: John Christy and Roy Spencer incorrectly calculate that the lower troposphere is cooling, rather than warming. Again, this turned out to be wrong, once errors in satellite data were corrected.

In science, it’s okay to be wrong, because exploring why something is wrong usually advances the science. But sometimes, theories are published that are so bad, they are not even wrong:

2007: Courtillot et al. predicted a connection between cosmic rays and climate change. But they couldn’t even get the sign of the effect consistent across the paper. You can’t falsify a theory that’s incoherent! Scientists label this kind of thing as “not even wrong”.

Finally, there are, of course, some things that scientists didn’t predict. The most important of these is probably the multi-decadal fluctuations in the warming signal. If you calculate the radiative effect of all greenhouse gases, and the delay due to ocean heating, you still can’t reproduce the flat period in the temperature trend that was observed in 1950-1970. While this wasn’t predicted, we ought to be able to explain it after the fact. Currently, there are two competing explanations. The first is that the ocean heat uptake itself has decadal fluctuations, although models don’t show this. If climate sensitivity is at the low end of the likely range (say 2°C per doubling of CO2), it’s possible we’re seeing a decadal fluctuation around a weaker warming signal. The other explanation is that aerosols took some of the warming away from GHGs. This explanation requires a higher value for climate sensitivity (say around 3°C), but with a significant fraction of the warming counteracted by an aerosol cooling effect. If this explanation is correct, it’s a much more frightening world, because it implies much greater warming as CO2 levels continue to increase. The truth is probably somewhere between these two. (See Armour & Roe, 2011 for a discussion)

To conclude, climate scientists have made many predictions about the effect of increasing greenhouse gases that have proven to be correct. They have earned a right to be listened to, but is anyone actually listening? If we fail to act upon the science, will future archaeologists wade through AGU abstracts and try to figure out what went wrong? There are signs of hope – in his re-election acceptance speech, President Obama revived his pledge to take action, saying “We want our children to live in an America that …isn’t threatened by the destructive power of a warming planet.”

It’s AGU abstract submission day, and I’ve just submitted one to a fascinating track organised by John Cook, entitled “Social Media and Blogging as a Communication Tool for Scientists”. The session looks like it will be interesting, as there are submissions from several prominent climate bloggers. I decided to submit an abstract on moderation policies for climate blogs:

Don’t Feed the Trolls: An analysis of strategies for moderating discussions on climate blogs
A perennial problem in any online discussion is the tendency for discussions to get swamped with non-constructive (and sometimes abusive) comments. Many bloggers use some form of moderation policy to filter these out, to improve the signal to noise ratio in the discussion, and to encourage constructive participation. Unfortunately, moderation policies have disadvantages too: they are time-consuming to implement, introduce a delay in posting contributions, and can lead to accusations of censorship and anger from people whose comments are removed.

In climate blogging, the problem is particularly acute because of the politicization of the discourse. The nature of comments on climate blogs varies widely. For example, on a blog focussed on the physical science of climate, comments on posts might include personal abuse, accusations of misconduct and conspiracy, repetition of political talking points, dogged pursuit of obscure technical points (whether related or not to the original post), naive questions, concern trolling (negative reactions posing as naive questions), polemics, talk of impending doom and catastrophe, as well as some honest and constructive questions about the scientific topic being discussed. How does one decide which of these comments to allow? And if some comments are to be removed, what should be done with them?

In this presentation, I will survey a number of different moderation strategies used on climate blogs (along with a few notable examples from other kinds of blogs), and identify the advantages and disadvantages of each. The nature of the moderation strategy has an impact on the size and kind of audience a blog attracts. Hence, the choice of moderation strategy should depend on the overall goals for the blog, the nature of the intended audience, and the resources (particularly time) available to implement the strategy.

As today is the deadline for proposing sessions for the AGU fall meeting in December, we’ve submitted a proposal for a session to explore open climate modeling and software quality. If we get the go ahead for the session, we’ll be soliciting abstracts over the summer. I’m hoping we’ll get a lively session going with lots of different perspectives.

I especially want to cover the difficulties of openness as well as the benefits, as we often hear a lot of idealistic talk on how open science would make everything so much better. While I think we should always strive to be more open, it’s not a panacea. There’s evidence that open source software isn’t necessarily better quality, and of course, there are plenty of people using lack of openness as a political weapon, without acknowledging just how many hard technical problems there are to solve along the way, not least because there’s a lack of consensus over the meaning of openness among its advocates.

Anyway, here’s our session proposal:

TITLE: Climate modeling in an open, transparent world

AUTHORS (FIRST NAME INITIAL LAST NAME): D. A. Randall1, S. M. Easterbrook4, V. Balaji2, M. Vertenstein3

INSTITUTIONS (ALL): 1. Atmospheric Science, Colorado State University, Fort Collins, CO, United States. 2. Geophysical Fluid Dynamics Laboratory, Princeton, NJ, United States. 3. National Center for Atmospheric Research, Boulder, CO, United States. 4. Computer Science, University of Toronto, Toronto, ON, Canada.

Description: This session deals with climate-model software quality and transparent publication of model descriptions, software, and results. The models are based on physical theories but implemented as software systems that must be kept bug-free, readable, and efficient as they evolve with climate science. How do open source and community-based development affect software quality? What are the roles of publication and peer review of the scientific and computational designs in journals or other curated online venues? Should codes and datasets be linked to journal articles? What changes in journal submission standards and infrastructure are needed to support this? We invite submissions including experience reports, case studies, and visions of the future.

One of the things that strikes me about discussions of climate change, especially from those who dismiss it as relatively harmless, is a widespread lack of understanding on how non-linear systems behave. Indeed, this seems to be one of the key characteristics that separate those who are alarmed at the prospect of a warming climate from those who are not.

At the AGU meeting this month, Kerry Emanuel presented a great example of this in his talk on “Hurricanes in a Warming Climate”. I only caught his talk by chance, as I was slipping out of the session in the next room, but I’m glad I did, because he made an important point about how we think about the impacts of climate change, and in particular, showed two graphs that illustrate the point beautifully.

Kerry’s talk was an overview of a new study that estimates changes in damage from tropical cyclones with climate change, using a new integrated assessment model. The results are reported in detail in a working paper at the World Bank. The report points out that the link between hurricanes and climate change remains controversial. So, while Atlantic hurricane power has more than doubled over the last 30 years, and model forecasts show an increase in the average intensity of hurricanes in a warmer world, there is still no clear statistical evidence of a trend in damages caused by these storms, and hence a great deal of uncertainty about future trends.

The analysis is complicated by several factors:

  • Increasing insurance claims from hurricane damage in the US have a lot to do with growing economic activity in vulnerable regions. Indeed, expected economic development in the regions subject to tropical storm damage means that there are certain to be big increases in damage even if there were no warming at all.
  • The damage is determined more by when and where each storm makes landfall than it is by the intensity of the storm.
  • There simply isn’t enough data to detect trends. More than half of the economic damage due to hurricanes in the US since 1870 was caused by just 8 storms.

The new study by Emanuel and colleagues overcomes some of these difficulties by simulating large numbers of storms. They took the outputs of four different Global Climate Models, using the A1B emissions scenario, and fed them into a cyclone generator model to simulate thousands of storms, comparing the characteristics of these storms with those that have caused damage in the US in the last few decades, and then adjusting the damage estimates according to anticipated changes in population and economic activity in the areas impacted (for details, see the report).

The first thing to note is that the models forecast only a small change in hurricanes, typically a slight decrease in medium-strength storms and a slight increase in more intense storms. For example, at first sight, the MIROC model indicates almost no difference:

Probability density for storm damage on the US East Coast, generated from the MIROC model for current vs. year 2100, under the A1B scenario, for which this model forecasts a global average temperature increase of around 4.5C. Note that the x-axis is a logarithmic scale: 8 means $100 million, 9 means $1 billion, 10 means $10 billion, etc. (source: Figure 9 in Mendelsohn et al, 2011)

Note particularly that at the peak of the graph, the model shows a very slight reduction in the number of storms (consistent with a slight decrease in the overall frequency of hurricanes), while on the upper tail, the model shows a very slight increase (consistent with a forecast that there’ll be more of the most intense storms). The other three models show slightly bigger changes by the year 2100, but overall, the graphs seem very comforting. It looks like we don’t have much to worry about (at least as far as hurricane damage from climate change is concerned). Right?

The problem is that the long tail is where all the action is. The good news is that there appears to be a fundamental limit on storm intensity, so the tail doesn’t really get much longer. But the problem is that it only takes a few more of these very intense storms to make a big difference in the amount of damage caused. Here’s what you get if you multiply the probability by the damage in the above graph:

Changing risk of hurricane damage due to climate change. Calculated as probability times impact. (Source: courtesy of K. Emanuel, from his AGU 2011 talk)

That tiny change in the long tail generates a massive change in the risk, because the system is non-linear. If most of the damage is done by a few very intense storms, then you only need a few more of them to greatly increase the damage. Note, in particular, what happens at 12 on the damage scale – these are trillion-dollar storms. [Update: Kerry points out that the total hurricane damage is proportional to the area under the curves of the second graph].
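A toy calculation makes the arithmetic concrete. The numbers below are invented round figures for illustration, not values from the study: expected annual damage is just the sum of probability times damage over storm categories, and a shift of a fraction of a percentage point into the most damaging category dominates the total:

```python
# Hypothetical storm categories with round-number damages, in dollars.
damage = {"moderate": 1e9, "major": 1e10, "extreme": 1e12}

# Invented annual probabilities: the "future" case shifts just 0.2
# percentage points of probability from "moderate" into "extreme" --
# a change barely visible on a probability density plot.
today  = {"moderate": 0.500, "major": 0.100, "extreme": 0.001}
future = {"moderate": 0.498, "major": 0.100, "extreme": 0.003}

def expected_damage(probs):
    """Expected annual damage = sum of probability x damage over categories."""
    return sum(p * damage[cat] for cat, p in probs.items())

print(f"today:  ${expected_damage(today):.2e}")   # about 2.5e9
print(f"future: ${expected_damage(future):.2e}")  # about 4.5e9
```

Even though the two probability distributions look almost identical, the expected damage nearly doubles – exactly the pattern in the two graphs above: an invisible change in the probability density becomes a massive change once you multiply by the damage.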

The key observation here is that the things that matter most to people (e.g. storm damage) do not change linearly as the climate changes. That’s why people who understand non-linear systems tend to worry much more about climate change than people who do not.

Here’s the call for papers for a workshop we’re organizing at ICSE next May:

The First International Workshop on Green and Sustainable Software (GREENS’2012)

(In conjunction with the 34th International Conference on Software Engineering (ICSE 2012), Zurich, Switzerland, June 2-9, 2012)

Important Dates:

  • 17th February 2012 – paper submission
  • 19th March 2012 – notification of acceptance
  • 29th March 2012 – camera-ready
  • 3rd June 2012 – workshop

Workshop theme and goals: The focus of the GREENS workshop is the engineering of green and sustainable software. Our goal is to bring together academics and practitioners to discuss research initiatives, challenges, ideas, and results in this critically important area of the software industry. To this end GREENS will both discuss the state of the practice, especially at the industrial level, and define a roadmap, both for academic research and for technology transfer to industry. GREENS seeks contributions addressing, but not limited to, the following list of topics:

Concepts and foundations:

  • Definition of sustainability properties (e.g. energy and power consumption, greenhouse gas emissions, waste and pollutants production), their relationships, their units of measure, their measurement procedures in the context of software-intensive systems, their relationships with other properties (e.g. response time, latency, cost, maintainability);
  • Green architectural knowledge, green IT strategies and design patterns;

Greening domain-specific software systems:

  • Energy-awareness in mobile software development;
  • Mobile software systems scalability in low-power situations;
  • Energy-efficient techniques aimed at optimizing battery consumption;
  • Large and ultra-large scale green information systems design and development (including inter-organizational effects)

Greening of IT systems, data and web centers:

  • Methods and approaches to improve sustainability of existing software systems;
  • Customer co-creation strategies to motivate behavior changes;
  • Virtualization and offloading;
  • Green policies, green labels, green metrics, key indicators for sustainability and energy efficiency;
  • Data center and storage optimization;
  • Analysis, assessment, and refactoring of source code to improve energy efficiency;
  • Workload balancing;
  • Lifecycle extension

Greening the process:

  • Methods to design and develop greener software systems;
  • Managerial and technical risks for a sustainable modernization;
  • Quality & risk assessments, tradeoff analyses between energy efficiency, sustainability and traditional quality requirements;

Case studies, industry experience reports and empirical studies:

  • Empirical data and analysis about sustainability properties, at various granularity levels: complete infrastructure, or nodes of the infrastructure (PCs, servers, and mobile devices);
  • Studies to define technical and economic models of green aspects;
  • Return on investment of greening projects, reasoning about the triple bottom line of people, planet and profits;
  • Models of energy and power consumption, at various granularity levels;
  • Benchmarking of power consumption in software applications;

Guidelines for Submission: We are soliciting papers in two distinct categories:

  1. Research papers describing innovative and significant original research in the field (maximum 8 pages);
  2. Industrial papers describing industrial experience, case studies, challenges, problems and solutions (maximum 8 pages).

Please submit your paper online through EasyChair (see the GREENS website). Submissions should be original and unpublished work. Each submitted paper will undergo a rigorous review process by three members of the Program Committee. All types of papers must conform to the ICSE submission format and guidelines. All accepted papers will appear in the ACM Digital Library.

Workshop Organizers:

  • Patricia Lago (VU University Amsterdam, The Netherlands)
  • Rick Kazman (University of Hawaii, USA)
  • Niklaus Meyer (Green IT SIG, Swiss Informatics Society, Switzerland)
  • Maurizio Morisio (Politecnico di Torino, Italy)
  • Hausi A. Mueller (University of Victoria, Canada)
  • Frances Paulisch (Siemens Corporate Technology, Germany)
  • Giuseppe Scanniello (Università della Basilicata, Italy)
  • Olaf Zimmermann (IBM Research, Zurich, Switzerland)

Program committee:

  • Marco Aiello, University of Groningen, Netherlands
  • Luca Ardito, Politecnico di Torino, Italy
  • Ioannis Athanasiadis, Democritus Univ. of Thrace, Greece
  • Rami Bahsoon, University College London, UK
  • Ivica Crnkovic, Malardalen University, Sweden
  • Steve Easterbrook, University of Toronto, Canada
  • Hakan Erdogmus, Things Software
  • Anthony Finkelstein, University College London, UK
  • Matthias Galster, University of Groningen, Netherlands
  • Ian Gorton, Pacific Northwest National Laboratory, USA
  • Qing Gu, VU University Amsterdam, Netherlands
  • Wolfgang Lohmann, Informatics and Sustainability Research, Swiss Federal Laboratories for Materials Science and Technology, Switzerland
  • Lin Liu, School of Software, Tsinghua University, China
  • Alessandro Marchetto, Fondazione Bruno Kessler, Italy
  • Henry Muccini, University of L’Aquila, Italy
  • Stefan Naumann, Trier University of Applied Sciences, Environmental Campus, Germany
  • Cesare Pautasso, University of Lugano, Switzerland
  • Barbara Pernici, Politecnico di Milano, Italy
  • Giuseppe Procaccianti, Politecnico di Torino, Italy
  • Filippo Ricca, University of Genova
  • Antony Tang, Swinburne University of Tech., Australia
  • Antonio Vetrò, Fraunhofer IESE, USA
  • Joost Visser, Software Improvement Group and Knowledge Network Green Software, Netherlands
  • Andrea Zisman, City University London, UK

Here’s some events related to climate modeling and software/informatics that look interesting for the rest of this year. I won’t be able to make it to all of them (I’m trying to cut down on travel, for various reasons), but they all look tempting:

And then of course, in December, it’s the AGU Fall Meeting. Abstracts are due tomorrow, so we’ll be busy for the next 24 hours. Here’s a selection of conference tracks that look fascinating to me. In the Union sessions there some tracks that look at the big picture:

In the Education sessions, they’ve introduced a whole set of tracks on climate literacy:

And of course, many sessions on climate modeling and climate data in the Global Environmental Change sessions. I’ll go to many of these, but the following are ones I’ve especially enjoyed in previous years:

Of course, the Informatics sessions are where all the action is. I’m glad to see there’s a track on Software Engineering Challenges again this year, and there are some interesting sessions on visualization, decision support, open source and data quality (among my pet themes!):

Finally, a couple of session in the Public Affairs division look interesting:

Phew. Looks like it’ll be a busy week.

Next year’s International Conference on Software Engineering (ICSE), to be held in Zurich, has an interesting conference slogan: Sustainable Software for a Sustainable World

In many ways, ICSE is my community. By that I mean, this is the conference where I have presented my research most often, and is generally my first choice of venue for new papers. This is an important point: one of the most crucial pieces of advice I give to new PhD students is to “find your community”. To be successful as a researcher (and especially as an academic) you have to build a reputation for solid research within an existing research community. Which means figuring out early which community you belong to: who will be the audience for your research results? who will understand your work well enough to review your papers? And eventually, which community will you be looking to for letters of support for job applications, tenure reviews, and so on? And once you’ve figured out which community you belong to, you have to attend the conferences and workshops run by that community, and present your work to them as often as you can, and you have to get to know the senior people in that community. Or rather, they have to get to know you.

The problem is, in recent years, I’ve gone off ICSE. Having spent a lot of time in the last few years mixing with a different research community (climate science, and especially geoscientific model development), I come back to the ICSE community with a different perspective, and what I see now (in general) is a rather insular community, focussed on a narrow, technical set of research questions that seem largely irrelevant to anything that matters, and a huge resistance to inter-disciplinary research. This view crystallized for me last fall, when I attended a two-day workshop on “the Future of Software Engineering”, but came away very disappointed (my blog post from the workshop captured this very well).

I should be clear, I don’t mean to write off the entire community – there’s some excellent people in the ICSE community, doing fascinating research – many of them I regard as good friends. But the conference itself seems ever less relevant. The keynote talks always suck. And the technical program tends to be dominated by a large number of dull papers: incremental results on unimaginative research problems.

Perhaps this is a result of the way conference publication works. Thomas Anderson sets out a fascinating analysis of why this might be so for computer systems conferences, in his 2009 paper “Conference Reviewing Considered Harmful”. Basically, the accept/reject process for conferences that use a peer-review system creates a perverse incentive for researchers to write papers that are just good enough to get accepted, but no better. His analysis is consistent with my own observations – people talk about “the least publishable unit” of research. The net result is a conference full of rather dull papers, where nobody takes risks on more exciting research topics.

There’s an interesting contrast with the geosciences community here, where papers are published in journals rather than conferences. For example, at the AGU and EGU conferences, you just submit an abstract, and various track chairs decide whether to let you present it as a talk in their track, or whether it should appear as a poster. Researchers are only allowed to submit one abstract as first author, which means the conference is really a forum for each researcher to present her best work over the past year, with no strong relationship to the peer-reviewed publication process. This makes for big conferences, and very variable quality presentations. Attendees have to do a little more work in advance to figure out which talks might be worth attending. But the perverse incentive identified by Anderson is missing altogether – each presenter is incentivized to present her best work, no matter what stage the research is at.

Which brings me back to ICSE. Next year’s conference chairs have chosen the slogan “Sustainable Software for a Sustainable World” for the conference. An excellent rallying call, but I sincerely hope they can do more with this than most conferences do – such conference slogans are usually irrelevant to the actual conference program, which is invariably business as usual. Of course, the term sustainability has been wildly overused recently, to the point that it’s in danger of becoming meaningless. So, how could ICSE make it something more than a meaningless slogan?

First, one has to acknowledge that an understanding of sustainability requires some systems thinking, and the ability to analyze multiple interacting systems. The classic definition, due to the Brundtland Commission, is that it refers to humanity’s ability to meet its needs, without compromising the needs of future generations. As Garvey points out, this is entirely inadequate, as it’s impossible to figure out how to balance our resource needs with those of an unknown number of potential future earthlings. A better approach is to break the concept down into sustainability in different, overlapping systems. Sverdrup and Svensson do this by breaking it down to three inter-related concepts: natural sustainability, social sustainability, and economic sustainability. Furthermore, they are hierarchically related: sustainability of social and economic activity is constrained by physical limits such as thermodynamics and mass conservation (e.g. forget a sustained economy if we screw the planet’s climate), and economic sustainability is constrained by social limits such as a functioning civil society.

How does this apply to ICSE? Well, I would suggest applying the sustainability concept to a number of different systems:

  • sustainability of the ICSE community itself, which would include nurturing new researchers, and fixing the problems of perverse incentives in the paper review processes. But this only makes sense within:
  • sustainability of scientific research as a knowledge discovery process, which would include analysis of the kinds of research questions a research community ought to tackle, and how it should engage with society. Here, I think ICSE has some serious re-assessment to do, especially with respect to its tendency to reject inter-disciplinary work.
  • sustainability of software systems that support human activity, which would suggest a switch in attention by the ICSE community away from the technical processes of de novo software creation, and towards questions of how software systems actually make life better for people, and how software systems and human activity systems co-evolve. An estimate I heard at the CHASE workshop is that only 20% of ICSE papers make any attempt to address human aspects.
  • sustainability of software development as an economic activity, which suggests a critical look at how existing software corporations work currently, but perhaps more importantly, exploration of new economic models (e.g. open source; end-user programming; software startups; mashups, etc)
  • the role of software in social sustainability, by which I mean a closer look at how software systems help (or hinder) the creation of communities, social norms, social equity and democratic processes.
  • the role of software in natural sustainability, by which I mean green IT topics such as energy-aware computing, as well as the broader role of software in understanding and tackling climate change.

A normal ICSE would barely touch on any of these topics. But I think next year’s chairs could create some interesting incentives to ensure the conference theme becomes more than just a slogan. At the session on SE for the planet that we held at ICSE 2009, someone suggested that, since climate change will make everything else unsustainable, ICSE should insist that all papers submitted to future conferences demonstrate some relevance to tackling climate change (which is brilliant, but so radical that we have to shift the Overton window first). A similar suggestion at one of the CHASE meetings was that all ICSE papers must demonstrate relevance to human & social aspects, or else prove that their research problem can be tackled without this. For ICSE 2012, perhaps this should be changed to simply reject all papers that don’t contribute somehow to creating a more sustainable world.

I think such changes might help to kick ICSE into some semblance of relevancy, but I don’t kid myself that they are likely. How about, as a start, a set of incentives that reward papers that address sustainability in one or more of the senses above? Restrict paper awards to such papers, or create a new award structure for this purpose. Give such papers prominence in the program, and relegate other papers to the dead times like right after lunch, or late in the evening. Or something.

But a good start would be to abolish the paper submission process altogether, to decouple the conference from the process of publishing peer-reviewed papers. That’s probably the biggest single contribution to making the conference more sustainable, and more relevant to society.

Hope everyone had a relaxing holiday and a great new year. I still have a whole pile of notes from the AGU meeting to polish up and post, but unfortunately they’re on a disk that crashed during my travels back from the meeting, so I’m keeping my fingers crossed that I can recover them.

In the meantime, here are three interesting upcoming workshops this year:

On Thursday, Tim Palmer of the University of Oxford and the European Centre for Medium-Range Weather Forecasts (ECMWF) gave the Bjerknes lecture, with a talk entitled “Towards a Community-Wide Prototype Probabilistic Earth-System Model”. For me, it was definitely the best talk of this year’s AGU meeting. [Update: the video of the talk is now up at the AGU webcasts page]

I should note of course, that this year’s Bjerknes lecture was originally supposed to have been given by Stephen Schneider, who sadly died this summer. Stephen’s ghost seems to hover over the entire conference, with many sessions beginning and ending with tributes to him. His photo was on the screens as we filed into the room, and the session began with a moment of silence for him. I’m disappointed that I never had a chance to see one of Steve’s talks, but I’m delighted they chose Tim Palmer as a replacement. And of course, he’s eminently qualified. As the introduction said: “Tim is a fellow of pretty much everything worth being a fellow of”, and one of the few people to have won both the Rossby and the Charney awards.

Tim’s main theme was the development of climate and weather forecasting models, especially the issue of probability and uncertainty. He began by reminding us that the name Bjerknes is iconic for this. Vilhelm Bjerknes set weather prediction on its current scientific course, by posing it as a problem in mathematical physics. His son, Jacob Bjerknes, pioneered our understanding of the mechanisms that underpin seasonal forecasting, particularly air-sea coupling.

If there’s one fly in the ointment though, it’s the issue of determinism. Lorenz put a stake into the heart of determinism, through his description of the butterfly effect. As an example, Tim showed the weather forecast for the UK for 13 Oct 1987, shortly before the “great storm” that turned the town of Sevenoaks [where I used to live!] into “No-oaks”. The forecast models pointed to a ridge moving in, whereas what developed was really a very strong vortex causing a serious storm.
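Lorenz’s point is easy to demonstrate numerically. Here’s a minimal sketch (my own illustration, not anything from Tim’s talk) that integrates the Lorenz 1963 equations twice, with initial conditions differing by one part in a hundred million, and measures how far apart the trajectories end up:

```python
# Two integrations of the Lorenz (1963) system from almost identical
# initial conditions, illustrating the butterfly effect: a perturbation
# of one part in a hundred million grows until the two trajectories
# bear no resemblance to each other.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations (crude, but fine for a demo)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def separation(a, b):
    """Euclidean distance between two states."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)      # the "butterfly": a 1e-8 perturbation
initial = separation(a, b)
for _ in range(2500):            # integrate to t = 25 model time units
    a, b = lorenz_step(a), lorenz_step(b)
print(separation(a, b) / initial)   # growth by many orders of magnitude
```

After a short time the two integrations are effectively uncorrelated, which is exactly why a single deterministic forecast run is so fragile.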

Nowadays the forecast models are run many hundreds of times per day, to capture the inherent uncertainty in the initial conditions. A (retrospective) ensemble forecast for 13 Oct 1987 shows this was an inherently unpredictable set of circumstances. The approach now taken is to convert a large number of runs into a probabilistic forecast. This gives a tool for decision-making across a range of sectors that takes the uncertainty into account. And then, if you know your cost function, you can use the probabilities from the weather forecast to decide what to do. For example, if you were setting out to sail in the English Channel on 15th October 1987, you’d need both the probabilistic forecast *and* some measure of the cost/benefit of your voyage.
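The sailing example follows the classic cost/loss decision model from the forecasting literature: if taking protective action costs C, and being caught unprotected costs L, then expected loss is minimized by acting whenever the forecast probability exceeds C/L. A minimal sketch, with purely hypothetical numbers:

```python
# The classic cost/loss decision model: if protective action (e.g.
# postponing a voyage) costs C, and being caught unprotected costs L,
# expected loss is minimized by protecting whenever the forecast
# probability p of the adverse event exceeds C/L.
# The numbers below are hypothetical, purely for illustration.

def should_protect(p_event, cost, loss):
    """Return True if acting on the forecast minimizes expected loss."""
    return p_event > cost / loss

# A voyage worth 10 units, with a 5-unit cost to postpone it, should be
# postponed only when the storm probability exceeds 5/10 = 0.5:
print(should_protect(p_event=0.7, cost=5, loss=10))   # True
print(should_protect(p_event=0.3, cost=5, loss=10))   # False
```

The point is that the same probabilistic forecast leads different users to different (rational) decisions, depending on their own cost function.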

The same probabilistic approach is used in seasonal forecasting, for example for the current forecasts of the progress of El Niño.

Moving on to the climate arena, what are the key uncertainties in climate predictions? The three key sources are: initial uncertainty, future emissions, and model uncertainty. As we go for longer and longer timescales, model uncertainty dominates – it becomes the paramount issue in assessing reliability of predictions.

Back in the 1970s, life was simple. Since then, the models have grown dramatically in complexity as new earth system processes have been added. But at the heart of the models, the essential paradigm hasn’t changed. We believe we know the basic equations of fluid motion, expressed as differential equations. It’s quite amazing that 23 mathematical symbols are sufficient to express virtually all aspects of motion in air and oceans. But the problem comes in how to solve them. The traditional approach is to project them (e.g. onto a grid), to convert them into a large number of ordinary differential equations. And then the other physical processes have to be represented in a computationally tractable way. Some of this is empirical, based on observations, along with plausible assumptions on how these processes work.
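As a toy illustration of that projection step (nothing like the schemes operational models actually use), here is the 1-D advection equation du/dt = -c·du/dx reduced to one ordinary differential equation per grid point, stepped with first-order upwind differencing:

```python
# Toy version of "project the equations onto a grid": the 1-D advection
# equation du/dt = -c * du/dx becomes one ordinary differential equation
# per grid point, stepped here with first-order upwind differencing.
# Operational models use far more sophisticated spectral and
# semi-Lagrangian schemes; this only illustrates the idea.

N, dx, dt, c = 100, 1.0, 0.5, 1.0    # grid points, spacing, time step, wind speed
u = [1.0 if 40 <= i < 60 else 0.0 for i in range(N)]   # a square "blob" of tracer

def step(u):
    """Advance every grid-point ODE: du_i/dt = -c * (u_i - u_{i-1}) / dx (periodic)."""
    return [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(N)]

for _ in range(40):
    u = step(u)
# The blob's centre of mass has moved 20 grid points downstream, but the
# blob is smeared out: numerical diffusion, one of the discretization
# errors that real models must manage.
```

Even this tiny example exhibits the core difficulty: everything happening below the grid scale is invisible to the discretized equations, which is where parameterizations come in.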

These deterministic, bulk-parameter parameterizations are based on the presumption of a large ensemble of subgrid processes (e.g. deep convective cloud systems) within each grid box, which then means we can represent them by their overall statistics. Deterministic closures have a venerable history in fluid dynamics, and we can incorporate these subgrid closures into the climate models.

But there’s a problem. Observations indicate a shallow power law for atmospheric energy wavenumber spectra. In other words, there’s no scale separation between the resolved and unresolved scales in weather and climate. The power law is consistent with what one would deduce from the scaling symmetries of the Navier-Stokes equations, but it’s violated by conventional deterministic parameterizations.

But does it matter? Surely if we can do a half-decent job on the subgrid scales, it will be okay? Tim showed a lovely cartoon from Schertzer and Lovejoy (1993).

As pointed out in the IPCC WG1 Chp8:

“Nevertheless, models still show significant errors. Although these are generally greater at smaller scales, important large-scale problems also remain. For example, deficiencies remain in the simulation of tropical precipitation, the El Niño-Southern Oscillation and the Madden-Julian Oscillation (an observed variation in tropical winds and rainfall with a time scale of 30 to 90 days). The ultimate source of most such errors is that many important small-scale processes cannot be represented explicitly in models, and so must be included in approximate form as they interact with larger-scale features.”

The figures from the IPCC report show the models doing a good job over the 20th century. But what’s not made clear is that each model has had its bias subtracted out before this was plotted, so you’re looking at anomalies relative to the model’s own climatology. In fact, there is an enormous spread of the models against reality.

At present, we don’t know how to close these equations, and a major part of the model uncertainty lies in these closure approximations. So, a missing box on the diagram of the processes in Earth System Models is “UNCERTAINTY”.

What does the community do to estimate model uncertainty? The state of the art is the multi-model ensemble (e.g. CMIP5). The idea is to poll across the models to assess how broad the distribution is. But as everyone involved in the process understands, there are problems that are common to all of the models, because they are all based on the same basic approach to the underlying equations. And they also typically have similar resolutions.
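At its simplest, “polling across the models” is just treating each model’s projection as one sample from a distribution. A sketch of the arithmetic, with entirely hypothetical model names and warming numbers:

```python
import statistics

# The simplest reading of "poll across the models": treat each model's
# projection as one sample and look at the spread. The model names and
# warming numbers below are hypothetical, purely to show the arithmetic.
projections = {   # hypothetical model -> projected warming (deg C)
    "model_a": 2.1, "model_b": 3.4, "model_c": 2.8,
    "model_d": 4.0, "model_e": 3.1,
}
values = list(projections.values())
mean = statistics.mean(values)
spread = statistics.stdev(values)
print(f"ensemble mean {mean:.2f} C, spread (1 sigma) {spread:.2f} C")
# The caveat from the talk: if all the samples share structural
# assumptions, this spread understates the true uncertainty.
```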

Another pragmatic approach, to overcome the limitation of the number of available models, is to use perturbed physics ensembles – take a single model and perturb the parameters systematically. But this approach is blind to structural errors, because only one model is used as the basis.

A third approach is to use stochastic closure schemes for climate models. You replace the deterministic formulae with stochastic formulae. Potentially, we have a range of scales at which we can try this. For example, Tim has experimented with cellular automata to capture missing processes, which is attractive because it can also capture how the subgrid processes move from one grid box to another. These ideas have been implemented in the ECMWF models (and are described in the book Stochastic Physics and Climate Modelling).
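As a caricature of the idea (Tim’s actual schemes, such as the cellular automata and backscatter approaches, are far more sophisticated), replacing a deterministic closure with a stochastic one can be as simple as multiplying the bulk tendency by a random factor on each call, in the spirit of ECMWF’s stochastically perturbed tendencies. The tendency function and noise amplitude below are invented for illustration:

```python
import random

# A caricature of swapping a deterministic closure for a stochastic one:
# instead of returning a single "bulk" subgrid tendency per grid box,
# multiply it by a random factor on every call. The tendency function
# and noise amplitude are invented, purely for illustration.

def deterministic_closure(grid_box_state):
    """Pretend bulk-parameter closure: tendency proportional to state."""
    return 0.1 * grid_box_state

def stochastic_closure(grid_box_state, amplitude=0.5, rng=random):
    """Same closure, with multiplicative noise of the given amplitude."""
    noise = rng.uniform(-amplitude, amplitude)
    return deterministic_closure(grid_box_state) * (1.0 + noise)

rng = random.Random(42)   # seeded, so the experiment is repeatable
samples = [stochastic_closure(2.0, rng=rng) for _ in range(10000)]
print(sum(samples) / len(samples))   # close to the deterministic value, 0.2
```

The ensemble of calls averages out to the deterministic tendency, but individual grid boxes now vary, which is what lets an ensemble of model runs explore the subgrid uncertainty.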

So where do we go from here? Tim identified a number of reasons he’s convinced stochastic-dynamic parameterizations make sense:

1) More accurate accounts of uncertainty. One test is to assess the skill of seasonal forecasts using different types of ensemble. Weisheimer et al 2009 scored the ensembles according to how well they captured the uncertainty – stochastic physics ensembles did slightly better than the other types of ensemble.
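One standard way to score how well a probabilistic forecast captures what actually happens is the Brier score: the mean squared difference between the forecast probability and the 0/1 outcome, where lower is better. (This is a generic illustration; Weisheimer et al used their own skill measure.)

```python
# The Brier score: mean squared difference between forecast probability
# and the 0/1 outcome (lower is better). A generic probabilistic skill
# score, shown here for illustration only.

def brier_score(forecast_probs, outcomes):
    """forecast_probs: probabilities in [0,1]; outcomes: 1 if the event occurred."""
    pairs = list(zip(forecast_probs, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# A perfectly confident, correct forecast scores 0; a confident, wrong
# one scores 1; hedged forecasts land in between:
print(brier_score([1.0], [1]))          # 0.0
print(brier_score([0.0], [1]))          # 1.0
print(brier_score([0.7, 0.2], [1, 0]))  # (0.3**2 + 0.2**2) / 2, about 0.065
```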

2) Stochastic closures could be more accurate. For example, Berner et al 2009 experimented with adding stochastic backscatter up the spectrum, imposed on the resolved scales. To evaluate it, they looked at model bias. Using the ECMWF model, they increased the resolution by a factor of 5, which is computationally very expensive, but reduces the bias in the model. They showed the backscatter scheme reduces the bias of the model in a way that’s not dissimilar to the increased-resolution model. It’s like adding symmetric noise, but it means that the model on average does the right thing.

3) Taking advantage of exascale computing. Tim recently attended a talk by Don Grice, IBM Chief Engineer, on getting ready for exascale computing. Grice said “There will be a tension between energy efficiency and error detection”. What he meant was that if you insist on bit-reproducibility you will pay an enormous premium in energy use. So the end of bit-reproducibility might be in sight for High Performance Computing.

To Tim, this is music to his ears, as he thinks stochastic approaches will be the solution to this. He gave the example of Lyric Semiconductor, which is launching a new type of processor with 1000 times the performance, but at the cost of some accuracy – in other words, probabilistic computing.

4) More efficient use of human resources. The additional complexity in earth system models comes at a price – huge demands on human resources. For many climate modelling labs, the demands are too great. So perhaps we should pool our development teams, so that we’re not all busy trying to replicate each other’s codes.

Could we move to a more community-wide approach? It happened in the aerospace industry in Europe, when the various countries got together to form Airbus. Is it a good idea for climate modelling? Institutional directors take a dogmatic view that it’s a bad idea. The argument is that we need model diversity to get good estimates of uncertainty. Tim doesn’t want to argue against this, but points out that once we have a probabilistic modelling capability, we can test this claim objectively – that is, we can test whether the multi-model ensemble really does better than a stochastic approach.

When we talk about modelling, it covers a large spectrum, from idealized mathematically tractable models through to comprehensive mathematical models. But this has led to a separation of the communities. The academic community develops the idealized models, while the software engineering groups in the met offices build the brute-force models.

Which brings Tim to the grand challenge: the academic community should help develop prototype probabilistic Earth System Models, based on innovative and physically robust stochastic-dynamics models. The effort has started already, at the Isaac Newton Institute. They are engaging mathematicians and climate modellers, looking at stochastic approaches to climate modelling. They have already set up a network, and Tim encouraged people who are interested to subscribe.

Finally, Tim commented on the issue of how to communicate the science in this post-Cancun, post-Climategate world. He went to a talk about how climate scientists should become much more emotional in communicating climate science [presumably the authors session the previous day]. Tim wanted to give his own reading of this. There is a wide body of opinion that the cost of major emissions cuts is not justified given current levels of uncertainty in climate predictions (and this body of opinion has strong political traction). Repeatedly appealing to the precautionary principle, and to our grandchildren, is not an effective response: opponents can bring out pictures of their own grandchildren, saying they don’t want them to grow up in a country bankrupted by bad climate policies.

We might not be able to move forward from the current stalemate without improving the accuracy of climate predictions. And are we (as scientists and government) doing all we possibly can to assess whether climate change will be disastrous, or something we can adapt to? Tim gives us 7/10 at present.

One thing we could do is to integrate NWP and seasonal to interannual prediction into this idea of seamless prediction. NWP and climate diverged in the 1960s, and need to come together again. If he had more time, he would talk about how data assimilation can be used as a powerful tool to test and improve the models. NWP models run at much finer resolution than climate models, but are enormously computationally expensive. So are governments giving the scientists all the tools they need? In Europe, they’re not getting enough computing resources to put onto this problem. So why aren’t we doing all we possibly can to reduce these uncertainties?

Update: John Baez has a great in-depth interview with Tim over at Azimuth.

To follow on from the authors session on Wednesday morning, Michael Oppenheimer, from Princeton, gave the inaugural Stephen Schneider Global Change Lecture, with a talk entitled “Scientists, Expert Judgment, and Public Policy: What is Our Proper Role?” (see the webcast here)

Michael’s theme was about how (and when) scientists should engage in broader communication in the public arena. His aim was to address three issues: the doubts that scientists often have about engaging in public communication, strategies for people who aren’t Carl Sagans (or Stephen Schneiders), and some cautionary tales about the difficulties.

First some context. There is a substantial literature on the relationship between scientists and broader communities, going back at least to CP Snow, and through to Naomi Oreskes. CP Snow provides a good starting point. In his “Two Cultures” lecture, Snow launched a diatribe against Britain’s educated elite. Strip away the critique of class structures, and you get an analysis of the difficulty most political leaders have in comprehending the science that sheds light on how the world works. There have been some changes since then – in particular, the culture of political leaders is no longer as highbrow as it used to be. Snow argued that the industrial revolution was a mixed bag, which brought huge inequalities. He saw scientists as wiser, more ethical, and more likely to act in the interest of society than others. But he also saw that they were poor at explaining their own work, making their role in public education problematic. One cannot prove that the world has taken a better path because of the intervention of scientists, but one can clearly show that scientists have raised the level of public discourse.

But science communication is hard to do. Messages are easily misunderstood, and it’s not clear who is listening to us, and when, or even whether anyone is listening at all. So why get involved? Michael began by answering the standard objections:

It takes time. Are we as a community obligated to do this? Can’t we stay in our labs while the policymakers get on with it? Answer: If we don’t engage, we leave congress (for example) with the option of seeking advice from people who are less competent to provide it.

Can we minimize involvement just by issuing reports and leaving it at that? Answer: reports need to be interpreted. For example, the IPCC AR4 stated that “warming of the climate system is unequivocal”. But what does this mean? A reasonably intelligent person could ask all sorts of questions about what it covers. The IPCC intended “unequivocal” to refer to the fact of warming, not to the human attribution of that warming. But last week at COP Cancun, that double meaning was widely exploited.

Well someone has to do the dirty work, but I’m not so good at it, so I’ll let others do it. Answer: we may no longer have this choice. Ask the people who were at the centre of climategate, many of whom were swept up in the story whether they liked it or not (and some of the people swept up in it were no more than recipients of some of the emails). We’re now at the centre of a contentious public debate, and it’s not up to the institutions, but to the people who make up those institutions to participate.

Do we have an obligation? Answer: Public money funds much of our work, including our salaries. Because of this, we have an obligation not just to publish, but to think about how others might use our research. We don’t spend enough time thinking about the scientific context in which our findings will be understood and used.

So like it or not, we cannot avoid the responsibility to communicate our science to broader audiences. But in doing this, our organisations need to be able to distinguish fair criticism from outside, where responding to them will strengthen our institutions, from unsupported attacks, which are usually met with silence.

What are our options?

Take a partisan position (for a candidate or a policy) that is tied to your judgement on the science. Probably this is not to everyone’s taste. People worry that being seen as partisan will damage science. But visible participation by scientists in the political process does not in itself damage the collective reputation of science and the scientific community. Problems occur when scientific credentials are used to support a political position that has nothing to do with the science. (Michael cited the example of Bill Frist using his medical credentials to make pronouncements for political reasons, on a case he wasn’t qualified to comment on.) Make sure you are comfortable in your scientific skin if you go the political route.

Take sides publicly about the policy implications of your research (e.g. blog about it, write letters, talk to your congressperson, etc). This is a political act, and is based both on science and on other considerations. The further from your own expertise you go in making pronouncements, the shakier ground you are on. For example, it is far outside the expertise of most people in the room to judge the viability of different policy options on climate change. But if we’re clear what kind of value judgements we’re making, and how they relate to our expertise, then it’s okay.

Can we stop speaking as experts when we talk about value issues? The problem is that people wander over the line all the time without worrying about it, and the media can be lazy about doing due diligence on finding people with appropriate expertise. That doesn’t mean we shouldn’t take the opportunity to fix this. If you become concerned about an issue and want to speak out on it, do take the time to understand the relevant literature. Half-truths taken out of context can be the most damaging thing of all. And it’s intoxicating being asked to give expert opinions. So we need to keep our wits about us and be careful. For example, make use of scripts provided by assessment reports.

We should not be reticent about expressing value judgements that border on our areas of expertise, but we should be clear that those value judgements don’t necessarily carry more weight than other people’s value judgements.

Participate in community activities such as the IPCC, NAS, panels, AGU outreach, etc. The emphasis we place on these implies some judgement. As more of us speak in public, there will be more open disagreement about the details (for example, different scientific opinions about likelihood of ice sheets melting this century). The IPCC doesn’t disparage divergent views, but it doesn’t tolerate people who don’t accept evidence-based assessments.

Avoid all of it (but is this even possible?). Even if you avoid sitting on panels where implications of the research will be discussed, or refuse to discuss applied aspects of your work, you’re still not safe, as the CRU email issue showed.

Above all, we all have a citizen’s right to express an opinion, and some citizens might think our opinions carry special weight because of our expertise. But we have no right to throw a temper tantrum because the policy choices don’t go the way we would like. Scientists are held in higher regard than most professional communities (although there isn’t much competition here). But we also need to be psychologically ready for when there are media stories about areas we are expert in, and nobody comes to seek our opinion.

We’re not a priesthood, we are fallible. We can contribute to the public debate, but we don’t automatically get a privileged role.

So some advice:

  • Don’t be rushed. Our first answer might not be our best answer – we often need to reflect. Michael pointed out the smartest response he ever gave a reporter was “I’ll call you back”.
  • Think about your audience in advance, and be prepared for people who don’t want to listen to you or hear you. People tend to pick surrogates for expertise, usually ones who reflect their own worldview. E.g. Al Gore was well received among progressives, but on the right, people were attuned to other surrogates. Discordant threads often aren’t accommodated; they tend to be ignored. You could try putting aside your moral principles while serving up the science, if your audience has a different ideological view from yours. For example, if you disagree with the Wall Street Journal’s editorial stance on science, adjust your message when speaking to its readers – cocktail parties might be more important than universities for education.
  • Expect to be vilified, but don’t return the favour (Michael read out some of the hate mail he has received at this point). You might even be subjected to legal moves and complaints of misconduct. E.g. Inhofe’s list of scientists who he claimed were implicated in the CRU emails, and for whom he recommended investigation. His criterion seems to have been anyone involved in IPCC processes who ever received any of the CRU emails (even if they never replied). Some people on the list have never spoken out publicly about climate change.
  • Don’t hide your biases; think them over and lay them out in advance. For example, Michael once asked a senior colleague why he believed climate sensitivity was around 1.5°C, rather than being in the 2 to 4.5°C range assessed by the national academies. He replied that he just didn’t think that humans could have that much impact on the climate. This is a belief, though, rather than an evidence-based position, and it should be clear up front, not hidden in the weeds.
  • Keep it civil. Michael has broken this rule in the past (e.g. getting into food fights on TV). But the worst outcome would be to let this divide us, whereas we’re all bound together by the same principles and ethics that underpin science.

And finally, to repeat Stephen Schneider’s standard advice: The truth is bad enough; Our integrity should never be compromised; Don’t be afraid of metaphors; and distinguish when speaking about values and when speaking as an expert.

I was particularly looking forward to two AGU keynote talks on Monday – John Holdren (Science and technology advisor to the President) and Julia Slingo (Chief Scientist at the UK Met Office). Holdren’s talk was a waste of time, while Slingo’s was fabulous. I might post later about what I disliked about Holdren’s talk (James Annan has some hints), and you can see both talks online:

Here’s my notes from Julia’s talk, for those who want a shorter version than the video.

Julia started with the observation that 2010 was an unprecedented year of geophysical hazards, which presents serious challenges for how we discuss and communicate about them, and especially how to communicate risk in a way that’s meaningful. And as most geophysical hazards either start with the weather or are mediated through impacts on the weather, forecasting services like the UK Met Office have to struggle with this on a daily basis.

Julia was asked originally to come and talk about Eyjafjallajökull, as she was in the thick of the response to this emergency at the Met Office. But in putting together the talk, she decided to broaden things to draw lessons from several other major events this year:

  • Eyjafjallajökull’s eruptions and their impact on European Air Traffic.
  • Pakistan experienced the worst flooding since 1929, with huge loss of life and loss of crops, devastating an area the size of England.
  • The Russian heatwave and the forest fires, which was part of the worst drought in Russia since records began.
  • The Chinese summer floods and landslides, which was probably tied up with the same weather pattern, and caused the Three Gorges Dam, only just completed, to reach near capacity.
  • The first significant space weather storm of the new solar cycle as we head into a solar maximum (and, looking forward, the likelihood that major solar storms will have an impact on global telecommunications, electricity supply and global trading systems).
  • And now, in the past week, another dose of severe winter weather in the UK, along with the traffic chaos it always brings.

The big picture is that we are increasingly vulnerable to these geophysical events in an inter-dependent environment: Hydro-meteorological events and their impact on Marine and Coastal Infrastructures; Space Weather events and their impact on satellite communications, aviation, and electricity supply; Geological hazards such as earthquakes and volcanoes; and Climate Disruption and its impact on food and water security, health, and infrastructure resilience.

What people really want to know is “what does it mean to me?” and “what action should I take?”. Which means we need to be able to quantify exposure and vulnerability, and to assess socio-economic impact, so that we can then quantify and reduce the risk. But it’s a complex landscape, with different physical scales (local, regional, global), temporal scales (today, next year, next decade, next century), and responses (preparedness, resilience, adaptation). And it all exists within the bigger picture on climate change (mitigation, policy, economics).

Part of the issue is the shifting context, with changing exposure (for example, more people live on the coast, and along rivers), changing vulnerability (for example our growing dependency on communication infrastructure, power grids, etc).

And forecasting is hard. Lorenz’s work on chaotic systems has become deeply embedded in meteorological science, with ensemble prediction systems now the main weapon for handling the various sources of uncertainty: initial condition uncertainty, model uncertainty (arising from stochastic unresolved processes and parameter uncertainty), and forecast uncertainty. And we can’t use past forecast assessments to validate future forecasts under conditions of changing climate. The only way to build confidence in a forecast system is to do the best possible underpinning science, and go back to the fundamentals, which means we need to collect the best observational data we can, and think about the theoretical principles.

Eyjafjallajökull

This shouldn’t have been unusual – there are 30 active volcanoes in Iceland, but they’ve been unusually quiet during the period in which aviation travel has developed. Eyjafjallajökull began to erupt in March. But in April it erupted through the glacier, causing a rapid transfer of heat from magma to water. A small volume of water produces a large volume of steam and very fine ash. The eruption then interacted with unfortunate meteorological conditions, which circulated the ash around a high pressure system over the North Atlantic. The North Atlantic Oscillation (NAO) was in a strong negative phase, which causes the jet stream to make a detour north, and then back down over the UK and Western Europe. This produced more frequent blocking patterns from February through March, and then again from April through June.

Normally, ash from volcanoes is just blown away, and normally it’s not as fine. The Volcanic Ash Advisory Centres (VAACs) are responsible for managing the risks. London handles a small region (which includes the UK and Iceland), but if ash originates in your area, it’s considered to be yours to manage, no matter where it then goes. So, as the ash spread over other regions, the UK couldn’t get rid of responsibility!

To assess the risk, you take what you know and feed it into a dispersion model, which is then used to generate a VAAC advisory. These advisories usually don’t say anything about how much ash there is; they just define a boundary of the affected area, and advise not to fly through it. As this eruption unfolded, it became clear there were no-fly zones all over the place. Then the question arose of how much ash there was – people needed to know how much ash, and at what level, to make finer-grained decisions about flying risk. The UK VAAC had to do more science very rapidly (within a five day period) to generate more detailed data for planning.
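
To give a feel for what a dispersion model does, here is a toy Lagrangian random-walk sketch (not the model the VAACs actually run; the wind and diffusivity numbers are invented for illustration): particles are released at a source, advected by a mean wind, and jittered to represent turbulent mixing, and the area they cover becomes a crude “advisory boundary”:

```python
import random

def disperse(n_particles=2000, steps=48, wind=(1.0, 0.3), diffusivity=0.5):
    """Toy Lagrangian dispersion: advect particles with a mean wind and
    add a random walk to represent turbulent diffusion."""
    random.seed(0)
    particles = [(0.0, 0.0)] * n_particles  # all released at the source
    for _ in range(steps):
        particles = [
            (x + wind[0] + random.gauss(0, diffusivity),
             y + wind[1] + random.gauss(0, diffusivity))
            for x, y in particles
        ]
    return particles

# Crude "advisory boundary": the bounding box containing the particles.
cloud = disperse()
xs = [p[0] for p in cloud]
ys = [p[1] for p in cloud]
print(f"affected area roughly x in [{min(xs):.0f}, {max(xs):.0f}], "
      f"y in [{min(ys):.0f}, {max(ys):.0f}]")
```

Real models add vertical structure, particle fall-out, and time-varying winds from the weather forecast – which is exactly where the uncertainties listed below come in.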

And there are many sources of uncertainty:

  • Data on ash clouds is hard to collect, because you cannot fly the normal meteorological aircraft into the zone, as they have jet engines.
  • Dispersion patterns. While the dispersion model gave very accurate descriptions of ash gradients, it did poorly on the longer term dispersion. Normally, ash drops out of the air after a couple of days. In this case, ash as old as five days was still relevant, and needed to be captured in the model. Also, the ash became very stratified vertically, making it particularly challenging for advising the aviation industry.
  • Emissions characteristics. This rapidly became a multidisciplinary science operation (lots of different experts brought together in a few days). The current models represent the release as a vertical column with no vertical variation. But the plume changed shape dramatically over the course of the eruption. It was important to figure out what was exiting the area downwind, as well as the nature of the plume. Understanding dynamics of plumes is central to the problem, and it’s a hard computational fluid dynamics problem.
  • Particle size, as dispersion patterns depend on this.
  • Engineering tolerances. For risk-based assessment, we need to work with aircraft engine manufacturers to figure out what kinds of ash concentration are dangerous. They needed to provide detailed risk assessments for exceeding thresholds for engine safety.

Some parts of the process are more uncertain than others. For example the formation of the suspended ash plume was a major source of uncertainty, and the ash cloud properties led to some uncertainty. The meteorology, dispersion forecasts, and engineering data on aircraft engines are smaller sources of uncertainty.

The Pakistan Floods

This is more a story of changing vulnerability rather than changing exposure. It wasn’t unprecedented, but it was very serious. There’s now a much larger population in Pakistan, and particularly more people living along river banks. So it had a very different impact from the last similar flooding in the 1920s.

The floods were caused by a conjunction of two weather systems: the active phase of the summer monsoon, and large amplitude waves in the mid-latitudes. The sub-tropical jet, which usually sits well to the north of the Tibetan Plateau, made a huge turn south, down over Pakistan. This caused exceptional cloudbursts over the mountains of western Pakistan.

Could these storms have been predicted? Days ahead, the weather forecast models showed unusually large accumulations – for example 9 days ahead, the ECMWF showed a probability of exceeding 100mm over four days. These figures could have been fed into hydrological models to assess impact on river systems (but weren’t).
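
The kind of probability quoted here comes straight from counting ensemble members. A minimal sketch (the member values below are invented for illustration, not real ECMWF data):

```python
# Hypothetical ensemble of 4-day rainfall accumulations (mm) at one
# location; each value is one ensemble member's forecast.
members = [62, 85, 140, 31, 118, 95, 210, 74, 133, 52,
           101, 88, 45, 167, 79, 122, 58, 93, 148, 66]

# Exceedance probability = fraction of members above the threshold.
threshold = 100.0
prob = sum(1 for m in members if m > threshold) / len(members)
print(f"P(4-day accumulation > {threshold:.0f} mm) = {prob:.0%}")
# → P(4-day accumulation > 100 mm) = 40%
```

It is exactly this kind of probabilistic output that could have been passed on to hydrological models as a flood-risk input.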

The Russian heatwave

Whereas Eyjafjallajökull was a story of changing exposure, and Pakistan was a story of changing vulnerability, it’s likely that the Russian heatwave was a story of changing climate.

There were seasonal forecasts, and the heatwaves were within the range of the ensemble runs, but nowhere near the ensemble mean. For example, the May 2010 seasonal forecast for July showed a strong warm signal over Russia in the ensemble mean. The two warmest forecasts in the ensemble captured very well the observed warm pattern and intensity. It’s possible that the story here is that the use of past data to validate seasonal forecasts is increasingly problematic under conditions of changing climate, as it gives a probability density function that is too conservative.

More importantly, seasonal forecasts of extreme heat are associated with blocking and downstream trough. But we don’t have enough resolution in the models to do this well yet – the capability is just emerging.

We could also have taken these seasonal forecasts and pushed them through to analyze impact on air quality (but didn’t).

And the attribution? It was a blocking event (Martin Hoerling at NOAA has a more detailed analysis). It has the same cause as the European heatwaves in 2003. It’s part of a normal blocking pattern, but amplified by global warming.

Cumbrian floods

From 17–20 November 2009, there was unprecedented flooding (at least in the past two centuries) in Cumbria, in the north of England. The UK Met Office was able to put out a red alert warning two days in advance for severe flooding in the region. It was quite a bold forecast, and they couldn’t have done this a couple of years ago. The forecast was possible because of the high resolution 1.5km UK Model, which became quasi-operational in May 2009. Now these forecasts are on a scale that is meaningful and useful to hydrologists.

Conclusions

We have made considerable progress on our ability to predict weather and climate extremes, and geophysical hazards. We have made some progress on assessing vulnerability, exposure and socio-economic impact, but these are a major limiting factor in our ability to provide useful advice. And there is still major uncertainty in quantifying and reducing risk.

The modelling and forecasting needs to be done in a probabilistic framework. Geophysical hazards cross many disciplines and many scales in space and time. We’re moving towards a seamless forecasting system, that attempts to bridge the gap between weather and climate forecasting, but there are still problems in bridging the gaps and bridging the scales. Progress depends on observation and monitoring, analysis and modelling, prediction and impacts assessment, handling and communicating uncertainty.  And dialogue with end users is essential – it’s very stimulating, as they challenge the science, and they bring fresh thinking.

And finally, a major barrier is access to supercomputing power – we could do so much more if we had more computing capability.

I spent most of Wednesday attending a series of sessions featuring bestselling authors from the AGU Global Environmental Change division. The presenters were authors of books published in the last couple of years, all on various aspects of climate change, and all aimed at a more general audience. As the chairs of the track pointed out, it’s not news when an AGU member publishes a book, but it is news when so many publish books aimed at a general audience in a short space of time – you don’t normally walk into a bookstore and see a whole table of books authored by AGU members.

As the session unfolded, and the authors talked about their books and their reasons for writing them, it became clear that there’s a groundswell here: scientists who have realised that the traditional mode by which science gets communicated to broader society just isn’t working with respect to climate change, and that a different approach is needed, along with a few people from outside the climate science community who have stepped in to help overcome the communication barrier.

The first two books were on geoengineering. Unfortunately, I missed the first, Eli Kintisch’s “Hack the Planet: What we Talk About When we Talk About Geoengineering”, and the second speaker, Jeff Goodell, author of “How to Cool the Planet”, didn’t make it. So instead, I’ll point to the review of both books that appeared in Nature Reports back in April. As the review makes clear, both books are very timely, given how little public discussion there has been on geoengineering, and how important it is that we think much more carefully about this because we’re likely to be approaching a point where people will attempt geoengineering in desperation.

One interesting point made in the Nature Reports review is the contrast in styles, between Eli’s book, which is much more of a science book suitable for a scientifically literate audience, and which digs deeper into how various geoengineering proposals might work, versus Jeff’s book, which is more lively in style, illustrating each chapter through the work of a particular scientist.

This theme of how to get the ideas across, and especially how to humanize them, came out throughout the session as the other authors presented their experiences.

In place of Jeff’s talk, Brian Fagan, author of “The great warming: Climate Change and the Rise and Fall of Civilization”, filled in. Brian is an anthropologist by training, but has focussed much of his career on how to explain research in his field to a broader audience. Drawing on snippets from his book, Brian gave a number of examples of how human civilization in the past has been affected by changing climate. He talked about how a warmer European climate in medieval times allowed the Vikings to explore widely across the North Atlantic (in open boats!), and how the Mayan civilization, which lasted from 200BC to 900AD, was eventually brought down by a series of droughts. The Mayans took water very seriously, and many of their rituals focussed on water (or the lack of it), while the Mayan pyramids also acted as water towers. In the late 19th century, the Indian monsoons failed, and millions died, at a time when the British Raj was exporting rice from India to bring down food prices in Europe.

The interesting thing about all these examples is that it’s not the case that climate change causes civilization to fall. It’s more like the ripples spreading out from a stone dropped into a calm pool – the spreading ripples are the social and economic consequences of climate changes, which in some cases make adaptation possible, and in other cases lead to the end of a civilization.

But most of what Brian wanted to talk about was why he wrote the book in the first place, or rather why he got involved in communicating issues such as climate change to broader audiences. He taught as a professor for 36 long(!) years. But he was strongly affected by experiences at the beginning of his career, in his early 20s, when he spent a year in the Zambezi valley. Here, rainfall is unpredictable, and when the rains don’t come people starve. He’s thought a lot since then about the experience. More recently, seeing the results of the Hadley Centre models that forecast increasing droughts through the next few decades, he realised that the story of drought in human history needed to be told.

But there’s a challenge. As an academic, from a research culture, you have to deal with the “publish or perish” culture. If we want to reach the public, something has to change. The NSF doesn’t provide research funds to explain to the public what we do. So he had to raise money by other means to fund his work, mostly from the private sector. Brian made much of this contrast – studies of (faintly disgusting) ancient artefacts for their own sake are fundable, but attempts to put this work in context and tell the larger stories are not. Brian was accused by one University administrator of doing “inappropriate research”. And yet, archeology is about human diversity – about people, so telling these stories about human diversity ought to be central to the field.

Having written the book, he found himself on the bestseller lists, and got onto the Daily Show. This was quite an experience – Jon Stewart reads everything in the book, and he sits right up close to you and is in your face. Brian’s comment was “Thank god I had taught graduate seminars”, as he was experienced in dealing with probing questions.

His other advice was: if you want to reach out, you have to know why. People will ask, and “because I love it” isn’t enough – you really have to have a good reason. Always think about how your work relates to others and to wider society. Use your research to tell stories, write clearly, and draw on personal experience. But above all, you must have passion – there is no point writing for a wider audience without it.

The next talk was by Claire L. Parkinson, author of “Coming Climate Crisis? Consider the Past, Beware the Big Fix”. Claire’s motivation for writing the book was her concerns about geoengineering, and the need to explain the risks. She mentioned that if she’d realised Eli and Jeff were writing their books, she probably wouldn’t have.

She also felt she needed to deal with the question of how polarized and confused the community has become about climate change. Her goal was to lessen the confusion and to encourage caution about geoengineering. A central message of the book is that the earth’s climate has been changing for 4.6 billion years, but humans were not around for most of this. Climate change can happen much more abruptly than what humans have experienced. And in the face of abrupt climate change, people tend to assume geoengineering can get us out of the problem. But geoengineering can have serious unintended consequences, because we are not all knowing, no matter how good our models and analysis are.

Claire gave a quick, chapter-by-chapter overview of the book: Chapter 2 gives an overview of 4.6 billion years of global changes, including tectonics, extra-terrestrial events, changes in orbit, etc; Chapter 3 covers abrupt climate changes, putting the last 20 years in comparison with the historical record from ice cores, with the key point being that the earth’s system can and does change abruptly, with the beginning and end of the Younger Dryas period as the most obvious examples. Chapter 4 is a short history of human impacts on climate. The big impacts began with human agriculture, and with the industrial revolution.

Chapter 5 looks at the future, and the consensus view that the future looks bleak if business as usual continues. The IPCC scenarios show consequences of warming over the coming century. In this chapter, Claire also included a section at the end about scientists who disagree with the IPCC assessment. Her feeling is that we shouldn’t be disrespectful to the skeptics, because we might not be right. However, she has been criticized for this [see for example, Alan Robock's review, which explains exactly what he thinks is wrong about this approach].

The next few chapters then explore geoengineering. Chapter 6 looks at things that were done in the past with good intentions, but went wrong. An example is the introduction of prickly pear cactus into Australia. Within decades it had grown so profusely that areas were destroyed by it and homesteads had to be abandoned. Chapter 7 explains the commonly touted geoengineering schemes, including space mirrors, carbon capture and sequestration, white roofs (which actually make sense), stratospheric sulfates, artificial trees, and ocean fertilization. Chapter 8 covers examples of attempts at a smaller scale to change the weather, such as cloud seeding, lessening hailstorms, and attempts to tame hurricanes (Jim Fleming, the next speaker had many more examples). These examples demonstrate lots of interest and ingenuity, but none were really successful, and therefore they provide a cautionary tale.

The last three chapters are also cautionary: just because we have a scientific consensus doesn’t mean we’re right. It’s unfortunate that people express things with 100% certainty, because it gives the impression that we’re not open-minded scientists. Chapter 10 is on climate models – no matter how wonderful they are, and no matter how wonderful the data records are, neither are perfect. So the models might provide misleading results; for example, arctic sea ice has declined far faster than the models predicted. Chapter 11 is on the social pressures, and was the toughest chapter to write. There is both peer pressure and media pressure to conform to the consensus. Most people who got into the earth sciences in Claire’s generation never expected their work to have strong public interest. Scientists are now expected to provide soundbites to the media, which then get distorted and cause problems. Finally, chapter 12 looks at the alternatives – if geoengineering is too risky, what else can we do?

The next speaker was Jim Fleming, author of “Fixing the Sky: Why the History of Climate Engineering Matters”. Jim is a historian, and points out that most history of science books are heroic stories, whereas this book was his first tragicomedy. Throughout the book, hubris (on the part of the scientists involved) is a strong theme.

As an aside, Jim gave a simple reason why you should be nice to historians, best captured in the Samuel Johnson quote “God can’t alter the past, but historians can”. He also pointed out that we should take heed of Brundtland’s point that current environmental crises require that we move beyond scientific compartmentalization, to draw the very best of our intellectual reserves from every field of endeavour.

Jim was the only historian invited to a NASA meeting at Ames in 2007, on managing solar radiation. He was rather amused when someone got on the mic to apologise for the problems they were having managing the temperature in the meeting room (and here they were, talking about managing the planet’s climate!). There were clearly some serious delusions among the scientists in the room about the prospect. As a result, he wrote an essay, “The Climate Engineers”, which was published in Wilson Quarterly, but was clearly a bit too short to do justice to the topic.

So the book set out to bring these issues to the public, and in particular the tragic history of public policy in weather and climate engineering. For climate change and geoengineering, people have been claiming we don’t have a history to draw on, that we are the first generation to think about these things, and that we don’t have time to ponder the lessons of history because the problem is too urgent. Jim says otherwise – there is a history to draw on, and we have to understand this history and learn from it. If you don’t study history, everything is unprecedented!!

Geoengineering will alter relationships, not just between humans and climate, but among humans. If you think someone else is modifying your climate, you’re going to have a fundamentally altered relationship with them! He gave some fascinating anecdotes to illustrate this point. For example, one of the NCAR gliders was attacked with a Molotov cocktail – it turns out people thought they were “stealing the sky-water”, while in fact the reason they were using a glider was to minimize the impact on clouds.

An early example of attempts to manage the weather is James Espy, who, having studied volcanoes, realized there’s always more rain after an eruption. In 1839, he proposed that we should burn large fires across the Appalachians to make more rain and to purify the air (because the extra rain would wash out the “miasmas”).

About the same time, Eliza Leslie wrote a short story “The Rain King“, which captures many of the social dynamics of geoengineering very well. It’s the story of the opening of a new Rain Office, which has the machinery to control the weekend weather, and sets up a democratic process for people to vote on what weather they want for the weekend. The story is brilliant in its depiction of the different petitioners, and the cases they make, along with the biases of the rain office staff themselves (they want to go for rain to test the machinery), the eventual trumping of them all by a high society lady, and the eventual disappointment of everyone concerned at the outcome of the process.

Another example focusses on Wexler (von Neumann’s right hand man) and the story of numerical computing in the 1940s and 1950s. At the time, one could imagine decommissioned WW2 flight squadrons going out to bomb a developing hurricane to stop it. Wexler and von Neumann both endorsed this idea. von Neumann’s 1955 essay “Can we survive technology?” warned that climate control could lead to serious social consequences. Meanwhile, Wexler was concerned with other ways of fighting the Russians, opening up access to space, etc. While studying the Weather Watch program, he explored how rocket trails affect the ozone layer, and explored the idea of an ozone bomb that could take out the ozone layer, as well as weapons that could warm or cool the planet.

James Van Allen, discoverer of the van Allen belt, was also a geoengineer. He explored ways to change the earth’s magnetic field using A-bombs. His work was mainly focussed on “bell ringing” to test the impact of these bombs on the magnetic field. But there were also attempts to weaponize this, e.g. to cause a magnetic storm over Moscow.

Jim wrapped up with a crucial point about tipping points: if we attempt to tip the earth, where will it roll? If we do end up trying geoengineering, we will have to be interdisciplinary, international, and intergenerational about it.

The next speaker was Edward Parson, co-author with Andy Dessler of “The science and politics of global climate change: a guide to the debate”. The book is a broad overview of climate science, intended as a teaching resource. The collaboration in writing the book was interesting – Andy is an atmospheric scientist, Edward is an expert in climate policy. But neither knew much about the other discipline, so they had to collaborate and learn, rather than just dividing up the chapters. This meant they ended up looking in much more detail at the interactions between the science and the politics.

It was hard to navigate a path through the treacherous waters of communicating the scientific knowledge as a basis for action: not just what we know, but how we know it and why we know it. In particular, they didn’t want to over-reach, to say scientific knowledge by itself is sufficient to know what to do in policymaking. Rather, it requires a second step, to specify something you wish to do, or something you wish to avoid, in order to understand policy choices. With climate change it has become much easier to demonstrate to anyone with a rational approach (as opposed to those who do magical thinking) that there are very clear arguments for urgent policy action, but you have to make this second step clear.

So why does everyone try to frame their policy disagreements as scientific disagreements? Edward pointed out that in fact most people are just doing “evidence shopping”, on one side or another. He’s been to many congressional hearings, where intelligent, thoughtful legislators, who are quite ignorant about the science, pound the table saying “the science says this, the science says that”. Scientific assessment processes are an important weapon in curtailing this evidence shopping. They restrain the ability of legislators to misuse the science to bolster their preferred policy response. A scientific assessment process is not the same as collective authorship of a scientific paper. Its purpose is to assemble and survey the science.

Many of the fights over climate policy can actually be understood as different positions on how to manage risks under uncertainty. Many of these positions take an extreme stance on the management of risk. Some of this can be traced back to the 1970s, when it was common for advocates to conflate environmental issues with criminal law. For example, a manufacturer of CFCs argued against action to protect the ozone layer by asking “what happened to the presumption of innocence?”, ignoring the fact that chemicals aren’t humans.

In criminal proceedings, there are two ways to be wrong – you can convict the innocent, or release the guilty. We have a very strong bias in favour of the defendant, because one of these errors is regarded as much more serious than the other – we always try and err on the side of not convicting innocent people. This rhetoric of “the burden of proof” and “presumption of innocence” has faded in environmental issues, but its legacy lives on. Now we hear lots of rhetoric about “science-based” policy, for example the claim that the Kyoto protocol isn’t based on the science. In effect, this is the same rhetorical game, with people demanding to delay policy responses until there is ever more scientific evidence.

But science is conservative in this, in the same way that criminal law is. As a scientist, it is much worse to be promiscuous in accepting new scientific claims that turn out to be wrong, than it is to reject new claims that turn out to be right, largely because of the cost of getting it wrong, and directing research funds to a line of research that doesn’t bear fruit.

When there are high stakes for managing public risk, this perception about the relative magnitude of the cost of the two types of error no longer applies. So attacks on Kyoto as not being exclusively based on the science are technically correct, but they are based on an approach to decision making that is dangerously unbalanced. For example, some people say to assessment bodies, “don’t even tell me about a risk until you have evidence that allows you to be absolutely certain about it”. Which is nuts – it’s the role of these bodies to lay out the risks, lay out the evidence and the uncertainties, so that policymaking can take them into account.

Much of the book ended up being a guide for how to use the science in policy making, without making biasing mistakes, such as these recklessly risky demands for scientists to be absolutely certain, or demands for scientists to suppress dissent. But in hindsight, perhaps they punted a little on how to solve these problems. Also, the book does attempt to address some of the claims of climate change deniers, but it’s not always possible to keep up with the silly things people are saying.

Edward finished by saying he has long wished for a book you could give to your irritating uncle, who is a smart guy with forceful opinions, but who gets his knowledge on climate change from Fox news and climate denialist blogs. The feedback is that the book does a good job on this. It’s a shame that the denialist movement has appropriated and sullied the term “skeptic” which is really what science is all about.

The next speaker was Naomi Oreskes, co-author (with Erik Conway) of “Merchants of Doubt”. Naomi titled her talk “Are debatable scientific questions debatable?”, a title taken from a 2000 paper by John Ziman, who points out there is a big contrast between debate in politics and debate in science, and that this difference disadvantages scientists.

In political debates, debate is adversarial and polarized, aimed typically at deciding simple yes/no decisions. In science, we seek out intermediate positions, multivalent arguments, and consider many different hypotheses. And there is no simple voting process to declare a “winner”.

More importantly, “scientific debates” generally aren’t about the science (evidence, findings) at all; they are about trans-scientific issues, and cannot be resolved by doing more science, nor won by people with more facts. Naomi argues that climate change is a trans-science issue.

When they wrote Merchants of Doubt, they were interested in why there is such a big gap between the scientific consensus and the policy discussions. For example, 18 years after the UN framework convention on climate change, the world still has not acted on it in any significant way. In 2007, the IPCC said the warming is unequivocal. But opinion polls showed a vast majority of the [American] population didn’t believe it. At the same time as scientific consensus was developing on climate change, a politically motivated consensus to attack the science was also developing. It focussed on credible, distinguished scientists who rejected the work of their own colleagues, and made common cause with the tobacco and fossil fuel industry.

Central to the story is the Marshall Institute, which has been denying the science since the 1980s. It was founded by three physicists, Seitz, Jastrow, and Nierenberg. All three had built their careers in cold war weaponry. They founded the Marshall Institute to defend the Strategic Defense Initiative (SDI), which was extremely controversial at the time in the scientific community. 6500 scientists and engineers signed a boycott of the program funds, a move that was historically unprecedented in the cold war era. In anger at this boycott, Jastrow wrote an article in 1987 entitled “America has Five Years Left”, warning about Soviet technical supremacy (and there’s a prediction that didn’t come true!). Jastrow was also working for the Reynolds corporation, whose principal strategy to fight increasing tobacco regulation was to cast doubt on the science that linked tobacco smoke to cancer. An infamous tobacco industry memo boasted that “Doubt is our product”.

You might have thought that after the collapse of the Soviet Union, these old cold warriors would have retired, happy that America had won. But they found a new enemy: environmental extremism. They applied the tobacco strategy, but they needed credible scientists to promote doubt. In every case, they argued that the scientific evidence was not strong enough to lead to government action.

Why did they do it? It wasn’t for money, nor for scientific concerns. They did it because they shared the political ideology that Soros calls “free market economy”. This brand of neo-liberalism was first widely promoted by Thatcher and Reagan, but also lives on even in the policies of left-leaning politicians such as Tony Blair. The ideology is based on the work of Milton Friedman. The problem, of course, is that environmentalists generally argue for regulation, but to the neo-liberal, regulation is one step to governmental control of everything.

This ideological motivation is clear in Singer’s work on the EPA ruling that second-hand smoke is a carcinogen. Independent expert reviews had concluded that second-hand smoke was responsible for 150,000 to 300,000 deaths. So why would a rocket scientist defend the tobacco industry? Singer lays it out clearly in his report: “If we do not carefully limit government control…”

These people tend to refer to environmentalists as “watermelons” – green on the outside, red on the inside. And yet the history of American environmentalism traces back to the work of Roosevelt, and Rockefeller. For example, the 1964 Wilderness Act was clearly bi-partisan – it passed congress with a vote of 373-1. Things began to change in the 1980s, when scientific evidence revealed problems such as acid rain and the ozone hole that seemed to require much greater government regulation, just as Reagan was promoting the idea of less government.

Some environmentalists might be socialists, but this doesn’t mean the science is wrong. But it does mean that there is a problem with our economic system as we know it. It’s due to “negative externalities” – costs of economic activity that are not borne by those reaping the profits. Stern described climate change as “the greatest market failure ever”. In fact, acid rain, the ozone hole and climate change are all market failures, and it’s science that revealed this.

It seems pretty clear that all Americans believe in liberty, and prefer less intrusion by government. But at the same time, all societies accept there are limits to their freedoms. The debate, then, is on where these limits should lie, which is clearly not a scientific question.

If this analysis is correct, then we should focus not on more evidence that the science is unequivocal, nor on collecting more evidence that there is a consensus among scientists. What we need is more vivid portrayals of what will happen.

The next talk was by Wally Broecker, about his latest book, The Great Ocean Conveyor. He said he wrote the book partly because he loves to write books, and partly because he’s been encouraged to speak out more on global warming, especially to young people. He wrote it in 3 months, but it took about a year to get published. Which is a shame, because in a fast-moving science, things go out of date very quickly.

Students have a tendency to think everything in their textbooks is gospel. But of course this is not true – the science moves on. In the book, Wally shows that many of the things he originally thought about the ocean conveyor turned out not to be correct.

The first diagram showing the conveyor was produced from a sketch for a magazine article. Wally never met the artist, and the diagram is wrong in many ways, but it does get across the idea that the ocean is an interconnected system.

The ocean conveyor idea was discovered by serendipity. A series of meetings were held to examine the new data coming from the Greenland ice cores. On seeing graphs showing the CO2 record against ice depth, Wally wondered how the wide variations in the CO2 record could be explained. He focussed on the North Atlantic, exploring whether the CO2 could have got in and out of the atmosphere through changes to the ocean overturning. Eventually he stumbled on the idea of the ocean conveyor.

I was particularly struck by the map Wally showed of world river drainage, showing that the vast majority of the world’s landmasses drain into the Atlantic. This drainage pattern, together with evaporation from warm tropical seas, causes large changes in salt concentration, which in turn drive ocean movements, because saltier water is heavier and sinks, while less salty water rises.
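The density mechanism behind this can be sketched with a linear equation of state, a standard textbook simplification (this is not from the talk, and the coefficient values below are typical round numbers, assumed for illustration):

```python
# Minimal sketch of a linear equation of state for seawater. The reference
# values (rho0, T0, S0) and expansion coefficients (alpha, beta) are
# illustrative assumptions, not values from the talk.

def seawater_density(T, S, rho0=1027.0, T0=10.0, S0=35.0,
                     alpha=2e-4, beta=8e-4):
    """Density (kg/m^3): decreases with temperature, increases with salinity."""
    return rho0 * (1 - alpha * (T - T0) + beta * (S - S0))

# Cold, salty North Atlantic-style water vs. warm, fresher water:
dense = seawater_density(T=5.0, S=35.5)
light = seawater_density(T=20.0, S=34.5)
assert dense > light  # the cold, salty water is heavier, so it sinks
```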

There are still a number of mysteries to be solved. For example, what caused the Younger Dryas event? Wally was a proponent of the theory that a break in ocean overturning occurred when Lake Agassiz broke through to drain into the Atlantic, dramatically changing the salinity. But no evidence of this flood has been found, so he’s had to abandon this idea. Some argue that the flood might have gone in a different direction (e.g. to the Gulf of Mexico). Or it could all have been due to a meteorite. It’s a big remaining problem – what caused it?

The next talk was by Dorothy Kenny, “Seeing Through Smoke: Sorting through the Science and Politics in the Making of the 1956 British Clean Air Act”. She hasn’t published this study yet, but is hoping to find a publisher soon. The story starts on December 5th, 1952. A “pea soup” fog covers London. White shirts turn grey. Streetcars are abandoned in the street. The smog lasts until the 9th, and newspapers start to tot up the growing death count. Within a week, 4,000 people were dead. Three months later, the death toll had risen to 12,000. The smogs had become killers.

By July 1953, the UK government had formed a Committee on Air Pollution. In December 1953, it presented an interim report on the cause and effect of the smogs (but with no policy prescriptions). A year later, it produced a final report with plans for action, and in 1956 the Clean Air Act was finalized and passed by Parliament.

What was needed for this act to pass? Dorothy laid out three factors:

  1. Responsibility had to be established. Who was responsible for acting? Three different ministries (Health; Housing and Local Government; and Fuel and Power) all punted, each pointing at the Department of Scientific and Industrial Research (DSIR). But DSIR hadn’t looked into it, citing a lack of funding and a lack of people. The formation of the Beaver committee fixed this – the committee could become the central body for public discontent. They were anxious to get something published by the first anniversary of the smog, partly in response to the need for a ritual response showing that the government was doing something.
  2. The problem needed to be defined and described. The interim report identified sulphur dioxide and visible smoke as the main culprits, both from coal. The media criticized the report because it didn’t propose a solution, and just told people to stay indoors on smog days. There was widespread fear of another killer smog, and the public wanted a plan of action.
  3. Possible solutions to the problem needed to be discussed and weighed up. A cost-benefit analysis was used in the final report to include and exclude policy solutions. In the end, the Clean Air Act focussed on particulate matter, and left out any action on sulphur dioxide. It promoted smokeless fuel, which was a huge cultural change, taking away the traditional British coal fire and replacing it with a new, strange fuel. Even the public pamphlets at the time hid the role of SO2, eliding it from graphs showing the impacts of the smogs. Why was SO2 excluded? Largely because of technical limitations. The available approaches for removing SO2 from coal were deemed impractical: flue gas washing, which involves flushing river water through the flues and dumping it back into the rivers, was highly polluting; while coal washing was ineffective, as there was no method at the time to get rid of the sulphur. The committee argued that solutions deemed not practical could not be included in the legislation.

What lessons can be drawn from the clean air act? First, that environmental policy is exceedingly complex. Second, policy doesn’t necessarily have a short term outcome. Third, even with loopholes and exclusions, the act was effective in setting the framework for dark smoke prevention. And finally, a change in public perception was crucial.

Next up was Jim Hansen, talking about his book “Storms of My Grandchildren: The Truth about the Coming Climate Catastrophe and Our Last Chance to Save Humanity”. (Lots of extra people flowed into the room for this talk!)

Jim gave a thoughtful account of his motivations, in particular the point that climate change is much more than a scientific matter. He has been doing science all his life, but it is only in the last few years that his grandchildren have dragged him into other aspects. Most especially, he’s motivated by the thought that he doesn’t want his grandchildren to look back and say “Grandpa understood the problem but didn’t do enough to make it clear”.

One thing he keeps forgetting to mention when talking about the book: all the royalties go to 350.org, which Jim believes is probably the most effective organization right now pushing for action.

Jim argues that dealing with climate change is not only possible, but makes sense for all sorts of reasons. But lots of people are busy making money from business as usual, and in particular, all governments are heavily invested in the fossil fuel industry.

Jim had testified to Congress in the 1980s, and got lots of attention after this, but he decided he didn’t want to get involved in this public aspect. So he referred requests from the media to other scientists who he thought enjoyed the public visibility more. Then, in 1990, after a newspaper report called him the grandfather of global warming, he used a photo of his first grandchild, Sophie, at age 2, in one of his talks, to demonstrate that he was at least a grandfather, if not of global warming.

Later, he was invited to give a talk in Washington which for various reasons never happened, so he gave it instead as a distinguished lecture at the University of Iowa. In the talk, he used another photo of his grandchildren, to make a point about public understanding. It shows Sophie explaining greenhouse gas warming to her baby brother, with the caption “It’s 2W/m² forcing”. But baby Connor only counts to 1.

Just before the talk, he got a memo from NASA saying not to give the talk, as it could violate policy. He ignored the message and gave the talk anyway, as he had paid his own way to get there for a vacation. A year later, in 2005, Keeling invited him to give another talk, and for this he decided to connect the dots between special interests seeking to maximize profits and the long term economic wellbeing of the country. This talk gave rise to the “shitstorm at NASA HQ”, and the decision to prevent him from talking to the media. He managed to get the ban lifted by talking about it to the NY Times. But even that story was presented wrongly in the press – it wasn’t a 24-year-old appointee at NASA public relations, but a decision from very high up in NASA headquarters.

Then, in 2007, Bill McKibben started asking what is a safe level for carbon dioxide concentrations in the atmosphere. Bill was going to start an organisation called 450.org, based on Hansen’s work. But by 2007, it was becoming clear that even 450ppm might still be disastrous. Jim told him to wait until the AGU 2007 fall meeting, when he would present a new paper with a new number. The analysis showed that if we want to keep a planet similar to the one in which civilization developed, we need to get back below 350ppm. This is feasible if we phase out coal over the next two decades and leave the oil sands untouched. But the US has just signed an agreement for a pipeline from the Alberta tar sands to Texas refineries. The problem is that there’s a huge gap between the rhetoric of politicians and their policies, which are just small perturbations from business as usual.

Now he has two more grandchildren. Jim showed a photo of Jake at 2.5 years, showing he thinks he can protect his baby sister. But of course, Jake doesn’t understand there is more warming in the pipeline. The issue is really about inter-generational justice, but the public doesn’t understand this. It’s also about international justice – the developed countries have become rich by burning fossil fuels, but are unwilling to admit this. Fossil fuels are the cheapest source of energy, but only because nobody is obligated to pay for the damage caused.

Jim’s suggested solution is a fee at the point of energy generation, to be distributed to all people in the country (sometimes known as fee-and-dividend). It would stimulate the economy by putting money into people’s hands. He believes cap-and-trade won’t work, because industry, and China and India, won’t accept a cap. Cap-and-trade also keeps the issue very close to (and under the control of) the fossil fuel industry.

So what are young people supposed to do? Recently, the young people in Britain who blocked a coal plant were convicted, and are likely to serve a jail term. Jim’s first grandchild, Sophie, now 12, wrote a letter to Obama, which includes phrases like “why don’t you listen to my grandfather?”. It’s rather a good letter. Young people need positive examples of things like this that they can do.

Jim ended his talk on a couple of notes of optimism:

  • China is behaving rationally. There is a good chance they will put a price on carbon, and they are making enormous investments in carbon-free energy.
  • The legal approach is promising. The judicial branch of the US government is less influenced by fossil fuel money. We can sue the government for not doing its job!

The next speaker was Heidi M. Cullen, talking about her book “The Weather of the Future: Heat Waves, Extreme Storms, and Other Scenes from a Climate-Changed Planet”. Heidi set out to walk through the process of writing a book. She works for a non-profit group, Climate Central, aimed at communicating the science to the general public.

Heidi worked for many years as a climatologist for the weather channel, where she found it very hard to explain climate change to people who don’t understand the difference between climate and weather. When hurricane Katrina hit, she felt like a loser. It was the biggest story of the year, and as a climatologist, there was very little she could say about this tragic, terrible event. It was too hard amongst all the human tragedy to connect the dots and provide the context. But the experience planted the seed for the book, because it was a big climate change story – scientists had been saying for 20 years how vulnerable New Orleans was, and the disaster could have been prevented. And this story needed to be told.

So the book was designed to tell the history – showing that it goes all the way back to Arrhenius, and isn’t just something that started in the 1980s with Hansen’s testimony to Congress. And to tell the story of the science as a heroic endeavour, looking at the research that scientists are doing now, and how it fits into the story.

A recent poll showed that less than 18% of Americans know a scientist personally. So an important premise for the book was an attempt to connect the public more with scientists and their work. Heidi began by emailing all the climate scientists she knew, asking what they would pick as the hotspots in the science.

It was a lot of work with the publisher to pitch the book, and to convince them they should publish “another book on climate change”. Heidi’s editor was brilliant. He was also working on Pat Benatar’s biography, and another book on Rock and Roll, which made for an interesting juxtaposition. His advice was not to start the book at the beginning, but to start at the easiest place. But as an engineer, being anal, Heidi wanted to start at the beginning. Her editor turned out to be right.

It was very hard to manage the time needed to write the book. Each chapter, on a specific scientist, was effectively peer reviewed by the scientists. There were lots of interviews, all recorded and transcribed, which takes ages. She tried to tell it as a story that people could relate to. The story had no pre-ordained outcome, but different aspects scared the scientists in different ways.

The book came out in August, coincidentally, at the same time as the Russian heatwaves, so it got lots of interest from the press. Which brings Heidi to her final point: when you’ve finished the book and it gets published, that’s really only the start of the process!

The final talk of the session was by Greg Craven, author of “What’s the Worst that Could Happen”. Greg’s talk was completely different from everything that had come before. He gave an impassioned speech, more like the great speeches of the civil rights era – a call to arms – than a scientific talk. Which made both a great contrast to the previous speakers, and a challenge to them.

Greg challenged the audience, the scientists of the AGU, by pointing out we’re insane, at least according to the definition that insanity is doing the same thing over and over again expecting a different outcome. His point is that we’ve been using the same communication strategy, giving people straightforward scientific information, and that strategy isn’t working. Therefore it’s time for a radical change in approach. It’s time for scientists to come way outside their comfort zones, and to inject some emotion, some passion into the message.

It became clear during the talk that Greg was on at least his third different version of the talk, having lost one version when his hard drive crashed in the early hours, and having been inspired by the previous night’s dinner conversation with several seasoned climate scientists.

Greg’s advice was to stop communicating as scientists, and start speaking as human beings. Talk about our hopes and fears, and tell people frankly about the terrors we ignore when we get our heads down doing the science, hoping that someone else will solve the problem. Scientists are civilization’s last chance – the cavalry who must come charging down the hill.

If you don’t believe now is the time, then come up with an operational definition, a test, for when it is the appropriate time to take extreme action. And if you can demonstrate rationally that it’s not the right time, then you can be absolved from the fight.

Anyway, I couldn’t possibly do justice to Greg’s passionate speech – you had to be there! Luckily, he’s promised to post the text of the speech to gregcraven.org by the weekend. Go read it, and figure out how you would respond to his challenge.

Here’s the first of a series of posts from the American Geophysical Union (AGU) Fall Meeting, which is happening this week in San Francisco. The meeting is huge – they’re expecting 19,000 scientists to attend, making it the largest such meeting in the physical sciences.

The most interesting session today was a new session for the AGU:  IN14B “Software Engineering for Climate Modeling”. And I’m not just saying that because it included my talk – all the talks were fascinating. (I’ve posted the slides for my talk, “Do Over or Make Do: Climate Models as a Software Development Challenge“).

After my talk, the next speaker was Cecelia DeLuca of NOAA, with a talk entitled “Emergence of a Common Modeling Architecture for Earth System Science”. Cecelia gave a great overview of the Earth System Modelling Framework. She began by pointing out that climate models don’t just contain science code – they consist of a number of different kinds of software. Lots of the code is infrastructure code, which doesn’t necessarily need to be written by scientists. Around ten years ago, a number of projects started up that had the aim of building shared, standards-based infrastructure code. The projects needed to develop the technical and mathematical expertise to build infrastructure code. But the advantages of separating this code development from the science code were clear: the teams building infrastructure code could prioritize best practices, run the nightly testing process, etc, whereas typically the scientists would not do this.

ESMF provides a common modelling architecture. Native model data structures (modules, fields, grids, timekeeping) are wrapped into ESMF standard data structures, which conform to relevant standards (e.g. ISO standards, CF standards, the Metafor common information model, etc). The framework also offers runtime compliance checking (e.g. to check timekeeping behaviour is correct), and automated documentation (e.g. the ability to write out model metadata in an XML standard format).
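The wrapping idea can be sketched as follows. This is a hypothetical mock-up in Python (ESMF itself is Fortran/C), and every class and method name here is illustrative, not the real ESMF API; the point is only that native structures get adapted into a standard form the framework can drive uniformly:

```python
# Hypothetical sketch of wrapping a model's native data structures into a
# framework-standard field. Names are illustrative, not the real ESMF API.

class NativeOceanGrid:
    """A model's own, non-standard grid structure."""
    def __init__(self, lats, lons):
        self.lats, self.lons = lats, lons

class StandardField:
    """Framework-level field: native data plus standard metadata."""
    def __init__(self, name, units, grid, data):
        self.name = name      # e.g. a CF standard name
        self.units = units
        self.grid = grid      # the native grid, carried along unchanged
        self.data = data

def wrap_native_field(native_grid, raw_data):
    """Adapt native structures into the framework's standard form."""
    return StandardField(name="sea_surface_temperature",
                         units="K", grid=native_grid, data=raw_data)

grid = NativeOceanGrid(lats=[0.0, 1.0], lons=[100.0, 101.0])
field = wrap_native_field(grid, raw_data=[[285.0, 286.0], [287.0, 288.0]])
assert field.name == "sea_surface_temperature" and field.units == "K"
```

Once everything is expressed through an interface like `StandardField`, generic services (compliance checking, metadata export) can operate on any model without knowing its internals, which is the power of the common architecture mentioned below.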

Because of these efforts, in the US, earth system models are converging on a common architecture. It’s built on standardized component interfaces, and creates a layer of structured information within Earth system codes. The lesson here is that if you can take the legacy code, and express it in a standard way, you get tremendous power.

The next speaker was Amy Langenhorst from GFDL, “Making sense of complexity with the FRE climate modelling workflow system”. Amy explained the organisational setup at GFDL: there are approximately 300 people, organized into groups: six science-based groups, plus a technical services group and a modelling services group. The latter consists of 15 people, with one of them acting as a liaison for each of the science groups. This group provides the software engineering support for the science teams.

The Flexible Modeling System (FMS) is a software framework that provides a coupler and infrastructure support. FMS releases happen about once per year; it provides an extensive testing framework that currently includes 209 different model configurations.

One of the biggest challenges for modelling groups like GFDL is the IPCC cycle. Providing the model runs for the IPCC assessments involves massive, complex data processing, for which a good workflow manager is needed. FRE is the workflow manager for FMS. Development of FRE was started in 2002 by Amy, at a time when the model services group didn’t yet exist.

FRE includes version control, configuration management, tools for building executables, control of execution, etc. It also provides facilities for creating XML model description files, model configuration (using a component-based approach), and integrated model testing (e.g. basic tests, restarts, scaling). FRE also allows for experiment inheritance, so that it’s possible to set up new model configurations as variants of previous runs, which is useful for perturbation studies.
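The experiment inheritance idea can be sketched in a few lines. This is an illustrative Python mock-up, not FRE’s actual XML format or tooling: a child experiment names a parent and supplies only its overrides, and resolution merges the chain:

```python
# Illustrative sketch of experiment inheritance (not FRE's real format):
# a new experiment is defined as a small set of overrides on a parent
# configuration, as in perturbation studies.

def resolve(experiments, name):
    """Resolve a config by merging ancestors, child settings overriding parent."""
    exp = experiments[name]
    parent = exp.get("inherits")
    base = resolve(experiments, parent) if parent else {}
    merged = dict(base)
    merged.update(exp.get("settings", {}))
    return merged

experiments = {
    "control": {"settings": {"co2_ppm": 280, "years": 500, "ocean": "MOM4"}},
    "2xCO2":   {"inherits": "control", "settings": {"co2_ppm": 560}},
}

config = resolve(experiments, "2xCO2")
assert config == {"co2_ppm": 560, "years": 500, "ocean": "MOM4"}
```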

Next up was Rob Burns from NASA GSFC, talking about “Software Engineering Practices in the Development of NASA Unified Weather Research and Forecasting (NU-WRF) Model“. WRF is a weather forecasting model originally developed at NCAR, but widely used across the NWP community. NU-WRF is an attempt to unify variants of NCAR’s WRF and to facilitate better use of WRF. NU-WRF is built from versions of NCAR’s WRF, with a separate process for folding in enhancements.

As is common with many modelling efforts, there were challenges arising from multiple science teams with individual goals, interests and expertise, and from the fact that scientists don’t consider software engineering their first priority. At NASA, the Software Integration and Visualization Office (SIVO) provides software engineering support for the scientific modelling teams. SIVO helps to drive, but not to lead, the scientific modelling efforts. They help with full software lifecycle management, assisting with all software processes from requirements to release, but with domain experts still making the scientific decisions. The code is under full version control, using Subversion, and the software engineering team coordinates the effort to get the codes into version control.

The experience with NU-WRF shows that this kind of partnership between science teams and a software support team can work well. Leadership and active engagement with the science teams is needed. However, involvement of the entire science team for decisions is too slow, so a core team was formed to do this.

The next speaker was Thomas Clune from NASA GISS, with a talk “Constraints and Opportunities in GCM Model Development“. Thomas began with the question: How did we end up with the software we have today? From a software quality perspective, we wrote the wrong software. Over the years, improvements in fidelity in the models have driven a disproportionate growth in complexity of implementations.

One important constraint is that model codes change relatively slowly, in part because of the model validation processes – it’s important to be able to validate each code change individually – they can’t be bundled together. But also because code familiarity is important – the scientists have to understand their code, and if it changes too fast, they lose this familiarity.

However, the problem now is that software quality is incommensurate with the growing socioeconomic role for our models in understanding climate change. There’s a great quote from Ward Cunningham: “Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite… The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise…” Examples of this debt in climate models include long procedures, kludges, cut-and-paste duplication, short/ambiguous names, and inconsistent style.

The opportunities then are to exploit advances in software engineering from elsewhere to systematically and incrementally improve the software quality of climate models. For example:

  • Coding standards – these improve productivity through familiarity, reduce some types of bugs, and help newcomers. But they must be adopted from within the community, by negotiation.
  • Abandon CVS. It has too many liabilities for managing legacy code, e.g. a permanence to the directory structures. The community needs version control systems that handle branching and merging. NASA GISS is planning to switch to Git in the new year, as soon as the IPCC runs are out of the way.
  • Unit testing. There’s a great quote from Michael Feathers: “The main thing that distinguishes legacy code from non-legacy code is tests. Or rather lack of tests”. Lack of tests leads to fear of introducing subtle bugs. Elsewhere, unit testing frameworks have caused a major shift in how commercial software development works, particularly in enabling test-driven development. Tom has been experimenting with pFUnit, a testing framework with support for parallel Fortran and MPI. The existence of such testing frameworks removes some of the excuses for not using unit testing for climate models (in most cases, the modeling community relies on regression testing in preference to unit testing). Some of the reasons commonly given for not doing unit testing seem to represent some confusion about what unit testing is for: e.g. that some constraints are unknown, that tests would just duplicate implementation, or that it’s impossible to test emergent behaviour. These kinds of excuse indicate that modelers tend to conflate scientific validation with the verification offered by unit testing.
  • Clone Detection. Tools now exist to detect code clones (places where code has been copied, sometimes with minor modifications across different parts of the software). Tom has experimented with some of these with the NASA modelE, with promising results.
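To make the unit-testing point concrete: a unit test checks an invariant of a single routine in isolation, rather than duplicating its implementation or validating emergent science. Here is a minimal sketch using Python’s unittest for brevity (pFUnit plays the analogous role for parallel Fortran); the routine and its conservation invariant are hypothetical, invented for illustration:

```python
# Hypothetical example of a unit test for model code: a mass-conserving
# redistribution step, checked in isolation. The routine is invented for
# illustration; the pattern (test an invariant, not the implementation)
# is the point.

import unittest

def redistribute(masses, fraction=0.1):
    """Move a fraction of each cell's mass to its right neighbour (cyclic)."""
    n = len(masses)
    out = [m * (1 - fraction) for m in masses]
    for i, m in enumerate(masses):
        out[(i + 1) % n] += m * fraction
    return out

class TestRedistribute(unittest.TestCase):
    def test_conserves_total_mass(self):
        masses = [1.0, 2.0, 3.0, 4.0]
        result = redistribute(masses)
        # The invariant: total mass is unchanged, whatever the field looks like.
        self.assertAlmostEqual(sum(result), sum(masses))

if __name__ == "__main__":
    unittest.main(exit=False)
```

A test like this does not duplicate the implementation and does not require knowing the "right" answer scientifically, which is why conflating it with scientific validation misses its purpose.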

The next talk was by John Krasting from GFDL, on “NOAA-GFDL’s Workflow for CMIP5/IPCC AR5 Experiments”. I didn’t take many notes, mainly because the subject was very familiar to me, having visited several modeling labs over the summer, all of whom were in the middle of the frantic process of generating their IPCC CMIP5 runs (or in some cases struggling to get started).

John explained that CMIP5 is somewhat different from the earlier CMIP projects, because it is much more comprehensive, with a much larger set of model experiments, and much larger set of model variables requested. CMIP1 focussed on pre-industrial control runs, while CMIP2 added some idealized climate change scenario experiments. For CMIP3, the entire archive (from all modeling centres) was 36 terabytes. For CMIP5, this is expected to be at least two orders of magnitude bigger. Because of the larger number of experiments, CMIP5 has a tiered structure, so that some kinds of experiments are prioritized (e.g. see the diagram from Taylor et al).

GFDL is expecting to generate around 15,000 model years of simulation, yielding around 10 petabytes of data, of which around 10%-15% will be released to the public, distributed via the ESG Gateway. The remainder of the data represents some redundancy, and some diagnostic data that’s intended for internal analysis.
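As a back-of-the-envelope check on these figures (all of them the talk’s estimates, not measurements, with “two orders of magnitude” taken literally as a factor of 100):

```python
# Rough arithmetic on the CMIP archive sizes quoted above.

TB = 1.0
PB = 1000.0 * TB

cmip3_archive = 36 * TB               # whole CMIP3 archive, all centres
cmip5_archive = cmip3_archive * 100   # "two orders of magnitude bigger"
assert cmip5_archive == 3600 * TB     # i.e. a few petabytes in total

gfdl_output = 10 * PB                 # GFDL's expected CMIP5 output
public_low = 0.10 * gfdl_output       # 10% released ...
public_high = 0.15 * gfdl_output      # ... to 15% released
assert 0.9 * PB < public_low <= public_high < 1.6 * PB   # roughly 1-1.5 PB public
```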

The final speaker in the session was Archer Batcheller, from University of Michigan, with a talk entitled “Programming Makes Software; Support Makes Users“. Archer was reporting on the results of a study he has been conducting of several software infrastructure projects in the earth system modeling community. His main observation is that e-Science is about growing socio-technical systems, and that people are a key part of these systems. Effort is needed to nurture communities of users, but such effort is crucial for building the scientific cyberinfrastructure.

From his studies, Archer found that most people developing modeling infrastructure software divide their time about 50:50 between coding and other activities, including:

  • “selling” – explaining/promoting the software in publications, at conferences, and at community meetings (even though the software is free, it still has to be “marketed”)
  • support – helping users, which in turn helps with identifying new requirements
  • training – including 1-on-1 sessions, workshops, online tutorials, etc.

Reading through the schedule for the AGU fall meeting this December, I came across the following session, scheduled for the final day of the conference (Dec 17). What a great line-up of speakers (I’ve pasted in the abstracts, as they’re hard to link to on the AGU’s meeting schedule):

U52A Climate Change Adaptation:

  • 10:20AM Jim Hansen (NASA) “State of Climate Change Science: Need for Adaptation and Mitigation” (Invited)
    Observations of on-going climate change, paleoclimate data, and climate simulations all concur: human-made greenhouse gases have set Earth on a path to climate change with dangerous consequences for humanity. We show that the matter is urgent and a moral issue that pits the rich and powerful against the young and unborn, against the defenseless, and against nature. Adaptation can only partially ameliorate the effects, as governments are failing to protect the public interest and failing in their duty to provide young people equal protection of the laws. We quantify the reduction pathway for fossil fuel emissions that is required to restore Earth’s energy balance and stabilize climate. We show that rapid changes in emission pathways are essential to avoid morally unacceptable adaptation requirements.
  • 10:50AM Richard Alley (Penn State U) “Ice in the Hot Box—What Adaptation Challenges Might We Face?” (Invited)
    Warming is projected to reduce ice, despite the tendency for increased precipitation. The many projected impacts include amplification of warming, sea-ice shrinkage opening seaways, and loss of water storage in snowpacks. However, sea-level rise may combine the largest effects with the greatest uncertainties. Rapid progress in understanding ice sheets has not yet produced projections with appropriately narrow uncertainties and high confidence to allow detailed planning. The range of recently published scaling arguments and back-of-the-envelope calculations is wide but often includes 1 m of rise this century. Steve Schneider’s many contributions on dangerous anthropogenic influence and on decision-making in the face of uncertainty help provide context for interpreting these preliminary and rapidly evolving results.
  • 11:10AM Ken Caldeira (Stanford) Adaptation to Impacts of Greenhouse Gases on the Ocean (Invited)
    Greenhouse gases are producing changes in ocean temperature and circulation, and these changes are already adversely affecting marine biota. Furthermore, carbon dioxide is absorbed by the oceans from the atmosphere, and this too is already adversely affecting some marine ecosystems. And, of course, sea-level rise affects both what is above and below the waterline.
    Clearly, the most effective approach to limit the negative impacts of climate change and acidification on the marine environment is to greatly diminish the rate of greenhouse gas emissions. However, there are other measures that can be taken to limit some of the negative effects of these stresses in the marine environment.
    Marine ecosystems are subject to multiple stresses, including overfishing, pollution, and loss of coastal wetlands that often serve as nurseries for the open ocean. The adaptive capacity of marine environments can be improved by limiting these other stresses.
    If current carbon dioxide emission trends continue, for some cases (e.g., coral reefs), it is possible that no amount of reduction in other stresses can offset the increase in stresses posed by warming and acidification. For other cases (e.g., blue-water top-predator fisheries), better fisheries management might yield improved population health despite continued warming and acidification.
    In addition to reducing stresses so as to improve the adaptive capacity of marine ecosystems, there is also the issue of adaptation in human communities that depend on this changing marine environment. For example, communities that depend on services provided by coral reefs may need to locate alternative foundations for their economies. The fishery industry will need to adapt to changes in fish abundance, timing and location.
    Most of the things we would like to do to increase the adaptive capacity of marine ecosystems (e.g., reduce fishing pressure, reduce coastal pollution, preserve coastal wetlands) are things that would make sense to do even in the absence of threats from climate change and ocean acidification. Therefore, these measures represent “no regrets” policy options for the marine environment.
    Nevertheless, even with adaptive policies in place, continued greenhouse gas emissions increasingly risk damaging marine ecosystems and the human communities that depend on them.
  • 11:30AM Alan Robock (Rutgers) Geoengineering and adaptation
    Geoengineering by carbon capture and storage (CCS) or solar radiation management (SRM) has been suggested as a possible solution to global warming. However, it is clear that mitigation should be the main response of society, quickly reducing emissions of greenhouse gases. While there is no concerted mitigation effort yet, even if the world moves quickly to reduce emissions, the gases that are already in the atmosphere will continue to warm the planet. CCS, if an efficacious, safe, and affordable system could be developed, would slowly remove CO2 from the atmosphere, but this would have only a gradual effect on concentrations. SRM, if a system could be developed to produce stratospheric aerosols or brighten marine stratocumulus clouds, could be quickly effective in cooling, but could also have so many negative side effects that it would be better not to do it at all. This means that, in spite of a concerted effort at mitigation and to develop CCS, there will be a certain amount of global warming in our future. Because CCS geoengineering will be too slow and SRM geoengineering is not a practical or safe solution to global warming, adaptation will be needed. Our current understanding of geoengineering makes it even more important to focus on adaptation responses to global warming.
  • 11:50AM Olga Wilhelmi (NCAR) Adaptation to heat health risk among vulnerable urban residents: a multi-city approach
    Recent studies on climate impacts demonstrate that climate change will have differential consequences in the U.S. at the regional and local scales. Changing climate is predicted to increase the frequency, intensity and impacts of extreme heat events, prompting the need to develop preparedness and adaptation strategies that reduce societal vulnerability. Central to understanding societal vulnerability is the population’s adaptive capacity, which, in turn, influences adaptation, the actual adjustments made to cope with the impacts from current and future hazardous heat events. To date, few studies have considered the complexity of vulnerability and its relationship to the capacity to cope with or adapt to extreme heat. In this presentation we will discuss a pilot project conducted in 2009 in Phoenix, AZ, which explored urban societal vulnerability and adaptive capacity to extreme heat in several neighborhoods. Household-level surveys revealed differential adaptive capacity among the neighborhoods and social groups. Building on this pilot project, and in order to develop a methodological framework that could be used across locales, we also present an expansion of this project into Houston, TX and Toronto, Canada with the goal of furthering our understanding of adaptive capacity to extreme heat in very different urban settings. This presentation will communicate the results of the extreme heat vulnerability survey in Phoenix as well as the multidisciplinary, multi-model framework that will be used to explore urban vulnerability and adaptation strategies to heat in Houston and Toronto. We will outline challenges and opportunities in furthering our understanding of adaptive capacity and the need to approach these problems from a macro to a micro level.
  • 12:05PM Anthony Socci (US EPA) An Accelerated Path to Assisting At-Risk Communities Adapt to Climate Change
    Merely throwing money at adaptation is not development. Nor can the focus of adaptation assistance be development alone. Rather, adaptation assistance is arguably best served when it is country- or community-driven, and the overarching process is informed and guided by a set of underlying principles or a philosophy of action that primarily aims at improving the lives and livelihoods of affected communities.
    In the instance of adaptation assistance, I offer the following three guiding principles: 1. adaptation is, at its core, about people; 2. adaptation is not merely an investment opportunity or suite of projects but a process, a lifestyle; and 3. adaptation cannot take place by proxy, nor can it be imposed on others by outside entities.
    With principles in hand, a suggested first step toward action is to assess what resources, capacity and skills one is capable of bringing to the table and whether these align with community needs. Clearly issues of scale demand a strategic approach in the interest of avoiding overselling and worse, creating false expectations. And because adaptation is a process, consider how best to ensure that adaptation activities remain sustainable by virtue of enhancing community capacity, resiliency and expertise should assistance and/or resources dwindle or come to an end.
    While not necessarily a first step, community engagement is undoubtedly the most critical element in any assistance process. It requires sorting out and agreeing upon terms of cooperation and respective roles and responsibilities, including discussions on how to assess the efficacy of resource use, how to assess progress, success and outcomes, what constitutes each, and who decides. Adaptation activities are unlikely to take hold or be sustained if they are not community-led, community-driven and community-owned. There is no adaptation by proxy or fiat.
    It’s fair to ask at this point: how might one know what communities and countries need, what and where the opportunities are to assist them in adapting to climate change, and how might one get started? One of the most effective and efficient ways of identifying community and country needs, assistance opportunities and entry points is to search the online archive of National Adaptation Programmes of Action (NAPAs) that many of the least developed countries have already assembled in conformance with the UNFCCC process. Better still, consider focusing on community-scale assessments and adaptation action plans that have already been compiled by various communities seeking assistance, since national plans are unlikely to capture the nuances and variability of community needs. Unlike NAPAs, such plans are not archived in a central location. Yet community-scale plans in particular not only represent an assessment of community needs and plans, presumably crafted by the affected communities themselves, but also represent opportunities to align assistance resources and capacity with community needs, providing the basis for engaging affected communities in an accelerated process. Simply stated, take full advantage of the multitude of assessment and planning efforts that communities have already undertaken on their own behalf.

Here’s a whole set of things I can’t make it to. The great thing about being on sabbatical is the ability to travel, visit different labs, and so on. The downside is that there are far more interesting places and events than I can possibly make it to, and many of them clash. Here are some I won’t be able to make it to this fall: