Well, my intention to liveblog from interesting sessions is blown – the network connection in the meeting rooms is hopeless. One day, some conference will figure out how to provide reliable internet…

Yesterday I attended an interesting session in the afternoon on climate services. Much of the discussion built on work done at the third World Climate Conference (WCC-3) in August, which set out to develop a framework for the provision of climate services. These would play a role akin to local, regional and global weather forecasting services, but focussing on risk management and adaptation planning for the impacts of climate change. Most important is the emphasis on combining observation and monitoring services and research and modeling services (both of which already exist) with a new climate services information system (I assume this would be distributed across multiple agencies around the world) and a system of user interfaces to deliver the information in the forms needed by different audiences. Rasmus at RealClimate discusses some of the scientific challenges.

My concern in reading the outcomes of WCC-3 was that it’s all focussed on a one-way flow of information, with insufficient attention to understanding who the different users would be and what they really need. I needn’t have worried – the AGU session demonstrated that there are plenty of people focussing on exactly this issue. I got the impression that there’s a massive international effort quietly putting in place the risk management and planning tools needed for us to deal with the impacts of a rapidly changing climate, but which is completely ignored by a media still obsessed with the “is it happening?” pseudo-debate. The extent of this planning for expected impacts would make a much more compelling media story, and one that matters, on a local scale, to everyone.

Some highlights from the session:

Mark Svoboda from the National Drought Mitigation Centre at the University of Nebraska, talking about drought planning in the US. He pointed out that drought tends to get ignored compared to other kinds of natural disasters (tornados, floods, hurricanes), presumably because it doesn’t happen within a daily news cycle. However, drought dwarfs all other kinds of natural disasters in the US in damage costs, except hurricanes. One problem is that population growth has been highest in the regions most subject to drought, especially the southwest US. The NDMC monitoring program includes the only repository of drought impacts. Their US Drought Monitor has been very successful, but the next generation of tools needs better sources of data on droughts, so they are working on adding a drought reporter, doing science outreach, working with kids, etc. Even more important is improving the drought planning process, hence a series of workshops on drought management tools.

Tony Busalacchi from the Earth System Science Interdisciplinary Centre at the University of Maryland. Through a series of workshops in the CIRUN project, they’ve identified the need for forecasting tools, especially around risks such as sea level rise – and especially the need for actionable information, which no service currently provides. A climate information system is needed for policymakers, on scales of seasons to decades, tailorable to regions, and with the ability to explore “what-if” questions. Building this needs the coupling of models not used together before, and the synthesis of new datasets.

Robert Webb from NOAA, in Boulder, on experimental climate information services to support risk management. The key to risk assessment is to understand that it spans multiple timescales. Users of such services do not distinguish between weather and climate – they need to know about extreme weather events, and they need to know how such risks change over time. Climate change matters because of the impacts. Presenting the basic science and predictions of temperature change is irrelevant to most people – it’s the impacts that matter (his key quote: “It’s the impacts, stupid!”). Examples: water – droughts and floods, changes in snowpack, river stream flow, fire outlooks, and planning issues (urban, agriculture, health). He’s been working with the Climate Change and Western Water Group (CCAWWG) to develop a strategy on water management. How to get people to plan and adapt? The key is to get them to think in terms of scenarios rather than deterministic forecasts.

Guy Brasseur from the German Climate Services Center, in Hamburg. The German adaptation strategy was developed by the German federal government, which appears to be way ahead of the US agencies in developing climate services. Guy emphasized the need for seamless prediction – a uniform ensemble system that builds from climate monitoring of the recent past and present forward into the future, at different regional scales and timescales. Guy called for an Apollo-sized program to develop the infrastructure for this.

Kristen Averyt from the University of Colorado, talking about her “climate services machine” (I need to get hold of the image for this – it was very nice). She’s been running workshops for Colorado-specific services, with breakout sessions focussed on impacts and the utility of climate information. She presented some evaluations of the success of these workshops, including a climate literacy test they have developed. For example, at one workshop the attendees had 63% correct answers at the beginning (and the wrong answers tended to cluster, indicating some important misperceptions). I need to get hold of this – it sounds like an interesting test. Kristen’s main point was that these workshops play an important role in reaching out to people of all ages, including kids, and getting them to understand how climate change will affect them.

Overall, the main message of this session was that while there have been lots of advances in our understanding of climate, these are still not being used for planning and decision-making.

First proper day of the AGU conference, and I managed to get to the (free!) breakfast for Canadian members, which was so well attended that the food ran out early. Do I read this as a great showing for Canadians at AGU, or just that we’re easily tempted with free food?

Anyway, on to the first of three poster sessions we’re involved in this week. This first poster was on TracSNAP, the tool that Ainsley and Sarah worked on over the summer:

Our TracSNAP poster for the AGU meeting. Click for fullsize.

The key idea in this project is that large teams of software developers find it hard to maintain an awareness of one another’s work, and cannot easily identify the appropriate experts for different sections of the software they are building. In our observations of how large climate models are built, we noticed it’s often hard to keep up to date with what changes other people are working on, and how those changes will affect things. TracSNAP builds on previous research that attempts to visualize the social network of a large software team (e.g. who talks to whom), and relate that to couplings between code modules that team members are working on. Information about the intra-team communication patterns (e.g. emails, chat sessions, bug reports, etc) can be extracted automatically from project repositories, as can information about dependencies in the code. TracSNAP extracts data automatically from the project repository to provide answers to questions such as “Who else recently worked on the module I am about to start editing?”, and “Who else should I talk to before starting a task?”. The tool extracts hidden connections in the software by examining modules that were checked into the repository together (even though they don’t necessarily refer to each other), and offers advice on how to approach key experts by identifying intermediaries in the social network. It’s still a very early prototype, but I think it has huge potential. Ainsley is continuing to work on evaluating it on some existing climate models, to check that we can pull out of the repositories the data we think we can.
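To make the co-change idea concrete, here’s a minimal sketch of the kind of mining involved. This is not TracSNAP’s actual code – the commit data, file names, and log format are made up purely for illustration:

```python
from collections import Counter
from itertools import combinations

# Each commit is represented as the set of files it touched. In a real
# tool these would be parsed from the version-control log (e.g. the
# output of `git log --name-only`); this list is invented.
commits = [
    {"ocean/mixing.f90", "ocean/tracers.f90"},
    {"ocean/mixing.f90", "ice/thermo.f90"},
    {"ocean/mixing.f90", "ocean/tracers.f90", "docs/notes.txt"},
    {"ice/thermo.f90", "ice/dynamics.f90"},
]

# Count how often each pair of files is changed in the same commit.
co_change = Counter()
for files in commits:
    for pair in combinations(sorted(files), 2):
        co_change[pair] += 1

# Pairs that are frequently committed together are candidate "hidden"
# couplings, even if neither file references the other in the code.
for (a, b), n in co_change.most_common(3):
    print(f"{a} <-> {b}: changed together in {n} commits")
```

Pairs with high co-change counts but no static dependency between them are exactly the hidden couplings worth flagging to developers, and overlaying them on the team’s communication network is what suggests who to talk to.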

The poster session we were in, “IN11D. Management and Dissemination of Earth and Space Science Models” seemed a little disappointing as there were only three posters (a fourth poster presenter hadn’t made it to the meeting). But what we lacked in quantity, we made up in quality. Next to my poster was David Bailey‘s: “The CCSM4 Sea Ice Component and the Challenges and Rewards of Community Modeling”. I was intrigued by the second part of his title, so we got chatting about this. Supporting a broader community in climate modeling has a cost, and we talked about how university labs just cannot afford this overhead. However, it also comes with a number of benefits, particularly the existence of a group of people from different backgrounds who all take on some ownership of model development, and can come together to develop a consensus on how the model should evolve. With the CCSM, most of this happens in face to face meetings, particularly the twice-yearly user meetings. We also talked a little about the challenges of integrating the CICE sea ice model from Los Alamos with CCSM, especially given that CICE is also used in the Hadley model. Making it work in both models required some careful thinking about the interface, and hence more focus on modularity. David also mentioned people are starting to use the term kernelization as a label for the process of taking physics routines and packaging them so that they can be interchanged more easily.
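As an aside, here’s a rough sketch of the kernelization idea in Python. The real components are Fortran and this tiny interface is entirely invented – it just illustrates how a physics routine can be hidden behind a small, swappable interface so different host models can reuse it:

```python
from typing import Protocol

class SeaIceKernel(Protocol):
    """Minimal interface a host model needs from a sea-ice component."""
    def step(self, air_temp: float, sst: float, dt: float) -> float:
        """Advance ice thickness (m) by one timestep and return it."""
        ...

class ToySeaIce:
    """A stand-in sea-ice scheme; a real kernel (e.g. CICE) would sit
    behind the same interface so host models can swap implementations."""
    def __init__(self, thickness: float = 1.0):
        self.thickness = thickness

    def step(self, air_temp: float, sst: float, dt: float) -> float:
        # Crude growth/melt rule, purely for illustration.
        growth = 0.01 if air_temp < -2.0 else -0.01
        self.thickness = max(0.0, self.thickness + growth * dt)
        return self.thickness

def run_host_model(ice: SeaIceKernel, days: int) -> None:
    """The host model only sees the interface, not the implementation."""
    thickness = 0.0
    for _ in range(days):
        thickness = ice.step(air_temp=-10.0, sst=-1.8, dt=1.0)
    print(f"ice thickness after {days} days: {thickness:.2f} m")

run_host_model(ToySeaIce(), days=30)
```

The design point is the same one David made: keeping the physics behind a narrow, well-defined interface is what lets a component like CICE serve both CCSM and the Hadley model.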

Dennis Shea‘s poster, “Processing Community Model Output: An Approach to Community Accessibility” was also interesting. To tackle the problem of making output from the CCSM more accessible to the broader CCSM community, the decision was taken to standardize on netCDF for the data, and to develop and support a standard data analysis toolset, based on the NCAR Command Language. NCAR runs regular workshops on the use of these data formats and tools, as part of its broader community support efforts (and of course, this illustrates David’s point about universities not being able to afford to provide such support efforts).
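The toolset described on the poster is built on NCL, but just to illustrate why standardizing on netCDF helps accessibility, here’s a minimal sketch using Python’s netCDF4 package instead; the file name and variable name are hypothetical:

```python
from netCDF4 import Dataset  # assumes the netCDF4 package is installed

# Hypothetical model history file and variable name, for illustration only.
with Dataset("ccsm_output.nc", "r") as nc:
    ts = nc.variables["TS"]                    # e.g. a surface temperature field
    print("dimensions:", ts.dimensions, "shape:", ts.shape)
    print("units:", getattr(ts, "units", "unknown"))
    print("naive global mean:", float(ts[:].mean()))  # no area weighting
```

Because the format is self-describing (dimensions, units, and metadata travel with the data), any standard toolset – NCL, or anything else that reads netCDF – can be taught in a workshop and reused across the whole community’s output.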

The missing poster also looked interesting: Charles Zender from UC Irvine, comparing climate modeling practices with open source software practices. Judging from his abstract, Charles makes many of the same observations we made in our CiSE paper, so I was looking forward to comparing notes with him. Next time, I guess.

Poster sessions at these meetings are both wonderful and frustrating. Wonderful because you can wander down aisles of posters and very quickly sample a large slice of research, and chat to the poster owners in a freeform format (which is usually much better than sitting through a talk). Frustrating because poster owners don’t stay near their posters very long (I certainly didn’t – too much to see and do!), which means you note an interesting piece of work, and then never manage to track down the author to chat (and if you’re like me, you also forget to write down contact details for the posters you noticed). However, I did manage to make notes on two to follow up on:

  • Joe Galewsky caught my attention with a provocative title: “Integrating atmospheric and surface process models: Why software engineering is like having weasels rip your flesh”
  • I briefly caught Brendan Billingsley of NSIDC as he was taking his poster down. It caught my eye because it was a reflection on software reuse in the Searchlight tool.