In the last session yesterday, Inez Fung gave the Charney Lecture: Progress in Earth System Modeling since the ENIAC Calculation. I missed it, as I had to go pick up the kids, but she has a recent paper that seems to cover some of the same ground. And this morning, Joan Kleypas gave the Rachel Carson Lecture: Ocean Acidification and Coral Reef Ecosystems: A Simple Concept with Complex Findings. She also has a recent paper covering what I assume was in her talk (again, I missed it!). Both lectures were recorded, so I’m looking forward to watching them once the AGU posts them.

I made it to the latter half of the session on Standards-Based Interoperability. I missed Stefano Nativi’s talk on the requirements analysis for GIS systems, but there’s lots of interesting stuff on his web page to explore. However, I did catch Olga Wilhelmi presenting the results of a community workshop at NCAR on GIS for Weather, Climate and Impacts. She asked some interesting questions about the gathering of user requirements, and we chatted after the session about how users find the data they need (here’s an interesting set of use cases). I also chatted with Ben Domenico from Unidata/UCAR about open science. We were complaining about how hard it is at a conference like this to get people to put their presentation slides on the web. It turns out that some journals in the geosciences have explicit policies to reject papers if any part of the results has already been presented on the web (including in blogs, powerpoints, etc). Ben’s feeling is that these print media are effectively dead, and he had some interesting thoughts about moving to electronic publishing, although we both worried that some of these restrictive policies might live on in online peer-review venues. (Ben is part of the THREDDS project, which is attempting to improve the way that scientists find and access datasets.)
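To make the THREDDS idea concrete, here’s a minimal sketch (my own, not anything Ben showed me) of the kind of remote access it enables: opening a dataset over OPeNDAP so that only the slices you ask for travel over the network. The URL and the variable name are hypothetical placeholders.

```python
# A minimal sketch of OPeNDAP-style remote access. The URL and the variable
# name "tas" are hypothetical placeholders, not a real THREDDS endpoint.
from netCDF4 import Dataset  # requires the netCDF4-python package

url = "http://example.edu/thredds/dodsC/some/dataset.nc"  # hypothetical OPeNDAP URL
ds = Dataset(url)                       # opens the remote dataset without downloading it
print(list(ds.variables))               # inspect what variables are available
subset = ds.variables["tas"][0, :, :]   # only this slice is actually transferred
ds.close()
```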

Down at the ESSI poster session, I bumped into Peter Fox, whom I’d met at the EGU meeting last month. We both chatted to Benjamin Branch about his poster on spatial thinking and earth sciences, and especially how educators approach this. Ben’s PhD thesis looks at all the institutional barriers that prevent changes in high school curricula, all of which militate against the nurturing of cross-disciplinary skills (like spatial reasoning) necessary for understanding global climate change. We brainstormed some ideas for overcoming these barriers, including putting cool tools in the students’ hands (e.g. Google Maps mashups of interesting data sets, or the idea that Jon had for a Lego-style constructor kit for building simplified climate models). I also speculated that if education policy in the US prevents this kind of initiative, we should do it in another country, build it into a major success, and then import it back into the US as a best practice model. Oh, well, I can dream.
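To give a flavour of what such a constructor kit might contain, here’s a toy zero-dimensional energy-balance model (my own illustration, not anything from Jon or Ben): the simplest possible “block”, balancing absorbed sunlight against emitted infrared.

```python
# A toy zero-dimensional energy-balance model: the kind of simple building block
# a constructor kit for students might expose. The physics is standard; the
# numbers are round, illustrative values.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2
ALBEDO = 0.3      # planetary albedo

def equilibrium_temperature(albedo=ALBEDO, solar=S0):
    """Temperature (K) at which absorbed solar radiation balances emitted longwave."""
    absorbed = solar * (1 - albedo) / 4.0     # average over the sphere
    return (absorbed / SIGMA) ** 0.25

print(round(equilibrium_temperature(), 1))              # ~255 K: no greenhouse effect in this block
print(round(equilibrium_temperature(albedo=0.32), 1))   # students can tweak the albedo "brick"
```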

Next I chatted to Dicky Allison from Woods Hole, and Tom Yoksas from Unidata/UCAR. Dicky’s poster is on the MapServer project, and Tom shared with us the slides from his talk yesterday on the RAMADDA project, which is intended as a publishing platform for geosciences data. We spent some time playing with the RAMADDA data server, and Tom encouraged us to play with it more, and send comments back on our experiences. Again, most of the discussion was about how to facilitate access to these data sets, how to keep the user interface as simple as possible, and the need for instant access – e.g. grabbing datasets from a server while travelling to a conference, without having to have all the tools and data loaded on a large disk first. Oh, and Tom explained the relationship between NCAR and UCAR, but it’s too complicated to repeat here.

Here’s an aside. Browsing the UCAR pages, I just found the Climate Modeller’s Commandments. Nice.

This afternoon, I attended the session “A Meeting of the Models”, on the use of multi-model ensembles for weather and climate prediction. The first speaker was Peter Houtekamer, talking about the Canadian Ensemble Prediction System (EPS). The key idea of an ensemble is that it samples across the uncertainty in the initial conditions. However, challenges arise from our incomplete understanding of model error, so the interesting question is how to sample adequately across the space to get a better ensemble spread. The NCEP Short-Range Ensemble Forecast System (SREF) is claimed to be the first real-time operational regional ensemble prediction system in the world. Even grander is TIGGE, in which the output of many operational EPSs is combined into a single archive. The volume of the database is large (hundreds of ensemble members), but you really only need something like 20-40 members to get converging scores (he cites Talagrand for this; as an aside, Talagrand diagrams are an interesting way of visualizing ensemble spread). NAEFS combines the 20-member American (NCEP) and 20-member Canadian (MSC) operational ensembles to get a 40-member ensemble. Nice demonstration of how NAEFS outperforms both of the individual ensembles from which it is constructed. Multi-centre ensembles improve the sampling of model error, but impose a big operational cost: data exchange protocols, telecommunications costs, etc. As more centres are added, there are likely to be diminishing returns.
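Since Talagrand diagrams came up, here’s a minimal sketch (my own, not from the talk) of how one is built: for each forecast case, find where the observation ranks among the ensemble members, then histogram those ranks. A roughly flat histogram suggests the ensemble spread is about right; a U-shape suggests it’s too narrow.

```python
import numpy as np

def rank_histogram(ensemble, obs):
    """Talagrand (rank) histogram: ensemble has shape (n_cases, n_members),
    obs has shape (n_cases,). Returns counts for the n_members+1 rank bins."""
    n_cases, n_members = ensemble.shape
    ranks = (ensemble < obs[:, None]).sum(axis=1)       # how many members fall below each obs
    return np.bincount(ranks, minlength=n_members + 1)

# Toy check with synthetic data: a well-calibrated 20-member ensemble gives a
# roughly flat histogram, because the observation is equally likely to fall
# into any of the 21 rank bins.
rng = np.random.default_rng(0)
ens = rng.normal(size=(5000, 20))
obs = rng.normal(size=5000)
print(rank_histogram(ens, obs))
```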

The American Geophysical Union’s Joint Assembly is in Toronto this week. It’s a little slim on climate science content compared to the EGU meeting, but I’m taking in a few sessions as it’s local and convenient. Yesterday I managed to visit some of the climate science posters. I also caught the last talk of the session on connecting space and planetary science, and learned that the solar cycles have a significant temperature impact on the upper atmosphere but no obvious effect on the lower atmosphere, though more research is needed to understand the impact on climate simulations. (Heather Andres’ poster has some more detail on this.)

This morning, I attended the session on Regional Scale Climate Change. I’m learning that understanding the relationship between temperature change and increased tropical storm activity is complicated, because tropical storms seem to react to complex patterns of temperature change, rather than just the temperature itself. I’m also learning that you can use statistical downscaling from the climate models to get finer-grained regional simulations of the changes in rainfall, e.g. over the US, leading to predictions of increased precipitation over much of the US in the winters and decreased precipitation in the summers. However, you have to be careful, because the models don’t capture seasonal variability well in some parts of the continent. A particular challenge for regional climate predictions is that some places (e.g. Caribbean islands) are just too small to show up in the grids used in General Circulation Models (GCMs), which means we need more work on Regional Models to get the necessary resolution.
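For readers (like me) who hadn’t met statistical downscaling before, here’s a minimal regression-based sketch of the general idea, with made-up data; it’s one common flavour of the approach, not the specific method any speaker used: fit a statistical link between coarse GCM predictors and observed local precipitation over a historical period, then apply that link to future GCM output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training data: two coarse-grid GCM predictors (e.g. temperature
# and humidity at the nearest grid cell) and observed local winter precipitation.
X_hist = rng.normal(size=(1000, 2))
y_obs = 2.0 + 0.8 * X_hist[:, 0] - 0.3 * X_hist[:, 1] + rng.normal(scale=0.2, size=1000)

# Fit an ordinary least-squares link between coarse predictors and local observations.
A = np.column_stack([np.ones(len(X_hist)), X_hist])
coeffs, *_ = np.linalg.lstsq(A, y_obs, rcond=None)

# Apply the fitted relationship to (hypothetical) future GCM output to get
# finer-grained local projections.
X_future = rng.normal(loc=0.5, size=(100, 2))
A_future = np.column_stack([np.ones(len(X_future)), X_future])
y_downscaled = A_future @ coeffs
print(y_downscaled[:5])
```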

The final talk was Noah Diffenbaugh’s, on an ensemble approach to regional climate forecasts. He’s using the IPCC’s A1B scenario (but notes that in the last few years, emissions have exceeded those for this scenario). The model is nested – a high-resolution regional model (25 km) is nested within a GCM (CCSM3, at T85 resolution), but the information flows only in one direction, from the GCM to the RCM. As far as I can tell, the reason it’s one way is that the GCM run is pre-computed; specifically, 5 existing runs of the CCSM3 model from the IPCC AR4 dataset are averaged to generate 6-hourly 3D atmosphere fields that drive the regional model. The runs show that by 2030-2039, we should expect 6-8 heat stress events per decade across the whole of the south-west US (where a heat stress event is the kind of thing that should only hit once per decade). Interestingly, the warming is greater in the south-eastern US, but because the south-western states are already closer to the threshold temperature for heat stress events, they get more heatwaves. Noah also showed some interesting validation images, to demonstrate that the regional model reproduces 20th Century temperatures over the US much better than the GCM does.
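To make sure I understood the event definition, here’s a back-of-envelope sketch (my interpretation, not Noah’s code, and with made-up numbers): take a baseline period, find the temperature exceeded only about once per decade, then count how often a warmer future decade exceeds that threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical summer-mean temperatures (°C): a 30-year baseline, and one future
# decade with the mean shifted up by 1.5 °C.
baseline = rng.normal(loc=30.0, scale=1.0, size=30)
future = rng.normal(loc=31.5, scale=1.0, size=10)

# The "once per decade" threshold is roughly the 90th percentile of the baseline years.
threshold = np.quantile(baseline, 0.9)

# Count how many years in the future decade exceed that baseline threshold.
events_per_decade = int((future > threshold).sum())
print(round(threshold, 2), events_per_decade)   # a modest mean shift yields several events per decade
```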

Noah also talked a little about the role of the 2°C threshold used in climate negotiations, particularly at the Copenhagen meeting. The politicians don’t like that the climate scientists are expressing uncertainty about the 2°C threshold. But there has to be uncertainty, because the models show that even below 2°C there are some serious regional impacts, in this case on the US. His take-home message is that we need to seriously question greenhouse gas mitigation targets. One of the questioners pointed out that there is also some confusion over whether the 2°C is supposed to be measured relative to pre-industrial temperatures.

After lunch, I attended the session on Breakthrough Ideas and Technologies for a Planet at Risk II. The first talk was by Lewis Gilbert, on monitoring and managing a planet at risk. First, he noted that really, the planet itself isn’t at risk – destroying it is still outside our capacity. Life will survive. Humans will survive (at least for a while). But it’s the quality of that survival that is in question. He gave some definitions of sustainability (he has quibbles with them all). First, Brundtland’s: future generations should be able to meet their own needs. Then natural capital: future generations should have a standard of living better than or equal to our own. And Gilbert’s own: the existence of a set of possible futures that are acceptable in some satisficing sense. All of these definitions are based on human values and human life, so the concept of sustainability has human concerns deeply embedded in it. The rest of his talk was a little vague – he described a state space, E, with multiple dimensions (e.g. physical, such as CO2 concentrations; sociological, such as infant mortality in Somalia; biological, such as amphibian counts in the Sierra Nevada), in which we can talk about quality of human life as some function of these vectors. The question then becomes what are the acceptable and unacceptable regions of E. But I’m not sure how this helps any.
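Just to make the state-space idea concrete for myself, here’s a literal-minded sketch (the indicators, values, and ranges are placeholders of my own, not anything Gilbert proposed): a state is a vector of indicators, and “acceptable” is a satisficing test on each dimension.

```python
# A toy representation of Gilbert's state space E. All indicator values and
# acceptable ranges below are hypothetical placeholders for illustration only.
state = {
    "co2_ppm": 387.0,            # a physical dimension
    "infant_mortality": 0.10,    # a sociological dimension (fraction of births, made up)
    "amphibian_count": 1200,     # a biological dimension (made up)
}

acceptable_ranges = {
    "co2_ppm": (0.0, 450.0),
    "infant_mortality": (0.0, 0.05),
    "amphibian_count": (1000.0, float("inf")),
}

def acceptable(state, ranges):
    """A satisficing test: the state is acceptable iff every indicator
    lies within its acceptable range."""
    return all(lo <= state[key] <= hi for key, (lo, hi) in ranges.items())

print(acceptable(state, acceptable_ranges))   # False: one indicator is out of range
```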

Alan Robock talked about geoengineering. He’s conducted studies of the effect of seeding sulphur particles into the atmosphere, using NASA’s climate model – in particular, injecting them over the Arctic, where there is the most temperature change and the least impact on humans. His studies show that the seeding does have a significant impact on temperature, but as soon as you stop the seeding, the global warming quickly rises to where it would have been. So basically, once you start, you can’t stop. You also get other effects, e.g. a reduction of the tropical monsoons and a reduction in precipitation. Here’s an alternative: could it be done by seeding only in the Arctic summer (when the temperature rise matters) and not in the winter – e.g. seeding in April, May and June, or just in April, rather than year round? He’s exploring options like these with the model. Interesting aside: Rolling Stone Magazine, Nov 3, 2006, “Dr Evil’s Plan to Stop Global Warming”. There was a meeting convened by NASA, at which Alan started to create a long list of risks associated with geoengineering (he has a newer paper updating the list currently in submission).

George Shaw talked about biogeologic carbon sequestration. First, he demolished the idea that peak oil / peak coal etc. will save us, by calculating the amount of carbon that can easily be extracted from known fossil fuel reserves. Carbon capture ideas include iron fertilization of the oceans, which stimulates plankton growth, which in turn extracts carbon from the atmosphere. Cyanobacteria also extract carbon – e.g. attach an algae farm to every power station smokestack. However, to make any difference, the algae farm for one power plant might have to be 40-50 square km. He then described a specific case study: taking the Salton Basin area in southern California and filling it with an algae farm. This would remove a chunk of agricultural land, but would probably make money under the current carbon trading schemes.
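Here’s a rough back-of-envelope version of that area estimate (my own numbers, chosen as assumptions that reproduce the order of magnitude quoted in the talk, not Shaw’s figures): divide a power plant’s annual CO2 output by an assumed areal uptake rate for an algae pond.

```python
# Both inputs are assumptions for illustration, not values from the talk.
PLANT_CO2_TONNES_PER_YEAR = 5.0e6     # assumed: a large coal plant, ~5 Mt CO2 per year
ALGAE_UPTAKE_TONNES_PER_KM2 = 1.0e5   # assumed: ~100 kt CO2 per year per km^2 of pond

area_km2 = PLANT_CO2_TONNES_PER_YEAR / ALGAE_UPTAKE_TONNES_PER_KM2
print(area_km2)   # 50.0 km^2 -- the same order of magnitude as the 40-50 km^2 quoted
```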

Roel Snieder gave a talk, “Facing the Facts and Living Our Values”. He showed an interesting graph on energy efficiency, which shows that 60% of the energy we use is lost. He also presented a version of the graph showing cost of intervention against emissions reduction, pointing out that sequestration is the most expensive choice of all. Another nice point about understanding the facts: how much CO2 gas is produced by burning all the coal in one railroad car? The answer is about 3 times the weight of the coal, but most people would say only a few ounces, because gases are very light. He also has a neat public lecture, and he encouraged the audience to get out and give similar lectures to the public.
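The arithmetic behind that answer is just the chemistry of combustion (assuming, for simplicity, that the coal is pure carbon): each 12 g of carbon picks up 32 g of oxygen from the air to form 44 g of CO2.

```python
# Stoichiometry of C + O2 -> CO2; the coal is treated as pure carbon for simplicity,
# and the railroad-car load is an order-of-magnitude assumption.
M_C, M_CO2 = 12.0, 44.0               # molar masses, g/mol
coal_tonnes = 100.0                    # assumed: roughly one loaded railroad car

co2_tonnes = coal_tonnes * (M_CO2 / M_C)
print(round(co2_tonnes / coal_tonnes, 2))   # ~3.67: a bit over 3 times the coal's weight
```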

Eric Barron: Beyond Climate Science. It’s a mistake for the climate science community to say that “the science is settled” and that we just need to move on to mitigation strategies. There are still five things we need:

  1. A true climate service – an authoritative, credible, user-centric source of information on climate (models and data). E.g. advice on resettlement of threatened towns, advice on forestry management, etc.
  2. Deliberately expand the family of forecasting elements. Some natural expansion of forecasting is occurring, but the geoscience community needs to push this forward deliberately.
  3. Invest in stage 2 science – social sciences and the human dimension of climate change (the physical science budget dwarfs the social sciences budget).
  4. Deliberately tackle the issue of scale and the demand for an integrated approach.
  5. Evolve from independent research groups to environmental “intelligence” centres. Cohesive regional observation and modeling framework. And must connect vigorously with users and decision-makers.

Key point: we’re not ready. He characterizes the research community as a cottage industry of climate modellers. Interesting analogy: health sciences, which is almost entirely a “point-of-service” community that reacts to people coming in the door, with no coherent forecasting service. Finally, some examples of forecasting the spread of West Nile disease, Lyme disease, etc.