I posted a few times already about Allen et al.’s paper on the Trillionth Tonne, ever since I saw Chris Jones present it at the EGU meeting in April. Basically, the work gets to the heart of the global challenge. If we want to hold temperatures below a 2°C rise, the key factor is not how much fossil fuel we burn each year, but the cumulative emissions over centuries (because once we release carbon from being buried under the ground, it tends to stay in the carbon cycle for centuries).

Allen et al. did a probabilistic analysis, and found that cumulative emissions of about 1 trillion tonnes of carbon give us a most likely peak temperature rise of 2°C (with a 90% confidence interval of 1.3–3.9°C). We’ve burnt about half of this total since the beginning of the industrial revolution, so basically, we mustn’t burn more than another half trillion tonnes. We’ll burn through that in less than 30 years at current emissions growth rates. Clearly, we can’t keep burning fossil fuels at the current rate and then just stop on a dime when we get to a trillion tonnes. We have to follow a reduction curve that has us reducing emissions steadily over the next 50–60 years, until we get to zero net emissions. (One implication of this analysis is that a large amount of existing oil and coal reserves will have to stay buried in the ground, which will be hard to ensure given how much money there is to be made in digging them up and selling them.)
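The arithmetic behind that budget is easy to check for yourself. Here’s a back-of-envelope sketch (the function name and the round numbers are mine, not from the paper: roughly 9–10 GtC/yr of emissions, growth of a few percent a year, against a remaining budget of 500 GtC); note how sensitive the answer is to the assumed growth rate:

```python
def years_to_exhaust(budget_gtc, annual_gtc, growth_rate):
    """Years until cumulative emissions use up the remaining budget,
    assuming emissions grow by a fixed fraction each year."""
    years = 0
    cumulative = 0.0
    while cumulative < budget_gtc:
        cumulative += annual_gtc
        annual_gtc *= 1.0 + growth_rate
        years += 1
    return years

# At a flat 10 GtC/yr a 500 GtC budget lasts 50 years; sustained
# growth in emissions shortens that horizon considerably:
print(years_to_exhaust(500.0, 10.0, 0.0))    # → 50
print(years_to_exhaust(500.0, 9.0, 0.025))   # → 36
```

Whatever numbers you plug in, the qualitative conclusion is the same: the budget runs out within a few decades unless the growth curve bends downwards.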

Anyway, there’s now a website with a set of counters to show how well we’re doing: Trillionthtonne.org. Er, not so well right now, actually.

Our group had three posters accepted for presentation at the upcoming AGU Fall Meeting. As the scientific program doesn’t seem to be amenable to linking, here are the abstracts in full:

Poster Session IN11D. Management and Dissemination of Earth and Space Science Models (Monday Dec 14, 2009, 8am – 12:20pm)

Fostering Team Awareness in Earth System Modeling Communities

S. M. Easterbrook; A. Lawson; and S. Strong
Computer Science, University of Toronto, Toronto, ON, Canada.

Existing Global Climate Models are typically managed and controlled at a single site, with varied levels of participation by scientists outside the core lab. As these models evolve to encompass a wider set of earth systems, this central control of the modeling effort becomes a bottleneck. But such models cannot evolve to become fully distributed open source projects unless they address the imbalance in the availability of communication channels: scientists at the core site have access to regular face-to-face communication with one another, while those at remote sites have access to only a subset of these conversations – e.g. formally scheduled teleconferences and user meetings. Because of this imbalance, critical decision making can be hidden from many participants, their code contributions can interact in unanticipated ways, and the community loses awareness of who knows what. We have documented some of these problems in a field study at one climate modeling centre, and started to develop tools to overcome these problems. We report on one such tool, TracSNAP, which analyzes the social network of the scientists contributing code to the model by extracting data from an existing project code repository. The tool presents the results of this analysis to modelers and model users in a number of ways: recommendations for who has expertise on particular code modules, suggestions for code sections that are related to files being worked on, and visualizations of team communication patterns. The tool is currently available as a plugin for the Trac bug tracking system.
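The repository-mining idea at the heart of this is simple: if two scientists keep editing the same files, they probably need to talk to each other. Here’s a toy sketch of that co-editing analysis (the commit data is invented, and this is only the kind of computation involved, not TracSNAP’s actual implementation):

```python
from collections import defaultdict
from itertools import combinations

# Toy commit log: (author, files touched). In a real tool this would
# be extracted from the project's version control repository.
commits = [
    ("alice", {"ocean.f90", "coupler.f90"}),
    ("bob",   {"coupler.f90", "atmos.f90"}),
    ("carol", {"atmos.f90"}),
    ("alice", {"atmos.f90", "ocean.f90"}),
]

# Collect the set of files each author has touched.
files_by_author = defaultdict(set)
for author, files in commits:
    files_by_author[author] |= files

# For each pair of authors, record the files they both work on --
# the edges of a simple co-editing social network.
shared = {}
for a, b in combinations(sorted(files_by_author), 2):
    overlap = files_by_author[a] & files_by_author[b]
    if overlap:
        shared[(a, b)] = overlap

print(shared)
```

Edges weighted by the size of these overlaps give you both the expertise recommendations (who else knows this file?) and the communication-pattern visualizations mentioned in the abstract.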

Poster Session IN31B. Emerging Issues in e-Science: Collaboration, Provenance, and the Ethics of Data (Wednesday Dec 16, 2009, 8am – 12:20pm)

Identifying Communication Barriers to Scientific Collaboration

A. M. Grubb; and S. M. Easterbrook
Computer Science, University of Toronto, Toronto, ON, Canada.

The lack of availability of the majority of scientific artifacts reduces credibility and discourages collaboration. Some scientists have begun to advocate for reproducibility, open science, and computational provenance to address this problem, but there is no consolidated effort within the scientific community. There does not appear to be any consensus yet on the goals of an open science effort, and little understanding of the barriers. Hence we need to understand the views of the key stakeholders – the scientists who create and use these artifacts.

The goal of our research is to establish a baseline and categorize the views of experimental scientists on the topics of reproducibility, credibility, scooping, data sharing, results sharing, and the effectiveness of the peer review process. We gathered the opinions of scientists on these issues through a formal questionnaire and analyzed their responses by topic.

We found that scientists see a provenance problem in their communications with the public. For example, results are published separately from supporting evidence and detailed analysis. Furthermore, although scientists are enthusiastic about collaborating and openly sharing their data, they do not do so out of fear of being scooped. We discuss these serious challenges for the reproducibility, open science, and computational provenance movements.

Poster Session GC41A. Methodologies of Climate Model Confirmation and Interpretation (Thursday Dec 17, 2009, 8am – 12:20pm)

On the software quality of climate models

J. Pipitone; and S. Easterbrook
Computer Science, University of Toronto, Toronto, ON, Canada.

A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated directly. Thus, in order to trust a climate model one must trust that the software it is built from is robust. Our study explores the nature of software quality in the context of climate modelling: How do we characterise and assess the quality of climate modelling software? We use two major research strategies: (1) analysis of defect densities of leading global climate models and (2) semi-structured interviews with researchers from several climate modelling centres. Defect density analysis is an established software engineering technique for studying software quality. We collected our defect data from bug tracking systems, version control repository comments, and from static analysis of the source code. As a result of our analysis, we characterise common defect types found in climate model software and we identify the software quality factors that are relevant for climate scientists. We also provide a roadmap to achieve proper benchmarks for climate model software quality, and we discuss the implications of our findings for the assessment of climate model software trustworthiness.
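For readers unfamiliar with the technique, defect density is just the number of recorded defects per thousand source lines of code (KLOC). The computation is trivial (the figures below are invented, purely to illustrate it); the hard part, as the abstract suggests, is deciding what counts as a defect and what the right benchmarks are:

```python
def defect_density(defects, sloc):
    """Defects per thousand source lines of code (KLOC)."""
    return 1000.0 * defects / sloc

# Invented example: 30 logged defects against a 400,000-line model
# gives a density of 0.075 defects/KLOC.
print(defect_density(30, 400_000))
```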

I’m visiting Microsoft this week, and am fascinated to discover the scope and expertise in climate change at Microsoft Research (MSR), particularly through their Earth, Energy and Environment theme (also known as E3).

Microsoft External Research (MER) is the part of MSR that builds collaborative research relationships with academic and other industrial partners. It is currently headed by Tony Hey, who was previously director of the UK’s e-science initiative (and obviously, as a fellow Brit, he’s a delight to chat to). Tony is particularly passionate about the need to communicate science to the broader public.

The E3 initiative within MER is headed by Dan Fay, who has a fascinating blog, where I found a pointer to a thought-provoking essay by Bill Gail (of the Virtual Earth project) in the Bulletin of the American Meteorological Society on Achieving Climate Sustainability. Bill opens up the broader discussion of what climate sustainability actually means (beyond the narrow focus on physical properties such as emissions of greenhouse gases). The core of his essay is the observation that humanity has now replaced nature as the controller of the entire climate system, despite the fact that we’re hopelessly ill-equipped either philosophically or politically to take on this role right now (this point was also made very effectively at the end of Gwynne Dyer’s book, and in Michael Tobis’ recent talk on the Cybernetics of Climate). More interestingly, Bill argues that we began to assume this role much earlier than most people think: about 7,000 years ago at the dawn of agricultural society, when we first started messing around with existing ecosystems.

The problem I have with Bill’s paper, though, is that he wants to expand the scope of the climate policy framework at a time when even the limited, weak framework we have is under attack from a concerted misinformation campaign. Back to that point about public understanding of the science: we have to teach the public the unavoidable physical facts about greenhouse gases first, to get at least to a broad consensus on the urgent need to move to a zero-carbon economy. We can’t start the broader discussion about longer-term climate sustainability until we establish a broad public understanding of the physics of greenhouse gases.

Each time you encounter someone trying to claim human-induced global warming is a myth (e.g. because “Mars is warming too!”), you can save a lot of time and energy by just saying, oh yes, that’s myth #16 on the standard list of misunderstandings about climate change. Here’s the list, lovingly and painstakingly put together by John Cook.

Once you’ve got that out of the way, you can then challenge your assailant to identify a safe level of carbon dioxide in the atmosphere, and to get them to give evidence to justify that choice. If they don’t feel qualified to answer this question, then you get to a teachable moment. Take the opportunity to teach your assailant the difference between greenhouse gas emissions and greenhouse gas concentrations. That’s the single most important thing they have to understand. Here’s why:

  • We know that the earth warms by somewhere between 2 and 4.5°C (with a best estimate of about 3°C) for each doubling of CO2 concentrations in the atmosphere (this was first calculated over 100 years ago. The number has been refined a little as we’ve come to understand the physical processes better, but only within a degree or two)
  • CO2 is unlike any other pollutant: once it’s in the atmosphere it stays there for centuries (more specifically, it stays in the carbon cycle, being passed around between plants, soil, oceans, and atmosphere. But anyway, it only ever goes away when it eventually gets laid down as a new fossil layer, e.g. at the bottom of the ocean).
  • The earth’s temperature only responds slowly to changes in the level of greenhouse gases in the atmosphere. That means that even though we’ve seen warming of around 0.7°C over the last century, we’re still owed at least that much again due to the CO2 we have already added to the atmosphere.
  • The temperature is not determined by the amount of CO2 we emit; it’s determined by the total accumulation in the atmosphere – i.e. how thick the “blanket” is.
  • Because the carbon stays there for centuries, all new emissions increase the concentration, thus compounding the problem. The only sustainable level of net greenhouse gas emissions from human activities is zero.
  • If we ever manage to get to the point where net emissions of greenhouse gases from human activities are zero, the planet will eventually (probably, over centuries) return to pre-industrial atmospheric concentration levels (about 270 parts per million), as the carbon gets reburied. During this time, the earth will continue to warm.
  • Net emissions are, of course, the difference between gross emissions and any carbon we manage to remove from the system artificially. As no technology currently exists for reliably and permanently removing carbon from the system, it would be prudent to aim for zero gross emissions. And the quicker we do it, the less the planet will warm in the meantime.
  • And 3°C of global average temperature change is about the difference between the last ice age (which ended about 12,000 years ago) and today’s climate. In the last ice age there were ice sheets 0.5km thick over much of North America and Europe. Now imagine how different the earth will be with a further 3°C of warming.
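The first bullet can be turned into simple arithmetic, because the warming response is roughly logarithmic in concentration: ΔT ≈ S × log₂(C/C₀). Here’s a quick check, using the best-estimate sensitivity of 3°C and the commonly quoted pre-industrial concentration of about 280 ppm (a deliberate simplification on my part: this ignores other greenhouse gases, and it gives the eventual equilibrium warming, not what we’d see immediately):

```python
import math

def equilibrium_warming(c_ppm, c0_ppm=280.0, sensitivity=3.0):
    """Equilibrium warming (degrees C) for a CO2 concentration of c_ppm,
    assuming `sensitivity` degrees per doubling (logarithmic response)."""
    return sensitivity * math.log2(c_ppm / c0_ppm)

# Warming implied by a doubling (560 ppm), and by roughly today's
# (2009) level of about 385 ppm:
print(equilibrium_warming(560.0))             # → 3.0
print(round(equilibrium_warming(385.0), 2))   # → 1.38
```

Note that the second number is equilibrium warming already committed; because of the slow response described in the third bullet, we’ve so far only seen about half of it.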

Okay, so that might be a little bit too much for just one teachable moment. What we really need is a simple elegant tool to illustrate all this. Anyone up to building an interactive visualization? John Sterman tried, but I don’t rate his tool high on the usability scale.

Survey studies are hard to do well. I’ve been involved in some myself, and have helped many colleagues to design them, and we nearly always end up with problems when it comes to the data analysis. They are a powerful way of answering base-rate questions (i.e. the frequency or severity of some phenomena) or for exploring subjective opinion (which is, of course, what opinion polls do). But most people who design surveys don’t seem to know what they are doing. My checklist for determining if a survey is the right way to approach a particular research question includes the following:

  • Is it clear exactly what population you are interested in?
  • Is there a way to get a representative sample of that population?
  • Do you have resources to obtain a large enough sample?
  • Is it clear what variables need to be measured?
  • Is it clear how to measure them?

Most research surveys have serious problems getting enough people to respond to ensure the results really are representative, and the people who do respond are likely to be a self-selecting group with particularly strong opinions about the topic. Professional opinion pollsters put a lot of work into adjustments for sampling bias, and still often get it wrong. Researchers rarely have the resources to do this (and almost never repeat a survey, so never have the data to do such adjustments anyway). There are also plenty of ways to screw up on the phrasing of the questions and answer modes, such that you can never be sure people have all understood the questions in the same way, and that the available response modes aren’t biasing their responses. (Kitchenham has a good how-to guide)
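A quick way to sanity-check the “large enough sample” question is the standard margin-of-error formula for an estimated proportion (this is textbook statistics, not specific to any survey discussed here):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p from a simple
    random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1.0 - p) / n)

# With 100 respondents the worst-case margin is about +/-10 points;
# it takes roughly 1000 to get it under +/-3.
print(round(margin_of_error(100), 3))    # → 0.098
print(round(margin_of_error(1000), 3))   # → 0.031
```

And crucially, the formula assumes a genuinely random sample – which is exactly the assumption most research surveys violate, so no number of self-selected respondents rescues the estimate.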

ClimateSight recently blogged about a fascinating, unpublished survey of whether climate scientists think the IPCC AR4 is an accurate representation of our current understanding of climate science. The authors themselves blog about their efforts to get the survey published here, here and here. Although they acknowledge some weaknesses to do with sampling size and representativeness, they basically think the survey itself is sound. Unfortunately, it’s not. As I commented on ClimateSight’s post, methodologically, this survey is a disaster. Here’s why:

The core problem with the paper is the design of the question and response modes. At the heart of their design is a 7-point Likert scale to measure agreement with the conclusions of the IPCC AR4. But this doesn’t work as a design for many reasons:

1) The IPCC AR4 is a massive document, with a huge number of different observations. Any climate scientist will be able to point to bits that are done better and bits that are done worse. Asking about agreement with it, without spelling out which of its many conclusions you’re asking about, is hopeless. When people say they agree or disagree with it, you have no idea which of its many conclusions they are reacting to.

2) The response mode used in the study has a built-in bias. If the intent is to measure the degree to which scientists think the IPCC accurately reflects, say, the scale of the global warming problem (whatever that means), then the central position on the 7-point scale should be “the IPCC got it right”. In the study, this is point 5 on the scale, which immediately introduces a bias, because there are twice as many response modes to the left of this position (“IPCC overstates the problem”) as there are to the right (“IPCC understates the problem”). In other words, the scale itself is biased towards one particular pole.
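You can see the problem with nothing more than arithmetic. Even if every respondent answered completely at random, the mean response would land on the “overstates” side of the study’s neutral point (the numbers here just restate the scale design, not the study’s data):

```python
# The scale runs 1..7, with "IPCC got it right" placed at point 5:
# four positions (1-4) read as "overstates" but only two (6-7) as
# "understates".
scale = range(1, 8)
neutral = 5

# Perfectly uniform -- i.e. completely uninformative -- responses
# still average below the neutral point:
mean = sum(scale) / len(scale)
print(mean, "vs neutral point", neutral)   # 4.0 vs neutral point 5

# A symmetric instrument would put the neutral label at the midpoint:
midpoint = (min(scale) + max(scale)) / 2
print(midpoint)                            # 4.0
```

So any summary statistic computed from this scale conflates genuine opinion with an artefact of the instrument.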

3) The study authors gave detailed descriptive labels to each position on the scale. Although it’s generally regarded as a good idea to give clear labels to each point on a Likert scale, the idea is that this should help users understand that the intervals on the scale are to be interpreted as roughly equivalent. The labels need to be very simple. The set of labels in this study ends up conflating a whole bunch of different ideas, each of which should be tested with a different question and a separate scale. For example, the labels include ideas such as:

  • fabrication of the science,
  • false hypotheses,
  • natural variation,
  • validity of models,
  • politically motivated scares,
  • diversion of attention,
  • uncertainties,
  • scientists who know what they’re doing,
  • urgency of action,
  • damage to the environment,

…and so on. Conflating all of these onto a single scale makes analysis impossible, because you don’t know which of the many ideas bundled into each response mode a respondent is agreeing or disagreeing with. A good survey instrument would ask about only one of these issues at a time.

4) Point 5 on the scale (the one interpreted as agreeing with the IPCC) includes the phrase “the lead scientists know what they are doing”. Yet the survey was sent out to a select group that includes many such lead scientists and their immediate colleagues. This form of wording immediately biases this group towards this response, regardless of what they think about the overall IPCC findings. Again, asking specifically about different findings in the IPCC report would be much more likely to find out what they really think; this study is likely to mask the range of opinions.

5) And finally, as other people have pointed out, the sampling method is very suspect. Although the authors acknowledge that they didn’t do random sampling, and that this limits the kinds of analysis they can do, it also means that any quantitative summary of the responses is likely to be invalid. There’s plenty of reason to suspect that significant clusters of opinion chose not to participate because they saw the questionnaire (especially given some of the wording) as suspect. Given the context for this questionnaire, within a public discourse where everything gets distorted sooner or later, many climate scientists would quite rationally refuse to participate in any such study. Which means really we have no idea if the distribution shown in the study represents the general opinion of any particular group of scientists at all.

So, it’s not surprising no-one wants to publish it. Not because of any concerns for the impact of its findings, but simply because it’s not a valid scientific study. The only conclusions that can be drawn from this study are existence ones:

  1. there exist some people who think the IPCC underestimated (some unspecified aspect of) climate change;
  2. there exist some people who think the IPCC overestimated (some unspecified aspect of) climate change and
  3. there exist some people who think the IPCC scientists know what they are doing.

The results really say nothing about the relative sizes of these three groups, nor even whether the three groups overlap!

Now, the original research question is very interesting, and worth pursuing. Anyone want to work on a proper scientific survey to answer it?

I’m teaching our introductory software engineering course this term, for which the students will be working on a significant software development project over the term. The main aim of the course is to get the students thinking about and using good software development practices and tools, and we organise the term project as an agile development effort, with a number of small iterations during the term. The students have to figure out for themselves what to build at each iteration.

For a project, I’ve challenged the students to design new uses for the Canadian Climate Change Scenarios Network. This service makes available the data on possible future climate change scenarios from the IPCC datasets, for a variety of end users. The current site allows users to run basic queries over the data set, and have the results returned either as raw data, or in a variety of visualizations. The main emphasis is on regional scenarios for Canada, so the service offers some basic downscaling, and the ability to couple the scenarios with other regional data sources, such as data from weather monitoring stations in the region. However, to use the current service, you need to know quite a bit about the nature of the data: it asks you which models you’re interested in; which years you want data for (assumes you know something about 30-year averages); which scenarios you want (assumes you know something about the standard IPCC scenarios); which region you want (in latitude and longitude); and which variables you want (assumes you know something about what these variables measure). The current design reflects the needs of the primary user group for which the service was developed – (expert) researchers working on climate impacts and adaptation.
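To illustrate one of those domain conventions: the “30-year averages” refer to standard climatological periods (e.g. 2041–2070), which a friendlier interface could compute behind the scenes rather than expecting the user to know about. A sketch of the idea (function name and data invented for illustration):

```python
def climatology(annual_values, start_year, period_start, period_length=30):
    """Mean of a variable over a standard climatological period
    (e.g. 2041-2070), given annual values starting at start_year."""
    i = period_start - start_year
    window = annual_values[i:i + period_length]
    if len(window) < period_length:
        raise ValueError("series too short for requested period")
    return sum(window) / period_length

# Invented annual mean temperatures: 10.0 for 2011-2040,
# then 12.0 for 2041-2070.
series = [10.0] * 30 + [12.0] * 30
print(climatology(series, 2011, 2041))   # → 12.0
```

A farmer-facing front end would hide even this, and just answer questions like “how much warmer will my growing season be mid-century?”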

The challenge for the students on my course is to extend the service for new user groups. For example, farmers who want to know something about likely effects of climate change on growing seasons, rainfall and heat stress in their local area. High school students studying climate and weather. Politicians who want to understand what the latest science tells us about the impacts of climate change on the constituencies they represent. Activists who want to present a simple clear message to policymakers about the need for policy changes. etc.

I have around 60 students on the course, working in teams of 4. I’m hoping that the various teams will come up with a variety of ideas for how to make this dataset useful to new user groups, and I’ve challenged them to be imaginative. But more suggestions are always welcome…

I’ve just been browsing the sessions for the AGU Fall Meeting, to be held in San Francisco in December. Abstracts are due by September 3. The following sessions caught my attention:

Plus some sessions that sound generally interesting:

The recording of my Software Engineering for the Planet talk is now available online. Having watched it, I’m not terribly happy with it – it’s too slow, too long, and I make a few technical mistakes. But hey, it’s there. For anyone already familiar with the climate science, I would recommend starting around 50:00 (slide 45) when I get to part 2 – what should we do?

[Update: A shorter (7 minute) version of the talk is now available]

The slides are also available as a pdf with my speaking notes (part 1 and part 2), along with the talk that Spencer gave in the original presentation at ICSE. I’d recommend these pdfs rather than the video of me droning on….

Having given the talk three times now, I have some reflections on how I’d do it differently. First, I’d dramatically cut down the first part on the climate science, and spend longer on the second half – what software researchers and software engineers can do to help. I also need to handle skeptics in the audience better. There’s always one or two, and they ask questions based on typical skeptic talking points. I’ve attempted each time to answer these questions patiently and honestly, but it slows me down and takes me off-track. I probably need to just hold such questions to the end.

Mistakes? There are a few obvious ones:

  • On slide 11, I present a synoptic view of the earth’s temperature record going back 500 million years (it’s this graph from wikipedia). I use it to put current climate change into perspective, but also to make the point that small changes in the earth’s temperature can be dramatic – in particular, the graph indicates that the difference between the last ice age and the current inter-glacial is about 2°C average global temperature. I’m now no longer sure this is correct. Most textbooks say it was around 8°C colder in the last ice age, but these appear to be based on an assumption that temperature readings taken from ice cores at the poles represent global averages. The temperature change at the poles is always much greater than the global average, but it’s hard to compute a precise estimate of global average temperature from polar records. Hansen’s reconstructions seem to suggest 3°C-4°C. So the 2°C rise shown on the wikipedia chart is almost certainly an underestimate. But I’m still trying to find a good peer-reviewed account of this question.
  • On slide 22, I talk about Arrhenius’s initial calculation of climate sensitivity (to a doubling of CO2) back in the 1890s. His figure was 4°C-5°C, whereas the IPCC’s current estimates are 2°C-4.5°C. And I need to pronounce his name correctly.

What’s next? I need to turn the talk into a paper…

When I was at the EGU meeting in Vienna in April, I attended a session on geoengineering, run by Jason Blackstock. During the session I blogged the main points of Jason’s talk, the key idea of which is that it’s time to start serious research into the feasibility and consequences of geoengineering, because it’s now highly likely we’ll need a plan B, and we’re going to need a much better understanding of what’s involved before we do it. Jason mentioned a brainstorming workshop, and the full report is now available: Climate Engineering Responses to Climate Emergencies. The report is an excellent primer on what we know currently about geoengineering, particularly the risks. It picks out stratospheric aerosols as the most likely intervention (from the point of view of both cost/feasibility, and current knowledge of effectiveness).

I got the sense from the meeting that we have reached an important threshold in the climate science community – previously geoengineering was unmentionable, for fear that it would get in the way of the serious and urgent job of reducing emissions. Alex Steffen explains this fear very well, and goes over the history of how the mere possibility of geoengineering has been used as an excuse by the denialists for inaction. And of course, from a systems point of view, geoengineering can only ever be a distraction if it tackles temperature (the symptom) rather than carbon concentrations (the real problem).

But the point made by Jason, and in the report, is that we cannot rule out the likelihood of climate emergencies – either very rapid warming triggered by feedback effects, or sudden onset of unanticipated consequences of (gradual) warming. In other words, changes that occur too rapidly for even the most aggressive mitigation strategies (i.e. emissions reduction) to have an effect on. Geoengineering then can be seen as “buying us time” to allow the mitigation strategies to work – e.g. slowing the warming by a decade or so, while we get on and decarbonize our energy supplies.

Now, maybe it’s because I’m looking out for them, but I’ve started to see a flurry of research interest in geoengineering. Oliver Morton’s article “Great White Hope” in April’s Nature gives a good summary of several meetings earlier this year, along with a very readable overview of some of the technology choices available. In June, the US National Academies announced a call for input on geoengineering which yielded a treasure trove of information – everything you’ve ever wanted to know about geoengineering. And yesterday, New Scientist reported that geoengineering has gone mainstream, with a lovely infographic illustrating some of the proposals.

Finally, along with technical issues of feasibility and risk, the possibility of geoengineering raises major new challenges for world governance. Who gets to decide which geoengineering projects should go ahead, and when, and what will we do about the fact that, by definition, all such projects will have a profound effect on human society, and those effects will be distributed unequally?

Update: Alan Robock has a brilliant summary in the Bulletin of the Atomic Scientists entitled 20 reasons why geo-engineering might be a bad idea.

In our climate brainstorming session last week, we invited two postdocs (Chris and Lawrence) from the atmospheric physics group to come and talk to us about their experiences of using climate models. Most of the discussion focussed on which models they use and why, and what problems they experience. They’ve been using the GFDL models, specifically AM2 and AM3 (atmosphere only models) for most of their research, largely because of legacy: they’re working with Paul Kushner, who is from GFDL, and the group now has many years experience working with these models. However, they’re now faced with having to switch to NCAR’s Community Climate System Model (CCSM). Why? Because the university has acquired a new IBM supercomputer, and the GFDL models won’t run on it (without a large effort to port them). The resulting dilemma reveals a lot about the current state of climate model engineering:

  • If they stick with the GFDL models, they can’t make use of the new supercomputer, hence miss out on a great opportunity to accelerate their research (they could do many more model runs).
  • If they switch to CCSM, they lose a large investment in understanding and working with the GFDL models. This includes both their knowledge of the model (some of their research involves making changes to the code to explore how perturbations affect the runs), and the investment in tools and scripts for dealing with model outputs, diagnostics, etc.

Of course, the obvious solution would be to port the GFDL models to the new IBM hardware. But this turns out to be hard because the models were never designed for portability. Right now, the GFDL models won’t even compile on the IBM compiler, because of differences in how picky different compilers are over syntax and style checking – climate models tend to have many coding idiosyncrasies that are never fixed because the usual compiler never complains about them: e.g. see Jon’s analysis of static checking NASA’s modelE. And even if they fix all these and get the compiler to accept the code, they’re still faced with extensive testing to make sure the models’ runtime behaviour is correct on the new hardware.

There’s also a big difference in support available. GFDL doesn’t have the resources to support external users (particularly ambitious attempts to port the code). In contrast, NCAR has extensive support for the CCSM, because they have made community building an explicit goal. Hence, CCSM is much more like an open source project. Which sounds great, but it also comes at a cost. NCAR have to devote significant resources to supporting the community. And making the model open and flexible (for use by a broader community) hampers their ability to get the latest science into the model quickly. Which leads me to hypothesize that it is the diversity of your user-base that most restricts the ongoing rate of evolution of a software system. For a climate modeling center like GFDL, if you don’t have to worry about developing for multiple platforms and diverse users, you can get new ideas into the model much quicker.

Which brings me to a similar discussion over the choice of weather prediction models in the UK. Bryan recently posted an article about the choice between WRF (NCAR’s mesoscale weather model) versus the UM (the UK Met Office’s model). Alan posted a lengthy response which echoes much of what I said above (but with much more detail): basically the WRF is well supported and flexible for a diverse community. The UM has many advantages (particularly speed), but is basically unsupported outside the Met Office. He concludes that someone should re-write the UM to run on other hardware (specifically massively parallel machines), and presumably set up the kind of community support that NCAR has. But funding for this seems unlikely.

Big news today: The G8 summit declares that climate change mitigation policies should aim to limit global temperature increases to no more than 2°C above 1900 levels. This is the limit that Europe adopted as a guideline a long time ago, and which many climate scientists generally regard as an important threshold. I’ve asked many climate scientists why this particular threshold, and the answer is generally because above this level many different positive feedback effects start to kick in, which will amplify the warming and take us into really scary scenarios.

Not all scientists agree this threshold is a sensible target for politicians though. For example, David Victor argues that the 2°C goal is a political delusion. The gist of his argument is that targets such as 2°C are neither safe (because nobody really knows what is safe) nor achievable. He suggests that rather than looking at long term targets such as a temperature threshold, or a cumulative emissions target, politicians need to focus on a series of short-term, credible promises, which, when achieved, will encourage greater efforts.

The problem with this argument is that it misses the opportunity to consider the bigger picture, and understand the enormity of the problem. First, although 2°C doesn’t sound like much, in the context of the history of the planet, it’s huge. James Hansen and colleagues put this into context best, by comparing current warming with the geological record. They point out that with the warming we have already experienced over the last century, the earth is now about as warm as the Holocene Maximum (about 5,000-9,000 years ago), and within 1°C of the maximum temperature of the last million years. If you look at Hansen’s figures (e.g. fig 5, shown below) you’ll see the difference in temperature between the ice ages and the interglacials is around 3°C. For example, the end of the last ice age, about 12,000 years ago, shows up as the last big rise on this graph (a rise from 26°C to 29°C):


The wikipedia entry on the Geologic temperature record has a nice graph pasting together a number of geological temperature records to get the longer term view (but it’s not peer-reviewed, so it’s not clear how valid the concatenation is). Anyway, Hansen concludes that 1°C above the 2000 temperature is already in the region of dangerous climate change. If a drop of 3°C is enough to cause an ice age, a rise of 2°C is pretty significant for the planet.

Even more worrying is that some scientists think we’re already committed to more than a 2°C rise, based on greenhouse gases already emitted in the past, irrespective of what we do from today onwards. For example, Ramanathan and Feng’s paper in PNAS – Formidable challenges ahead – sets up a scenario in which greenhouse gas concentrations are fixed at 2005 levels (i.e. no new emissions after 2005) and finds that the climate eventually stabilizes at +2.4°C (with a 95% confidence interval of 1.4°C to 4.3°C). In other words, it is more than likely that we’ve already committed to more than 2°C even if we stopped burning all fossil fuels today. [Note: when you read these papers, take care to distinguish between emissions and concentrations]. Now of course, R&F made some assumptions that can be challenged. For example, they ignored the cooling effect of other forms of pollution (e.g. atmospheric aerosols). There’s a very readable editorial comment on this paper by Hans Schellnhuber in which he argues that, while it’s useful to be reminded how much our greenhouse gas warming is being masked by other kinds of air pollution, it is still possible to keep below 2°C if we halve emissions by 2050. Here’s his graph (A), compared with R&F’s (B):


The lower line on each graph is for constant concentrations from 2005 (i.e. no new emissions). The upper curves are projections for reducing emissions by 50% by 2050. The difference is that in (A), other forcings from atmospheric pollution are included (dirty air and smog, as per usual…).
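The constant-concentration commitment can be roughly reproduced with a textbook back-of-envelope calculation. This is not R&F’s actual method (they use a full analysis with uncertainty ranges); the CO2-equivalent concentration and climate sensitivity below are my own assumed round numbers, using the standard logarithmic forcing approximation:

```python
import math

# Back-of-envelope committed warming for constant concentrations, in the
# spirit of R&F's thought experiment (simplified assumed numbers, not theirs).
C_2005 = 455.0   # assumed CO2-equivalent concentration in 2005 (ppm)
C_PRE  = 280.0   # pre-industrial CO2 concentration (ppm)
F_2X   = 3.7     # radiative forcing per CO2 doubling (W/m^2)
S      = 3.0     # assumed equilibrium climate sensitivity (deg C per doubling)

# Standard logarithmic approximation for CO2 forcing:
forcing = 5.35 * math.log(C_2005 / C_PRE)   # W/m^2
committed = S * forcing / F_2X              # equilibrium warming, deg C

print(f"forcing = {forcing:.2f} W/m^2, committed warming = {committed:.1f} C")
```

With these assumptions the answer comes out just over 2°C, in the same ballpark as R&F’s central estimate of 2.4°C; note this ignores aerosol cooling, which is precisely the point of contention between the two graphs.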

But that’s not the end of the story. Both of the above graphs are essentially just “back of the envelope” calculations, based on data from previous studies such as those included in the IPCC 2007 assessment. More recent research has investigated these questions directly, using the latest models. The latest analysis is much more like R&F’s graph (B) than Schellnhuber’s graph (A). RealClimate has a nice summary of two such papers in Nature, in April 2009. Basically, if developed countries cut their emissions by 80% by 2050, we still only get a 50% chance of sticking below the 2°C threshold. Another way of putting this is that a rise of 2°C is about the best we can hope for, even with the most aggressive climate policies imaginable. Parry et al argue we should be prepared for rises of around 4°C. And I’ve already blogged what that might be like.
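It’s worth seeing how a “halve by 2050” pathway stacks up against the half-trillion-tonne remaining budget from the trillionth-tonne analysis. Here’s a crude cumulative tally; the starting emissions rate and the straight-line decline are my own illustrative assumptions, not any published scenario:

```python
# Crude cumulative-emissions tally for a "halve by 2050" pathway.
# Assumed numbers (for illustration only): ~10 GtC/yr around 2009,
# a straight-line decline to 5 GtC/yr by 2050, then a straight-line
# decline to zero net emissions by 2100.

def linear_path(start_year, end_year, e_start, e_end):
    """Annual emissions (GtC/yr) interpolated linearly between two years."""
    span = end_year - start_year
    return [e_start + (e_end - e_start) * (y - start_year) / span
            for y in range(start_year, end_year + 1)]

phase1 = linear_path(2009, 2050, 10.0, 5.0)   # halve by 2050
phase2 = linear_path(2051, 2100, 4.9, 0.0)    # then down to zero by 2100
total = sum(phase1) + sum(phase2)             # cumulative GtC

print(f"cumulative 2009-2100: {total:.0f} GtC")
# Compare with the ~500 GtC (half a trillion tonnes of carbon) that the
# trillionth-tonne analysis says remains in the budget.
```

Under these assumptions the pathway squeaks in under the budget, but only because emissions keep falling all the way to zero after 2050; flatten the curve anywhere and the budget blows.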

So, nice to hear the G8 leaders embrace the science. But what we really need is for them to talk about how bad it really is. And we need action. And fast.

(Update: Gareth Renowden has a lengthier post with more detail on this, framed by discussion of what NZ’s targets should be.)

I thought this sounded very relevant: the 4th International Verification Methods Workshop. Of course, it’s not about software verification, but rather about verification of weather forecasts. The slides from the tutorials give a good sense of what verification means to this community (especially the first one, on verification basics). Much of it is statistical analysis of observational data and forecasts, but there are some interesting points on what verification actually means – for example, to do it properly you have to understand the user’s goals – a forecast (e.g. one that puts the rainstorm in the wrong place) might be useless for one purpose (e.g. managing flood defenses) but be very useful for another (e.g. aviation). Which means no verification technique is fully “objective”.
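The bread and butter of this kind of verification is computing skill scores from a contingency table of forecasts versus observations. Here’s a sketch of the standard categorical scores for a yes/no forecast (say, “rain tomorrow?”); the counts are made up for illustration:

```python
# Standard categorical verification scores for a yes/no forecast,
# computed from a 2x2 contingency table (counts are invented).
hits, misses, false_alarms, correct_negs = 82, 38, 23, 222
total = hits + misses + false_alarms + correct_negs

pod = hits / (hits + misses)                     # probability of detection
far = false_alarms / (hits + false_alarms)      # false alarm ratio
bias = (hits + false_alarms) / (hits + misses)  # frequency bias

# Equitable Threat Score: threat score corrected for hits expected by chance.
hits_by_chance = (hits + misses) * (hits + false_alarms) / total
ets = (hits - hits_by_chance) / (hits + misses + false_alarms - hits_by_chance)

print(f"POD={pod:.2f}  FAR={far:.2f}  bias={bias:.2f}  ETS={ets:.2f}")
```

Note how none of these scores mention the model’s code at all: “verification” here is entirely a statistical comparison of outputs against the world, which is exactly the contrast with software verification drawn below.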

What I find interesting is that this really is about software verification – checking that large complex software systems (i.e. weather forecast models) do what they are supposed to do (i.e. accurately predict weather), but there is no mention anywhere of the software itself; all the discussion is about the problem domain. You don’t get much of that at software verification conferences…

First, we have to be clear what we mean by a climate model. Wikipedia offers a quick intro to types of climate model. For example:

  • zero dimension models, essentially just a set of equations for the earth’s radiation balance
  • 1-dimensional models – for example, models that take latitude into account, since the angle of the sun’s rays matters
  • EMICS – earth-system models of intermediate complexity
  • GCMs – General Circulation Models (a.k.a Global Climate Models), which model the atmosphere in four dimensions (3D+time), by dividing it into a grid of cubes, and solving the equations of fluid motion for each cube at each time step. While the core of a GCM is usually the atmosphere model, GCMs can be coupled to three dimensional ocean models, or run uncoupled, so that you can have A-GCMs (atmosphere only), and AO-GCMs (atmosphere and ocean). Ocean models are just called ocean models 🙂
  • Earth System Models – Take a GCM, and couple it to models of other earth system processes: sea ice, land ice, atmospheric chemistry, the carbon cycle, human activities such as energy consumption and economics, and so on.
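The first category in the list above, the zero-dimensional model, is simple enough to fit in a few lines: absorbed solar radiation balances blackbody emission, and you solve for temperature. This is the textbook effective-temperature calculation, not any particular research code:

```python
# Minimal zero-dimensional energy-balance model: absorbed solar radiation
# balances blackbody emission, S0 * (1 - albedo) / 4 = sigma * T^4.
S0 = 1361.0        # solar constant (W/m^2)
ALBEDO = 0.3       # planetary albedo (fraction of sunlight reflected)
SIGMA = 5.67e-8    # Stefan-Boltzmann constant (W/m^2/K^4)

T_eff = (S0 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"effective temperature: {T_eff:.1f} K ({T_eff - 273.15:.1f} C)")
```

The answer comes out around 255 K, about 33°C colder than the observed surface temperature; that gap is the greenhouse effect, and everything in the rest of the list is an elaboration on resolving it in more detail.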

Current research tends to focus on Earth System Models, but for the last round of the IPCC assessment, AO-GCMs were used to generate most of the forecast runs. Here are the 23 AO-GCMs used in the IPCC AR4 assessment, with whatever info I could find about availability of each model:

Now, if you were paying attention, you’ll have noticed that that wasn’t 23 bullet points. Some labs contributed runs from more than one version of their model(s), which is how the totals add up.

Short summary: easiest source code to access: (1) IPSL (includes Trac access!), (2) CCSM and (3) ModelE.

Future work: take a look at the additional models that took part in the Coupled Model Inter-comparison Project (CMIP-3), and see if any of them are also available.

Update: RealClimate has started compiling a fuller list of available codes and datasets.

This morning, while doing some research on availability of code for climate models, I came across a set of papers published by the Royal Society in March 2009 reporting on a meeting on the Environmental eScience Revolution. This looks like the best collection of papers I’ve seen yet on the challenges in software engineering for environmental and climate science. These will keep me going for a while, but here are the papers that most interest me:

And I’ll probably have to read the rest as well. Interestingly, I’ve met many of these authors. I’ll have to check whether any followup meetings are planned…