First, we have to be clear what we mean by a climate model. Wikipedia offers a quick intro to types of climate model. For example:

  • zero-dimensional models, essentially just a set of equations for the earth’s radiation balance
  • 1-dimensional models – for example, where you take latitude into account, as the angle of the sun’s rays matters
  • EMICs – earth-system models of intermediate complexity
  • GCMs – General Circulation Models (a.k.a. Global Climate Models), which model the atmosphere in four dimensions (3D + time), by dividing it into a grid of cubes, and solving the equations of fluid motion for each cube at each time step (see the toy sketch after this list). While the core of a GCM is usually the atmosphere model, GCMs can be coupled to three-dimensional ocean models, or run uncoupled, so that you can have A-GCMs (atmosphere only) and AO-GCMs (atmosphere and ocean). Ocean models are just called ocean models :-)
  • Earth System Models – Take a GCM, and couple it to models of other earth system processes: sea ice, land ice, atmospheric chemistry, the carbon cycle, human activities such as energy consumption and economics, and so on.
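
To make the grid-of-cubes idea a little more concrete, here is a toy caricature (my own sketch, nothing like a real model: simple diffusion stands in for the equations of fluid motion, and the initial state is made up):

```python
# A toy caricature of the GCM "grid of cubes" idea: one state variable
# (temperature) per grid cell, updated at every time step from its
# neighbours' values. Diffusion stands in for the real fluid dynamics.
import numpy as np

nlat, nlon = 36, 72              # a coarse 5-degree latitude/longitude grid
dt, kappa = 1.0, 0.1             # time step and mixing coefficient (arbitrary)
T = 288 + 10 * np.random.rand(nlat, nlon)   # fake temperature field, in Kelvin

def step(T):
    # Wrap east-west (the globe is periodic in longitude); crudely clamp
    # the latitude edges rather than handling the poles properly.
    north = np.vstack([T[:1, :], T[:-1, :]])
    south = np.vstack([T[1:, :], T[-1:, :]])
    east = np.roll(T, -1, axis=1)
    west = np.roll(T, 1, axis=1)
    return T + dt * kappa * (north + south + east + west - 4 * T)

for _ in range(100):             # run 100 time steps
    T = step(T)
```

A real GCM does the same kind of sweep over the grid, but with many coupled variables per cell, proper spherical geometry, and parameterizations for all the physics that happens inside a cell.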

Current research tends to focus on Earth System Models, but for the last round of the IPCC assessment, AO-GCMs were used to generate most of the forecast runs. Here are the 23 AO-GCMs used in the IPCC AR4 assessment, with whatever info I could find about the availability of each model:

Now, if you were paying attention, you’ll have noticed that that wasn’t 23 bullet points. Some labs contributed runs from more than one version of their model(s), so the numbers do add up.

Short summary – the easiest source code to access: (1) IPSL (includes Trac access!), (2) CCSM, and (3) ModelE.

Future work: take a look at the additional models that took part in the Coupled Model Intercomparison Project (CMIP3), and see if any of them are also available.

Update: RealClimate has started compiling a fuller list of available codes and datasets.

This morning, while doing some research on availability of code for climate models, I came across a set of papers published by the Royal Society in March 2009 reporting on a meeting on the Environmental eScience Revolution. This looks like the best collection of papers I’ve seen yet on the challenges in software engineering for environmental and climate science. These will keep me going for a while, but here are the papers that most interest me:

And I’ll probably have to read the rest as well. Interestingly, I’ve met many of these authors. I’ll have to check whether any followup meetings are planned…

Here’s a very sketchy second draft for a workshop proposal for the fall. I welcome all comments on this, together with volunteers to be on the organising team. Is this a good title for the workshop? Is the abstract looking good? What should I change?

Update: I’ve jazzed up and rearranged the list of topics, in response to Steffen’s comment to get a better balance between research likely to impact SE itself, vs. research likely to impact other fields.

The First International Workshop on Software Research and Climate Change (WSRCC-1)

In conjunction with: Onward Conference 2009 (<http://onward-conference.org/>) and OOPSLA 2009 (<http://www.oopsla.org/oopsla2009/>)

Workshop website: <http://www.cs.toronto.edu/wsrcc>

ABSTRACT

This workshop will explore the contributions that software research can make to the challenge of climate change. Climate change is likely to be the defining issue of the 21st century. Recent studies indicate that climate change is accelerating, confirming the most pessimistic of the scenarios identified by climate scientists. Our current use of fossil fuels commits the world to around 2°C of average temperature rise during this century, and, unless urgent and drastic cuts are made, further heating is likely to trigger any of a number of climate change tipping points. The results will include a dramatic reduction in food production and water supplies, more extreme weather events, the spread of disease, sea level rise, ocean acidification, and mass extinctions. We are faced with the twin challenges of mitigation (avoiding the worst climate change effects by rapidly transitioning the world to a low-carbon economy) and adaptation (re-engineering the infrastructure of modern society so that we can survive and flourish on a hotter planet).

These challenges are global in nature, and pervade all aspects of society. To address them, we will need researchers, engineers, policymakers, and educators from many different disciplines to come to the table and ask what they can contribute. There are both short-term challenges (such as how to deploy, as rapidly as possible, existing technology to produce renewable energy, and how to design government policies and international treaties to bring greenhouse gas emissions under control) and long-term challenges (such as how to complete the transition to a global carbon-neutral society by the latter half of this century). In nearly all of these challenges, software has a major role to play as a critical enabling technology.

So, for the software research community, we can frame the challenge as follows: How can we, as experts in software technology, and as the creators of future software tools and techniques, apply our particular knowledge and experience to the challenge of climate change? How can we understand and exploit the particular intellectual assets of our community — our ability to:

  • think computationally;
  • understand and model complex inter-related systems;
  • build useful abstractions and problem decompositions;
  • manage and evolve large-scale socio-technical design efforts;
  • build the information systems and knowledge management tools that empower effective decision-making;
  • develop and verify complex control systems on which we now depend;
  • create user-friendly and task-appropriate interfaces to complex information and communication infrastructures.

In short, how can we apply our research strengths to make significant contributions to the problems of mitigation of and adaptation to climate change?

This workshop will be the first in a series, intended to develop a community of researchers actively engaged in this challenge, and to flesh out a detailed research agenda that leverages existing research ideas and capabilities. Therefore we welcome any kind of response to this challenge statement.

WORKSHOP TOPICS

We welcome the active participation of software researchers and practitioners interested in any aspect of this challenge. The participants will themselves determine the scope and thrusts of this workshop, so this list of suggested topics is intended to act only as a starting point:

  • requirements analysis for complex global change problems;
  • integrating sustainability into software system design;
  • green IT, including power-aware computing and automated energy management;
  • developing control systems to create smart energy grids and improve energy conservation;
  • developing information systems to support urban planning, transport policies, green buildings, etc.;
  • software tools for open collaborative science, especially across scientific disciplines;
  • design patterns for successful emissions reduction strategies;
  • social networking tools to support rapid action and knowledge sharing among communities;
  • educational software for hands-on computational science;
  • knowledge management and decision support tools for designing and implementing climate change policies;
  • tools and techniques to accelerate the development and validation of earth system models by climate scientists;
  • data sharing and data management of large scientific datasets;
  • tools for creating and sharing visualizations of climate change data;
  • (more…?)

SUBMISSIONS AND PARTICIPATION

Our intent is to create a lively, interactive discussion, to foster brainstorming and community building. Registration will be open to all. However, we strongly encourage participants to submit (one or more) brief (1-page) responses to the challenge statement, either as:

  • Descriptions of existing research projects relevant to the challenge statement (preferably with pointers to published papers and/or online resources);
  • Position papers outlining potential research projects.

Be creative and forward-thinking in these proposals: think of the future, and think big!

There will be no formal publication of proceedings. Instead we will circulate all submitted papers to participants in advance of the workshop, via the workshop website, and invite participants to revise/update/embellish their contributions in response to everyone else’s contributions. Our plan is to write a post-workshop report, which will draw on both the submitted papers and the discussions during the workshop. This report will lay out a suggested agenda for both short-term and long-term research in response to the challenge, and act as a roadmap for subsequent workshops and funding proposals.

IMPORTANT DATES

Position paper submission deadline: September 25th, 2009

Workshop on Software Research and Climate Change: October 25th or 26th, 2009

WORKSHOP ORGANIZERS

TBD

I have a couple of PhD theses that I have to read this week, so I probably won’t get to read the new Synthesis Report from the Copenhagen Conference for a few days. But it looks like it will be worth the wait. According to RealClimate, it’s a thorough look at how much the science has changed since the IPCC AR4 report in 2007, along with some serious discussion on what needs to be done to hold the world below the threshold of 2°C of warming.

I gave my talk on SE for the Planet again this afternoon, to the local audience. We recorded it, and will get the whole thing up on the web soon.

I mentioned during the talk that the global greenhouse emissions growth curve and the world population growth curve are almost identical, and speculated that this means that emissions per capita have effectively not changed over the last century or so. After the talk, Jonathan pointed out to me that it means no such thing. While globally the average emissions per capita might have remained roughly constant, the average probably hides several very different trends. For example, in the industrialized world, emissions per capita appear to have grown dramatically, while population growth has slowed. In contrast, in the undeveloped world the opposite has happened: huge population growth, with very little emissions growth. When you average both trends you get an apparently static per capita emissions rate.

Anyway, this observation prompted me to go back and look at the data. I’d originally found this graph, which appears to show the growth curves are almost identical:

Greenhouse gas emissions versus population growth

The problem is, I didn’t check the credibility of the source. The graph comes from the site World Climate Report, which turns out to be a denialist site, full of all sorts of misinformation. In this case, they appear to have cooked the graph (note the low resolution and wide aspect ratio) to make the curves look like they fit much better than they really do. To demonstrate this, I reconstructed them myself.

I got amazingly detailed population data by year from digital survivors. They’ve done a wonderful job of collating data from many different sources, although their averaging technique does lead to the occasional anomaly (e.g. in 1950, there’s a change in availability of source datasets, and it shows up as a tiny glitch on my graph). I got the CO2 emissions data from the US government’s Carbon Dioxide Information Analysis Center (CDIAC).
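
For anyone who wants to replicate the reconstruction, here is a minimal sketch of the idea (my own code, with hypothetical filenames standing in for the two downloaded datasets): normalize both series to a common baseline year, so the curves share a scale and no aspect-ratio tricks are possible, and then compare growth rates directly:

```python
# Minimal sketch of the reconstruction. Assumes the two datasets have been
# saved as year,value CSV files (population.csv and emissions.csv are
# hypothetical filenames of my own choosing).
import csv
import matplotlib.pyplot as plt

def read_series(path):
    """Read a year,value CSV into parallel lists of years and values."""
    years, values = [], []
    with open(path) as f:
        for row in csv.DictReader(f):
            years.append(int(row["year"]))
            values.append(float(row["value"]))
    return years, values

pop_years, pop = read_series("population.csv")   # hypothetical filename
co2_years, co2 = read_series("emissions.csv")    # hypothetical filename

# Normalize each series to its first (1850) value, so both curves start at
# 1.0 and their shapes can be compared honestly.
plt.plot(pop_years, [v / pop[0] for v in pop], label="World population (1850 = 1)")
plt.plot(co2_years, [v / co2[0] for v in co2], label="Global CO2 emissions (1850 = 1)")
plt.xlabel("Year")
plt.ylabel("Growth relative to 1850")
plt.legend()
plt.savefig("pop_vs_co2.png", dpi=150)

def growth_rate(years, values, start, end):
    """Average annual growth rate between two years, as a fraction."""
    v0, v1 = values[years.index(start)], values[years.index(end)]
    return (v1 / v0) ** (1.0 / (end - start)) - 1

print("Population growth per year, 2000-2006: %.2f%%" % (100 * growth_rate(pop_years, pop, 2000, 2006)))
print("Emissions growth per year, 2000-2006:  %.2f%%" % (100 * growth_rate(co2_years, co2, 2000, 2006)))
```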

Here’s the graph from 1850 to 2006 (click it for a higher resolution version):

World Population vs. Global CO2 emissions 1850-2006

Notice that emissions grew much more sharply than population from 1950 onwards, with the only exceptions being during the economic recessions of the early 1980s, early 1990s, and around 2000. Since 2000, emissions have been growing at something close to double the population growth rate. So, I think that effectively explodes the myth that population growth alone explains emissions growth. It also demonstrates the importance of checking your sources before quoting them…

The US government has launched a very swanky new inter-agency website, www.globalchange.gov, intended to act as a central point for news and reports on climate change. And the centre-piece of the launch is a major new report on Climate Change Impacts in the US. It draws on the IPCC reports for much of the science, but it adds to that a lot of detail on how the impacts of climate change are already being seen across the US, and how much worse it’s going to get. Definitely worth reading, especially the parts on drought and heat waves. (The brochures and factsheets are good too)

Add to that a recent report from CNA on Risks to (US) National Security from energy dependence and climate change. It includes some fascinating snippets from senior retired US military admirals and generals, including, for example, the threat to the US’s military capacity posed by sea level rise (e.g. loss of coastal installations) and by disruption to energy supplies. It also argues that the biggest cause of conflict in the 21st century is likely to be mass migrations of entire populations, seeking water and food as supplies dwindle (but if you’ve read Climate Wars, you already knew this).

15. June 2009 · Categories: blogging

Ever since I passed about 20 posts, I’ve been wishing for a contents listing for the blog. I think one of the weakest parts of blogging software is poor navigability. The dominant mode of access for most blogs is the (reverse) chronology. Which is fine, because that matches the dominant design metaphor. But it’s implemented badly in most blogging tools – by default you can go backwards one page full of posts at a time, with no ability to get a preview of what’s further back. Some people choose to add shortcuts of various kinds to their blogs: a calendar (actually a month-based index), a list of popular/favourite/recent posts, a list of recent comments, a tag cloud, etc. And of course, you can always search for keywords. These all help to address the navigability problem a little, but none of them really provide the missing synoptic view of past contents.

I think the net result is that blogs have an enforced ephemeral nature – once a post has scrolled off the bottom of the first page, it will probably never be seen by the casual visitor to the site – the only likely paths to it are hardlinks from other blog posts, or Google hits.

Which is why I’ve always wanted a contents listing. A page with the titles (& links) to all the past blog posts, arranged in some convenient order. So that casual visitors to the blog can see what they’ve missed. And get a sense of what the blog is all about – a bit like wandering up and down the shelves in a library, except that nobody does that anymore.

Today, I found a tool that does most of what I want. It creates an automated sitemap, for human consumption (as opposed to the machine-oriented sitemaps that Google feeds on). It makes use of the category labels, and I think it might take some time for me to figure out how to use the category headings for best effect. But I like it so far. See for yourself.
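
For flavour, here is roughly what such a contents generator has to do (an illustrative toy of my own, not the actual plugin’s code):

```python
# Toy contents-page generator: group post titles under their category
# labels and emit one human-readable HTML page. The posts list below is
# made up; a real version would pull titles from the blog database.
from collections import defaultdict

posts = [
    ("Validating climate models", "/?p=123", "climate science"),   # made-up examples
    ("Workshop proposal: WSRCC-1", "/?p=124", "workshops"),
    ("Dunning-Kruger and climate denial", "/?p=125", "psychology"),
]

by_category = defaultdict(list)
for title, url, category in posts:
    by_category[category].append((title, url))

lines = ["<h1>Contents</h1>"]
for category in sorted(by_category):
    lines.append("<h2>%s</h2>" % category)
    lines.append("<ul>")
    for title, url in sorted(by_category[category]):
        lines.append('  <li><a href="%s">%s</a></li>' % (url, title))
    lines.append("</ul>")

print("\n".join(lines))
```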

11. June 2009 · Categories: psychology

I’ve been thinking a lot recently about why so few people seem to understand the severity of the climate crisis. Joe Romm argues that it’s a problem of rhetoric: the deniers tend to be excellent at rhetoric, while scientists are lousy at it. He suggests that climate scientists are pretty much doomed to lose in any public debates (and hence debates on this are a bad idea).

But even away from the public stage, it’s very frustrating trying to talk to people who are convinced climate change isn’t real, because, in general, they seem unable to recognize fallacies in their own arguments. One explanation seems to be the Dunning-Kruger effect – a cognitive bias in people’s subjective assessment of their (in)competence. The classic paper is “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments” (for which title, Dunning and Kruger were awarded an Ig Nobel prize in 2000 :-) ). There’s also a brilliant YouTube movie that explains the paper. The bottom line is that the people who are most wrong are the least able to perceive it.

In a follow-up paper, they describe a number of experiments that investigate why people fail to recognize their own incompetence. Turns out one of the factors is that people take a “top-down” approach in assessing their competence. People tend to assess how well they did in some task based on their preconceived notions of how good they are at the skills needed, rather than any reflection on their actual performance. For example, in one experiment, the researchers gave subjects a particular test. Some were told it was a test of abstract reasoning ability (which the subjects thought they were good at). Others were told it was a test of programming ability (which the subjects thought they were bad at). It was, of course, the same test, and the subjects from both groups did equally well on it. But their estimation of how well they did depended on what kind of test they thought it was.

There’s also an interesting implication for why women tend to drop out of science and technology careers – women tend to rate themselves as less scientifically talented than men, regardless of their actual performance. This means that, even when women are performing just as well as their male colleagues, they will still tend to rate themselves as doing less well, because of this top-down assessment bias.

To me, the most interesting part of the research is a whole bunch of graphs that look like this:

Perceived vs. actual test performance, by quartile (after Kruger and Dunning)

People who are in the bottom quartile of actual performance tend to dramatically over-estimate how well they did. People in the top quartile tend to slightly under-estimate how well they did.

This explains why scientists have a great difficulty convincing the general public about important matters such as climate change. The most competent scientists will systematically under-estimate their ability, and will be correspondingly modest when presenting their work. The incompetent (e.g. those who don’t understand the science) will tend to vastly over-inflate their professed expertise when presenting their ideas. No wonder the public can’t figure out who to believe. Furthermore, people who just don’t understand basic science also tend not to realize it.

Which also leads me to suggest that if you want to find the most competent people, look for people who tend to underestimate their abilities. It’s rational to believe people who admit they might be wrong.

Disclaimer: psychology is not my field – I might have completely misunderstood all this.

I posted some initial ideas for projects for our summer students a while back. I’m pleased to say that the students have been making great progress in the last few weeks (despite, or perhaps because of, the fact that I haven’t been around much). Here’s what they’ve been up to:

Sarah Strong and Ainsley Lawson have been exploring how to take the ideas on visualizing the social network of a software development team (as embodied in tools such as Tesseract) and apply them as simple extensions to code browsers / version control tools. The aim is to see if we can add some value in the form of better awareness of who is working on related code, but without asking the scientists to adopt entirely new tools. Our initial target users are the climate scientists at the UK Met Office Hadley Centre, who currently use SVN/Trac as their code management environment.
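
To give a flavour of the raw material involved, here is my own illustrative sketch (not the students’ code) of mining an SVN log for a simple “who works on related code” network:

```python
# Link any two developers who have committed to a common file -- the raw
# material for "who is working on related code" awareness. Assumes the
# repository log was saved beforehand with:  svn log -v --xml > log.xml
import xml.etree.ElementTree as ET
from collections import defaultdict
from itertools import combinations

# Collect the set of files each author has touched.
files_by_author = defaultdict(set)
for entry in ET.parse("log.xml").getroot().iter("logentry"):
    author = entry.findtext("author")
    for path in entry.iter("path"):
        files_by_author[author].add(path.text)

# Weight each developer pair by the number of files they have both touched.
edges = {}
for a, b in combinations(sorted(files_by_author), 2):
    shared = files_by_author[a] & files_by_author[b]
    if shared:
        edges[(a, b)] = len(shared)

for (a, b), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print("%s <-> %s: %d shared files" % (a, b, weight))
```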

Brent Mombourquette has been working on a Firefox extension that will capture the browsing history as a graph (pages and traversed links), which can then be visualized, saved, annotated, and shared with others. The main idea is to support the way in which scientists search/browse for resources (e.g. published papers on a particular topic), and to allow them to recall their exploration path, to remember the context in which they obtained these resources. I should mention that the key idea goes all the way back to Vannevar Bush’s memex.
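
The underlying data model is simple. Here is a sketch of it (in Python, purely for illustration; the real project is browser-side code, and all the names below are my own invention):

```python
# Sketch of a browsing-session graph: pages are nodes, traversed links are
# directed edges, and the session can be annotated and saved for sharing.
import json

class BrowseGraph:
    def __init__(self):
        self.nodes = {}   # url -> {"title": ..., "note": ...}
        self.edges = []   # (from_url, to_url) pairs, in traversal order

    def visit(self, url, title, via=None):
        """Record a page visit, and the link followed to reach it (if any)."""
        self.nodes.setdefault(url, {"title": title, "note": ""})
        if via is not None:
            self.edges.append((via, url))

    def annotate(self, url, note):
        self.nodes[url]["note"] = note

    def save(self, path):
        with open(path, "w") as f:
            json.dump({"nodes": self.nodes, "edges": self.edges}, f, indent=2)

g = BrowseGraph()
g.visit("http://example.org/papers", "Paper index")
g.visit("http://example.org/memex", "As We May Think", via="http://example.org/papers")
g.annotate("http://example.org/memex", "the original memex essay")
g.save("session.json")
```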

Maria Yancheva has been exploring the whole idea of electronic lab notebooks. She has been studying the workflows the climate scientists use when they configure and run their simulation models, and considering how a more structured form of wiki might help them. She has selected OpenWetWare as a good starting point, and is exploring how to add extensions to MediaWiki to make OWW more suitable for computational science, especially for keeping track of model runs.

Samar Sabie has also been looking at MediaWiki extensions, specifically to find a way to add visualizations to wiki pages and blogs as simply as possible. The problem is that currently, adding something as simple as a table of data to a page requires extensive work with the markup language. The long-term aim is to support the insertion of dynamic visualizations (such as those at ManyEyes), but the starting point is to make it as ridiculously simple as possible to insert a data table, link it to a graph, and select appropriate parameters to make the graph look good, with the idea that users can subsequently change the appearance in useful ways (which means cut and paste from Excel spreadsheets won’t be good enough).

Oh, and they’ve all been regularly blogging their progress, so we’re practicing the whole open notebook science thingy.