Short notice, but an interesting talk tomorrow by Balaji of Princeton University and NOAA/GFDL. Balaji is head of the Modeling Systems Group at NOAA/GFDL. The talk is scheduled for 4 p.m., in the Physics building, room MP408.

Climate Computing: Computational, Data, and Scientific Scalability

V. Balaji
Princeton University

Climate modeling, in particular the tantalizing possibility of making projections of climate risks that have predictive skill on timescales of many years, is a principal science driver for high-end computing. It will stretch the boundaries of computing along various axes:

  • resolution, where computing costs scale with the 4th power of problem size along each dimension
  • complexity, as new subsystems are added to comprehensive earth system models with feedbacks
  • capacity, as we build ensembles of simulations to sample uncertainty, both in our knowledge and representation, and of that inherent in the chaotic system. In particular, we are interested in characterizing the “tail” of the pdf (extreme weather) where a lot of climate risk resides.

The challenge probes the limits of current computing in many ways. First, there is the problem of computational scalability, where the community is adapting to an era where computational power increases are dependent on concurrency of computing and no longer on raw clock speed. Second, we increasingly depend on experiments coordinated across many modeling centres which result in petabyte-scale distributed archives. The analysis of results from distributed archives poses the problem of data scalability.

Finally, while climate research is still performed by dedicated research teams, its potential customers are many: energy policy, insurance and re-insurance, and most importantly the study of climate
change impacts — on agriculture, migration, international security, public health, air quality, water resources, travel and trade — are all domains where climate models are increasingly seen as tools that
could be routinely applied in various contexts. The results of climate research have engendered entire fields of “downstream” science as societies try to grapple with the consequences of climate change. This poses the problem of scientific scalability: how to enable the legions of non-climate scientists, vastly outnumbering the climate research community, to benefit from climate data.

The talk surveys some aspects of current computational climate research as it rises to meet the simultaneous challenges of computational, data and scientific scalability.
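A rough aside on the resolution point, since people often ask where the fourth power comes from: refining the grid multiplies the number of grid points in each of the three spatial dimensions, and the time step typically has to shrink in proportion too. A quick back-of-the-envelope sketch (my arithmetic, not Balaji's):

```python
# Rough cost scaling when refining a climate model grid (illustrative only).
# Assumes 3 spatial dimensions, plus a time step that shrinks with grid spacing
# (the CFL condition), giving cost ~ (refinement factor)^4.

def relative_cost(refinement, spatial_dims=3):
    """Cost multiplier when grid spacing is reduced by `refinement`."""
    grid_points = refinement ** spatial_dims  # more cells in each of x, y, z
    time_steps = refinement                   # smaller dt -> more steps per simulated year
    return grid_points * time_steps

print(relative_cost(2))   # doubling resolution -> ~16x the compute
print(relative_cost(10))  # 10x finer grid -> ~10,000x the compute
```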

Update: Neil blogged a summary of Balaji’s talk.

I thought I wouldn’t blog any more about the CRU emails story, but this one is very close to my heart, so I can’t pass it up. Brian Angliss, over at Scholars and Rogues, has written an excellent piece on the lack of context in the stolen emails, and the reliability of any conclusions that might be based on them. To support his analysis, he quotes extensively from the paper “The Secret Life of Bugs” by Jorge Aranda and Gena Venolia from last year’s ICSE, in which they convincingly demonstrated that electronic records of discussions about software bugs are frequently unreliable, and that there is a big difference between the recorded discussions and what you find when you actually track down the participants and ask them directly.

BTW Jorge will be defending his PhD thesis in a couple of weeks, and it’s full of interesting ideas about how software teams develop a shared understanding of the software they develop, and the implications that this has on team organisation. I’ll be mining it for ideas to explore in my own studies of climate modellers later this year…

Take a look at this recent poll from Nanos on priorities for the upcoming G8/G20 meetings. Canadians ranked Global Warming and Economic Recovery as the top two priorities for the meetings; note that global warming beats economic recovery for the top response across nearly all categories of Canadians (with the exception of the old fogeys in the 50+ age group, and westerners, who I guess are busy getting rich from the oil sands). Overall, 33.7% of Canadians ranked Global Warming as the top priority, while 27.2% named Economic Recovery.

There are some other interesting results in the poll. In the breakdown by party voting preferences, the Bloc Québécois and the NDP seem much more worried about Global Warming than Green Party supporters: 59.3% of BQ voters and 41.5% of NDP voters ranked it first, while only 33.8% of Green Party voters did. So much for the myth that the Green Party is a single-issue party, eh?

Oh, and if you look at the results of the later questions, global warming is clearly the issue on which Canada is perceived to be doing worst in terms of its place in the world.

31. May 2010 · 5 comments · Categories: psychology

I’m fascinated by the cognitive biases that affect people’s perceptions of climate change. I’ve previously written about the Dunning-Kruger effect (the least competent people tend to vastly over-rate their competence), and Kahan and Braman’s studies on social epistemology (people tend to ignore empirical evidence if its conclusions contradict their existing worldview).

Now comes a study by Nyhan and Reifler in the journal Political Behavior entitled “When Corrections Fail: The Persistence of Political Misperceptions”. N&R point out that in the literature, there is plenty of evidence that people often “base their policy preferences on false, misleading or unsubstantiated information that they believe to be true”. Studies have shown that providing people with correct factual information doesn’t necessarily affect their beliefs. However, different studies disagree on this point, partly because it’s often not clear in these studies whether the subjects’ beliefs changed at all, and partly because previous studies have differed over how the factual information is presented (and even what counts as ‘factual’).

So N&R set out to directly study whether corrective information does indeed change erroneous beliefs. Most importantly, they were interested in what happens when this corrective information isn’t presented directly as an authoritative account of the truth, but rather (as happens more often) when it is presented as part of a larger, more equivocal set of stories in the media. One obvious way people preserve erroneous beliefs is through selective reading – people tend to seek out information that supports their existing beliefs, and hence often don’t encounter information that corrects their misperceptions. And even when people do encounter corrective information, they are more likely to reject it (e.g. by thinking up counter-arguments) if it contradicts their prior beliefs. It is this latter process that N&R investigated, and in particular, whether this process of thinking up counter-arguments can actually reinforce the misperception; they dub this a “correction backfire”.

Four studies were conducted. In each case, the subjects were presented with a report of a speech by a well-known public figure. When a factual correction was added to the article, those subjects who were most likely to agree with the contents of the speech were unmoved by the factual correction, and in several of the studies, the correction actually strengthened their belief in the erroneous information (i.e. a clear ‘correction backfire’):

  • Study 1, conducted in the fall of 2005, examined beliefs about whether Iraq had weapons of mass destruction prior to the US invasion. Subjects were presented with a newspaper article describing a speech by President Bush in which he talks about the risk of Iraq passing these weapons on to terrorist networks. In the correction treatment, the article goes on to describe the results of the Duelfer report, which concluded there were, in fact, no weapons of mass destruction. The results show a clear correction backfire for conservatives – the correction significantly increased their belief that Iraq really did have such weapons, while for liberals, the correction clearly decreased their belief.
  • Study 2, conducted in the spring of 2006, repeated study 1, with some variation in the wording. This study again showed that the correction was ineffective for conservatives – it didn’t decrease their belief in the existence of the weapons. However, unlike study 1, it didn’t show a correction backfire, although a re-analysis of the results indicated that there was such a backfire among those conservatives who most strongly supported the Iraq war. This study also attempted to test the effect of the source of the newspaper report – i.e. does it matter if it’s presented as being from the New York Times (perceived by many conservatives to have a liberal bias) or Fox News (perceived as being conservative)? In this case, the source of the article made no significant difference.
  • Study 3, also conducted in 2006, examined the belief that the Bush tax cuts paid for themselves by stimulating enough economic growth to actually increase government revenues. Subjects were presented with an article in which President Bush indicated the tax cuts had helped to increase revenues. In the correction treatment, the article goes on to present the actual revenues, showing that tax revenues declined sharply (both absolutely, and as a proportion of GDP) in the years after the tax cuts were enacted. Again there was a clear correction backfire among conservatives – those who received the version with the actual revenue figures actually increased their belief that the tax cuts paid for themselves.
  • Study 4, also from 2006, examined the belief that Bush banned stem cell research. Subjects were presented with an article describing speeches by Senators Edwards and Kerry in which they suggest such a ban exists. In the corrective treatment, a paragraph was added to explain that Bush didn’t actually ban stem cell research, because his restrictions didn’t apply to privately funded research. The correction did not change liberals’ belief that there was such a ban, but neither was there a correction backfire (i.e. it didn’t increase their belief in the ban).

In summary, factual corrections in newspaper articles don’t appear to work for those who are ideologically motivated to hold the misperception, and in two out of the four studies, it actually strengthened the misperception. So, fact-checking on its own is not enough to overcome ideologically-driven beliefs. (h/t to Ben Goldacre for this)

How does this relate to climate change? Well, most media reports on climate change don’t even attempt any fact-checking anyway – they ignore the vast body of assessment reports by authoritative scientific bodies, and present a “he-said-she-said” slugfest between denialists and climate scientists. The sad thing is that the addition of fact-checking won’t, on its own, make any difference to those whose denial of climate change is driven by their ideological leanings. If anything, such fact-checking will make them even more entrenched…

Dear God, I would like to file a bug report

(clickety-click for the full xkcd cartoon)

I’ve been working with a group of enthusiastic parents in our neighbourhood over the past year on a plan to make our local elementary school a prototype for low-energy buildings. As our discussions evolved, we ended up with a much more ambitious vision: to use the building and grounds of the school for renewable power generation projects (using solar and geothermal energy) that could potentially power many of the neighbouring houses and condos – i.e. make the school a community energy hub. And of course, engage the kids in the whole process, so that they learn about climate and energy, even as we attempt to build solutions.

In parallel with our discussions, the school board has been ramping up its ambitions too, and has recently adopted a new Climate Change Action Plan. It makes for very interesting reading. I like the triple goal: mitigation, adaptation and education, largely because the last of these, education, is often missing from discussions about how to respond to climate change, and I firmly believe that the other two goals depend on it. The body of the report is a set of ten proposed actions to cut carbon emissions from the buildings and transportation operated by the school board, funded from a variety of sources (government grants, the feed-in tariff program, operational savings, carbon credits, etc). The report still needs some beefing up on the education front, but it’s a great start!

Here are two upcoming conferences, both relevant to the overlap of computer science and climate science:

…and I won’t make it to either as I’ll be doing my stuff at NCAR. I will get to attend this though:

I guess I’ll have to send some of my grad students off to the other conferences (hint, hint).

I’ve been busy the last few weeks setting up the travel details for my sabbatical. My plan is to visit three different climate modeling centers, to do a comparative study of their software practices. The goal is to understand how the software engineering culture and practices vary across different centers, and how the differences affect the quality and flexibility of the models. The three centers I’ll be visiting are:

I’ll spend 4 weeks at each centre, starting in July, running through to October, after which I’ll spend some time analyzing the data and writing up my observations. Here’s my research plan…

Our previous studies at the UK Met Office Hadley Centre suggest that there are many features of software development for earth system modeling that make it markedly different from other types of software development, and which therefore affect the applicability of standard software engineering tools and techniques. Tools developed for commercial software tend not to cater for the demands of working with high performance code for parallel architectures, and usually do not fit well with the working practices of scientific teams. Scientific code development has challenges that don’t apply to other forms of software: the need to keep track of exactly which version of the program code was used in a particular experiment, the need to re-run experiments with precisely repeatable results, the need to build alternative versions of the software from a common code base for different kinds of experiments. Checking software “correctness” is hard because frequently the software must calculate approximate solutions to numerical problems for which there is no analytical solution. Because the overall goal is to build code to explore a theory, there is no oracle for what the outputs should be, and therefore conventional approaches to testing (and perhaps code quality in general) don’t apply.
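As an aside, the reproducibility point is easy to make concrete. Here’s a minimal sketch of the kind of provenance record a model run needs so that the exact code version and configuration can be recovered later – my own illustration, not any centre’s actual tooling:

```python
# Hypothetical provenance capture for a model run. The field names and workflow
# are illustrative only, not taken from any modeling centre's real system.
import datetime
import json
import subprocess

def snapshot_provenance(config_file, outfile="run_provenance.json"):
    """Record which code revision and configuration produced this run."""
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        # the exact revision of the model source used for this experiment
        "code_revision": subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip(),
        "config_file": config_file,
        "config_contents": open(config_file).read(),
    }
    with open(outfile, "w") as f:
        json.dump(record, f, indent=2)
    return record
```

Real systems also have to capture compiler versions, library versions and the submission scripts, which is where most of the difficulty lies.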

Despite this potential mismatch, the earth system modeling community has adopted (and sometimes adapted) many tools and practices from mainstream software engineering. These include version control, bug tracking, automated build and test processes, release planning, code reviews, frequent regression testing, and so on. Such tools may offer a number of potential benefits:

  • they may increase productivity by speeding up the development cycle, so that scientists can get their ideas into working code much faster;
  • they may improve verification, for example using code analysis tools to identify and remove (or even prevent) software errors;
  • they may improve the understandability and modifiability of computational models (making it easier to continue to evolve the models);
  • they may improve coordination, allowing a broader community to contribute to and make use of a shared code base for a wider variety of experiments;
  • they may improve scalability and performance, allowing code to be configured and optimized for a wider variety of high performance architectures (including massively parallel machines), and for a wider variety of grid resolutions.

This study will investigate which tools and practices have been adopted at the different centers, identify differences and similarities in how they are applied, and, as far as is possible, assess the effectiveness of these practices. We will also attempt to characterize the remaining challenges, and identify opportunities where additional tools and techniques might be adopted.

Specific questions for the study include:

  1. Verification – What techniques are used to ensure that the code matches the scientists’ understanding of what it should do? In traditional software engineering, this is usually taken to be a question of correctness (does the code do what it is supposed to?); however, for exploratory modeling it is just as often a question of understanding (have we adequately understood what happens when the model runs?). We will investigate the practices used to test the code, to validate it against observational data, and to compare different model runs against one another, and assess how effective these are at eliminating errors of correctness and errors of understanding (a rough sketch of what such a comparison might look like appears after this list).
  2. Coordination – How are the contributions from across the modeling community coordinated? In particular, we will examine the challenges of synchronizing the development processes for coupled models with the development processes of their component models, and how the differences in the priorities of different, overlapping communities of users affect this coordination.
  3. Division of responsibility – How are the responsibilities for coding, verification, and coordination distributed between different roles in the organization? In particular, we will examine how these responsibilities are divided across the scientists and other support roles such as ‘systems’ or ‘software engineering’ personnel. We will also explore expectations on the quality of contributed code from end-user scientists, and the potential for testing and review practices to affect the quality of contributed code.
  4. Planning and release processes – How do modelers decide on priorities for model development, how do they decide which changes to tackle in a particular release of the model, and how do they navigate between computational feasibility and scientific priorities? We will also investigate how the change process is organized, and how changes are propagated to different sub-communities.
  5. Debugging – How do scientists currently debug the models, what types of bugs do they currently find in their code, and how do they find them? In particular, we will develop a categorization of model errors, to use as a basis for subsequent studies into new techniques for detecting and/or eliminating such errors.
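For the verification question, here is the kind of comparison I have in mind – purely illustrative, and certainly not a description of any centre’s actual test harness: check each output field of a candidate run against a trusted baseline run, within some tolerance:

```python
# Illustrative tolerance-based comparison between two model runs.
# The field names, tolerances, and the load_run() helper are hypothetical.
import numpy as np

def compare_runs(candidate, baseline, rel_tol=1e-6):
    """Return the output fields where a candidate run drifts from a baseline."""
    failures = {}
    for field, base_values in baseline.items():
        new_values = candidate[field]
        # maximum relative difference, guarding against division by zero
        diff = np.max(np.abs(new_values - base_values) /
                      np.maximum(np.abs(base_values), 1e-30))
        if diff > rel_tol:
            failures[field] = diff
    return failures

# e.g. compare_runs(load_run("candidate"), load_run("trusted_baseline"))
```

Whether a tolerance like this is even meaningful (as opposed to demanding bit-for-bit reproducibility, or comparing climatological statistics) is itself one of the questions for the study.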

The study will be conducted through a mix of interviews and observational studies, focusing on particular changes to the model codes developed at each center. The proposed methodology is to identify a number of candidate code changes, including recently completed changes and current work-in-progress, and to build a “life story” for each such change, covering how each change was planned and conducted, what techniques were applied, and what problems were encountered. This will lead to a more detailed description of the current software development practices, which can then be compared and contrasted with studies of practices used for other types of software. The end result will be an identification of opportunities where existing tools and techniques can be readily adapted (with some clear indication of the potential benefits), along with a longer-term research agenda for problem areas where no suitable solutions currently exist.

This week we’re demoing Inflo at the Ontario Centres of Excellence Discovery Conference 2010. It’s given me a chance to play a little more with the demo, and create some new sample calculations (with Jonathan valiantly adding new features on the fly in response to my requests!). The idea of Inflo is that it should be an open source calculation tool – one that supports a larger community of people discussing and reaching consensus on the best way to calculate the answer to some (quantifiable) question.

For the demo this week, I re-did the calculation on how much of the remaining global fossil fuel reserves we can burn and still keep global warming within the target threshold of a +2°C rise over pre-industrial levels. I first did this calculation in a blog post back in the fall, but I’ve been keen to see if Inflo would provide a better way of sharing the calculation. Creating the model is still a little clunky (it is, after all, a very preliminary prototype), but I’m pleased with the results. Here’s a screenshot:

And here’s a live link to try it out. A few tips: the little grey circles under a node indicate there are hidden subtrees. Double-clicking on one of these will expand it, while double-clicking on an expanded node will collapse everything below it, so you can explore the basis for each step in the calculation. The Node Editor toolbar on the left shows you the formula for the selected node, and any notes. Some of the comments in the “Description” field are hotlinks to data sources – mouseover the text to find them. The arrows don’t always update properly when you change views – selecting a node in the graph should force them to update. Also, the units are propagated (and scaled for readability) automatically, which is why they sometimes look a little odd, e.g. “tonne of carbon” rather than “tonnes”. One of our key design decisions is to make the numbers as human-readable as possible, and always ensure correct units are displayed.
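The unit handling is my favourite part. I won’t pretend this is how Inflo actually implements it, but the general idea of unit propagation is simple enough to sketch: carry a unit along with every value, and combine the units whenever the values are combined:

```python
# A toy sketch of unit propagation -- not Inflo's actual implementation.
class Quantity:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __mul__(self, other):
        # combine the units symbolically when two quantities are multiplied
        return Quantity(self.value * other.value, f"{self.unit}*{other.unit}")

    def __truediv__(self, other):
        return Quantity(self.value / other.value, f"{self.unit}/({other.unit})")

    def __repr__(self):
        return f"{self.value:g} {self.unit}"

emissions = Quantity(8.5e9, "tonne of carbon/year")
years = Quantity(50, "year")
print(emissions * years)  # 4.25e+11 tonne of carbon/year*year
# A real tool also has to cancel and rescale units to keep them readable.
```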

The demo should get across some of what we’re trying to do. The idea is to create a visual, web-based calculator that can be edited and shared; eventually we hope to build Wikipedia-like communities who will curate the calculations, to ensure that the appropriate sources of data are used, and that the results can be trusted. We’ll need to add more facilities for version management of calculations, and for linking discussions to (portions of) the graphs.

Here’s another example: Jono’s carbon footprint analysis of whether you should print a document or read it on the screen (double click the top node to expand the calculation).

03. May 2010 · 1 comment · Categories: ICSE 2010

Today is our second workshop on software research and climate change, at ICSE 2010 in Cape Town. We’ve finalized the program, and we’re hoping to support some form of remote participation, but I’m still not sure how this will work out.

We had sixteen position papers and two videos submitted in the end, which I’m delighted about. To get everyone reading and discussing them prior to the workshop, we set up an open reviewing process, which I think went very well. Rather than the usual closed, anonymous reviews, we opened it up so that everyone could add reviews to any paper, and we encouraged everyone to review in their own name, rather than anonymously. The main problem we had was finding a suitable way of supporting this – until we hit upon the idea of creating a workshop blog, so each paper is a blog post, and the comment thread allows us to add reviews, and comment on each other’s reviews. This is nice because it means we can now make all the papers and reviews public, and continue the discussions during and after the workshop.

We’re trying out two different ways of supporting live remote participation – in the morning, the keynote talk (by Stephen Emmott of Microsoft Research) will be delivered via Microsoft’s LiveMeeting. We tested it out last week, and I’m pretty impressed with it (apart from the fact that there’s no client for the Mac). The setup we’ll be using is to have a video feed of Stephen giving the talk, displayed on a laptop screen at the front of the room, with his slides projected to the big screen. The laptop also has a webcam, so (if it works) Stephen will be able to see his audience too. I’ll document how well this works in a subsequent post.

For the last afternoon session, we’ll be trying out a live skype call. Feel free to send me your skype details if you’d like to participate. I’ve no idea if this will work (as it didn’t last time we tried), but hey, it’s worth exploring…

Excellent news: Jon Pipitone has finished his MSc project on the software quality of climate models, and it makes fascinating reading. I quote his abstract here:

A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated. Thus, in order to trust a climate model one must trust that the software it is built from is built correctly. Our study explores the nature of software quality in the context of climate modelling. We performed an analysis of the reported and statically discoverable defects in several versions of leading global climate models by collecting defect data from bug tracking systems, version control repository comments, and from static analysis of the source code. We found that the climate models all have very low defect densities compared to well-known, similarly sized open-source projects. As well, we present a classification of static code faults and find that many of them appear to be a result of design decisions to allow for flexible configurations of the model. We discuss the implications of our findings for the assessment of climate model software trustworthiness.

The idea for the project came from an initial back-of-the-envelope calculation we did of the Met Office Hadley Centre’s Unified Model, in which we estimated the number of defects per thousand lines of code (a common measure of defect density in software engineering) to be extremely low – of the order of 0.03 defects/KLoC. By comparison, the shuttle flight software, reputedly the most expensive software per line of code ever built, clocked in at 0.1 defects/KLoC; most of the software industry does worse than this.
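For anyone unfamiliar with the metric: defect density is just the number of reported defects divided by the size of the code in thousands of lines. With made-up numbers of roughly the right shape (not the actual UM figures):

```python
# Defect density in defects per thousand lines of code (KLoC).
# The numbers below are purely illustrative, not the actual Unified Model data.

def defect_density(defect_count, lines_of_code):
    return defect_count / (lines_of_code / 1000.0)

# e.g. 25 reported defects against roughly 800,000 lines of code:
print(defect_density(25, 800_000))  # ~0.03 defects/KLoC
```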

This initial result was startling, because the climate scientists who build this software don’t follow any of the software processes commonly prescribed in the software literature. Indeed, when you talk to them, many climate modelers are a little apologetic about this, and have a strong sense they ought to be doing a more rigorous job with their software engineering. However, as we documented in our paper, climate modeling centres such as the UK Met Office do have excellent software processes, which they have developed over many years to suit their needs. I’ve come to the conclusion that their process has to be very different from mainstream software engineering processes because the context is so very different.

Well, obviously we were skeptical (scientists are always skeptical, especially when results seem to contradict established theory). So Jon set about investigating this more thoroughly for his MSc project. He tackled the question in three ways: (1) measuring defect density by using bug repositories, version history and change logs to quantify bug fixes; (2) assessing the software directly using static analysis tools and (3) interviewing climate modelers to understand how they approach software development and bug fixing in particular.

I think there are two key results of Jon’s work:

  1. The initial results on defect density bear up. Although not quite as startlingly low as my back-of-the-envelope calculation, Jon’s assessment of three major GCMs indicates that they all fall in the range commonly regarded as good quality software by industry standards.
  2. There are a whole bunch of reasons why result #1 may well be meaningless, because the metrics for measuring software quality don’t really apply well to large scale scientific simulation models.

You’ll have to read Jon’s thesis to get all the details, but it will be well worth it. The conclusion? More research needed. It opens up plenty of questions for a PhD project….

27. April 2010 · 2 comments · Categories: ICSE 2009

It’s nearly a year late, but I finally managed to put together the soundtrack and slides from the short version of the talk on Software Engineering for the Planet that I gave at the International Conference on Software Engineering last year. The full version has been around for a while, but I’m not happy with it because it’s slow and ponderous. To kick off a lunchtime brainstorming session we had at ICSE last year, I did a Pecha Kucha version of it in 6 minutes and 40 seconds (if you listen carefully, you can hear the lunch plates rattling). For anyone who has never done a Pecha Kucha talk, I highly recommend it – putting the slides on an automated timer really keeps you on your toes.

PS If you look carefully, you’ll notice I cheated slightly: rather than 20 slides with 20 seconds each, I packed in more by cutting them down to 10 seconds each for the last half of the talk. It surprises me that this actually seems to work!

After catching the start of yesterday’s Centre for Environment Research Day, I headed around the corner to catch the talk by Ray Pierrehumbert on “Climate Ethics, Climate Justice“. Ray is here all week giving the 2010 Noble lectures, “New Worlds, New Climates“. His theme for the series is the new perspectives we get about Earth’s climate from the discovery of hundreds of new planets orbiting nearby stars, advances in knowledge about solar system planets, and advances in our knowledge of the early evolution of Earth, especially new insights into the snowball earth. I missed the rest of the series, but made it today, and I’m glad I did, because the talk was phenomenal.

Ray began by pointing out that climate ethics might not seem to fit with the theme of the rest of the series, but it does, because future climate change will, in effect, make the earth into a different planet. And the scary thing is we don’t know too much about what that planet will be like. Which then brings us to questions of responsibility, particularly the question of how much we should be spending to avoid this.

Figure 1 from Rockstrom et al, Nature 461, 472-475 (24 Sept 2009). Original caption: The inner green shading represents the proposed safe operating space for nine planetary systems. The red wedges represent an estimate of the current position for each variable. The boundaries in three systems (rate of biodiversity loss, climate change and human interference with the nitrogen cycle), have already been exceeded.

Humans are a form of life, and are altering the climate in a major way. Some people talk about humans now having an impact of “geological proportions” on the planet. But in fact, we’re a force of far greater than geological proportions: we’re releasing around 20 times as much carbon per year as nature does (for example via volcanoes). We may cause a major catastrophe. And we need to consider not just CO2, but many other planetary boundaries – all biogeochemical boundaries.

But this is nothing new – this is what life does – it alters the planet. The mother of all planet-altering lifeforms is blue-green algae. It radically changed atmospheric chemistry, even affecting the composition of rocks. If the IPCC had been around at the end of the Archean Eon (2500 million years ago) to consider how much photosynthesis should be allowed, it would have been a much bigger question than we face today. The biosphere (eventually!) recovers from such catastrophes. There are plenty of examples: oxygenation by cyanobacteria, snowball earth, the Permo-Triassic mass extinction (90% of species died out) and the KT dinosaur-killer asteroid (although the latter wasn’t biogeochemically driven). So the earth does just fine in the long run, and such catastrophes often cause interesting things to happen, e.g. opening up new niches for new species to evolve (e.g. humans!).

But normally these changes take tens of millions of years, and whichever species were at the top of the heap before usually lose out: the new kind of planet favours new kinds of species.

So what is new with the current situation? Most importantly, we have foresight and we know what we’re doing to the planet. This means we have to decide what kind of climate the planet will have, and we can’t avoid that decision, because even deciding to do nothing about it is a decision. We cannot escape the responsibility. For example, we currently have a climate that humans evolved to exist in. The conservative thing is to decide not to rock the boat – to keep the climate we evolved in. On the other hand, we could decide a different climate would be preferable, and work towards it – e.g. would things be better (on balance) if the world were a little warmer or a little cooler? So we have to decide how much warming is tolerable. And we must consider irreversible decisions – e.g. preserving irreplaceable treasures (e.g. species that will go extinct). Or we could put the human species at the centre of the issue, and observe that (as far as we know) the human species is unique as the only intelligent life in the universe; the welfare of the human species might be paramount. So the question then becomes: how should we preserve a world worth living in for humanity?

So far, we’re not doing any better than cyanobacteria. We consume resources and reproduce until everything is filled up and used up. Okay, we have a few successes, for example in controlling acid rain and CFCs. But on balance, we don’t do much better than the bacteria.

Consider carbon accounting. You can buy carbon credits, sometimes expressed in terms of tonnes of CO2, sometimes in terms of tonnes of carbon. From a physics point of view, it’s much easier to think in terms of the carbon itself, because it’s the carbon in various forms that matters – e.g. dissolved in the ocean making it more acidic, in CO2 in the atmosphere, etc. We’re digging up this carbon in various forms (coal, oil, gas) and releasing it into the atmosphere. Most of this came from biological sources in the first place, but has been buried over very long (geological) timescales. So, we can do the accounting in terms of billions of tonnes (Gt) of carbon. The pre-industrial atmosphere contained 600Gt carbon. Burning another 600Gt would be enough to double atmospheric concentrations (except that we have to figure out how much stays in the atmosphere, how much is absorbed by the oceans, etc). World cumulative emissions show exponential growth over the last century. We are currently at 300Gt cumulative emissions from fossil fuel. 1000Gt of cumulative emissions is an interesting threshold, because that’s about enough to warm the planet by 2°C (which is the EU’s stated upper limit). A straight projection of the current exponential trend takes us to 5000GtC by 2100. It’s not clear there is enough coal to get us there, but it is dangerous to assume that we’ll run out of resources before this. The worst scenario: we get to 5000GtC, wreck the climate just as we run out of fossil fuels, and civilization collapses at a time when we no longer have a tolerable climate to live in.
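To put these numbers in context, here’s my own back-of-the-envelope conversion from cumulative emissions to atmospheric concentration. The conversion factor (roughly 2.1 GtC per ppm of CO2) is standard; the constant airborne fraction is a simplifying assumption of mine, and it understates the peak for very large emissions because ocean uptake saturates – which is why the model results Ray showed (below) are higher:

```python
# Back-of-the-envelope: cumulative emissions -> atmospheric CO2 concentration.
# Assumptions (mine, not from the talk): ~2.13 GtC per ppm of CO2, and a constant
# airborne fraction of ~0.45 (the rest absorbed by oceans and land, for now).

GTC_PER_PPM = 2.13
PREINDUSTRIAL_PPM = 600 / GTC_PER_PPM  # ~280 ppm, i.e. 600 GtC in the pre-industrial atmosphere

def concentration_after(cumulative_emissions_gtc, airborne_fraction=0.45):
    added_ppm = airborne_fraction * cumulative_emissions_gtc / GTC_PER_PPM
    return PREINDUSTRIAL_PPM + added_ppm

print(concentration_after(1000))  # ~490 ppm for the 1000 GtC (2 degree) threshold
print(concentration_after(5000))  # ~1340 ppm; in reality the airborne fraction rises,
                                  # so the peak for 5000 GtC is well over 2000 ppm
```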

Of course, such exponential growth can never continue indefinitely. To demonstrate the point, Ray showed the video of The Impossible Hamster Club. The key question is whether we will voluntarily stop this growth in carbon emissions, and if we don’t, at what point will natural limits kick in and stop the growth for us?

There are four timescales for CO2 drawdown:

  • Uptake by the ocean mixed layer – a few decades
  • Uptake by the deep ocean – a few centuries
  • Carbonate dissolution (laying down new sediments on the ocean bed) – a few millennia
  • Silicate weathering (reaction between rocks and CO2 in the atmosphere that creates limestone) – a few hundred millennia.

Ray then showed the results of some simulations using the Hamburg carbon cycle model. The scenario they used is a ramp up to peak emissions in 2010, followed by a drop to either 4, 2, or 1Gt per year from then on. The graph of atmospheric concentrations out to the year 4000 shows that holding emissions stable at 2Gt/yr still causes concentrations to ramp up to 1000ppm. Even reducing to 1Gt/yr leads to an increase to around 600ppm by the year 4000. The obvious conclusion is that we have to reduce net emissions to approximately zero in order to keep the climate stable over the next few thousand years.

What does a cumulative emissions total of 5000GtC mean for our future? Peak atmospheric concentrations will reach over 2000ppm, and stay there for around 10,000 years, then slowly reduce on a longer timescale because of silicate weathering. Global mean temperature rises by around 10°C. Most likely, the Greenland and West Antarctic ice sheets will melt completely (it’s not clear what it would take to melt the East Antarctic). So what we do this century will affect us for tens of thousands of years. Paul Crutzen coined the term Anthropocene to label this new era in which humans started altering the climate. In the distant future, the change at the start of the Anthropocene will look as dramatic as other geological shifts – certainly bigger than the change at the end of the KT extinction.

This makes geoengineering by changing the earth’s albedo an abomination (Ray mentioned as an example the view put forward in that awful book Superfreakonomics). It’s morally reprehensible, because it leads to the Damocles world. The sword hanging over us is that for the next 10,000 years, we’re committed to doing the sulphur seeding every two years, and continuing to do so no matter what unfortunate consequences, such as drought, happen as side effects.

But we will need longwave geoengineering – some way of removing CO2 from the atmosphere to deal with the last gigatonne or so of emissions, because these will be hard to get rid of no matter how big the push to renewable energy sources. That suggests we do need a big research program on air capture techniques.

So, the core questions for climate ethics are:

  • What is the right amount to spend to reduce emissions?
  • How should costs be divided up (e.g. US, Europe, Asia, etc)?
  • How to figure the costs of inaction?
  • When should it be spent?

There is often a confusion between fairness and expedience (e.g. Cass Sunstein, an Obama advisor, makes this mistake in his work on climate change justice). The argument goes that a carbon tax that falls primarily on the first world is, in effect, a wealth transfer to the developing world. It’s a form of foreign aid, therefore hard to sell politically to Americans, and therefore unfair. But the real issue is not about what’s expedient, the issue is about the right thing to do.

Not all costs can be measured by money, which makes cost-benefit analysis a poor tool for reasoning about climate change. For example, how can we account for loss of life, loss of civil liberties, etc in a cost/benefit analysis? Take for example the effect of capital punishment on crime reduction versus the injustice of executing the innocent. This cannot be a cost/benefit decision, it’s a question of social values. In economic theory the “contingent valuation” of non-market costs and benefits is hopelessly broken. Does it make sense to trade off polar bear extinction against Arctic oil revenue by assigning a monetary value to polar bears? A democratic process must make these value judgements – we cannot push them off to economic analysis in terms of cost-benefits. The problem is that the costs and benefits of planetary scale processes are not additive. Therefore cost/benefit is not a suitable tool for making value decisions.

The same problem arises with the use of (growth in) GDP, which economists use as a proxy for a nation’s welfare. Bastiat introduced the idea of the broken window fallacy – the idea that damage to people’s property boosts GDP because it increases the need for work to be done to fix it, and hence increases money circulation. This argument is often used by conservatives to pooh-pooh the idea of green jobs – what’s good for jobs doesn’t necessarily make people better off. But right now the entire economy is made out of broken windows: Hummers, McMansions, video screens in fast-food joints… all of it is consumption that boosts GDP without actually improving life for anyone. (Perhaps we should try measuring gross national happiness instead, like the Bhutanese).

And then there’s discounting – how do we compare the future with the present? The usual technique is to exponentially downweight future harms according to how far in the future they are. The rationale is you could equally well put the money in the bank, and collect interest to pay for future harms (i.e. generate a “richer future”, rather than spend the money now on mitigating the problem). But certain things cannot be replaced by money (e.g. human life, species extinction). Therefore they cannot be discounted. And of course, economists make the 9 billion tonne hamster mistake – they assume the economy can keep on growing forever. [Note: Ray has more to say on cost-benefit and discounting in his slides, which he skipped over in the talk through lack of time]
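For readers who haven’t met discounting before: a harm valued at V occurring t years from now is counted as V/(1+r)^t today, where r is the discount rate, and the choice of r dominates everything else. A quick illustration with my own numbers:

```python
# How exponential discounting shrinks far-future harms (illustrative numbers only).

def present_value(future_harm, years, discount_rate):
    return future_harm / (1 + discount_rate) ** years

# A $1 trillion climate damage occurring 200 years from now:
for rate in (0.01, 0.03, 0.05):
    print(f"{rate:.0%}: ${present_value(1e12, 200, rate):,.0f}")
# At a 1% discount rate it still "counts" for ~$137 billion today;
# at 3% it drops to ~$2.7 billion; at 5% it shrinks to ~$58 million.
```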

Fairness is a major issue. How do we approach this? For example, retributive justice – punish the bad guys? You broke it, you fix it? Whoever suffers the least from fixing it moves first? Consider everyone to be equal? Well, the Canadian climate policy appears to be: wait to see what Obama does, and do the same, unless we can get away with doing less.

What about China vs. the US, the two biggest emitters of greenhouse gases? The graph of annual CO2 emissions shows that China overtook the US in the last few years (while, for example, France held its emissions constant). But if you normalize the emissions per capita, the picture looks very different. And here’s an interesting observation: China has per capita emissions very close to those of France, but doesn’t have France’s standard of living. Therefore there is clearly room for China to improve its standard of living without increasing per capita emissions, which means that emissions controls do not necessarily hold back development.

But because it’s cumulative emissions that really matter, we have to look at each nation’s cumulative per capita emissions. The calculation is tricky because we have to account for population growth. It turns out that the US has a bigger population growth problem than China, which, when added to the cumulative emissions, means the US has a much bigger responsibility to act. If we take the target of 1000GtC as the upper limit on cumulative emissions (to stay within the 2°C temperature rise), and allocate that equally to everyone, based on 2006 population figures, we get about 100 tonnes of carbon per capita as a lifetime allowance. The US has an overdraft on this limit (it has already used up more than this), while China is still in credit (it has used up less). In other words, in terms of the thing that matters most, cumulative emissions, the US has used up more than its fair share of a valuable resource (slide 43 from Ray’s talk):

This graph shows the cumulative emissions per (2006) capita for the US and China. If we take 100 tonnes as the lifetime limit for each person (to keep within the global 1000Gt target), then the US has already used more than its fair share, and China has used much less.

This analysis makes it clear what the climate justice position is. The Chinese might argue that, just to protect themselves and their climate, China might need to do more than its fair share. In terms of a negotiation, arguing for everyone taking action together might be expedient. But the right thing to do for the US is not just to reduce emissions to zero immediately, but to pay back that overdraft.

Some interesting questions from the audience:

Q: On geoengineering – why rule out attempts to change the albedo of the earth by sulphate particle seeding when we might need an “all of the above” approach? A: Ray’s argument is largely about what happens if it fails. For example, if the Dutch dykes fail, then in the worst case the Dutch could move elsewhere. If global geoengineering fails, we don’t have an elsewhere to move to. Also, if you stop, you get hit all at once with the accumulated temperature rise. This makes Levitt’s suggestion of “burn it all and geoengineer to balance” morally reprehensible.

Q: Could you say more about the potential for air capture? A: It’s a very intriguing idea. All the schemes being trialed right now capture carbon in the form of CO2 gas, which would then need to be put down into mineral form somehow. A more interesting approach is to capture CO2 directly in mineral form, e.g. limestone. It’s not obviously crazy, and if it works it would help. It’s more like insurance, and investing in research on it is likely to provide a backup plan in a way that albedo alteration does not.

Q: What about other ways of altering the albedo? A: Suggestions such as painting roofs & parking lots white will help reduce urban heat, mitigate the effect of heatwaves, and also reduce the use of air conditioners. Which is good, but it’s essentially a regional effect – the overall effect on the global scale is probably negligible. So it’s a good idea precisely because its impact is only regional.

Q: About nuclear – will we need it? A: Ray says probably yes. If it comes down to a choice between nuclear and coal, the choice has to be nuclear.

Finally, I should mention Ray has a new book coming out: Principles of Planetary Climate, and is a regular contributor to RealClimate.org.

This morning I attended the first part of the Centre for Environment’s Research Day, and I’m glad I did, because I caught the talk by Chris Kennedy from the Sustainable Infrastructure Group in the Dept of Civil Engineering, on “Greenhouse Gases from Global Cities”. He talked about a study he’s just published, on the contribution of ten major cities to GHG emissions. Chris points out that most of the solutions to climate change will have to focus on changing cities. Lots of organisations are putting together greenhouse gas inventories for cities, but everyone is doing it differently, measuring different things. Chris’s study examined how to come up with a consistent approach. For example, the approach taken in Paris is good at capturing lifecycle emissions, London is good at spatial issues, Tokyo is good at analyzing emissions over time. Each perspective is useful, but the differences make comparisons hard. But there’s no obvious right way to do it. For example, how do you account for the timing of emissions release, e.g. for waste disposal? Do you care about current emissions as a snapshot, or future emissions that are committed because of waste generated today?

The IPCC guidelines for measuring emissions take a pure producer perspective. They focus only on emissions that occur within the jurisdiction of each territory. This ignores, for example, consumer emissions when the consumer of a product or service is elsewhere. It also ignores upstream emissions: e.g. electricity generation is generally done outside the city, but used within the city. Then there’s line loss in power transmission to the city; that should also get counted. In Paris, Le Bilan Carbone counts embodied emissions in building materials, maintenance of vehicles, refining of fuels, etc., but it ignores emissions by tourists, even though tourism is a substantial part of Paris’ economy.

In the study Chris and colleagues did, they studied ten cities, many iconic: Bangkok, Barcelona, Cape Town, Denver, Geneva, London, Los Angeles, New York, Prague and Toronto. Ideally they would like to have studied metropolitan regions rather than cities, because it then becomes simpler to include transport emissions for commuting, which really should be part of the assessment of each city. The study relied partially on existing assessments for some of these cities and analyzed emissions in terms of electricity, heating/industrial fuels (lumped together, unfortunately), ground transport, aviation and marine fuels, industrial processes, and waste (the methodology is described here).

For example, for electricity, Toronto comes second in consumption (MWh per capita), after Denver, and is about double that of London and Prague. Mostly, this difference is due to the different climate, but also the amount of commerce and industry within the city. However, the picture for carbon intensity is very different, as there is a big mix of renewables (e.g. hydro) in Toronto’s power supply, and Geneva gets its power supply almost entirely from hydro. So you get some interesting combinations: Toronto has high consumption but low intensity, whereas Cape Town has low consumption and high intensity. So multiply the two: Denver is off the map  at 9 t eCO2 per capita, because it has high consumption and high intensity, while most others are in the same range, around 2-3 t eCO2 per capita. And Geneva is very low:
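(The arithmetic behind “multiply the two” is just consumption per capita times carbon intensity; the numbers below are made up to be roughly the right magnitude, not the study’s actual figures.)

```python
# Per-capita electricity emissions = consumption per capita x carbon intensity.
# Illustrative numbers only, not the figures from Chris Kennedy's study.

def electricity_emissions_per_capita(mwh_per_capita, t_co2e_per_mwh):
    return mwh_per_capita * t_co2e_per_mwh

print(electricity_emissions_per_capita(10.0, 0.9))   # high consumption, high intensity -> ~9 t
print(electricity_emissions_per_capita(10.0, 0.25))  # high consumption, low intensity  -> 2.5 t
print(electricity_emissions_per_capita(4.0, 0.9))    # low consumption, high intensity  -> ~3.6 t
```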

Climate has to be taken into account somehow, because there is an obvious relationship between energy used for heating and typical temperatures, which can be assessed by counting heating degree days:
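Heating degree days, for anyone who hasn’t met them, work like this: for each day, take how far the daily mean temperature falls below a base temperature (18°C is a common choice, though conventions vary) and sum over the year. A quick sketch:

```python
# Heating degree days: sum of (base - daily mean temperature) over the days when
# the mean falls below the base. A base of 18 C is common; conventions vary.

def heating_degree_days(daily_mean_temps_c, base_c=18.0):
    return sum(max(0.0, base_c - t) for t in daily_mean_temps_c)

# e.g. a week of daily mean temperatures during a cold snap:
print(heating_degree_days([-5, -2, 0, 3, 6, 10, 15]))  # 99 degree days for the week
```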

Aviation is very interesting. Some assessments exclude it, on the basis that local government has no control. But Chris points out that when it comes down to it, local government has very little control over anything, so that argument doesn’t really wash. The UNFCCC says domestic flights should be included, but this then has a small-country bias – small countries tend to have very few domestic flights. A better approach is to include international flights as well, so that we count all flights taking off from that city. Chris’ methodology assesses this as jet fuel loaded at each airport. For this then, London is way out in the lead:

In summary, looking at total emissions, Denver is way out in front. In conversations with them, it was clear they had no idea – they think of themselves as a clean green city up in the mountains. No surprises that the North American cities all fare the worst, driven by a big chunk of emissions from ground transportation. The real surprises though are Bangkok and Cape Town, which compare with New York and Toronto for total emissions:

Chris concluded the talk with some data from Asian cities that were not included in the above study. In particular, Shanghai and Beijing are important in part because of their sheer size. For example, if Shanghai on its own was a country, it would come in at about #25 in the world for total emissions.

One thing I found interesting from the paper that Chris didn’t have time to cover in the talk was the obvious relationship between population density and emissions from ground transportation fuels. Clearly, to reduce carbon emissions, cities need to become much denser (and should all be more like Barcelona):

Busy few days coming up in Toronto, around the celebration of Earth day tomorrow:

  • tonight, the Green Party is hosting an evening with Gwynne Dyer and Elizabeth May entitled “Finding Hope: Confronting Climate Wars“. Gwynne is, of course, the author of the excellent book Climate Wars, also available as a series of podcasts from the CBC;
  • tomorrow (April 22) the Centre for Environment has its Research Day, showcasing some of the research of the centre;
  • all week, the Centre for Global Change Science is hosting Ray Pierrehumbert giving a lecture series “New Worlds, New Climates”. Tomorrow’s (April 22) looks particularly interesting: “Climate Ethics, Climate Justice”;
  • next Tuesday (April 27), the Centre for Ethics is running a public issues forum on “Climate Change and the Ethics of Responsibility“;

I won’t make it to all of these, but will blog those that I do make. Seems like ethics of climate change is a theme, which I think is very timely.

09. April 2010 · 9 comments · Categories: politics

My debate with George Monbiot is still going on in this thread. I’m raising this comment to be a separate blog post (with extra linky goodness), because I think it’s important, independently of any discussion of the CRU emails (and to point out that the other thread is still growing – go see!)

Like many other commentators, George Monbiot suggests that “to retain the moral high ground we have to be sure that we’ve got our own house in order. That means demanding the highest standards of scientific openness, transparency and integrity”.

It’s hard to argue with these abstract ideals. But I’ll try, because I think this assertion is not only unhelpful, but also helps to perpetuate several myths about science.

The argument that scientists should somehow be more virtuous (than regular folks) is a huge fallacy. Openness and transparency are great as virtues to strive for. But they cannot ever become a standard by which we judge individual scientists. For a start, no scientific field has ever achieved the levels of openness that are being demanded here. The data is messy, the meta-data standards are not in place, the resources to curate this data are not in place. Which means the “get our own house in order” argument is straight denialist logic – they would have it that we can’t act on the science until every last bit of data is out in the public domain. In truth, climate science has developed a better culture of data sharing, replication, and results checking than almost any other scientific field. Here’s one datapoint to back this up: in no other field of computational science are there 25+ teams around the world building the same simulation models independently, and systematically comparing their results on thousands of different scenarios in order to understand the quality of those simulations.

We should demand from scientists that they do excellent science. But we should not expect them to also somehow be superhuman. The argument that scientists should never exhibit human weaknesses is not just fallacious, it’s dangerous. It promotes the idea that science depends on perfect people to carry it out, when in fact the opposite is the case. Science is a process that compensates for the human failings of the people who engage in it, by continually questioning evidence, re-testing ideas, replicating results, collecting more data, and so on. Mistakes are made all the time. Individual scientists screw up. If they don’t make mistakes, they’re not doing worthwhile science. It’s vitally important that we get across to the public that this is how science works, and that errors are an important part of the process. It’s the process that matters, not any individual scientist’s work. The results of this process are more trustworthy than any other way of producing knowledge, precisely because the process is robust in the face of error.

In the particular case [of the CRU emails], calling for scientists to take the moral high ground, and to be more virtuous, is roughly the equivalent of suggesting that victims of sexual assault should act more virtuously. And if you think this analogy is over the top, you haven’t understood the nature of the attacks on scientists like Mann, Santer, Briffa, and Jones. Look at Jones now: he’s contemplated suicide, he’s on drugs just to help him get through the day, and more drugs to allow him to sleep at night. These bastards have destroyed a brilliant scientist. And somehow the correct response is that scientists should strive to be more virtuous?! Oh yes, blame the victim.