I was talking to the folks at our campus sustainability office recently, and they were extolling the virtues of Green Revolving Funds. The idea is ridiculously simple, but turns out to be an important weapon in making sure that the savings from energy efficiency don’t just disappear back into the black hole of University operational budgets. Once the fund is set up, it provides money for the capital costs of energy efficiency projects, so that they don’t have to compete with other kinds of projects for scarce capital. The money saved from reduced utility bills is then ploughed back into the fund to support more such projects. And the beauty of the arrangement is that you don’t then have to go through endless bureaucracy to get new projects going. According to Wikipedia, this arrangement is increasingly common across university campuses in the US and Canada.

So I’m delighted to see the Toronto District School Board (TDSB) is proposing a revolving fund for this too. Here’s the motion to be put to the TDSB’s Operations and Facilities Management Committee next week. Note the numbers in there about savings already realised:

Motion – Environmental Legacy Fund

Whereas in February 2010, the Board approved the Go Green: Climate Change Action Plan that includes the establishment of an Environmental Advisory Committee and the Environmental Legacy Fund;

Whereas the current balance of the Environmental Legacy Fund includes revenues from the sale of carbon credits accrued through energy efficiency projects and from Feed-in-Tariff payments accruing from nine Ministry-funded solar electricity projects;

Whereas energy efficiency retrofit projects completed since 1990/91 have resulted in an estimated 33.9% reduction in greenhouse gas emissions to date and lowered the TDSB’s annual operating costs significantly, saving the Board $22.43 million in 2010/11 alone; and

Whereas significant energy efficiency and renewable energy opportunities remain available to the TDSB which can provide robust annual operational savings, new revenue streams as well as other benefits including increasing the comfort and health of the Board’s learning spaces;

Therefore, the Environmental Advisory Committee recommends that:

  1. The Environmental Legacy Fund be utilized in a manner that advances projects that directly and indirectly reduce the Board’s greenhouse gas (GHG) emissions and lower the TDSB’s long-term operating costs;
  2. The Environmental Legacy Fund be structured and operated as a revolving fund;
  3. The Environmental Legacy Fund be replenished and augmented from energy cost savings achieved, incentives and grant revenues secured for energy retrofit projects and renewable energy projects, and an appropriate level of renewable energy revenue as determined by the Board;
  4. The TDSB establish criteria for how and how much of the Environmental Legacy Fund can be used to advance environmental initiatives that have demonstrated GHG reduction benefits but may not provide a short-term financial return and opportunity for replenishing the Fund.
  5. To ensure transparency and document success, the Board issue an annual financial statement on the Environmental Legacy Fund, along with a report on the energy and GHG savings attributable to projects financed by the Fund.

My wife and I have been (sort of) vegetarian for twenty years now. I say “sort of” because we do eat fish regularly, and we’re not obsessive about avoiding foods that might have animal stock in them. More simply, we avoid eating meat. However I often find it hard to explain this to people, partly because it doesn’t seem to fit with anyone’s standard notion of vegetarianism. Reactions tend to fall somewhere between being weirded out by the idea of a diet that’s not based on meat and a detailed quizzing of how I decide which creatures I’m willing to eat. The trouble is, our decision to avoid meat isn’t based on a concern for animals at all.

The reason we went meat-free many years ago was threefold: it’s cheaper, healthier and better for the planet. Once we’d done it, we also found ourselves eating a more varied and flavourful diet, but that was really just a nice side-effect.

The “better for the planet” part was based on several years of reading about issues in global equity and food production, which I have found hard to explain to people in a succinct way. In other words, yes, it would require me to be boring for an entire evening’s dinner party, which I would prefer not to do.

Now I can explain it much more succinctly, with just one graph (h/t to Neil for this):

As usual, the Onion nails it: Nuclear Energy Advocates Insist U.S. Reactors Completely Safe Unless Something Bad Happens

Alright, I can’t resist saying more. I grew up in Europe and remember Chernobyl vividly. The nuclear fallout reached as far as northern Britain, and for months we were reminded just how risky nuclear power is. Many years later, I studied systems failures in the aerospace industry, and learned just how hard it is for humans to prevent catastrophic failure in complex socio-technical systems. No, worse: I learned that human organisations often conspire to make complex systems less safe. Or, as Nancy Leveson puts it in her book, Safeware (a must read for anyone interested in complex systems failure):

“In most of the major accidents of the past 25 years, technical information on how to prevent the accident was known, and often even implemented. But in each case… [this was] negated by organisational or managerial flaws.”

The problem isn’t a question of whether small radiation leaks are dangerous or not. And it isn’t even a question of whether, on average, coal-fired power plants emit more radiation than nuclear plants (although this infographic from xkcd helps to put it into perspective). The problem is that human organizations are just inherently too flawed to manage the safe operation of something as inherently dangerous as a nuclear fission reactor. The failings of TEPCO aren’t really news: this is how normal accidents occur.

In the last few years, I gradually became convinced to set aside my objections to nuclear power, on the basis that if we’re going to switch from fossil fuels quickly enough, we have to consider everything else. The risks from nuclear power pale into insignificance compared to the risks from climate change on our current fossil-fuel-intensive path. That’s not to say the risks aren’t there. But it’s a trade off between what’s essentially a local risk to people living within a hundred km or so of a nuclear power plant (which I do), versus a global risk to pretty much everyone on the planet. But just as we don’t have a suitable global governance structure for taking appropriate action on climate change, so we also don’t have a suitable global governance structure that’s able to weigh up the risks to, and represent the interests of, different groups of communities in considering whether to build more nuclear power plants to reduce the risk of climate change.

The bottom line is that nuclear power poses the same kinds of inter-generational ethical problems that climate change does: we build nuclear power plants now to enjoy our profligate energy-intensive lifestyle, and leave future generations to cope with the costs and challenges of decommissioning, site clean up, and long term storage of waste. We don’t know how to solve these problems right now, so do we really have the right to leave them to future generations to solve?

It seems to me that nuclear power can only be part of the solution to climate change once we demonstrate that we’ve fully pursued every other avenue for clean energy and energy conservation, and once we’ve come up with an adequate governance structure for balancing the risks to different communities. Some people make an even stronger argument, based on the economics – e.g. Amory Lovins crunches the numbers and points out that nuclear power is both unsafe and uneconomic.

And, as the Onion reminds us, it’s only safe when nothing unforeseen happens.

Eugenia Kalnay has an interesting talk on a core problem that most people avoid when talking about climate: the growth in human population. It’s a difficult subject politically, because any analysis of the link between emissions growth and population growth invites the simple-minded response that de-population is the solution, which then quickly sinks into accusations that environmentalists are misanthropes.

In her talk, “Population and Climate Change: A Proposal“, Kalnay makes some excellent observations, for example that per dollar spent, family planning reduces four times as much carbon over the next 40 years as adoption of low-carbon technologies, and yet family planning is still not discussed at the COP meetings, because it is taboo. The cause and effect is a little complicated too. While it’s clear that more people means more fossil fuel emissions, it’s also the case that fossil fuels enabled the massive population growth – without fossil fuels the human population would be much smaller.

Kalnay then points out that, rather than thinking about coercive approaches to population control, there’s a fundamental human rights issue here: most women would prefer not to have lots of kids (especially not the averages of 6 or more in the developing world), but they simply have no choice. Kalnay cites a UN poll that shows “in many countries more than 80% of married women of reproductive age with 2 children, do not want to have more children”, and that estimates show that 40% of pregnancies worldwide are unwanted. And the most effective strategies to address this are education, access to birth control, and equal (economic) opportunities for women.

There’s also the risk of population collapse. Kalnay discussed the Club of Rome analysis that first alerted the world to the possibility of overshoot and collapse, and which was roundly dismissed by economists as absurd. But despite a whole lot of denialism, the models are still valid and correspond well with what actually happened: rather than approaching the carrying capacity of the earth asymptotically, we have overshot it. These dynamics models now show population collapse on most scenarios, rather than a slight overshoot and oscillation.

Kalnay concludes with a strong argument that we need to start including population dynamics into climate modelling, to help understand how different population growth scenarios impact emissions, and also to explore, from a scientific point of view, what the limits to growth really look like when we include earth system dynamics and resource depletion. And, importantly, she points out that you can’t do this by just modeling human population at the global level; we will need regional models to capture the different dynamics in different regions of the globe, as both the growth/decline rates, and the per capita emissions rates vary widely in different countries/regions.

I’ve been working with a group of enthusiastic parents in our neighbourhood over the past year on a plan to make our local elementary school a prototype for low-energy buildings. As our discussions evolved, we ended up with a much more ambitious vision: to use the building and grounds of the school for renewable power generation projects (using solar, and geothermal energy) that could potentially power many of the neighbouring houses and condos – i.e. make the school a community energy hub. And of course, engage the kids in the whole process, so that they learn about climate and energy, even as we attempt to build solutions.

In parallel with our discussions, the school board has been beefing up its ambitions too, and has recently adopted a new Climate Change Action Plan. It makes for very interesting reading. I like the triple goal: mitigation, adaptation and education, largely because the last of these, education, is often missing from discussions about how to respond to climate change, and I firmly believe that the other two goals depend on it. The body of the report is a set of ten proposed actions to cut carbon emissions from the buildings and transportation operated by the school board, funded from a variety of sources (government grants, the feed-in tariff program, operational savings, carbon credits, etc). The report still needs some beefing up on the education front, but it’s a great start!

This week we’re demoing Inflo at the Ontario Centres of Excellence Discovery Conference 2010. It’s given me a chance to play a little more with the demo, and create some new sample calculations (with Jonathan valiantly adding new features on the fly in response to my requests!). The idea of Inflo is that it should be an open source calculation tool – one that supports a larger community of people discussing and reaching consensus on the best way to calculate the answer to some (quantifiable) question.

For the demo this week, I re-did the calculation on how much of the remaining global fossil fuel reserves we can burn and still keep global warming within the target threshold of a +2°C rise over pre-industrial levels. I first did this calculation in a blog post back in the fall, but I’ve been keen to see if Inflo would provide a better way of sharing the calculation. Creating the model is still a little clunky (it is, after all, a very preliminary prototype), but I’m pleased with the results. Here’s a screenshot:

And here’s a live link to try it out. A few tips: the little grey circles under a node indicate there are some hidden subtrees. Double-clicking on one of these will expand it, while double-clicking on an expanded node will collapse everything below it, so you can explore the basis for each step in the calculation. The Node Editor toolbar on the left shows you the formula for the selected node, and any notes. Some of the comments in the “Description” field are hotlinks to data sources – mouseover the text to find them. Oh, and the arrows don’t always update properly when you change views – selecting a node in the graph should force them to update. Oh, and the units are propagated (and scaled for readability) automatically, which is why they sometimes look a little odd, e.g. “tonne of carbon” rather than “tonnes”. One of our key design decisions is to make the numbers as human-readable as possible, and always ensure correct units are displayed.

The demo should get across some of what we’re trying to do. The idea is to create a visual, web-based calculator that can be edited and shared; eventually we hope to build wikipedia-like communities who will curate the calculations, to ensure that the appropriate sources of data are used, and that the results can be trusted. We’ll need to add more facilities for version management of calculations, and for linking discussions to (portions of) the graphs.

Here’s another example: Jono’s carbon footprint analysis of whether you should print a document or read it on the screen (double click the top node to expand the calculation).

After catching the start of yesterday’s Centre for Environment Research Day, I headed around the corner to catch the talk by Ray Pierrehumbert on “Climate Ethics, Climate Justice“. Ray is here all week giving the 2010 Noble lectures, “New Worlds, New Climates“. His theme for the series is the new perspectives we get about Earth’s climate from the discovery of hundreds of new planets orbiting nearby stars, advances in knowledge about solar system planets, and advances in our knowledge of the early evolution of Earth, especially new insights into the snowball earth. I missed the rest of the series, but made it today, and I’m glad I did, because the talk was phenomenal.

Ray began by pointing out that climate ethics might not seem to fit with the theme of the rest of the series, but it does, because future climate change will, in effect, make the earth into a different planet. And the scary thing is we don’t know too much about what that planet will be like. Which then brings us to questions of responsibility, particularly the question of how much we should be spending to avoid this.

Figure 1 from Rockstrom et al, Nature 461, 472-475 (24 Sept 2009). Original caption: The inner green shading represents the proposed safe operating space for nine planetary systems. The red wedges represent an estimate of the current position for each variable. The boundaries in three systems (rate of biodiversity loss, climate change and human interference with the nitrogen cycle), have already been exceeded.

Humans are a form of life, and are altering the climate in a major way. Some people talk about humans now having an impact of “geological proportions” on the planet. But in fact, we’re a force of far greater than geological proportions: we’re releasing around 20 times as much carbon per year as nature does (for example via volcanoes). We may cause a major catastrophe. And we need to consider not just CO2, but many other planetary boundaries – all biogeochemical boundaries.

But this is nothing new – this is what life does – it alters the planet. The mother of all planet altering lifeforms is blue-green algae. It radically changed atmospheric chemistry, even affecting the composition of rocks. If the IPCC had been around at the end of the Archean Eon (2500 million years ago) to consider how much photosynthesis should be allowed, it would have been a much bigger question than we face today. The biosphere (eventually!) recovers from such catastrophes. There are plenty of examples: oxygenation by cyanobacteria, snowball earth, the permo-triassic mass extinction (90% of species died out) and the KT dinosaur-killer asteroid (although the latter wasn’t biogeochemically driven). So the earth does just fine in the long run, and such catastrophes often cause interesting things to happen, e.g. opening up new niches for new species to evolve (e.g. humans!).

But normally these changes take tens of millions of years, and whichever species were at the top of the heap before usually lose out: the new kind of planet favours new kinds of species.

So what is new with the current situation? Most importantly, we have foresight and we know what we’re doing to the planet. This means we have to decide what kind of climate the planet will have, and we can’t avoid that decision, because even deciding to do nothing about it is a decision. We cannot escape the responsibility. For example, we currently have a climate that humans evolved to exist in. The conservative thing is to decide not to rock the boat – to keep the climate we evolved in. On the other hand we could decide a different climate would be preferable, and work towards it – e.g. would things be better (on balance) if the world were a little warmer or a little cooler? So we have to decide how much warming is tolerable. And we must consider irreversible decisions – e.g. preserving irreplaceable treasures (such as species that will go extinct). Or we could put the human species at the centre of the issue, and observe that (as far as we know) the human species is unique as the only intelligent life in the universe; the welfare of the human species might be paramount. So then the question becomes: how should we preserve a world worth living in for humanity?

So far, we’re not doing any better than cyanobacteria. We consume resources and reproduce until everything is filled up and used up. Okay, we have a few successes, for example in controlling acid rain and CFCs. But on balance, we don’t do much better than the bacteria.

Consider carbon accounting. You can buy carbon credits, sometimes expressed in terms of tonnes of CO2, sometimes in terms of tonnes of carbon. From a physics point of view, it’s much easier to think in terms of carbon, because it’s the carbon in various forms that matters – e.g. dissolved in the ocean making it more acidic, in CO2 in the atmosphere, etc. We’re digging up this carbon in various forms (coal, oil, gas) and releasing it into the atmosphere. Most of this came from biological sources in the first place, but has been buried over very long (geological) timescales. So, we can do the accounting in terms of billions of tonnes (Gt) of carbon. The pre-industrial atmosphere contained 600Gt of carbon. Burning another 600Gt would be enough to double atmospheric concentrations (except that we have to figure out how much stays in the atmosphere, how much is absorbed by the oceans, etc.). World cumulative emissions show exponential growth over the last century. We are currently at 300Gt cumulative emissions from fossil fuel. 1000Gt of cumulative emissions is an interesting threshold, because that’s about enough to warm the planet by 2°C (which is the EU’s stated upper limit). A straight projection of the current exponential trend takes us to 5000GtC by 2100. It’s not clear there is enough coal to get us there, but it is dangerous to assume that we’ll run out of resources before this. The worst scenario: we get to 5000GtC, wreck the climate, just as we run out of fossil fuels, so civilization collapses, at a time when we no longer have a tolerable climate to live in.
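The accounting above is easy to sketch in code. Here’s a minimal sketch using only the figures quoted in this paragraph (600 GtC pre-industrial atmosphere, 300 GtC emitted so far, a 1000 GtC budget for 2°C) plus the standard 44/12 molecular-weight ratio for converting carbon to CO2:

```python
# Carbon accounting with the figures quoted above. Converting between
# tonnes of carbon and tonnes of CO2 uses the molecular-weight ratio 44/12.

C_TO_CO2 = 44.0 / 12.0  # 1 tonne of carbon is about 3.67 tonnes of CO2

def carbon_to_co2(gt_carbon):
    """Convert gigatonnes of carbon to gigatonnes of CO2."""
    return gt_carbon * C_TO_CO2

PREINDUSTRIAL_GTC = 600.0    # carbon in the pre-industrial atmosphere
EMITTED_SO_FAR_GTC = 300.0   # cumulative fossil-fuel emissions to date
BUDGET_2C_GTC = 1000.0       # cumulative total roughly consistent with +2C

# Fraction of the 2C budget already used, and fraction of the amount that
# would double pre-industrial concentrations (ignoring ocean/land uptake):
budget_used = EMITTED_SO_FAR_GTC / BUDGET_2C_GTC            # 0.3
doubling_progress = EMITTED_SO_FAR_GTC / PREINDUSTRIAL_GTC  # 0.5

print(f"1 GtC = {carbon_to_co2(1.0):.2f} Gt CO2")
print(f"2C budget used: {budget_used:.0%}; doubling amount emitted: {doubling_progress:.0%}")
```

Note how the choice of units (carbon vs. CO2) changes the headline number by a factor of 3.67, which is exactly why the bookkeeping needs to be explicit.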

Of course, such exponential growth can never continue indefinitely. To demonstrate the point, Ray showed the video of The Impossible Hamster Club. The key question is whether we will voluntarily stop this growth in carbon emissions, and if we don’t, at what point will natural limits kick in and stop the growth for us?

There are four timescales for CO2 drawdown:

  • Uptake by the ocean mixed layer – a few decades
  • Uptake by the deep ocean – a few centuries
  • Carbonate dissolution (laying down new sediments on the ocean bed) – a few millennia
  • Silicate weathering (reaction between rocks and CO2 in the atmosphere that creates limestone) – a few hundred millennia.

Ray then showed the results of some simulations using the Hamburg carbon cycle model. The scenario they used is a ramp up to peak emissions in 2010, followed by a drop to either 4, 2, or 1Gt per year from then on. The graph of atmospheric concentrations out to the year 4000 shows that holding emissions stable at 2Gt/yr still causes concentrations to ramp up to 1000ppm. Even reducing to 1Gt/yr leads to an increase to around 600ppm by the year 4000. The obvious conclusion is that we have to reduce net emissions to approximately zero in order to keep the climate stable over the next few thousand years.
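The four drawdown timescales above can be sketched as a toy impulse-response model: a pulse of emitted carbon is removed as a sum of exponentials, one per process. This is emphatically not the Hamburg model – the fractions and e-folding times below are illustrative guesses, chosen only to show the shape of the response (a fast initial drawdown, then a long tail governed by silicate weathering):

```python
import math

# (fraction of pulse removed by this process, e-folding time in years)
# Illustrative values only, matching the four timescales listed above.
PROCESSES = [
    (0.30, 30.0),      # ocean mixed layer: decades
    (0.30, 300.0),     # deep ocean: centuries
    (0.20, 3000.0),    # carbonate dissolution: millennia
    (0.20, 300000.0),  # silicate weathering: hundreds of millennia
]

def airborne_fraction(t_years):
    """Fraction of an emissions pulse still in the atmosphere after t years."""
    return sum(f * math.exp(-t_years / tau) for f, tau in PROCESSES)

for t in (0, 100, 1000, 10000):
    print(f"year {t:>6}: {airborne_fraction(t):.2f} of the pulse remains")
```

The long tail is the point: even ten thousand years after a pulse, a substantial fraction is still airborne, which is why the simulations show that only near-zero net emissions stabilize concentrations.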

What does a cumulative emissions total of 5000GtC mean for our future? Peak atmospheric concentrations will reach over 2000ppm, and stay there for around 10,000 years, then slowly decline on a longer timescale because of silicate weathering. Global mean temperature rises by around 10°C. Most likely, the Greenland and West Antarctic ice sheets will melt completely (it’s not clear what it would take to melt the East Antarctic). So what we do this century will affect us for tens of thousands of years. Paul Crutzen coined the term anthropocene to label this new era in which humans started altering the climate. In the distant future, the start of the anthropocene will look as dramatic as other geological shifts – certainly bigger than the change at the end of the KT extinction.

This makes geoengineering by changing the earth’s albedo an abomination (Ray mentioned as an example the view put forward in that awful book Superfreakonomics). It’s morally reprehensible, because it leads to the Damocles world. The sword hanging over us is that for the next 10,000 years, we’re committed to doing the sulphur seeding every two years, and continuing to do so no matter what unfortunate consequences, such as drought, happen as side effects.

But we will need longwave geoengineering – some way of removing CO2 from the atmosphere to deal with the last gigatonne or so of emissions, because these will be hard to get rid of no matter how big the push to renewable energy sources. That suggests we do need a big research program on air capture techniques.

So, the core questions for climate ethics are:

  • What is the right amount to spend to reduce emissions?
  • How should costs be divided up (e.g. US, Europe, Asia, etc)?
  • How to figure the costs of inaction?
  • When should it be spent?

There is often a confusion between fairness and expedience (e.g. Cass Sunstein, an Obama advisor, makes this mistake in his work on climate change justice). The argument goes that a carbon tax that falls primarily on the first world is, in effect, a wealth transfer to the developing world. It’s a form of foreign aid, therefore hard to sell politically to Americans, and therefore unfair. But the real issue is not about what’s expedient, the issue is about the right thing to do.

Not all costs can be measured by money, which makes cost-benefit analysis a poor tool for reasoning about climate change. For example, how can we account for loss of life, loss of civil liberties, etc in a cost/benefit analysis? Take for example the effect of capital punishment on crime reduction versus the injustice of executing the innocent. This cannot be a cost/benefit decision, it’s a question of social values. In economic theory the “contingent valuation” of non-market costs and benefits is hopelessly broken. Does it make sense to trade off polar bear extinction against Arctic oil revenue by assigning a monetary value to polar bears? A democratic process must make these value judgements – we cannot push them off to economic analysis in terms of cost-benefits. The problem is that the costs and benefits of planetary scale processes are not additive. Therefore cost/benefit is not a suitable tool for making value decisions.

Similarly with the use of (growth in) GDP, which economists use as a proxy for a nation’s welfare. Bastiat introduced the idea of the broken window fallacy – the idea that damage to people’s property boosts GDP because it increases the need for work to be done to fix it, and hence increases money circulation. This argument is often used by conservatives to pooh-pooh the idea of green jobs – what’s good for jobs doesn’t necessarily make people better off. But right now the entire economy is made out of broken windows: Hummers, McMansions, video screens in fast-food joints… all of it is consumption that boosts GDP without actually improving life for anyone. (Perhaps we should try measuring gross national happiness instead, like the Bhutanese.)

And then there’s discounting – how do we compare the future with the present? The usual technique is to exponentially downweight future harms according to how far in the future they are. The rationale is you could equally well put the money in the bank, and collect interest to pay for future harms (i.e. generate a “richer future”, rather than spend the money now on mitigating the problem). But certain things cannot be replaced by money (e.g. human life, species extinction). Therefore they cannot be discounted. And of course, economists make the 9 billion tonne hamster mistake – they assume the economy can keep on growing forever. [Note: Ray has more to say on cost-benefit and discounting in his slides, which he skipped over in the talk through lack of time]
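The discounting scheme Ray criticises is easy to make concrete: a harm occurring T years from now is weighted by (1+r)^-T, so at a typical 3% rate a harm a century away counts for only about 5% of its face value, and a harm five centuries away effectively vanishes. A minimal sketch (the rate and dollar figure are illustrative, not from the talk):

```python
def present_value(future_cost, years, rate):
    """Discounted present value of a cost incurred `years` from now,
    using standard exponential discounting at annual rate `rate`."""
    return future_cost / (1.0 + rate) ** years

# A hypothetical $1M harm, discounted at 3% per year:
harm = 1_000_000.0
for years in (10, 100, 500):
    print(f"in {years:>3} years: ${present_value(harm, years, 0.03):,.2f}")
```

This is precisely why exponential discounting is a poor fit for irreplaceable losses: the formula quietly assumes money in the bank can substitute for whatever is lost.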

Fairness is a major issue. How do we approach this? For example, retributive justice – punish the bad guys? You broke it, you fix it? Whoever suffers the least from fixing it moves first? Consider everyone to be equal? Well, the Canadian climate policy appears to be: wait to see what Obama does, and do the same, unless we can get away with doing less.

What about China vs. the US, the two biggest emitters of greenhouse gases? The graph of annual CO2 emissions shows that China overtook the US in the last few years (while, for example, France held their emissions constant). But you have to normalize the emissions per capita, and then the picture looks very different. And here’s an interesting observation: China has per capita emissions very close to those of France, but doesn’t have the French standard of living. Therefore there is clearly room for China to improve its standard of living without increasing per capita emissions, which means that emissions controls do not necessarily hold back development.

But because it’s cumulative emissions that really matter, we have to look at each nation’s cumulative per capita emissions. The calculation is tricky because we have to account for population growth. It turns out that the US has a bigger population growth problem than China, which, when added to the cumulative emissions, means the US has a much bigger responsibility to act. If we take the target of 1000GtC as the upper limit on cumulative emissions (to stay within the 2°C temperature rise), and allocate that equally to everyone, based on 2006 population figures, we get about 100 tonnes of carbon per capita as a lifetime allowance. The US has an overdraft on this limit (it has used up more than this), while China still has a carbon balance (it has used up less). In other words, in terms of the thing that matters most, cumulative emissions, the US has used up more than its fair share of a valuable resource (slide 43 from Ray’s talk):

This graph shows the cumulative emissions per (2006) capita for the US and China. If we take 100 tonnes as the lifetime limit for each person (to keep within the global 1000Gt target), then the US has already used more than its fair share, and China has used much less.

This analysis makes it clear what the climate justice position is. The Chinese might argue that, just to protect themselves and their climate, China needs to do more than its fair share. In terms of a negotiation, arguing that everyone should take action together might be expedient. But the right thing to do for the US is not just to reduce emissions to zero immediately, but to pay back that overdraft.
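The overdraft arithmetic is simple enough to sketch in a few lines, which is part of its rhetorical power. Here the 100-tonne lifetime allowance comes from the talk; the per-country cumulative figures below are illustrative placeholders, not the numbers from Ray’s slide:

```python
LIFETIME_ALLOWANCE_T = 100.0  # tonnes of carbon per person (from the talk)

def carbon_balance(cumulative_per_capita_t):
    """Remaining lifetime allowance in tonnes of carbon per person.
    Negative means an overdraft: more than a fair share already used."""
    return LIFETIME_ALLOWANCE_T - cumulative_per_capita_t

# Hypothetical cumulative per-capita emissions, for illustration only:
usage = {"US (hypothetical)": 300.0, "China (hypothetical)": 60.0}
for country, used in usage.items():
    bal = carbon_balance(used)
    status = "overdraft" if bal < 0 else "in credit"
    print(f"{country}: {bal:+.0f} t of carbon ({status})")
```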

Some interesting questions from the audience:

Q: On geoengineering – why rule out attempts to change the albedo of the earth by sulphate particle seeding when we might need an “all of the above” approach? A: Ray’s argument is largely about what happens if it fails. For example, if the Dutch dykes fail, in the worst case, the Dutch could move elsewhere. If global geoengineering fails, we don’t have an elsewhere to move to. Also, if you stop, you get hit all at once with the accumulated temperature rise. This makes Levitt’s suggestion of “burn it all and geoengineer to balance” morally reprehensible.

Q: Could you say more about the potential for air capture? A: It’s a very intriguing idea. All the schemes being trialed right now capture carbon in the form of CO2 gas, which would then need to be put down into mineral form somehow. A more interesting approach is to capture CO2 directly in mineral form, e.g. limestone. It’s not obviously crazy, and if it works it would help. It’s more like insurance, and investing in research in this is likely to provide a backup plan in a way that albedo alteration does not.

Q: What about other ways of altering the albedo? A: Suggestions such as painting roofs & parking lots white will help reduce urban heat, mitigate the effect of heatwaves, and also reduce the use of air conditioners. Which is good, but it’s essentially a regional effect; the overall effect on the global scale is probably negligible. So it’s a good idea precisely because its impact is only regional – it doesn’t carry the global risks of albedo geoengineering.

Q: About nuclear – will we need it? A: Ray says probably yes. If it comes down to a choice between nuclear vs. coal, the choice has to be nuclear.

Finally, I should mention Ray has a new book coming out: Principles of Planetary Climate, and is a regular contributor to RealClimate.org.

This morning I attended the first part of the Centre for Environment’s Research Day, and I’m glad I did, because I caught the talk by Chris Kennedy from the Sustainable Infrastructure Group in the Dept of Civil Engineering, on “Greenhouse Gases from Global Cities”. He talked about a study he’s just published, on the contribution of ten major cities to GHG emissions. Chris points out that most of the solutions to climate change will have to focus on changing cities. Lots of organisations are putting together greenhouse gas inventories for cities, but everyone is doing it differently, measuring different things. Chris’s study examined how to come up with a consistent approach. For example, the approach taken in Paris is good at capturing lifecycle emissions, London is good at spatial issues, Tokyo is good at analyzing emissions over time. Each perspective is useful, but the differences make comparisons hard. And there’s no obvious right way to do it. For example, how do you account for the timing of emissions release, e.g. for waste disposal? Do you care about current emissions as a snapshot, or future emissions that are committed because of waste generated today?

The IPCC guidelines for measuring emissions take a pure producer perspective: they focus only on emissions that occur within the jurisdiction of each territory. This ignores, for example, consumer emissions when the consumer of a product or service is elsewhere. It also ignores upstream emissions: e.g. electricity generation is generally done outside the city, but used within the city. Then there’s line loss in power transmission to the city; that should also get counted. In Paris, the Bilan Carbone counts embodied emissions in building materials, maintenance of vehicles, refining of fuels, etc., but it ignores emissions by tourists, which are a substantial part of Paris’s economy.

In the study Chris and colleagues did, they studied ten cities, many of them iconic: Bangkok, Barcelona, Cape Town, Denver, Geneva, London, Los Angeles, New York, Prague and Toronto. Ideally they would like to have studied metropolitan regions rather than cities, because it then becomes simpler to include transport emissions for commuting, which really should be part of the assessment of each city. The study relied partially on existing assessments for some of these cities and analyzed emissions in terms of electricity, heating/industrial fuels (lumped together, unfortunately), ground transport, aviation and marine fuels, industrial processes, and waste (the methodology is described here).

For example, for electricity, Toronto comes second in consumption (MWh per capita), after Denver, and is about double that of London and Prague. Mostly, this difference is due to the different climate, but also the amount of commerce and industry within the city. However, the picture for carbon intensity is very different, as there is a big mix of renewables (e.g. hydro) in Toronto’s power supply, and Geneva gets its electricity almost entirely from hydro. So you get some interesting combinations: Toronto has high consumption but low intensity, whereas Cape Town has low consumption and high intensity. So multiply the two: Denver is off the map at 9 t eCO2 per capita, because it has high consumption and high intensity, while most others are in the same range, around 2-3 t eCO2 per capita. And Geneva is very low:
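The per-capita figure is just consumption multiplied by carbon intensity, which is why the combinations matter. Here’s a quick sketch with illustrative numbers (not the exact values from Chris’s paper):

```python
# Per-capita electricity emissions = consumption (MWh/capita) x grid intensity (t eCO2/MWh).
# The numbers below are illustrative, not the study's exact figures.
cities = {
    # city: (MWh per capita, t eCO2 per MWh)
    "Denver":    (14.0, 0.65),   # high consumption, high intensity
    "Toronto":   (13.0, 0.20),   # high consumption, low intensity (hydro in the mix)
    "Cape Town": (5.0,  0.60),   # low consumption, high intensity (coal-heavy grid)
    "Geneva":    (7.0,  0.01),   # almost entirely hydro
}

for city, (consumption, intensity) in cities.items():
    print(f"{city}: {consumption * intensity:.1f} t eCO2 per capita")
```

Swap in the real consumption and intensity figures and you can rank any set of cities the same way.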

Climate has to be taken into account somehow, because there is an obvious relationship between energy used for heating and typical temperatures, which can be assessed by counting heating degree days:
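For reference, heating degree days are computed by summing, over the year, the number of degrees each day’s mean temperature falls below a base temperature (typically 18°C in Canada). A minimal sketch:

```python
def heating_degree_days(daily_mean_temps, base=18.0):
    """Sum of degrees below the base temperature, over all days.
    Days warmer than the base contribute zero."""
    return sum(max(0.0, base - t) for t in daily_mean_temps)

# Illustrative: a cold winter week vs a mild week (mean temps in Celsius)
cold_week = [-10, -8, -5, 0, 2, -3, -12]
mild_week = [15, 18, 20, 16, 19, 21, 17]
print(heating_degree_days(cold_week))  # 162.0
print(heating_degree_days(mild_week))  # 6.0
```

A city with 4,000 heating degree days needs roughly twice the heating energy of one with 2,000, all else being equal, which is why the climate correction matters for cross-city comparisons.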

Aviation is very interesting. Some assessments exclude it, on the basis that local government has no control. But Chris points out that when it comes down to it, local government has very little control over anything, so that argument doesn’t really wash. The UNFCCC says domestic flights should be included, but this then has a small country bias – small countries tend to have very few domestic flights. A better approach is to include international flights as well, so that we count all flights taking off from that city. Chris’s methodology assesses this as jet fuel loaded at each airport. On this measure, London is way out in the lead:

In summary, looking at total emissions, Denver is way out in front. In conversations with them, it was clear they had no idea – they think of themselves as a clean green city up in the mountains. No surprises that the North American cities all fare the worst, driven by a big chunk of emissions from ground transportation. The real surprises though are Bangkok and Cape Town, which compare with New York and Toronto for total emissions:

Chris concluded the talk with some data from Asian cities that were not included in the above study. In particular, Shanghai and Beijing are important in part because of their sheer size. For example, if Shanghai on its own were a country, it would come in at about #25 in the world for total emissions.

One thing I found interesting from the paper that Chris didn’t have time to cover in the talk was the obvious relationship between population density and emissions from ground transportation fuels. Clearly, to reduce carbon emissions, cities need to become much denser (and should all be more like Barcelona):

In a blog post that was picked up by the Huffington Post, Bill Gates writes about why we need innovation, not insulation. He sets up the piece as a choice of emphasis between two emissions targets: 30% reduction by 2025, and 80% reduction by 2050. He argues that the latter target is much more important, and hence we should focus on big R&D efforts to innovate our way to zero-carbon energy sources for transportation and power generation. In doing so, he pours scorn on energy conservation efforts, arguing, in effect, that they are a waste of time. Which means Bill Gates didn’t do his homework.

What matters is not some arbitrary target for any given year. What matters is the path we choose to get there. This is a prime example of the communications failure over climate change. Non-scientists don’t bother to learn the basic principles of climate science, and scientists completely fail to get the most important ideas across in a way that helps people make good judgements about strategy.

The key problem in climate change is not the actual emissions in any given year. It’s the cumulative emissions over time. The carbon we emit by burning fossil fuels doesn’t magically disappear. About half is absorbed by the oceans (making them more acidic). The rest cycles back and forth between the atmosphere and the biosphere, for centuries. There is also tremendous lag in the system. The ocean warms up very slowly, so it takes decades for the Earth to reach a new equilibrium temperature once concentrations in the atmosphere stabilize. This means even if we could immediately stop adding CO2 to the atmosphere today, the earth would keep warming for decades, and wouldn’t cool off again for centuries. It’s going to be tough adapting to the warming we’re already committed to. For every additional year that we fail to get emissions under control we compound the problem.

What does this mean for targets? It means that it matters much more how soon we get started on reducing emissions than what the eventual destination is in any particular future year. Any reduction in annual emissions achieved in the next few years means we save that amount of emissions every year going forward. The longer we take to get emissions under control, the harder we make the problem.

A picture might help:

Three different emissions pathways to give 67% chance of limiting global warming to 2ºC (From the Copenhagen Diagnosis, Figure 22)

The graph shows three different scenarios, each with the same cumulative emissions (i.e. the area under each curve is the same). If we get emissions to peak next year (the green line), it’s a lot easier to keep cumulative emissions under control. If we delay, and allow emissions to continue to rise until 2020, then we can forget about 80% reductions by 2050. We’ll have set ourselves the much tougher task of 100% emissions reductions by 2040!
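To see why the peak year matters so much, here’s a toy model (my own rough numbers, not the Copenhagen Diagnosis calculation): hold the cumulative budget fixed, let emissions grow at 2%/yr until the peak year, then decline linearly to zero, and solve for the year we’d have to hit zero.

```python
# Toy model of the "same cumulative budget, different peak year" argument.
# Assumptions (mine, for illustration): emissions start at 10 GtC/yr in 2010,
# grow 2%/yr until the peak year, then fall linearly to zero; the remaining
# carbon budget is taken to be 250 GtC.
def zero_emissions_year(peak_delay, e0=10.0, growth=0.02, budget=250.0, start=2010):
    # Carbon emitted (and the emissions rate reached) during the growth phase
    emitted = sum(e0 * (1 + growth) ** t for t in range(peak_delay))
    e_peak = e0 * (1 + growth) ** peak_delay
    # Linear decline to zero: area of the triangle = e_peak * decline_years / 2
    decline_years = 2 * (budget - emitted) / e_peak
    return start + peak_delay + decline_years

print(round(zero_emissions_year(0)))    # peak immediately: zero by 2060
print(round(zero_emissions_year(10)))   # peak in 2020: zero by ~2043
```

Delaying the peak by just ten years pulls the zero-emissions deadline forward by nearly two decades, because the extra carbon burnt while emissions keep rising has to come out of the same fixed budget.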

The thing is, there are plenty of good analyses of how to achieve early emissions reductions by deploying existing technology. Anyone who argues we should put our hopes in some grand future R&D effort to invent new technologies clearly does not understand the climate science. Or perhaps can’t do calculus.

Yesterday, I posted that the total budget of fossil fuel emissions we can ever emit is 1 trillion tonnes of Carbon. And that we’ve burnt through about half of that since the dawn of industrialization. Today, I read in the Guardian that existing oil reserves may have been deliberately overestimated by the International Energy Agency. George Monbiot explains how frightening this could be, given the likely impact of lower oil supplies on food production. Madeleine Bunting equates the reluctance to discuss this with the head-in-the-sand attitude that preceded last year’s financial crisis. Looks like the more pessimistic of the peak oil folks may have had it right all along.

None of these articles however makes the link to climate change (Monbiot only mentions it in passing in response to comments). So, which problem is bigger, peak oil or climate change? Does one cancel out the other? Should I stop worrying about the trillionth tonne, if the oil simply doesn’t exist to get there?

A back of the envelope calculation tells me that more than half of the world’s estimated remaining reserves of fossil fuels have to stay buried in the ground if we are to stay within a trillion tonnes. Here are the numbers:

  • Oil: The Energy Watch Group estimates there are 854 Gb (gigabarrels) of oil left, while official industry figures put it at well over 1,200 Gb. Let’s split the difference and say 1,000 Gb (1×10^12 barrels). Jim Bliss calculates that each barrel of crude oil releases about 100kg of carbon. That gives us 0.1 trillion tonnes of carbon from oil.
  • Coal: Wikipedia tells us there are just under 1 trillion tonnes of proved recoverable coal reserves, and that coal has a carbon intensity of about 0.8, so that gives us 0.8 trillion tonnes of carbon from coal.
  • Natural Gas: The US EIA puts the world’s natural gas reserves at somewhat over 6,000 trillion cubic feet, which converts to about 170 trillion cubic meters. Each cubic meter gives about 0.5kg of carbon, so we have 85 trillion kg, or about 0.08 trillion tonnes of carbon from gas.

That all adds up to about 1 trillion tonnes of carbon from estimated fossil fuel reserves, the vast majority of which is coal. If we want a 50:50 chance of staying below 2ºC temperature rise, we can only burn half this much over the next few centuries. If we want better odds, say a 1-in-4 chance of exceeding 2ºC, we can only burn a quarter of it.
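The arithmetic is easy to check, using the figures quoted above (all results in trillions of tonnes of carbon):

```python
# Back-of-envelope carbon content of estimated fossil fuel reserves,
# using the figures quoted in the bullet points above.
oil_barrels = 1.0e12          # ~1,000 Gb remaining (splitting the difference)
oil_c_per_barrel = 100.0      # ~100 kg carbon per barrel
oil = oil_barrels * oil_c_per_barrel / 1e3 / 1e12   # kg -> tonnes -> trillions of tonnes

coal_tonnes = 1.0e12          # ~1 trillion tonnes proved recoverable
coal = coal_tonnes * 0.8 / 1e12                     # carbon intensity ~0.8

gas_m3 = 170e12               # ~170 trillion cubic metres
gas = gas_m3 * 0.5 / 1e3 / 1e12                     # ~0.5 kg carbon per cubic metre

print(f"oil: {oil:.2f}, coal: {coal:.2f}, gas: {gas:.3f}")
print(f"total: {oil + coal + gas} trillion tonnes of carbon")
```

The total comes out at just under 1 trillion tonnes, with coal contributing the vast majority.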

Conclusion: More than one half of all remaining fossil fuel reserves must remain unused. So peak oil and peak coal won’t save us. I would even go so far as to say that the peak oil folks are only about half as worried as they should be!

I posted a few times already about Allen et al.’s paper on the Trillionth Tonne, ever since I saw Chris Jones present it at the EGU meeting in April. Basically, the work gets to the heart of the global challenge. If we want to hold temperatures below a 2°C rise, the key factor is not how much we burn in fossil fuels each year, but the cumulative emissions over centuries (because once we release carbon molecules from being buried under the ground, they tend to stay in the carbon cycle for centuries).

Allen et al. did a probabilistic analysis, and found that cumulative emissions of about 1 trillion tonnes of carbon give us a most likely peak temperature rise of 2ºC (with a 90% confidence interval of 1.3 – 3.9°C). We’ve burnt about half of this total since the beginning of the industrial revolution, so basically, we mustn’t burn more than another half trillion tonnes. We’ll burn through that in less than 30 years at current emissions growth rates. Clearly, we can’t keep burning fossil fuels at the current rate and then just stop on a dime when we get to a trillion tonnes. We have to follow a reduction curve that gets us reducing emissions steadily over the next 50-60 years, until we get to zero net emissions. (One implication of this analysis is that a large amount of existing oil and coal reserves have to stay buried in the ground, which will be hard to ensure given how much money there is to be made in digging it up and selling it.)
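The “less than 30 years” figure is easy to sanity-check, although the answer is quite sensitive to what you assume for current emissions and growth rate. A quick sketch, assuming roughly 10 GtC/yr today (fossil fuels plus land use change) growing at 2.5% per year:

```python
# Years until cumulative emissions reach the remaining half-trillion-tonne budget,
# assuming ~10 GtC/yr today growing at 2.5% per year (my rough assumptions).
budget_gtc = 500.0   # half a trillion tonnes of carbon, in GtC
emissions = 10.0     # GtC per year
growth = 0.025

cumulative, years = 0.0, 0
while cumulative < budget_gtc:
    cumulative += emissions
    emissions *= 1 + growth
    years += 1

print(f"Budget exhausted after about {years} years")  # ~33 with these assumptions
```

With slightly higher starting emissions or a faster growth rate it comes in under 30 years, which is really the point: the exact figure barely matters, because it’s soon either way.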

Anyway, there’s now a website with a set of counters to show how well we’re doing: Trillionthtonne.org. Er, not so well right now, actually.

While at Microsoft last week, Gina Venolia introduced me to George. Well, not literally, as he wasn’t there, but I met his proxy. Gina and co have been experimenting with how to make a remote team member feel part of the team, without the frequent travel, in the Embodied Social Proxies project. The current prototype kit comes pretty close to getting that sense of presence (Gina also has a great talk on this project):

The kit cost about $4000 to put together, and includes:

  • a monitor for a life-sized headshot;
  • two cameras – a very wide angle camera to capture the whole room, plus a remote control camera to pan and zoom (e.g. for seeing slides);
  • noise canceling telecom unit for audio;
  • adjustable height rig to allow George to sit or stand;
  • and of course, wheels, so he can be pushed around to different workspaces.

Now, the first question I had was: could this solve our problem of allowing remote participants to join in a hands-on workshop at a conference? At the last workshop on software research and climate change, we had the great idea that remote participants could appear on a laptop via Skype, and be carried around between breakout sessions by a local buddy. Of course, Skype wasn’t up to the job, and our remote participants ended up having their own mini-workshop. I suspect the wireless internet at most conferences won’t handle this either – the connections tend to get swamped.

But I still think the idea has legs (well, not literally!). See, $4000 is about what it would cost in total travel budget to send someone to Cape Town for the next ICSE. If we can buy much of the kit we need to create a lightweight version of the ESP prototype locally in Cape Town, and use a laptop for the monitor, we could even throw away much of the kit at the end of the conference and still come in under the typical travel budget (not that we would throw it away though!). I think the biggest challenges will be getting a reliable enough internet connection (we’ll probably need to set up our own routers), and figuring out how to mount the kit onto some local furniture for some degree of portability.

Well, if we’re serious about finding solutions to climate change, we have to explore ideas like this.

PS Via this book (thx, Greg) I learned the word “detravelization”. No idea if the chapter on detravelization is any good (because Safari books online doesn’t work at UofT), but I’m clearly going to have a love-hate relationship with a word that’s simultaneously hideous and perfectly apt.

This week, Ontario’s new Feed-in Tariff (FIT) program kicks in. The program sets specific prices that the province will pay to people who develop their own renewable power sources and sell the energy back to the grid. The key idea is that it sets up a guaranteed return on investment for people to build renewable capacity, and at a premium price, too.

The prices are set at different levels for different types of power generation and for different sizes of installations, with each price point designed to make it attractive for people to invest (with presumably some weighting in favour of the power mix the province would like to aim for). For example, a homeowner who puts solar panels on the roof will be paid 80c per kilowatt hour ($0.80/kWh), with the price guaranteed for 20 years. That’s for installations smaller than 10kW; the price goes down for bigger installations (e.g. 44c/kWh for rooftop solar larger than 500kW).

Current electricity prices in Ontario are around $0.08/kWh, so the province is paying 10 times the current market rate for small-scale solar generation. Which makes it a pretty major subsidy. However, the entire program is intended to be revenue neutral. The creation of a large network of small suppliers may prevent the province having to build so many new power stations (the province recently turned down bids of $26 billion for new nuclear plants), and allow it to phase out the existing coal plants within the next few years.

So what does this mean for the homeowner? A typical household solar installation will be well below 10kW. I grabbed a few ballpark figures from the web. A small household solar installation might generate about 12kWh per day, i.e. about $10 per day, or about $3,500 per year at the FIT rate; while the average household consumption is about 12,000kWh per year, or about $1,000 at current market prices. So the panels will pay for themselves within a few years, and then become a source of revenue!
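The arithmetic works out as follows; the installation cost is my own illustrative assumption (the program doesn’t specify one), so treat the payback figure as a rough guide only:

```python
# Annual revenue for a small rooftop solar install under Ontario's FIT rate,
# versus the household's bill at the retail rate.
fit_rate = 0.80          # $/kWh, guaranteed for 20 years (installs under 10 kW)
retail_rate = 0.08       # $/kWh, approximate current Ontario price
daily_generation = 12.0  # kWh/day for a small rooftop install (ballpark)
annual_consumption = 12_000.0  # kWh/year, typical household

annual_revenue = daily_generation * 365 * fit_rate
annual_bill = annual_consumption * retail_rate
print(f"FIT revenue: ${annual_revenue:,.0f}/yr, electricity bill: ${annual_bill:,.0f}/yr")

install_cost = 20_000.0  # assumed, for illustration only
print(f"Simple payback: {install_cost / annual_revenue:.1f} years")
```

At roughly $3,500 a year in revenue, even a fairly expensive installation pays for itself well inside the 20-year guarantee period.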

The idea of a Feed-in Tariff program isn’t new – they’ve worked well in Europe for a number of years, and indeed the province of Ontario has had one in place since 2006. However the old program was criticised for setting rates too low, especially for small-scale generation; the new program increases the rates dramatically – for example the new small scale solar rate is twice the old rate.

Full details of the new program are at the Ontario Power Authority’s site.

I gave my talk on SE for the Planet again this afternoon, to the local audience. We recorded it, and will get the whole thing up on the web soon.

I mentioned during the talk that the global greenhouse emissions growth curve and the world population growth curve are almost identical, and speculated that this means emissions per capita have effectively not changed over the last century or so. After the talk, Jonathan pointed out to me that it means no such thing. While globally the average emissions per capita might have remained roughly constant, the averages probably hide several very different trends. For example, in the industrialized world, emissions per capita appear to have grown dramatically, while population growth has slowed. In contrast, in the undeveloped world the opposite has happened: huge population growth, with very little emissions growth. When you average both trends you get an apparently static per capita emissions rate.

Anyway, this observation prompted me to go back and look at the data. I’d originally found this graph, which appears to show the growth curves are almost identical:

Greenhouse gas emissions versus population growth

The problem is, I didn’t check the credibility of the source. The graph comes from the site World Climate Report, which turns out to be a denialist site, full of all sorts of misinformation. In this case, they appear to have cooked the graph (note the low resolution and wide aspect ratio) to make the curves look like they fit much better than they really do. To demonstrate this, I reconstructed them myself.

I got amazingly detailed population data by year from digital survivors. They’ve done a wonderful job of collating data from many different sources, although their averaging technique does lead to the occasional anomaly (e.g. in 1950, there’s a change in availability of source datasets, and it shows up as a tiny glitch on my graph). I got the CO2 emissions data from the US government’s Carbon Dioxide Information Analysis Center (CDIAC).

Here’s the graph from 1850 to 2006 (click it for a higher resolution version):

Notice that emissions grew much more sharply than population from 1950 onwards, with the only exceptions being during the economic recessions of the early 1980s, early 1990s, and around 2000. Since 2000, emissions have been growing at more than double the population growth rate. So, I think that effectively explodes the myth that population growth alone explains emissions growth. It also demonstrates the importance of checking your sources before quoting them…
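The “more than double” claim is easy to check with round numbers (approximate values for 2000 and 2006; emissions in GtC from CDIAC, population in billions):

```python
# Compound annual growth rates of global CO2 emissions vs population, 2000-2006.
# Approximate round-number values, not the exact dataset figures.
emissions_2000, emissions_2006 = 6.75, 8.23    # GtC, fossil fuel emissions
population_2000, population_2006 = 6.08, 6.56  # billions

def cagr(start, end, years):
    """Compound annual growth rate over the period."""
    return (end / start) ** (1 / years) - 1

e_growth = cagr(emissions_2000, emissions_2006, 6)
p_growth = cagr(population_2000, population_2006, 6)
print(f"emissions: {e_growth:.1%}/yr, population: {p_growth:.1%}/yr")
print(f"ratio: {e_growth / p_growth:.1f}x")
```

With these figures, emissions grew at over 3% a year against population growth of under 1.5%, so per capita emissions were clearly rising over the period.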