Last week I was at the 2012 AGU Fall Meeting. I plan to blog about many of the talks, but let me start with the Tyndall lecture given by Ray Pierrehumbert, on “Successful Predictions”. You can see the whole talk on YouTube, so here I’ll try to give a shorter summary.

Ray’s talk spanned 120 years of research on climate change. The key message is that science is a long, slow process of discovery, in which theories (and their predictions) tend to emerge long before they can be tested. We often learn just as much from the predictions that turned out to be wrong as we do from those that were right. But successful predictions eventually form the body of knowledge that we can be sure about, not just because they were successful, but because they build up into a coherent explanation of multiple lines of evidence.

Here are the successful predictions:

1896: Svante Arrhenius correctly predicts that increases in fossil fuel emissions would cause the earth to warm. At that time, much of the theory of how atmospheric heat transfer works was missing, but nevertheless, he got a lot of the process right. He was right that surface temperature is determined by the balance between incoming solar energy and outgoing infrared radiation, and that the balance that matters is the radiation budget at the top of the atmosphere. He knew that the absorption of infrared radiation was due to CO2 and water vapour, and he also knew that CO2 is a forcing while water vapour is a feedback. He understood the logarithmic relationship between CO2 concentrations in the atmosphere and surface temperature. However, he got a few things wrong too. His attempt to quantify the enhanced greenhouse effect was incorrect, because he worked with a 1-layer model of the atmosphere, which cannot capture the competition between water vapour and CO2, and doesn’t account for the role of convection in determining air temperatures. His calculations were incorrect because he had the wrong absorption characteristics of greenhouse gases. And he thought the problem would be centuries away, because he didn’t imagine an exponential growth in use of fossil fuels.
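(For reference, the logarithmic relationship Arrhenius identified is still in use today, albeit with modern coefficients rather than his: a widely used approximation, from Myhre et al. 1998, gives the radiative forcing for an increase in CO2 concentration from C0 to C as ΔF ≈ 5.35 ln(C/C0) W/m², with the equilibrium warming then being ΔT = λΔF, where λ is a sensitivity parameter that bundles up the feedbacks, including water vapour.)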

Arrhenius, as we now know, was way ahead of his time. Nobody really considered his work again for nearly 50 years, a period we might think of as the dark ages of climate science. The story perfectly illustrates Paul Hoffman’s tongue-in-cheek depiction of how scientific discoveries work: someone formulates the theory, other scientists then reject it, ignore it for years, eventually rediscover it, and finally accept it. These “dark ages” weren’t really dark, of course – much good work was done in this period. For example:

  • 1900: Frank Very worked out the radiation balance, and hence the temperature, of the moon. His results were confirmed by Pettit and Nicholson in 1930.
  • 1902-14: Arthur Schuster and Karl Schwarzschild used a 2-layer radiative-convective model to explain the structure of the sun.
  • 1907: Robert Emden realized that a similar radiative-convective model could be applied to planets, and Gerard Kuiper and others applied this to astronomical observations of planetary atmospheres.

This work established the standard radiative-convective model of atmospheric heat transfer. This treats the atmosphere as two layers; in the lower layer, convection is the main heat transport, while in the upper layer, it is radiation. A planet’s outgoing radiation comes from this upper layer. However, up until the early 1930s, there was no discussion in the literature of the role of carbon dioxide, despite occasional discussion of climate cycles. In 1928, George Simpson published a memoir on atmospheric radiation, which assumed water vapour was the only greenhouse gas, even though, as Richardson pointed out in a comment, there was evidence that even dry air absorbed infrared radiation.
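To give a feel for what this kind of model does, here’s a toy version (my own illustration, not Schuster and Schwarzschild’s actual formulation): treat the atmosphere as n perfectly absorbing layers in pure radiative equilibrium, which makes the surface temperature (n+1)^¼ times the planet’s emission temperature:

    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
    S0 = 1361.0       # solar constant, W m^-2
    ALBEDO = 0.3      # planetary albedo

    def emission_temperature():
        """Effective temperature at which the planet radiates to space (~255 K)."""
        return ((S0 * (1 - ALBEDO)) / (4 * SIGMA)) ** 0.25

    def grey_surface_temperature(n_layers):
        """Surface temperature with n fully absorbing atmospheric layers,
        assuming pure radiative equilibrium (i.e. no convection)."""
        return (n_layers + 1) ** 0.25 * emission_temperature()

    print(f"Emission temperature: {emission_temperature():.0f} K")
    for n in (1, 2):
        print(f"{n}-layer radiative equilibrium surface temperature: "
              f"{grey_surface_temperature(n):.0f} K")

Pure radiative equilibrium overheats the surface (the 2-layer case gives around 335 K), which is exactly why the convective adjustment in the lower layer matters.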

1938: Guy Callendar is the first to link observed rises in CO2 concentrations with observed rises in surface temperatures. But Callendar failed to revive interest in Arrhenius’s work, and made a number of mistakes in things that Arrhenius had gotten right. Callendar’s calculations focused on the radiation balance at the surface, whereas Arrhenius had (correctly) focussed on the balance at the top of the atmosphere. Also, he neglected convective processes, which astrophysicists had already resolved using the radiative-convective model. In the end, Callendar’s work was ignored for another two decades.

1956: Gilbert Plass correctly predicts a depletion of outgoing radiation in the 15 micron band, due to CO2 absorption. This depletion was eventually confirmed by satellite measurements. Plass was one of the first to revisit Arrhenius’s work since Callendar, however his calculations of climate sensitivity to CO2 were also wrong, because, like Callendar, he focussed on the surface radiation budget, rather than the top of the atmosphere.

1961-2: Carl Sagan correctly predicts an extremely strong greenhouse effect in the atmosphere of Venus, as the only way to explain the very high observed temperatures. His calculations showed that greenhouse gases must absorb around 99.5% of the outgoing surface radiation. The composition of Venus’s atmosphere was confirmed by NASA’s Venus probes in 1967-70.

1959: Bert Bolin and Erik Eriksson correctly predict the exponential increase in CO2 concentrations in the atmosphere as a result of rising fossil fuel use. At that time they did not have good data for atmospheric concentrations prior to 1958, hence their hindcast back to 1900 was wrong, but despite this, their projections forward to 2000 were remarkably good.

1967: Suki Manabe and Dick Wetherald correctly predict that warming in the lower atmosphere would be accompanied by stratospheric cooling. They had built the first completely correct radiative-convective implementation of the standard model applied to Earth, and used it to calculate a +2C equilibrium warming for doubling CO2, including the water vapour feedback, assuming constant relative humidity. The stratospheric cooling was confirmed in 2011 by Gillett et al.

1975: Suki Manabe and Dick Wetherald correctly predict that the surface warming would be much greater in the polar regions, and that there would be some upper troposphere amplification in the tropics. This was the first coupled general circulation model (GCM), with an idealized geography. This model computed changes in humidity, rather than assuming them, as had been the case in earlier models. It showed polar amplification, and some vertical amplification in the tropics. The polar amplification has since been measured, and was confirmed by Serreze et al. in 2009. However, the height gradient in the tropics hasn’t yet been confirmed (nor has it yet been falsified – see Thorne 2008 for an analysis).

1989: Ron Stouffer et al. correctly predict that the land surface will warm more than the ocean surface, and that warming of the southern ocean would be temporarily suppressed due to slower ocean heat uptake. These predictions have proved correct, although the models failed to predict the strong warming we’ve seen over the Antarctic Peninsula.

Of course, scientists often get it wrong:

1900: Knut Angström incorrectly predicts that increasing levels of CO2 would have no effect on climate, because he thought the effect was already saturated. His laboratory experiments weren’t accurate enough to detect the actual absorption properties, and even if they were, the vertical structure of the atmosphere would still allow the greenhouse effect to grow as CO2 is added.

1971: Rasool and Schneider incorrectly predict that atmospheric cooling due to aerosols would outweigh the warming from CO2. However, their model had some important weaknesses, and was shown to be wrong by 1975. Rasool and Schneider fixed their model and moved on. Good scientists acknowledge their mistakes.

1993: Richard Lindzen incorrectly predicts that warming will dry the troposphere, according to his theory that a negative water vapour feedback keeps climate sensitivity to CO2 really low. Lindzen’s work attempted to resolve a long-standing conundrum in climate science. In 1981, the CLIMAP project reconstructed temperatures at the Last Glacial Maximum, and showed very little tropical cooling. This was inconsistent with the general circulation models (GCMs), which predicted substantial cooling in the tropics (e.g. see Broccoli & Manabe 1987). So everyone thought the models must be wrong. Lindzen attempted to explain the CLIMAP results via a negative water vapour feedback. But then the CLIMAP results started to unravel, and newer proxies demonstrated that it was the CLIMAP data, rather than the models, that was wrong – the models had been getting it right all along, and it was Lindzen’s theory that failed. Unfortunately, bad scientists don’t acknowledge their mistakes; Lindzen keeps inventing ever more arcane theories to avoid admitting he was wrong.

1995: John Christy and Roy Spencer incorrectly calculate that the lower troposphere is cooling, rather than warming. Again, this turned out to be wrong, once errors in satellite data were corrected.

In science, it’s okay to be wrong, because exploring why something is wrong usually advances the science. But sometimes, theories are published that are so bad, they are not even wrong:

2007: Courtillot et al. predicted a connection between cosmic rays and climate change. But they couldn’t even get the sign of the effect consistent across the paper. You can’t falsify a theory that’s incoherent! Scientists label this kind of thing as “Not even wrong”.

Finally, there are, of course, some things that scientists didn’t predict. The most important of these is probably the multi-decadal fluctuations in the warming signal. If you calculate the radiative effect of all greenhouse gases, and the delay due to ocean heating, you still can’t reproduce the flat period in the temperature trend that was observed from 1950-1970. While this wasn’t predicted, we ought to be able to explain it after the fact. Currently, there are two competing explanations. The first is that ocean heat uptake itself has decadal fluctuations (although models don’t show this); if climate sensitivity is at the low end of the likely range (say 2°C per doubling of CO2), we could be seeing a decadal fluctuation superimposed on a steady warming signal. The other explanation is that aerosols took some of the warming away from GHGs. This explanation requires a higher value for climate sensitivity (say around 3°C), but with a significant fraction of the warming counteracted by an aerosol cooling effect. If this explanation is correct, it’s a much more frightening world, because it implies much greater warming as CO2 levels continue to increase. The truth is probably somewhere between these two. (See Armour & Roe, 2011 for a discussion.)
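A rough way to see the trade-off: in a simple equilibrium energy balance, ΔT ≈ (S/F2x)·(Fghg + Faer), where S is the sensitivity per doubling of CO2, F2x ≈ 3.7 W/m² is the forcing from a doubling, and Faer is negative. The same observed warming can be matched either with a low S and a small aerosol offset, or with a higher S and a larger (more negative) Faer – which is why the second explanation implies much greater warming as CO2 levels continue to rise.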

To conclude, climate scientists have made many predictions about the effects of increasing greenhouse gases that have proven to be correct. They have earned the right to be listened to, but is anyone actually listening? If we fail to act upon the science, will future archaeologists wade through AGU abstracts and try to figure out what went wrong? There are signs of hope – in his re-election acceptance speech, President Obama revived his pledge to take action, saying “We want our children to live in an America that …isn’t threatened by the destructive power of a warming planet.”

The second speaker at our Workshop on City Science was Andrew Wisdom from Arup, talking about Cities as Systems of Systems. Andrew began with the observation that cities are increasingly under pressure, as the urban population continues to grow, and cities struggle to provide adequate infrastructure for their populations to thrive. But a central part of his message is that the way we think about things tends to create the way they are, and this is especially so with how we think about our cities.

As an exercise, he first presented a continuum of worldviews, from Technocentric at one end, to Ecocentric at the other end:

  • In the Techno-centric view, humans are dissociated from the earth. Nature has no inherent value, and we can solve everything with ingenuity and technology. This worldview tends to view the earth as an inert machine to be exploited.
  • In the Eco-centric view, the earth is alive and central to the web of life. Humans are an intrinsic part of nature, but human activity is already exceeding the limits of what the planet can support, to the point that environmental problems are potentially catastrophic. Hence, we need to get rid of materialism, eliminate growth, and work to restore balance.
  • Somewhere in the middle is a Sustain-centric view, which accepts that the earth provides an essential life support system, and that nature has some intrinsic value. This view accepts that limits are being reached, that environmental problems tend to take decades to solve, and that more growth is not automatically good. Humans can replace some but not all natural processes, and we have to focus more on quality of life as a measure of success.

Andrew then asked the audience to imagine this continuum spread along one wall of the room, and asked us each to go and stand where we felt we fit on the spectrum. Many of the workshop participants positioned themselves somewhere between the eco-centric and sustain-centric views, with a small cluster at the extreme eco-centric end, and another cluster just to the techno-centric side of sustain-centric. Nobody stood at the extreme techno-centric end of the room!

Then, he asked us to move to where we think the city of Toronto sits, and then where we think Canada sits, and finally where we feel the world sits. For the first two of these, everyone shifted a long way towards the technocentric end of the spectrum (and some discussion ensued to the effect that both our mayor and our prime minister are a long way off the chart altogether – they are both well known for strong anti-environmentalist views). For the whole world, people didn’t move much from the “Canada” perspective. An immediate insight was that we (workshop attendees) are far more towards the ecocentric end of the spectrum than either our current city or federal governments, and perhaps the world in general. So if our governments (and by extension the voters who elect them) are out of step with our own worldviews, what are the implications? Should we, as researchers, be aiming to shift people’s perspectives?

One problem that arises from one’s worldview is how one understands messages about environmental problems. For example, people with a technocentric perspective tend to view discussions of sustainability as being about sacrifice – ‘wearing a hair shirt’, consuming less, etc – which then leads to waning interest in these topics. Indeed, analysis of Google Trends on terms like global warming and climate change shows spikes in 2007 around the release of Al Gore’s movie and the IPCC assessment, but declining interest since then.

Jeb Brugmann, the previous speaker, talked about the idea of a Consumptive city versus a Generative city, which is a change in perspective that alters how we view cities, changes what we choose to measure, and hence affects the way our cities evolve.

Changes in the indices we pay attention to can have a dramatic impact. For example, a study in Melbourne created the VAMPIRE index (Vulnerability Assessment for Mortgage, Petroleum and Inflation Risks and Expenses), which shows the relative degree of socio-economic stress in suburbs in Brisbane, Sydney, Melbourne, Adelaide and Perth. The pattern that emerges is that in the western suburbs of Melbourne there are few jobs and many people paying off mortgages, all having to commute an hour and a half to the east of the city for work.

Our view of a city tends to create structures that compartmentalize different systems into silos, and then we attempt to optimize within these silos. For example, zoning laws create chunks of land with particular prescribed purposes, and then we end up trying to optimize within each zone. When zoning laws create the kind of problem indicated by the Melbourne VAMPIRE index, there’s little the city can do about it if it continues to think in terms of zoning. The structure of these silos has become fossilized into the organizational structure of government. Take transport, for example. We tend to look at existing roads, and ask how to widen them to handle growth in traffic; we rarely attempt to solve traffic issues by asking bigger questions about why people choose to drive. Hence, we miss the opportunity to solve traffic problems by changing the relationship between where people live and where they work. Re-designing a city to provide more employment opportunities in neighbourhoods that are suffering socio-economic stress is far more likely to help than improving the transport corridors between those neighbourhoods and other parts of the city.

Healthcare is another example. The outcome metrics typically used for hospital use include average length of stay, 30-day unplanned readmission rate, cost of readmission, etc. Again, these metrics create a narrow view of the system – a silo – that we then try to optimize within. However, if you compare European and American healthcare systems, there are major structural differences. The US system is based on formula funding, in which ‘clients’ are classified in terms of type of illness, standard interventions for that illness, and associated costs. Funding is then allocated to service providers based on this classification scheme. In Europe, service providers are funded directly, and are able to decide at the local level how best to allocate that funding to serve the needs of the population they care for. The European model is a much more flexible system that treats patients’ real needs, rather than trying to fit each patient into a pre-defined category. In the US, the medical catalogue of disorders becomes an accounting scheme for allocating funds, and the result is that medical care costs are going up faster in the US than in any other country. If you plot life expectancy against health spending, the US is falling far behind.

The problem is that the US health system views illness as a problem to be solved. If you think in terms of wellbeing rather than illness, you broaden the set of approaches you can use. For example, there are significant health benefits to pet ownership, providing green space within cities, and so on, but these are not fundable within the US system. There are obvious connections between body mass index and the availability of healthy foods, the walkability of neighbourhoods, and so on, but these don’t fit into a healthcare paradigm that allocates resources according to disease diagnosis.

Andrew then illustrated the power of re-thinking cities as systems-of-systems through several Arup case studies:

  • Dongtan eco-city. This city was designed from the ground up to be food positive and energy positive (i.e. intended to generate more food and more clean energy than it uses). The design makes it preferable to walk or bike rather than drive a car. A key design tool was the use of an integrated model that captures the interactions of different systems within the city. [Dongtan is, incidentally, a classic example of how the media alternately overhypes and then trash-talks major sustainability initiatives, when the real story is so much more interesting].
  • Low2No, Helsinki, a more modest project that aims to work within the existing city to create carbon negative buildings and energy efficient neighbourhoods step by step.
  • Werribee, a suburb of Melbourne, which is mainly an agricultural town, particularly known for its broccoli farming. But with fluctuating prices, farmers have had difficulty selling their broccoli. In an innovative solution that turns this problem into an opportunity, Arup developed a new vision that uses local renewable energy, water and waste re-processing to build a self-sufficient hothouse food production and research facility that provides employment and education along with food and energy.

In conclusion, we have to understand how our views of these systems constrain us to particular pathways, and we have to understand the connections between multiple systems if we want to understand the important issues. In many cases, we don’t do well at recognizing good outcomes, because our worldviews lead us to the wrong measures of success, and then we use these measures to create silos, attempting to optimize within them, rather than seeing the big picture. Understanding the systems, and understanding how these systems shape our thinking is crucial. However, the real challenges then lie in using this understanding to frame effective policy and create effective action.

After Andrew’s talk, we moved into a hands-on workshop activity, using a set of cards developed by Arup called Drivers of Change. The cards are fascinating – there are 189 cards in the deck, each of which summarizes a key issue (e.g. urban migration, homelessness, clean water, climate change, etc), and on the back, distills some key facts and figures. Our exercise was to find connections between the cards – each person had to pick one card that interested him or her, and then team up with two other people to identify how their three cards are related. It was a thought-provoking exercise that really got us thinking about systems-of-systems. I’m now a big fan of the cards and plan to use them in the classroom. (I bought a deck at Indigo for $45, although I note that, bizarrely, Amazon has them selling for over $1000!).

We held a 2-day workshop at U of T last week entitled “Finding Connections – Towards a Holistic View of City Systems”. The workshop brought together a multi-disciplinary group of people from academia, industry, government, and the non-profit sector, all of whom share a common interest in understanding how cities work as systems-of-systems, and how to make our cities more sustainable and more liveable. A key theme throughout the workshop was how to make sure the kinds of research we do in universities actually end up being useful to decision-makers – i.e. can we strengthen evidence-based policymaking (and avoid, as one of the participants phrased it, “policy-based evidence-making”).

I plan to blog some of the highlights of the workshop, starting with the first keynote speaker.

The workshop kicked off with an inspiring talk by Jeb Brugmann, entitled “The Productive City”. Jeb is an expert in urban sustainability and climate change mitigation, and has a book out called “Welcome to the Urban Revolution: How Cities are Changing the World”. (I should admit the book’s been sitting on the ‘to read’ pile on my desk for a while – now I have to read it!).

Jeb’s central message was that we need to look at cities and sustainability in a radically different way. Instead of thinking of sustainability as about saving energy, living more frugally, and making sacrifices, we should be looking at how we re-invent cities as places that produce resources rather than consume them. And he offered a number of case studies that demonstrate how this is already possible.

Jeb started his talk with the question: How will 9 billion people thrive on Earth? He then took us back to a UN meeting in 1990, the World Congress of Local Governments for a Sustainable Future. This meeting was the first time that city governments around the world came together to grapple with the question of sustainable development. To emphasize how new this was, Jeb recollected lengthy discussions at the meeting on basic questions such as how to translate the term ‘sustainable development’ into French, German, etc.

The meeting had two main outcomes:

  • Initial work on Agenda 21, getting communities engaged in collaborative sustainable decision making. [Note: Agenda 21 was subsequently adopted by 178 countries at the Rio Summit in 1992. More interestingly, if you google for Agenda 21 these days, you're likely to find a whole bunch of nutball right-wing conspiracy theories about it being an agenda to destroy American freedom.]
  • A network of city governments dedicated to developing action on climate change [This network became ICLEI - Local Governments for Sustainability]. Jeb noted how the ambitions of the cities participating in ICLEI have grown over the years. Initially, many of these cities set targets of around a 20% reduction in greenhouse gas emissions. Over the years since, these targets have grown. For example, Chicago now has a target of 80% reduction. This is significant because these targets have been through city councils, and have been discussed and agreed on by those councils.

An important idea arising out of these agreements is the concept of the ecological footprint - sometimes expressed as how many earths are needed to support us if everyone had the same resource consumption as you. The problem is that you get into definitional twists on how you measure this, and that gets in the way of actually using it as a productive planning tool.

Here’s another way of thinking about the problem. Cities currently have hugely under-optimized development patterns. For example, some cities have seven times more outspill growth (suburban sprawl) than infill growth. But there are emergent pressures on industry to optimize the use of urban space and urban geography. Hence, we should start to examine under-used urban assets. If we can identify space within the city that doesn’t generate value, we can reinvent it. For example, the laneways of Melbourne, which in the 1970s and 80s were derelict, have now been regenerated into a rich network of local stores and businesses, and have ended up as a major tourist attraction.

We also tend to dramatically underestimate the market viability of energy efficient, sustainable buildings. For example, in Hannover, a successful project built an entire division of eco-homes to Passivhaus standards at a similar rental price to the old 1960s apartment buildings.

The standard view of cities, built into the notion of ecological footprint, is that cities are extraction engines – the city acts as a machine that extracts resources from the surrounding environment, processes these resources to generate value, and produces waste products that must be disposed of. Most work on sustainable cities frames the task as an attempt to reduce the impact of this process, by designing eco-efficient cities. For example, the use of secondary production (e.g. recycling) and designed dematerialization (reduction of waste in the entire product lifecycle) to reduce the inflow of resources and the outflow of wastes.

Jeb argues a more audacious goal is needed: We should transform our cities into net productive systems. Instead of focussing on reducing the impact of cities, we should use urban ecology and secondary production so that the city becomes a net positive resource generator. This is far more ambitious than existing projects that aim to create individual districts that are net zero (e.g. that produce as much energy as they consume, through local solar and wind generation). The next goal should be productive cities: cities that produce more resources than they consume; cities that process more waste than they produce.

Jeb then went on to crunch the numbers for a number of different types of resource (energy, food, metals, nitrogen), to demonstrate how a productive city might fill the gap between rising demand and declining supply:

Energy demand. Current European consumption is around 74 GJ/capita. Imagine that by 2050 we have 9 billion people on the planet, all living like Europeans do now – we’ll need 463 EJ to supply them all. Plot this growth in demand over time, and you have a wedge analysis. Using IEA numbers for projected growth in renewable energy supply to provide the wedges, there’s still a significant shortfall. We’ll need to close the gap via urban renewable energy generation, using community designs of the type piloted in Hannover. Cities have to become net producers of energy.

Here’s the analysis (click each chart for full size):

Food. We can do a similar wedge analysis for food. Current global food production is around 2,800 kcal/capita. But as the population grows, this level of production yields steadily less food per person. Projected increases in crop yields and cropping intensity, conversion of additional arable land, and reductions in waste would still leave a significant gap if we wish to provide a comfortable 3,100 kcal/capita. While urban agriculture is unlikely to displace rural farm production, it can play a crucial role in closing the gap between production and need as the population grows. For example, Havana has a diversified urban agriculture that supplies close to 75% of its vegetables from within the urban environment. Vancouver has been very strategic about building up its urban agricultural production, with one out of every seven jobs in the city in food production.
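To make the arithmetic behind these wedge charts concrete, here’s a minimal sketch of the energy case (the supply wedge numbers below are purely illustrative placeholders, not the IEA’s or Jeb’s figures; the food analysis has exactly the same structure):

    # Toy wedge analysis: given a projected demand, stack up the projected
    # supply "wedges" and see how big a gap is left for cities to close.
    # The wedge numbers below are illustrative placeholders only.

    demand_ej = 463.0    # projected global demand in 2050, EJ/year (from the talk)

    supply_wedges_ej = {          # assumed supply in 2050, EJ/year (placeholders)
        "fossil (constrained)": 200.0,
        "hydro": 30.0,
        "nuclear": 40.0,
        "wind": 50.0,
        "solar": 40.0,
        "biomass": 30.0,
    }

    supplied = sum(supply_wedges_ej.values())
    gap = demand_ej - supplied

    print(f"Projected demand: {demand_ej:.0f} EJ/yr")
    print(f"Sum of wedges:    {supplied:.0f} EJ/yr")
    print(f"Gap to be closed (e.g. by urban generation): {gap:.0f} EJ/yr")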

Other examples include landfill mining to produce iron and other metals, and urban production of nitrogen fertilizer from municipal biosolids.

In summary, we’ve always underestimated just how much we can transform cities. While we remain stuck in a mindset that cities are extraction engines, we will miss opportunities for more radical re-imaginings of the role of global cities. So a key research challenge is to develop a new post-“ecological footprint” analysis. There are serious issues of scaling and performance measurement to solve, and at every scale there are technical, policy, and social challenges. But as cities house ever more of the growing population, we need this kind of bold thinking.

My first year seminar course, PMU199 Climate Change: Software, Science and Society is up and running again this term. The course looks at the role of computational models in both the science and the societal decision-making around climate change. The students taking the course come from many different departments across arts and science, and we get to explore key concepts in a small group setting, while developing our communication skills.

As an initial exercise, this year’s cohort of students have written their first posts for the course blog (assignment: write a blog post on any aspect of climate change that interests you). Feel free to comment on their posts, but please keep it constructive – the students get a chance to revise their posts before we grade them (and if you’re curious, here’s the rubric).

Incidentally, for the course this year, I’ve adopted Andrew Dessler’s new book, Introduction to Modern Climate Change as the course text. The book was just published earlier this year, and I must say, it’s by far the best introductory book on climate science that I’ve seen. My students tell me they really like the book (despite the price), as it explains concepts simply and clearly, and they especially like the fact that it covers policy and society issues as well as the science. I really like the discussion in chapter 1 on who to believe, in which the author explains that readers ought to be skeptical of anyone writing on this topic (including himself), and then lays out some suggestions for how to decide who to believe. Oh, and I love the fact that there’s an entire chapter later in the book devoted to the idea of exponential growth.

At the CMIP5 workshop earlier this week, one of Ed Hawkins’ charts caught my eye, because he changed how we look at model runs. We’re used to seeing climate models explore the range of likely global temperature responses under different future emissions scenarios, with the results presented as a graph of changing temperature over time. For example, this iconic figure from the last IPCC assessment report (click for the original figure and caption at the IPCC site):

These graphs tend to focus too much on the mean temperature response in each scenario (where ‘mean’ means ‘the multi-model mean’). I tend to think the variance is more interesting – both within each scenario (showing differences in the various CMIP3 models on the same scenarios), and across the different scenarios (showing how our future is likely to be affected by the energy choices implicit in each scenario). A few months ago, I blogged about the analysis that Hawkins and Sutton did on these variabilities, to explore how the different sources of uncertainty change as you move from near term to long term. The analysis shows that in the first few decades, the differences in the models dominate (which doesn’t bode well for decadal forecasting – the models are all over the place). But by the end of the century, the differences between the emissions scenarios dominates (i.e. the spread of projections from the different scenarios is significantly bigger than the  disagreements between models). Ed presented an update on this analysis for the CMIP5 models this week, which looks very similar.
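Here’s a rough sketch of that partitioning, for anyone who wants to try it on their own data. It simplifies Hawkins & Sutton’s method (they fit smooth curves to each run to separate internal variability from model differences; this version just lumps the two together), and the input data is synthetic, purely so that the sketch runs:

    import numpy as np

    # At each year, "scenario uncertainty" is the spread between the
    # multi-model means of each scenario, and "model uncertainty" is the
    # average spread between models within a scenario.

    years = np.arange(2000, 2101)
    rng = np.random.default_rng(0)

    # Synthetic stand-in for CMIP output: anomalies[scenario, model, year].
    scenario_scale = np.array([1.5, 2.5, 4.0])              # low / mid / high
    model_factor = np.array([0.7, 0.85, 1.0, 1.15, 1.3])    # models disagree
    forced = (scenario_scale[:, None, None]
              * model_factor[None, :, None]
              * (((years - 2000) / 100.0) ** 2)[None, None, :])
    anomalies = forced + rng.normal(0.0, 0.15, forced.shape)

    multi_model_mean = anomalies.mean(axis=1)                # [scenario, year]
    scenario_uncertainty = multi_model_mean.var(axis=0)      # across scenarios
    model_uncertainty = anomalies.var(axis=1).mean(axis=0)   # across models

    for y in (2020, 2050, 2100):
        i = int(np.where(years == y)[0][0])
        print(f"{y}: model spread = {model_uncertainty[i]:.3f}, "
              f"scenario spread = {scenario_uncertainty[i]:.3f}")
    # With real CMIP output, the model spread dominates in the early decades,
    # and the scenario spread dominates by the end of the century.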

But here’s the new thing that caught my eye: Ed included a graph of temperature responses tipped on its side, to answer a different question: how soon will the global temperature exceed the policymakers’ adopted “dangerous” threshold of 2°C, under each emissions scenario? And, again, how big is the uncertainty? This idea was used in a paper last year by Joshi et al., entitled “Projections of when temperature change will exceed 2 °C above pre-industrial levels”. Here’s their figure 1:

Figure 1 from Joshi et al, 2011

By putting the dates on the Y-axis and temperatures on the X-axis, and cutting off the graph at 2°C, we get a whole new perspective on what the models runs are telling us. For example, it’s now easy to see that in all these scenarios, we pass the 2°C threshold well before the end of the century (whereas the IPCC graph above completely obscures this point), and under the higher emissions scenarios, we get to 3°C by the end of the century.
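The underlying calculation is straightforward: for each model run, smooth the global temperature series and find the first year at which the anomaly relative to pre-industrial exceeds the threshold. Something like this, with a synthetic series standing in for real model output:

    import numpy as np

    def first_crossing(years, anomalies, threshold=2.0, window=11):
        """First year in which the running-mean temperature anomaly
        (relative to pre-industrial) exceeds the threshold, or None."""
        smooth = np.convolve(anomalies, np.ones(window) / window, mode="same")
        above = np.nonzero(smooth >= threshold)[0]
        return int(years[above[0]]) if above.size else None

    # Synthetic stand-in for one model run under one scenario:
    years = np.arange(1900, 2101)
    rng = np.random.default_rng(1)
    anomalies = 3.5 * ((years - 1900) / 200.0) ** 2 + rng.normal(0.0, 0.1, years.size)

    print(first_crossing(years, anomalies))   # around 2050 for this synthetic series

Applying that to every run under every scenario and plotting the spread of crossing years gives you a figure like the one above – the data are exactly the same as in the traditional presentation; only the question we ask of them changes.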

A wonderful example of how much difference the choice of presentation makes. I guess I should mention, however, that the idea of a 2°C threshold is completely arbitrary. I’ve asked many different scientists where the idea came from, and they all suggest it’s something the policymakers dreamt up, rather than anything arising out of scientific analysis. The full story is available in Randalls, 2011, “History of the 2°C climate target”.

In the talk I gave this week at the workshop on the CMIP5 experiments, I argued that we should do a better job of explaining how climate science works, especially the day-to-day business of working with models and data. I think we have a widespread problem that people outside of climate science have the wrong mental models about what a climate scientist does. As with any science, the day-to-day work might appear to be chaotic, with scientists dealing with the daily frustrations of working with large, messy datasets, having instruments and models not work the way they’re supposed to, and of course, the occasional mistake that you only discover after months of work. This doesn’t map onto the mental model that many non-scientists have of “how science should be done”, because the view presented in school, and in the media, is that science is about nicely packaged facts. In reality, it’s a messy process of frustrations, dead-end paths, and incremental progress exploring the available evidence.

Some climate scientists I’ve chatted to are nervous about exposing more of this messy day-to-day work. They already feel under constant attack, and they feel that allowing the public to peer under the lid (or if you prefer, to see inside the sausage factory) will only diminish people’s respect for the science. I take the opposite view – the more we present the science as a set of nicely polished results, the more potential there is for the credibility of the science to be undermined when people do manage to peek under the lid (e.g. by publishing internal emails). I think it’s vitally important that we work to clear away some of the incorrect mental models people have of how science is (or should be) done, and give people a better appreciation for how our confidence in scientific results slowly emerges from a slow, messy, collaborative process.

Giving people a better appreciation of how science is done would also help to overcome some of the games of ping pong you get in the media, where each new result in a published paper is presented as a startling new discovery, overturning previous research, and (if you’re in the business of selling newspapers, preferably) overturning an entire field. In fact, it’s normal for new published results to turn out to be wrong, and most of the interesting work in science is in reconciling apparently contradictory findings.

The problem is that these incorrect mental models of how science is done are often well entrenched, and the best that we can do is to try to chip away at them, by explaining at every opportunity what scientists actually do. For example, here’s a mental model I’ve encountered from time to time about how climate scientists build models to address the kinds of questions policymakers ask about the need for different kinds of climate policy:

This view suggests that scientists respond to a specific policy question by designing and building software models (preferably testing that the model satisfies its specification), and then running the model to answer the question. This is not the only (or even the most common?) layperson’s view of climate modelling, but the point is that there are many incorrect mental models of how climate models are developed and used, and one of the things we should strive to do is to work towards dislodging some of these by doing a better job of explaining the process.

With respect to climate model development, I’ve written before about how models slowly advance based on a process that roughly mimics the traditional view of “the scientific method” (I should acknowledge, for all the philosophy of science buffs, that there really isn’t a single, “correct” scientific method, but let’s keep that discussion for another day). So here’s how I characterize the day to day work of developing a model:

Most of the effort is spent identifying and diagnosing where the weaknesses in the current model are, and looking for ways to improve them. Each possible improvement then becomes an experiment, in which the experimental hypothesis might look like:

“if I change <piece of code> in <routine>, I expect it to have <specific impact on model error> in <output variable> by <expected margin> because of <tentative theory about climatic processes and how they’re represented in the model>”

The previous version of the model acts as a control, and the modified model is the experimental condition.
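To make that concrete, here’s a toy version of the bookkeeping involved (my illustration, not any modelling centre’s actual validation suite): score both the control and the modified model against the same observations, and check whether the error moved in the expected direction by roughly the expected margin.

    import numpy as np

    def rmse(model_field, obs_field):
        """Root-mean-square error of a model output field against observations."""
        return float(np.sqrt(np.mean((model_field - obs_field) ** 2)))

    # Synthetic stand-ins for a gridded output variable (e.g. precipitation):
    rng = np.random.default_rng(42)
    obs = rng.normal(3.0, 1.0, size=(90, 180))                 # "observations"
    control = obs + rng.normal(0.5, 0.3, size=obs.shape)       # current model (biased)
    experiment = obs + rng.normal(0.1, 0.3, size=obs.shape)    # model with the code change

    control_err = rmse(control, obs)
    experiment_err = rmse(experiment, obs)
    expected_margin = 0.2     # the margin stated in the hypothesis (assumed)

    print(f"control RMSE:    {control_err:.2f}")
    print(f"experiment RMSE: {experiment_err:.2f}")
    if control_err - experiment_err >= expected_margin:
        print("hypothesis supported: error reduced by at least the expected margin")
    else:
        print("hypothesis not supported: time to figure out why")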

But of course, this process isn’t just a random walk – it’s guided at the next level up by a number of influences, because the broader climate science community (and to some extent the meteorological community) are doing all sorts of related research, which then influences model development. In the paper we wrote about the software development processes at the UK Met Office, we portrayed it like this:

But I could go even broader and place this within a context in which a number of longer term observational campaigns (“process studies”) are collecting new types of observational data to investigate climate processes that are still poorly understood. This then involves the interaction of several distinct communities. Christian Jakob portrays it like this:

The point of Jakob’s paper, though, is to argue that the modelling and process studies communities don’t currently do enough of this kind of interaction, so there’s room for improvement in how the modelling influences the kinds of process studies needed, and how the results from process studies feed back into model development.

So, how else should we be explaining the day-to-day work of climate scientists?

I’m attending a workshop this week in which some of the initial results from the Fifth Coupled Model Intercomparison Project (CMIP5) will be presented. CMIP5 will form a key part of the next IPCC assessment report – it’s a coordinated set of experiments on the global climate models built by labs around the world. The experiments include hindcasts to compare model skill on pre-industrial and 20th Century climate, projections into the future for 100 and 300 years, shorter term decadal projections, paleoclimate studies, plus lots of other experiments that probe specific processes in the models. (For more explanation, see the post I wrote on the design of the experiments for CMIP5 back in September).

I’ve been looking at some of the data for the past CMIP exercises. CMIP1 originally consisted of one experiment – a control run with fixed forcings. The idea was to compare how each of the models simulates a stable climate. CMIP2 included two experiments, a control run like CMIP1, and a climate change scenario in which CO2 levels were increased by 1% per year. CMIP3 then built on these projects with a much broader set of experiments, and formed a key input to the IPCC Fourth Assessment Report.
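As an aside, the 1% per year idealization is convenient because compound growth at 1% doubles the CO2 concentration after about 70 years (ln 2 / ln 1.01 ≈ 69.7), so a doubling fits comfortably within a century-long simulation.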

There was no CMIP4, as the numbers were resynchronised to match the IPCC report numbers (also there was a thing called the Coupled Carbon Cycle Climate Model Intercomparison Project, which was nicknamed C4MIP, so it’s probably just as well!), so CMIP5 will feed into the fifth assessment report.

So here’s what I have found so far on the vital statistics of each project. Feel free to correct my numbers and help me to fill in the gaps!

                              CMIP            CMIP2           CMIP3           CMIP5
                              (1996 onwards)  (1997 onwards)  (2005-2006)     (2010-2011)
Number of Experiments         1               2               12              110
Centres Participating         16              18              15              24
# of Distinct Models          19              24              21              45
# of Runs (Models X Expts)    19              48              211             841
Total Dataset Size            ??              ??              36 TeraByte     3.3 PetaByte
Total Downloads from archive  ??              ??              1 PetaByte
Number of Papers Published                    47              595
Users                         ??              ??              6700

[Update:] I’ve added a row for number of runs, i.e. the sum of the number of experiments run on each model (in CMIP3 and CMIP5, centres were able to pick a subset of the experiments to run, so you can’t just multiply models and experiments to get the number of runs). Also, I ought to calculate the total number of simulated years that represents (If a centre did all the CMIP5 experiments, I figure it would result in at least 12,000 simulated years).

Oh, one more datapoint from this week. We came up with an estimate that by 2020, each individual experiment will generate an Exabyte of data. I’ll explain how we got this number once we’ve given the calculations a bit more of a thorough checking over.

As today is the deadline for proposing sessions for the AGU fall meeting in December, we’ve submitted a proposal for a session to explore open climate modeling and software quality. If we get the go ahead for the session, we’ll be soliciting abstracts over the summer. I’m hoping we’ll get a lively session going with lots of different perspectives.

I especially want to cover the difficulties of openness as well as the benefits, as we often hear a lot of idealistic talk about how open science would make everything so much better. While I think we should always strive to be more open, it’s not a panacea. There’s evidence that open source software isn’t necessarily better quality, and of course, there are plenty of people using lack of openness as a political weapon, without acknowledging just how many hard technical problems there are to solve along the way, not least because there’s a lack of consensus over the meaning of openness among its advocates.

Anyway, here’s our session proposal:

TITLE: Climate modeling in an open, transparent world

AUTHORS (FIRST NAME INITIAL LAST NAME): D. A. Randall1, S. M. Easterbrook4, V. Balaji2, M. Vertenstein3

INSTITUTIONS (ALL): 1. Atmospheric Science, Colorado State University, Fort Collins, CO, United States. 2. Geophysical Fluid Dynamics Laboratory, Princeton, NJ, United States. 3. National Center for Atmospheric Research, Boulder, CO, United States. 4. Computer Science, University of Toronto, Toronto, ON, Canada.

Description: This session deals with climate-model software quality and transparent publication of model descriptions, software, and results. The models are based on physical theories but implemented as software systems that must be kept bug-free, readable, and efficient as they evolve with climate science. How do open source and community-based development affect software quality? What are the roles of publication and peer review of the scientific and computational designs in journals or other curated online venues? Should codes and datasets be linked to journal articles? What changes in journal submission standards and infrastructure are needed to support this? We invite submissions including experience reports, case studies, and visions of the future.

This week, I’m featuring some of the best blog posts written by the students on my first year undergraduate course, PMU199 Climate Change: Software, Science and Society. This post is by Harry, and it first appeared on the course blog on January 29.

Projections from global climate models indicate that continued 21st century increases in emissions of greenhouse gases will cause the temperature of the globe to increase by a few degrees. A global change of a few degrees could have a huge impact on our planet. While a few degrees cooler could lead to another ice age, a few degrees warmer means the world will witness more of one of nature’s most terrifying phenomena.

According to Anthony D. Del Genio, as the surface of the earth heats up from sunlight and other thermal radiation, the energy it accumulates must be offset to maintain a stable temperature. Our planet does this by evaporating water, which condenses and rises upwards with buoyant warm air. This removes excess heat from the surface and carries it to higher altitudes. In powerful updrafts, the evaporated water droplets rise easily, supercooling to temperatures between -10 and -40°C. The collision of water droplets with soft ice crystals forms a dense mixture of ice pellets called graupel. The densities of graupel and ice crystals, and the electrical charges they induce, are two essential factors in producing what we see as lightning.

Differences in updrafts over ocean and land also lead to higher lightning frequencies over land. Over the course of the day, the oceans absorb heat but hardly warm up. Land surfaces, on the other hand, cannot store heat as effectively, and so they warm significantly from the beginning of the day. The air above land surfaces is therefore warmer and more buoyant than the air over the oceans, creating strong convective storms as the warm air rises. The powerful updrafts in these convective storms are more prone to generate lightning.

According to the general circulation model of the Goddard Institute for Space Studies, one of two experiments conducted indicates that a 4.2°C global warming would produce an increase of 30% in global lightning activity. The second experiment indicated that a 5.9°C global cooling would cause a 24% decrease in global lightning frequency. Together, the experiments suggest a 5-6% change in global lightning frequency for every 1°C of global warming or cooling.

If 21st century projections of carbon dioxide and other greenhouse gas emissions hold true, the earth will continue to warm and the oceans will evaporate more water. The land will warm even more, largely because the drier land surface is unable to evaporate water to the same extent as the oceans. This should create stronger convective storms and produce more frequent lightning.

Greater lightning frequency can in turn contribute to a warmer earth. Lightning provides an abundant source of nitrogen oxides, which are a precursor for ozone production in the troposphere. Ozone in the upper troposphere acts as a greenhouse gas that absorbs some of the infrared energy emitted by the earth. Because tropospheric ozone traps some of the escaping heat, the earth warms further and lightning becomes even more frequent – a positive feedback in our climate system. The impact of ozone on the climate is much stronger than that of carbon dioxide, especially on a per-molecule basis, since ozone has a radiative forcing effect approximately 1,000 times as powerful as that of carbon dioxide. Luckily, ozone is not nearly as prevalent in the troposphere as carbon dioxide, and its atmospheric lifetime averages around 22 days.

"Climate simulations, which were generated from four Global General Circulation Models (GCM), were used to project forest fire danger levels with relation to global warming."

Although lightning occurs more frequently around the world, its effects are felt on a very local scale, and it is this local effect that has the most impact on people. In the event of a thunderstorm, an increase in lightning frequency places areas with a high concentration of trees at high risk of forest fire. In Canada, the west-central and north-western woodland areas are major targets for ignition by lightning; in fact, lightning accounted for 85% of the total area burned there from 1959-1999. To preserve habitats for animals and to keep forests functioning as a carbon sink, governments must be pressed to minimize forest fires in these regions. With 21st century estimates of increased temperature, the area burned could increase dramatically, because temperatures rise at the same time as surfaces dry out, producing more “fuel” for the fires.

Although lightning has negative effects on our climate system and on people, it also has positive effects on the earth and on life. The ozone layer, located in the upper atmosphere, prevents ultraviolet light from reaching the earth’s surface. Also, lightning drives a natural process known as nitrogen fixation. This process has a fundamental role for life because fixed nitrogen is required to construct basic building blocks of life (e.g. nucleotides for DNA and amino acids for proteins).

Lightning is an amazing and natural occurrence in our skies. Whether it’s a sight to behold or feared, we’ll see more of it as our earth becomes warmer.

This week, I’m featuring some of the best blog posts written by the students on my first year undergraduate course, PMU199 Climate Change: Software, Science and Society. The first is by Terry, and it first appeared on the course blog on January 28.

A couple of weeks ago, Professor Steve was talking about the extra energy that we are adding to the earth system during one of our sessions (and on his blog). He showed us this chart from the last IPCC report in 2007 that summarizes the various radiative forcings from different sources:

Notice how aerosols account for most of the negative radiative forcing. But what are aerosols? What is their direct effect, what is their contribution to the cloud albedo effect, and do they have any other impacts?


Ever since I wrote about peak oil last year, I’ve been collecting references to “Peak X”. Of course, the key idea, Hubbert’s Peak, applies to the extraction of any finite resource. So it’s not surprising that Wikipedia now has entries on:

And here’s a sighting of a mention of Peak Gold.

Unlike peak oil, some of these curves can be dampened by the appropriate recycling. But what of stuff we normally think of as endlessly renewable:

  • Peak Water – it turns out that we haven’t been managing the world’s aquifers and lakes sustainably, despite the fact that that’s where our freshwater supplies come from (See Peter Gleick’s 2010 paper for a diagnosis and possible solutions)
  • Peak Food – similarly, global agriculture appears to be unsustainable, partly because food policy and speculation have wrecked local sustainable farming practices, but also because of population growth (See Jonathan Foley’s 2011 paper for a diagnosis and possible solutions).
  • Peak Fish – although overfishing is probably old news to everyone now.
  • Peak Biodiversity (although here it’s referred to as Peak Nature, which I think is sloppy terminology)

Which also leads to pressure on specific things we really care about, such as:

Then there is a category of things that really do need to peak:

And just in case there’s too much doom and gloom in the above, there are some more humorous contributions:

And those middle two, by the wonderful Randall Munroe, make me wonder what he was doing here:

I can’t decide whether that last one is just making fun of the singularity folks, or whether it’s a clever ruse to get people to realize that Hubbert’s Peak must kick in somewhere!

I was talking with Eric Yu yesterday about a project to use goal modeling as a way of organizing the available knowledge on how to solve a specific problem, and we thought that geo-engineering would make an interesting candidate to try this out on: It’s controversial, a number of approaches have been proposed, there are many competing claims made for them, and it’s hard to sort through such claims.

So, I thought I would gather together the various resources I have on geo-engineering:

Introductory Overviews:

Short commentaries:

Books:

Specific studies / papers:

Sometime in May, I’ll be running a new graduate course, DGC 2003 Systems Thinking for Global Problems. The course will be part of the Dynamics of Global Change graduate program, a cross-disciplinary program run by the Munk School of Global Affairs.

Here’s my draft description of the course:

The dynamics of global change are complex, and demand new ways of conceptualizing and analyzing inter-relationships between multiple global systems. In this course, we will explore the role of systems thinking as a conceptual toolkit for studying the inter-relationships between problems such as globalization, climate change, energy, health & wellbeing, and food security. The course will explore the roots of systems thinking, for example in General Systems Theory, developed by Ludwig von Bertalanffy to study biological systems, and in Cybernetics, developed by Norbert Wiener to explore feedback and control in living organisms, machines, and organizations. We will trace this intellectual history to recent efforts to understand planetary boundaries, tipping points in the behaviour of global dynamics, and societal resilience. We will explore the philosophical roots of systems thinking as a counterpoint to the reductionism used widely across the natural sciences, and look at how well it supports multiple perspectives, trans-disciplinary synthesis, and computational modeling of global dynamics. Throughout the course, we will use global climate change as a central case study, and apply systems thinking to study how climate change interacts with many other pressing global challenges.

I’m planning to get the students to think about issues such as the principle of complementarity, and second-order cybernetics, and of course, how to understand the dynamics of non-linear systems, and the idea of leverage points. We’ll take a quick look at how earth system models work, but not in any detail, because it’s not intended to be a physics or computing course; I’m expecting most of the students to be from political science, education, etc.

The hard part will be picking a good core text. I’m leaning towards Donella Meadows’s book, Thinking in Systems, although I just received my copy of the awesome book Systems Thinkers, by Magnus Ramage and Karen Shipp (I’m proud to report that Magnus was once a student of mine!).

Anyway, suggestions for material to cover, books & papers to include, etc are most welcome.

Our paper on defect density analysis of climate models is now out for review at the journal Geoscientific Model Development (GMD). GMD is an open review / open access journal, which means the review process is publicly available (anyone can see the submitted paper, the reviews it receives during the process, and the authors’ response). If the paper is eventually accepted, the final version will also be freely available.

The way this works at GMD is that the paper is first published to Geoscientific Model Development Discussions (GMDD) as an un-reviewed manuscript. The interactive discussion is then open for a fixed period (in this case, 2 months). At that point the editors will make a final accept/reject decision, and, if accepted, the paper is then published to GMD itself. During the interactive discussion period, anyone can post comments on the paper, although in practice, discussion papers often only get comments from the expert reviewers commissioned by the editors.

One of the things I enjoy about the peer-review process is that a good, careful review can help improve the final paper immensely. As I’ve never submitted before to a journal that uses an open review process, I’m curious to see how the open reviewing will help – I suspect (and hope!) it will tend to make reviewers more constructive.

Anyway, here’s the paper. As it’s open review, anyone can read it and make comments (click the title to get to the review site):

Assessing climate model software quality: a defect density analysis of three models

J. Pipitone and S. Easterbrook
Department of Computer Science, University of Toronto, Canada

Abstract. A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated. Thus, in order to trust a climate model one must trust that the software it is built from is built correctly. Our study explores the nature of software quality in the context of climate modelling. We performed an analysis of defect reports and defect fixes in several versions of leading global climate models by collecting defect data from bug tracking systems and version control repository comments. We found that the climate models all have very low defect densities compared to well-known, similarly sized open-source projects. We discuss the implications of our findings for the assessment of climate model software trustworthiness.
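For readers unfamiliar with the metric: defect density is usually reported as the number of confirmed defects per thousand lines of source code (KLOC). A minimal sketch of the calculation looks something like this (the counts below are placeholders, not the values reported in the paper):

    # Defect density = confirmed defects per thousand source lines of code (KLOC).
    # The counts below are illustrative placeholders, not the paper's data.

    models = {
        # name: (confirmed defects over the study period, source lines of code)
        "model_A": (120, 450_000),
        "model_B": (200, 800_000),
        "model_C": (80, 350_000),
    }

    for name, (defects, sloc) in models.items():
        density = defects / (sloc / 1000.0)
        print(f"{name}: {density:.2f} defects per KLOC")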


For those outside Canada, in case you haven’t heard, we’re in the middle of a general election. Canada has a parliamentary system, modelled after the British one, with a first-past-the-post system for electing representatives (members of parliament), where the party with the most seats after the election is invited to form a government, and its leader becomes Prime Minister. For the last few parliaments we’ve had minority governments, first Liberal, then Conservative.

Somewhere along the way, many people just stopped voting: from turnouts in the high 70s back in the 60s, we’ve had 64.7% and then 58.8% turnout in the last two elections – the latter being the lowest turnout ever. There may be many different reasons for this lack of enthusiasm, although listening to the main parties whining about each other during this election, it’s not hard to see why so many people just don’t bother. But one thing is clear: young people are far less likely to vote than any other age group.

So it was great to see last week Rick Mercer with a brilliant call for young voters to use their votes to “scare the hell out of the people who run this country”:

And his message seems to have resonated. Students on campuses across the country have been using social networking to organise vote mobs, making videos along the way as they challenge others to do the same. But here’s the interesting thing. The young people of this country have a very different set of preferences to the general population:

Just look at the projected composition of parliament if it were up to the youngsters: the Liberals and the Green Party virtually neck-and-neck for most votes, and instead of the Greens being shut out of parliament, they’d hold 43 seats! Of course, the projected seat count also throws into sharp focus what’s wrong with our current voting system: the Bloc, with the lowest share of the vote of any of the parties, would still hold 60 seats. And the Liberals, with just 2% more of the vote than the Greens, would still get more than twice as many seats. Nevertheless, I like this picture much more than the parliaments we’ve had in the last few elections.

So, if you’re eligible to vote, and you’re anywhere around half my age, make my day – help change our parliament for the better!