I’ve been exploring how Canada’s commitments to reduce greenhouse gas emissions stack up against reality, especially in the light of the government’s recent decision to stick with the emissions targets set by the previous administration.

Once upon a time, Canada was considered a world leader on climate and environmental issues. The Montreal Protocol on Substances that Deplete the Ozone Layer, signed in 1987, is widely regarded as the most successful international agreement on environmental protection ever. A year later, Canada hosted a conference on The Changing Atmosphere: Implications for Global Security, which helped put climate change on the international political agenda. This conference was one of the first to identify specific targets to avoid dangerous climate change, recommending a global reduction in greenhouse gas emissions of 20% by 2005. It didn’t happen.

It took almost another decade before an international agreement to cut emissions was reached: the Kyoto Protocol in 1997. Hailed as a success at the time, it became clear over the ensuing years that, with non-binding targets, the agreement was pretty much a sham. Under Kyoto, Canada agreed to cut emissions to 6% below 1990 levels by the 2008-2012 period. It didn’t happen.

At the Copenhagen talks in 2009, Canada proposed an even weaker goal: 17% below 2005 levels (which corresponds to 1.5% above 1990 levels) by 2020. Given that emissions have risen steadily since then, it probably won’t happen. By 2011, facing an embarrassing gap between its Kyoto targets and reality, the Harper administration formally withdrew from Kyoto – the only country ever to do so.

Last year, in preparation for the Paris talks, the Harper administration submitted a new commitment: 30% below 2005 levels by 2030. At first sight it seems better than previous goals. But it includes a large slice of expected international credits and carbon sequestered in wood products, as Canada incorporates Land Use, Land Use Change and Forestry (LULUCF) into its carbon accounting. In terms of actual cuts in greenhouse gas emissions, the target represents approximately 8% above 1990 levels.

The new government, elected in October 2015, trumpeted a renewed approach to climate change, arguing that Canada should be a world leader again. At the Paris talks in 2015, the Trudeau administration proudly supported both the UN’s commitment to keep global temperatures below 2°C of warming (compared to the pre-industrial average), and voiced strong support for an even tougher limit of 1.5°C. However, the government has chosen to stick with the Harper administration’s original Paris targets.

It is clear that the global commitments under the Paris agreement fall a long way short of what is needed to stay below 2°C, and Canada’s commitment has been rated as one of the weakest. Based on IPCC assessments, to limit warming below 2°C, global greenhouse gas emissions will need to be cut by about 50% by 2030, and eventually reach zero net emissions globally (which will probably mean zero use of fossil fuels, as assumptions about negative emissions seem rather implausible). As Canada has much greater wealth and access to resources than most nations, much greater per capita emissions than all but a few nations, and much greater historical responsibility for emissions than most nations, a “fair” effort would have Canada cutting emissions much faster than the global average, to allow room for poorer nations to grow their emissions, at least initially, to alleviate poverty. Climate Action Tracker suggests 67% below 1990 emissions by 2030 is a fair target for Canada.
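The baseline-shifting game in these targets is easy to make concrete. Here’s a small sketch that converts a “percent below baseline year X” target into a figure relative to a different baseline year. The emissions values are rough approximations of Canada’s reported totals (in Mt CO2e) that I’m using purely for illustration – check the official inventory for real figures:

```python
# Convert an emissions target expressed against one baseline year into a
# percentage relative to another baseline year. Illustrative sketch only;
# the values below are rough approximations of Canada's reported
# emissions (Mt CO2e), not official inventory figures.

EMISSIONS = {1990: 613, 2005: 749}  # approximate, Mt CO2e

def rebase_target(cut_pct, from_year, to_year, emissions=EMISSIONS):
    """Express a 'cut_pct % below from_year' target relative to to_year.

    Returns the percentage change relative to to_year's emissions
    (negative = below that year's level, positive = above it).
    """
    target_level = emissions[from_year] * (1 - cut_pct / 100)
    return (target_level / emissions[to_year] - 1) * 100
```

With these illustrative numbers, the Copenhagen target (17% below 2005) comes out at roughly 1.4% above 1990 levels, close to the figure quoted above. The raw conversion of the Paris target (30% below 2005) lands well below 1990 levels – which is exactly the gap that the expected international credits and LULUCF accounting fill in, leaving actual cuts at around 8% above 1990.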

Here’s what all of this looks like. Note: emissions data from the Government of Canada; the Toronto 1988 target was never formally adopted – I added it just for comparison. The global 2°C pathway 2030 target is from SEI; the emissions projection, LULUCF adjustment, and “fair” 2030 target are from CAT.


Several things jump out at me from this chart. First, the complete failure to implement policies that would have allowed us to meet any of these targets. The dip in emissions from 2008-2010, which looked promising for a while, was due to the financial crisis and economic downturn, rather than any actual climate policy. Second, the similar slope of the line to each target, which represents the expected rate of decline from when the target was proposed to when it ought to be attained. At no point has there been any attempt to make up lost ground after each failed target. Finally, in terms of absolute greenhouse gas emissions, each successive target is weaker than the one before. Shifting the baseline from 1990 to 2005 masks much of this, and suggests that successive governments are more interested in optics than in serious action on climate change.

At no point has Canada ever adopted science-based targets capable of delivering on its commitment to keep warming below 2°C.

Today I’ve been tracking down the origin of the term “Greenhouse Effect”. The term itself is problematic, because it only works as a weak metaphor: both the atmosphere and a greenhouse let the sun’s rays through, and then trap some of the resulting heat. But the mechanisms are different. A greenhouse stays warm by preventing warm air from escaping. In other words, it blocks convection. The atmosphere keeps the planet warm by preventing (some wavelengths of) infra-red radiation from escaping. The “greenhouse effect” is really the result of many layers of air, each absorbing infra-red from the layer below, and then re-emitting it both up and down. The rate at which the planet then loses heat is determined by the average temperature of the topmost layer of air, where this infra-red finally escapes to space. So not really like a greenhouse at all.
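The layered mechanism described above can be captured in a classic textbook idealization: treat the atmosphere as N layers, each transparent to sunlight but fully absorbing in the infra-red, with each layer re-emitting both up and down. In equilibrium, the topmost layer must radiate the absorbed solar flux to space, and every layer below it (and finally the surface) ends up warmer. A minimal sketch (a pedagogical toy, not a real climate model):

```python
# Toy N-layer radiative model of the "greenhouse effect" described above:
# each layer is transparent to sunlight but a blackbody in the infra-red,
# absorbing everything emitted from below and re-emitting both up and
# down. A standard textbook idealization, not a real climate model.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temperature(absorbed_solar, n_layers):
    """Equilibrium surface temperature under n fully-absorbing IR layers.

    Balancing fluxes layer by layer shows the surface must emit
    (n_layers + 1) times the absorbed solar flux, so its temperature
    exceeds the effective (no-atmosphere) temperature by a factor of
    (n_layers + 1) ** 0.25.
    """
    t_effective = (absorbed_solar / SIGMA) ** 0.25
    return t_effective * (n_layers + 1) ** 0.25
```

With Earth’s absorbed solar flux of roughly 240 W/m², zero layers give the familiar ~255 K effective temperature, and a single layer raises the surface to ~303 K – already enough to show why the temperature of the topmost emitting layer, not the glass-box convection story, is what controls the planet’s heat loss.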

So how did the effect acquire this name? The 19th century French mathematician Joseph Fourier is usually credited as the originator of the idea in the 1820s. However, it turns out he never used the term, and as James Fleming (1999) points out, most authors writing about the history of the greenhouse effect cite only secondary sources on this, without actually reading any of Fourier’s work. Fourier does mention greenhouses in his 1822 classic “Analytical Theory of Heat”, but not in connection with planetary temperatures. The book was published in French, so he uses the French “les serres”, but the term appears only once, in a passage on properties of heat in enclosed spaces. The relevant paragraph translates as:

“In general the theorems concerning the heating of air in closed spaces extend to a great variety of problems. It would be useful to revert to them when we wish to foresee and regulate the temperature with precision, as in the case of green-houses, drying-houses, sheep-folds, work-shops, or in many civil establishments, such as hospitals, barracks, places of assembly.” [Fourier, 1822; appears on p73 of the edition translated by Alexander Freeman, published 1878, Cambridge University Press]

In his other writings, Fourier did hypothesize that the atmosphere plays a role in slowing the rate of heat loss from the surface of the planet to space, hence keeping the ground warmer than it might otherwise be. However, he never identified a mechanism, as the properties of what we now call greenhouse gases weren’t established until John Tyndall‘s experiments in the 1850’s. In explaining his hypothesis, Fourier refers to a “hotbox”, a device invented by the explorer de Saussure, to measure the intensity of the sun’s rays. The hotbox had several layers of glass in the lid which allowed the sun’s rays to enter, but blocked the escape of the heated air via convection. But it was only a metaphor. Fourier understood that whatever the heat trapping mechanism in the atmosphere was, it didn’t actually block convection.

Svante Arrhenius was the first to attempt a detailed calculation of the effect of changing levels of carbon dioxide in the atmosphere, in 1896, in his quest to test a hypothesis that the ice ages were caused by a drop in CO2. Accordingly, he’s also sometimes credited with inventing the term. However, he also didn’t use the term “greenhouse” in his papers, although he did invoke a metaphor similar to Fourier’s, using the Swedish word “drivbänk”, which translates as hotbed (Update: or possibly “hothouse” – see comments).

So the term “greenhouse effect” wasn’t coined until the 20th century. Several of the papers I’ve come across suggest that the first use of the term “greenhouse” in this connection in print was in 1909, in a paper by Wood. This seems rather implausible though, because the paper in question is really only a brief commentary explaining that the idea of a “greenhouse effect” makes no sense, as a simple experiment shows that greenhouses don’t work by trapping outgoing infra-red radiation. The paper is clearly a reaction to something previously published on the greenhouse effect, which Wood appears to take far too literally.

A little digging produces a 1901 paper by Nils Ekholm, a Swedish meteorologist who was a close colleague of Arrhenius, which does indeed use the term ‘greenhouse’. At first sight, he seems to use the term more literally than is warranted, although in subsequent paragraphs, he explains the key mechanism fairly clearly:

“The atmosphere plays a very important part of a double character as to the temperature at the earth’s surface, of which the one was first pointed out by Fourier, the other by Tyndall. Firstly, the atmosphere may act like the glass of a green-house, letting through the light rays of the sun relatively easily, and absorbing a great part of the dark rays emitted from the ground, and it thereby may raise the mean temperature of the earth’s surface. Secondly, the atmosphere acts as a heat store placed between the relatively warm ground and the cold space, and thereby lessens in a high degree the annual, diurnal, and local variations of the temperature.

There are two qualities of the atmosphere that produce these effects. The one is that the temperature of the atmosphere generally decreases with the height above the ground or the sea-level, owing partly to the dynamical heating of descending air currents and the dynamical cooling of ascending ones, as is explained in the mechanical theory of heat. The other is that the atmosphere, absorbing but little of the insolation and the most of the radiation from the ground, receives a considerable part of its heat store from the ground by means of radiation, contact, convection, and conduction, whereas the earth’s surface is heated principally by direct radiation from the sun through the transparent air.

It follows from this that the radiation from the earth into space does not go on directly from the ground, but on the average from a layer of the atmosphere having a considerable height above sea-level. The height of that layer depends on the thermal quality of the atmosphere, and will vary with that quality. The greater is the absorbing power of the air for heat rays emitted from the ground, the higher will that layer be, But the higher the layer, the lower is its temperature relatively to that of the ground ; and as the radiation from the layer into space is the less the lower its temperature is, it follows that the ground will be hotter the higher the radiating layer is.” [Ekholm, 1901, p19-20]

At this point, it’s still not called the “greenhouse effect”, but this metaphor does appear to have become a standard way of introducing the concept. Then in 1907, the English scientist John Henry Poynting confidently introduces the term “greenhouse effect”, in his criticism of Percival Lowell‘s analysis of the temperature of the planets. He uses it in scare quotes throughout the paper, which suggests the term is newly minted:

“Prof. Lowell’s paper in the July number of the Philosophical Magazine marks an important advance in the evaluation of planetary temperatures, inasmuch as he takes into account the effect of planetary atmospheres in a much more detailed way than any previous writer. But he pays hardly any attention to the “blanketing effect,” or, as I prefer to call it, the “greenhouse effect” of the atmosphere.” [Poynting, 1907, p749]

And he goes on:

“The “greenhouse effect” of the atmosphere may perhaps be understood more easily if we first consider the case of a greenhouse with horizontal roof of extent so large compared with its height above the ground that the effect of the edges may be neglected. Let us suppose that it is exposed to a vertical sun, and that the ground under the glass is “black” or a full absorber. We shall neglect the conduction and convection by the air in the greenhouse.” [Poynting, 1907, p750]

He then goes on to explore the mathematics of heat transfer in this idealized greenhouse. Unfortunately, he ignores Ekholm’s crucial observation that it is the rate of heat loss at the upper atmosphere that matters, so his calculations are mostly useless. But his description of the mechanism does appear to have taken hold as the dominant explanation. The following year, Frank Very published a response (in the same journal), using the term “Greenhouse Theory” in the title of the paper. He criticizes Poynting’s idealized greenhouse as way too simplistic, but suggests a slightly better metaphor is a set of greenhouses stacked one above another, each of which traps a little of the heat from the one below:

It is true that Professor Lowell does not consider the greenhouse effect analytically and obviously, but it is nevertheless implicitly contained in his deduction of the heat retained, obtained by the method of day and night averages. The method does not specify whether the heat is lost by radiation or by some more circuitous process; and thus it would not be precise to label the retaining power of the atmosphere a “greenhouse effect” without giving a somewhat wider interpretation to this name. If it be permitted to extend the meaning of the term to cover a variety of processes which lead to identical results, the deduction of the loss of surface heat by comparison of day and night temperatures is directly concerned with this wider “greenhouse effect.” [Very, 1908, p477]

Between them, Poynting and Very are attempting to pin down whether the “greenhouse effect” is a useful metaphor, and how the heat transfer mechanisms of planetary atmospheres actually work. But in so doing, they help establish the name. Wood’s 1909 comment is clearly a reaction to this discussion, but one that fails to understand what is being discussed. It’s eerily reminiscent of any modern discussion of the greenhouse effect: whenever any two scientists discuss the details of how the greenhouse effect works, you can be sure someone will come along sooner or later claiming to debunk the idea by completely misunderstanding it.

In summary, I think it’s fair to credit Poynting as the originator of the term “greenhouse effect”, but with a special mention to Ekholm for both his prior use of the word “greenhouse”, and his much better explanation of the effect. (Unless I missed some others?)


Arrhenius, S. (1896). On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground. Philosophical Magazine and Journal of Science, 41(251). doi:10.1080/14786449608620846

Ekholm, N. (1901). On The Variations Of The Climate Of The Geological And Historical Past And Their Causes. Quarterly Journal of the Royal Meteorological Society, 27(117), 1–62. doi:10.1002/qj.49702711702

Fleming, J. R. (1999). Joseph Fourier, the “greenhouse effect”, and the quest for a universal theory of terrestrial temperatures. Endeavour, 23(2), 72–75. doi:10.1016/S0160-9327(99)01210-7

Fourier, J. (1822). Théorie Analytique de la Chaleur (“Analytical Theory of Heat”). Paris: Chez Firmin Didot, Pere et Fils.

Fourier, J. (1827). On the Temperatures of the Terrestrial Sphere and Interplanetary Space. Mémoires de l’Académie Royale Des Sciences, 7, 569–604. (translation by Ray Pierrehumbert)

Poynting, J. H. (1907). On Prof. Lowell’s Method for Evaluating the Surface-temperatures of the Planets; with an Attempt to Represent the Effect of Day and Night on the Temperature of the Earth. Philosophical Magazine, 14(84), 749–760.

Very, F. W. (1908). The Greenhouse Theory and Planetary Temperatures. Philosophical Magazine, 16(93), 462–480.

Wood, R. W. (1909). Note on the Theory of the Greenhouse. Philosophical Magazine, 17, 319–320. Retrieved from http://scienceblogs.com/stoat/2011/01/07/r-w-wood-note-on-the-theory-of/

This week I’m reading my way through three biographies, which neatly capture the work of three key scientists who laid the foundation for modern climate modeling: Arrhenius, Bjerknes and Callendar.


Crawford, E. (1996). Arrhenius: From Ionic Theory to the Greenhouse Effect. Science History Publications.
A biography of Svante Arrhenius, the Swedish scientist who, in 1895, created the first computational climate model, and spent almost a full year calculating by hand the likely temperature changes across the planet for increased and decreased levels of carbon dioxide. The term “greenhouse effect” hadn’t been coined back then, and Arrhenius was more interested in the question of whether the ice ages might have been caused by reduced levels of CO2. But nevertheless, his model was a remarkably good first attempt, and produced the first quantitative estimate of the warming expected from humanity’s ongoing use of fossil fuels.
Friedman, R. M. (1993). Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Cornell University Press.
A biography of Vilhelm Bjerknes, the Norwegian scientist who, in 1904, identified the primitive equations, a set of differential equations that form the basis of modern computational weather forecasting and climate models. The equations are, in essence, an adaptation of the equations of fluid flow and thermodynamics to represent the atmosphere as a fluid on a rotating sphere in a gravitational field. At the time, the equations were little more than a theoretical exercise, and we had to wait half a century for the early digital computers before it became possible to use them for quantitative weather forecasting.
Fleming, J. R. (2009). The Callendar Effect: The Life and Work of Guy Stewart Callendar (1898-1964). University of Chicago Press.
A biography of Guy S. Callendar, the British scientist, who, in 1938, first compared long term observations of temperatures with measurements of rising carbon dioxide in the atmosphere, to demonstrate a warming trend as predicted by Arrhenius’ theory. It was several decades before his work was taken seriously by the scientific community. Some now argue that we should use the term “Callendar Effect” to describe the warming from increased emissions of carbon dioxide, because the term “greenhouse effect” is too confusing – greenhouse gases were keeping the planet warm long before we started adding more, and anyway, the analogy with the way that glass traps heat in a greenhouse is a little inaccurate.

Not only do the three form a neat ABC, they also represent the three crucial elements you need for modern climate modelling: a theoretical framework to determine which physical processes are likely to matter, a set of detailed equations that allow you to quantify the effects, and comparison with observations as a first step in validating the calculations.

I’m heading off to Florence this week for the International Conference on Software Engineering (ICSE). The highlight of the week will be a panel session I’m chairing, on the Karlskrona Manifesto. The manifesto itself is something we’ve been working on since last summer, when a group of us wrote the first draft at the Requirements Engineering conference in Karlskrona, Sweden (hence the name). This week we’re launching a website for the manifesto, and we’ve published a longer technical paper about it at ICSE.

The idea of the manifesto is to inspire deeper analysis of the roles and responsibilities of technology designers (and especially software designers), given that software systems now shape so much of modern life. We rarely stop to think about the unintended consequences of very large numbers of people using our technologies, nor do we ask whether, on balance, an idea that looks cool on paper will merely help push us even further into unsustainable behaviours. The position we take in the manifesto is that, as designers, our responsibility for the consequences of our designs is much broader than most of us acknowledge, and it’s time to do something about it.

For the manifesto, we ended up thinking about sustainability in terms of five dimensions:

  • Environmental sustainability: the long term viability of natural systems, including ecosystems, resource consumption, climate, pollution, food, water, and waste.
  • Social sustainability: the quality of social relationships and the factors that tend to improve or erode trust in society, such as social equity, democracy, and justice.
  • Individual sustainability: the health and well-being of people as individuals, including mental and physical well-being, education, self-respect, skills, and mobility.
  • Economic sustainability: the long term viability of economic activities, such as businesses and nations, including issues such as investment, wealth creation and prosperity.
  • Technical sustainability: the ability to sustain technical systems and their infrastructures, including software maintenance, innovation, obsolescence, and data integrity.

There are, of course, plenty of other ways of defining sustainability (which we discuss in the paper), and some hard constraints in some dimensions – e.g. we cannot live beyond the resource limits of the planet, no matter how much progress we make towards sustainability in the other dimensions. But a key insight is that all five dimensions matter, and none of them can be treated in isolation. For example, we might think we’re doing fine in one dimension – economic, say, as we launch a software company with a sound business plan that can make a steady profit – but often we do so only by incurring a debt in other dimensions, perhaps harming the environment by contributing to the mountains of e-waste, or harming social sustainability by replacing skilled jobs with subsistence labour.

The manifesto characterizes a set of problems in how technologists normally think about sustainability (if they do), and ends with a set of principles for sustainability design:

  • Sustainability is systemic. Sustainability is never an isolated property. Systems thinking has to be the starting point for the transdisciplinary common ground of sustainability.
  • Sustainability has multiple dimensions. We have to include those dimensions into our analysis if we are to understand the nature of sustainability in any given situation.
  • Sustainability transcends multiple disciplines. Working in sustainability means working with people from across many disciplines, addressing the challenges from multiple perspectives.
  • Sustainability is a concern independent of the purpose of the system. Sustainability has to be considered even if the primary focus of the system under design is not sustainability.
  • Sustainability applies to both a system and its wider contexts. There are at least two spheres to consider in system design: the sustainability of the system itself and how it affects sustainability of the wider system of which it will be part.
  • Sustainability requires action on multiple levels. Some interventions have more leverage on a system than others. Whenever we take action towards sustainability, we should consider opportunity costs: action at other levels may offer more effective forms of intervention.
  • System visibility is a necessary precondition and enabler for sustainability design. The status of the system and its context should be visible at different levels of abstraction and perspectives to enable participation and informed responsible choice.
  • Sustainability requires long-term thinking. We should assess benefits and impacts on multiple timescales, and include longer-term indicators in assessment and decisions.
  • It is possible to meet the needs of future generations without sacrificing the prosperity of the current generation. Innovation in sustainability can play out as decoupling present and future needs. By moving away from the language of conflict and the trade-off mindset, we can identify and enact choices that benefit both present and future.

You can read the full manifesto at sustainabilitydesign.org. I’m looking forward to lots of constructive discussions this week.

For our course about the impacts of the internet, we developed an exercise to get our students thinking critically about the credibility of things they find on the web. As a number of colleagues have expressed an interest in this, I thought I would post it here. Feel free to use it and adapt it!

Near the beginning of the course, we set the students to read the chapter “Crap Detection 101: How to Find What You Need to Know, and How to Decide If It’s True” from Rheingold & Weeks’ book NetSmart. During the tutorial, we get them working in small groups, and give them several carefully selected web pages to test their skills on. We pick webpages that we think are neither too easy nor too hard, and use a mix of credible and misleading ones. It’s a real eye-opener for our students.

To guide them in the activity, we give them the following list of tips (originally distilled from the book by our TA, Matt King, who wrote the first draft of the worksheet).

Tactics for Detecting Crap on the Internet

Here’s a checklist of tactics to use to help you judge the credibility of web pages. Different tactics will be useful for different web pages – use your judgment to decide which tactics to try first. If you find some of these don’t apply, or don’t seem to give you useful information, think about why that is. Make notes about the credibility of each webpage you explored, and which tactics you used to determine its credibility.

  1. Authorship
    • Is the author of a given page named? Who is s/he?
    • What do others say about the author?
  2. Sources cited
    • Does the article include links (or at least references) to sources?
    • What do these sources tell us about credibility and/or bias?
  3. Ownership of the website
    • Can you find out who owns the site? (e.g. look it up using www.easywhois.com)
    • What is the domain name? Does the “pedigree” of a site convince us of its trustworthiness?
    • Who funds the owner’s activities? (e.g. look them up on http://www.sourcewatch.org)
  4. Connectedness
    • How much traffic does this site get? (e.g. use www.alexa.com for stats/demographics)
    • Do the demographics tell you anything about the website’s audience? (see alexa.com again)
    • Do other websites link to this page? (e.g. google with the search term “link: http://*paste URL here*”)? If so, who are the linkers?
    • Is the page ranked highly when searched for from at least two search engines?
  5. Design & Interactivity
    • Does the website’s design and other structural features (such as grammar) tell us anything about its credibility?
    • Does the page have an active comment section? If so, does the author respond to comments?
  6. Triangulation
    • Can you verify the content of a page by “triangulating” its claims with at least two or three other reliable sources?
    • Do fact-checking sites have anything useful on this topic? (e.g. try www.factcheck.org)
    • Are there topic-specific sites that do factchecking? (e.g. www.snopes.com for urban legends, www.skepticalscience.com for climate science). Note: How can you tell whether these sites are credible?
  7. Check your own biases
    • Overall, what’s your personal stake in the credibility of this page’s content?
    • How much time do you think you should allocate to verifying its reliability?

(Download the full worksheet)

It’s been a while since I’ve written about the question of climate model validation, but I regularly get asked about it when I talk about the work I’ve been doing studying how climate models are developed. There’s an upcoming conference organized by the Rotman Institute of Philosophy, in London, Ontario, on Knowledge and Models in Climate Science, at which many of my favourite thinkers on this topic will be speaking. So I thought it was a good time to get philosophical about this again, and define some terms that I think help frame the discussion (at least in the way I see it!).

Here’s my abstract for the conference:

Constructive and External Validity for Climate Modeling

Discussions of the validity of scientific computational models tend to treat “the model” as a unitary artifact, and ask questions about its fidelity with respect to observational data, and its predictive power with respect to future situations. For climate modeling, both of these questions are problematic, because of long timescales and inhomogeneities in the available data. Our ethnographic studies of the day-to-day practices of climate modelers suggest an alternative framework for model validity, focusing on a modeling system rather than any individual model. Any given climate model can be configured for a huge variety of different simulation runs, and only ever represents a single instance of a continually evolving body of program code. Furthermore, its execution is always embedded in a broader social system of scientific collaboration which selects suitable model configurations for specific experiments, and interprets the results of the simulations within the broader context of the current body of theory about earth system processes.

We propose that the validity of a climate modeling system should be assessed with respect to two criteria: Constructive Validity, which refers to the extent to which the day-to-day practices of climate model construction involve the continual testing of hypotheses about the ways in which earth system processes are coded into the models, and External Validity, which refers to the appropriateness of claims about how well model outputs ought to correspond to past or future states of the observed climate system. For example, a typical feature of the day-to-day practice of climate model construction is the incremental improvement of the representation of specific earth system processes in the program code, via a series of hypothesis-testing experiments. Each experiment begins with a hypothesis (drawn from current or emerging theories about the earth system) that a particular change to the model code ought to result in a predictable change to the climatology produced by various runs of the model. Such a hypothesis is then tested empirically, using the current version of the model as a control, and the modified version of the model as the experimental case. Such experiments are then replicated for various configurations of the model, and results are evaluated in a peer review process via the scientific working groups who are responsible for steering the ongoing model development effort.

Assessment of constructive validity for a modeling system would take account of how well the day-to-day practices in a climate modeling laboratory adhere to rigorous standards for such experiments, and how well they routinely test the assumptions that are built into the model in this way. Similarly, assessment of the external validity of the modeling system would take account of how well knowledge of the strengths and weaknesses of particular instances of the model are taken into account when making claims about the scope of applicability of model results. We argue that such an approach offers a more coherent approach to questions of model validity, as it corresponds more directly with the way in which climate models are developed and used.

I’ll be heading off to Stockholm in August to present a paper at the 2nd International Conference on Information and Communication Technologies for Sustainability (ICT4S’2014). The theme of the conference this year is “ICT and transformational change”, which got me thinking about how we think about change, and especially whether we equip students in computing with the right conceptual toolkit to think about change. I ended up writing a long critique of Computational Thinking, which has become popular lately as a way of describing what we teach in computing undergrad programs. I don’t think there’s anything wrong with computational thinking in small doses. But when an entire university program teaches nothing but computational thinking, we turn out generations of computing professionals who are ill-equipped to think about complex societal issues. This then makes them particularly vulnerable to technological solutionism. I hope the paper will provoke some interesting discussion!

Here’s the abstract for my paper (click here for the full paper):

From Computational Thinking to Systems Thinking: A conceptual toolkit for sustainability computing

Steve Easterbrook, University of Toronto

If information and communication technologies (ICT) are to bring about a transformational change to a sustainable society, then we need to transform our thinking. Computer professionals already have a conceptual toolkit for problem solving, sometimes known as computational thinking. However, computational thinking tends to see the world in terms of a series of problems (or problem types) that have computational solutions (or solution types). Sustainability, on the other hand, demands a more systemic approach, to avoid technological solutionism, and to acknowledge that technology, human behaviour and environmental impacts are tightly inter-related. In this paper, I argue that systems thinking provides the necessary bridge from computational thinking to sustainability practice, as it provides a domain ontology for reasoning about sustainability, a conceptual basis for reasoning about transformational change, and a set of methods for critical thinking about the social and environmental impacts of technology. I end the paper with a set of suggestions for how to build these ideas into the undergraduate curriculum for computer and information sciences.

At the beginning of March, I was invited to give a talk at TEDxUofT. Colleagues tell me the hardest part of giving these talks is deciding what to talk about. I decided to see if I could answer the question of whether we can trust climate models. It was a fascinating and nerve-wracking experience, quite unlike any talk I’ve given before. Of course, I’d love to do another one, as I now know more about what works and what doesn’t.

Here’s the video and a transcript of my talk. [The bits in square brackets are things I intended to say but forgot!]

Computing the Climate: How Can a Computer Model Forecast the Future? TEDxUofT, March 1, 2014.

Talking about the weather forecast is a great way to start a friendly conversation. The weather forecast matters to us. It tells us what to wear in the morning; it tells us what to pack for a trip. We also know that weather forecasts can sometimes be wrong, but we’d be foolish to ignore them when they tell us a major storm is heading our way.

[Unfortunately, talking about climate forecasts is often a great way to end a friendly conversation!] Climate models tell us that by the end of this century, if we carry on burning fossil fuels at the rate we have been doing, and we carry on cutting down forests at the rate we have been doing, the planet will warm by somewhere between 5 and 6 degrees centigrade. That might not seem much, but, to put it into context, in the entire history of human civilization, the average temperature of the planet has not varied by more than 1 degree. So that forecast tells us something major is coming, and we probably ought to pay attention to it.

But on the other hand, we know that weather forecasts don’t work so well the longer into the future we peer. Tomorrow’s forecast is usually pretty accurate. Three day and five day forecasts are reasonably good. But next week? They always change their minds before next week comes. So how can we peer 100 years into the future and look at what is coming with respect to the climate? Should we trust those forecasts? Should we trust the climate models that provide them to us?

Six years ago, I set out to find out. I’m a professor of computer science. I study how large teams of software developers can put together complex pieces of software. I’ve worked with NASA, studying how NASA builds the flight software for the Space Shuttle and the International Space Station. I’ve worked with large companies like Microsoft and IBM. My work focusses not so much on software errors, but on the reasons why people make those errors, and how programmers then figure out they’ve made an error, and how they know how to fix it.

To start my study, I visited four major climate modelling labs around the world: in the UK, in Paris, in Hamburg, Germany, and in Colorado. Each of these labs typically has somewhere between 50 and 100 scientists contributing code to its climate model. And although I only visited four of these labs, there are another twenty or so around the world, all doing similar things. They run these models on some of the fastest supercomputers in the world, and many of the models have been under continuous development for more than 20 years.

When I started this study, I asked one of my students to attempt to measure how many bugs there are in a typical climate model. We know from our experience with software there are always bugs. Sooner or later the machine crashes. So how buggy are climate models? More specifically, what we set out to measure is what we call “defect density” – how many errors there are per thousand lines of code. By this measure, it turns out climate models are remarkably high quality. In fact, they’re better than almost any commercial software that’s ever been studied. They’re about the same level of quality as the Space Shuttle flight software. Here are my results (for the actual results you’ll have to read the paper):


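As a rough illustration of the metric itself (with invented numbers, not our study’s actual data), defect density is just a ratio:

```python
# Sketch of the "defect density" metric described above. The numbers
# below are invented for illustration; for the real results, see the paper.

def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / (lines_of_code / 1000)

# A hypothetical model: 250 reported defects in 500,000 lines of code.
print(defect_density(250, 500_000))  # 0.5 defects/KLOC
```

Figures of several defects per KLOC are commonly quoted for typical commercial software, which is why a density well below one is striking.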
We know it’s very hard to build a large, complex piece of software without making mistakes. Even the space shuttle’s software had errors in it. So the question is not “is the software perfect for predicting the future?” The question is “is it good enough?” Is it fit for purpose?

To answer that question, we’d better understand what the purpose of a climate model is. First of all, I’d better be clear what a climate model is not. A climate model is not a projection of trends we’ve seen in the past extrapolated into the future. If you did that, you’d be wrong, because you wouldn’t have accounted for what actually causes the climate to change, and so the trend might not continue. They are also not decision-support tools. A climate model cannot tell us what to do about climate change. It cannot tell us whether we should be building more solar panels, or wind farms. It can’t tell us whether we should have a carbon tax. It can’t tell us what we ought to put into an international treaty.

What it does do is tell us how the physics of planet earth work, and what the consequences are of changing things, within that physics. I could describe it as “computational fluid dynamics on a rotating sphere”. But computational fluid dynamics is complex.

I went into my son’s fourth grade class recently, and I started to explain what a climate model is, and the first question they asked me was “is it like Minecraft?”. Well, that’s not a bad place to start. If you’re not familiar with Minecraft, it divides the world into blocks, and the blocks are made of stuff. They might be made of wood, or metal, or water, or whatever, and you can build things out of them. There’s no gravity in Minecraft, so you can build floating islands and it’s great fun.

Climate models are a bit like that. To build a climate model, you divide the world into a number of blocks. The difference is that in Minecraft, the blocks are made of stuff. In a climate model, the blocks are really blocks of space, through which stuff can flow. At each timestep, the program calculates how much water, or air, or ice is flowing into, or out of, each block, and in which directions. It calculates changes in temperature, density, humidity, and so on, and whether stuff such as dust, salt, and pollutants is passing through or accumulating in each block. We have to account for the sunlight passing down through each block during the day. Some of what’s in a block might filter some of the incoming sunlight, for example clouds or dust, so some of the sunlight doesn’t get down to the blocks below. There’s also heat escaping upwards through the blocks, and again, some of what is in a block might trap some of that heat, for example clouds and greenhouse gases.
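To make the block idea concrete, here’s a drastically simplified sketch of my own: a 1-D row of blocks exchanging a single quantity at each timestep. A real model solves the full 3-D fluid dynamics equations, but the shape of the computation is similar.

```python
# Toy sketch of block-based timestepping: a 1-D row of "blocks", each
# holding some quantity (say, moisture), with a fixed fraction flowing
# to the downwind neighbour at each timestep (wrapping at the edges).
# Purely illustrative; not how a real climate model is coded.

def step(blocks, flow_fraction=0.1):
    """Advance one timestep: each block passes a fraction of its
    contents to the next block along."""
    n = len(blocks)
    outflow = [b * flow_fraction for b in blocks]
    return [blocks[i] - outflow[i] + outflow[(i - 1) % n] for i in range(n)]

blocks = [1.0, 0.0, 0.0, 0.0]   # all the moisture starts in one block
for _ in range(3):
    blocks = step(blocks)
print(blocks)  # moisture spreads downwind; the total is conserved
```

The key property, even in this toy, is conservation: whatever flows out of one block flows into another, which is also a basic sanity check on a real model.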

As you can see from this diagram, the blocks can be pretty large. The upper figure shows blocks of 87km on a side. If you want more detail in the model, you have to make the blocks smaller. Some of the fastest climate models today look more like the lower figure:


Ideally, you want to make the blocks as small as possible, but then you have many more blocks to keep track of, and you get to the point where the computer just can’t run fast enough. For a typical run of a climate model, simulating a century’s worth of climate, you might have to wait a couple of weeks on one of the fastest supercomputers for the run to complete. So the speed of the computer limits how small we can make the blocks.
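A back-of-envelope calculation (my own illustration, not from any particular model) shows why shrinking the blocks is so expensive: halving the horizontal block size gives 2 × 2 = 4 times as many blocks, and numerical stability typically forces the timestep to halve as well, so each halving multiplies the cost by roughly 8.

```python
# Rough cost scaling for refining the grid. The factor of 4 comes from
# quadrupling the number of horizontal blocks; the factor of 2 comes from
# needing twice as many timesteps (a stability requirement, often called
# the CFL condition). This ignores vertical refinement, which makes it worse.

def relative_cost(refinements):
    """Cost multiplier after halving the horizontal block size `refinements` times."""
    return (4 * 2) ** refinements

print(relative_cost(1))  # 8
print(relative_cost(2))  # 64: e.g. going from 87 km blocks to ~22 km blocks
```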

Building models this way is remarkably successful. Here’s a video of what a climate model can do today. This simulation shows a year’s worth of weather from a climate model. What you’re seeing is clouds and, in orange, that’s where it’s raining. Compare that to a year’s worth of satellite data for the year 2013. If you put them side by side, you can see many of the same patterns. You can see the westerlies, the winds at the top and bottom of the globe, heading from west to east, and nearer the equator, you can see the trade winds flowing in the opposite direction. If you look very closely, you might even see a pulse over South America, and a similar one over Africa in both the model and the satellite data. That’s the daily cycle as the land warms up in the morning and the moisture evaporates from soils and plants, and then later on in the afternoon as it cools, it turns into rain.

Note that the bottom is an actual year, 2013, while the top, the model simulation, is not a real year at all – it’s a typical year. So the two don’t correspond exactly. You won’t get storms forming at the same time, because it’s not designed to be an exact simulation; the climate model is designed to get the patterns right. And by and large, it does. [These patterns aren’t coded into this model. They emerge as a consequence of getting the basic physics of the atmosphere right].

So how do you build a climate model like this? The answer is “very slowly”. It takes a lot of time, and a lot of failure. One of the things that surprised me when I visited these labs is that the scientists don’t build these models to try and predict the future. They build these models to try and understand the past. They know their models are only approximations, and they regularly quote the statistician, George Box, who said “All models are wrong, but some are useful”. What he meant is that any model of the world is only an approximation. You can’t get all the complexity of the real world into a model. But even so, even a simple model is a good way to test your theories about the world.

So the way that modellers work is that they spend their time focussing on places where the model isn’t quite right. For example, maybe the model isn’t getting the Indian monsoon right. Perhaps it’s getting the amount of rain right, but it’s falling in the wrong place. They then form a hypothesis. They’ll say, I think I can improve the model, because I think this particular process is responsible, and if I improve that process in a particular way, then that should fix the simulation of the monsoon cycle. And then they run a whole series of experiments, comparing the old version of the model, which is getting it wrong, with the new version, to test whether the hypothesis is correct. And if after a series of experiments, they believe their hypothesis is correct, they have to convince the rest of the modelling team that this really is an improvement to the model.
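The control-versus-experiment comparison can be sketched schematically (all the names and numbers here are invented, purely to show the shape of the workflow):

```python
# Schematic sketch of the hypothesis-testing workflow: run the current
# model (control) and the modified model (experiment) over the same
# period, score each against observations, and check whether the change
# actually improved the simulation. Values are invented for illustration.

def mean_abs_error(simulated, observed):
    return sum(abs(s - o) for s, o in zip(simulated, observed)) / len(observed)

# Hypothetical monsoon rainfall (mm/day) at a few grid points:
observed     = [8.0, 12.0, 10.0, 6.0]
control_run  = [5.0, 15.0, 7.0, 9.0]    # current model version
modified_run = [7.0, 13.0, 9.5, 6.5]    # with the candidate improvement

improvement = (mean_abs_error(control_run, observed)
               - mean_abs_error(modified_run, observed))
print(improvement > 0)  # True: evidence the hypothesis may be right
```

In practice, of course, a single score like this wouldn’t settle it: the experiments are replicated across configurations and the results argued over in peer review, as described above.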

In other words, to build the models, they are doing science. They are developing hypotheses, they are running experiments, and they are using a peer review process to convince their colleagues that what they have done is correct:


Climate modellers also have a few other weapons up their sleeves. Imagine for a moment if Microsoft had 25 competitors around the world, all of whom were attempting to build their own versions of Microsoft Word. Imagine further that every few years, those 25 companies all agreed to run their software on a very complex battery of tests, designed to test all the different conditions under which you might expect a word processor to work. And not only that, but they agree to release all the results of those tests to the public, on the internet, so that anyone who wanted to use any of that software can pore over all the data and find out how well each version did, and decide which version they want to use for their own purposes. Well, that’s what climate modellers do. There is no other software in the world for which there are 25 teams around the world trying to build the same thing, and competing with each other.

Climate modellers also have some other advantages. In some sense, climate modelling is actually easier than weather forecasting. I can show you what I mean by that. Imagine I had a water balloon (actually, you don’t have to imagine – I have one here):


I’m going to throw it at the fifth row. Now, you might want to know who will get wet. You could measure everything about my throw: Will I throw underarm, or overarm? Which way am I facing when I let go of it? How much swing do I put in? If you could measure all of those aspects of my throw, and you understand the physics of how objects move, you could come up with a fairly accurate prediction of who is going to get wet.

That’s like weather forecasting. We have to measure the current conditions as accurately as possible, and then project forward to see what direction it’s moving in:


If I make any small mistakes in measuring my throw, those mistakes will multiply as the balloon travels further. The further I attempt to throw it, the more room there is for inaccuracy in my estimate. That’s like weather forecasting. Any errors in the initial conditions multiply up rapidly, and the current limit appears to be about a week or so. Beyond that, the errors get so big that we just cannot make accurate forecasts.

In contrast, climate models would be more like releasing a balloon into the wind, and predicting where it will go by knowing about the wind patterns. I’ll make some wind here using a fan:


Now that balloon is going to bob about in the wind from the fan. I could go away and come back tomorrow and it will still be doing about the same thing. If the power stays on, I could leave it for a hundred years, and it might still be doing the same thing. I won’t be able to predict exactly where that balloon is going to be at any moment, but I can predict, very reliably, the space in which it will move. I can predict the boundaries of its movement. And if the things that shape those boundaries change, for example by moving the fan, and I know what the factors are that shape those boundaries, I can tell you how the patterns of its movements are going to change – how the boundaries are going to change. So we call that a boundary problem:


The initial conditions are almost irrelevant. It doesn’t matter where the balloon started, what matters is what’s shaping its boundary.

So can these models predict the future? Are they good enough to predict the future? The answer is “yes and no”. We know the models are better at some things than others. They’re better at simulating changes in temperature than they are at simulating changes in rainfall. We also know that each model tends to be stronger in some areas and weaker in others. If you take the average of a whole set of models, you get a much better simulation of how the planet’s climate works than if you look at any individual model on its own. What happens is that the weaknesses in any one model are compensated for by other models that don’t have those weaknesses.
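The effect of averaging across models can be sketched with a toy example of my own (the numbers are invented): if each model’s errors are partly independent, averaging cancels some of them out.

```python
# Toy illustration of why a multi-model average can beat any single model:
# two "models" with different biases, whose errors partly cancel when
# averaged. All numbers are invented.

def ensemble_mean(runs):
    """Average several models' outputs point by point."""
    return [sum(vals) / len(vals) for vals in zip(*runs)]

truth   = [1.0, 2.0, 3.0]
model_a = [1.4, 1.8, 3.1]   # too warm here, too cool there...
model_b = [0.7, 2.3, 2.8]   # ...with different biases

mean = ensemble_mean([model_a, model_b])

def max_error(sim):
    return max(abs(s - t) for s, t in zip(sim, truth))

print(max_error(mean) < min(max_error(model_a), max_error(model_b)))  # True
```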

But the results of the models have to be interpreted very carefully, by someone who knows what the models are good at, and what they are not good at – you can’t just take the output of a model and say “that’s how it’s going to be”.

Also, you don’t actually need a computer model to predict climate change. The first predictions of what would happen if we keep on adding carbon dioxide to the atmosphere were produced over 120 years ago. That’s fifty years before the first digital computer was invented. And those predictions were pretty accurate – what has happened over the twentieth century has followed very closely what was predicted all those years ago. Scientists also predicted, for example, that the arctic would warm faster than the equatorial regions, and that’s what happened. They predicted night time temperatures would rise faster than day time temperatures, and that’s what happened.

So in many ways, the models only add detail to what we already know about the climate. They allow scientists to explore “what if” questions. For example, you could ask of a model, what would happen if we stopped burning all fossil fuels tomorrow? And the answer from the models is that the temperature of the planet stays at whatever temperature it was when we stopped. For example, if we wait twenty years and then stop, we’re stuck with whatever temperature we’re at for tens of thousands of years. You could ask a model what happens if we dig up all known reserves of fossil fuels and burn them all at once, in one big party. Well, it gets very hot.

More interestingly, you could ask what if we tried blocking some of the incoming sunlight to cool the planet down, to compensate for some of the warming we’re getting from adding greenhouse gases to the atmosphere? There have been a number of very serious proposals to do that. There are some who say we should float giant space mirrors. That might be hard, but a simpler way of doing it is to put dust up in the stratosphere, and that blocks some of the incoming sunlight. It turns out that if you do that, you can very reliably bring the average temperature of the planet back down to whatever level you want, just by adjusting the amount of the dust. Unfortunately, some parts of the planet cool too much, and others not at all. The crops don’t grow so well, and everyone’s weather gets messed up. So it seems like that could be a solution, but when you study the model results in detail, there are too many problems.

Remember that we know fairly well what will happen to the climate if we keep adding CO2, even without using a computer model, and the computer models just add detail to what we already know. If the models are wrong, they could be wrong in either direction. They might under-estimate the warming just as much as they might over-estimate it. If you look at how well the models can simulate the past few decades, especially the last decade, you’ll see some of both. For example, the models have under-estimated how fast the arctic sea ice has melted. The models have underestimated how fast the sea levels have risen over the last decade. On the other hand, they over-estimated the rate of warming at the surface of the planet. But they underestimated the rate of warming in the deep oceans, so some of the warming ends up in a different place from where the models predicted. So they can under-estimate just as much as they can over-estimate. [The less certain we are about the results from the models, the bigger the risk that the warming might be much worse than we think.]

So when you see a graph like this, which comes from the latest IPCC report that just came out last month, it doesn’t tell us what to do about climate change, it just tells us the consequences of what we might choose to do. Remember, humans aren’t represented in the models at all, except in terms of us producing greenhouse gases and adding them to the atmosphere.


If we keep on increasing our use of fossil fuels — finding more oil, building more pipelines, digging up more coal, we’ll follow the top path. And that takes us to a planet that by the end of this century, is somewhere between 4 and 6 degrees warmer, and it keeps on getting warmer over the next few centuries. On the other hand, the bottom path, in dark blue, shows what would happen if, year after year from now onwards, we use less fossil fuels than we did the previous year, until about mid-century, when we get down to zero emissions, and we invent some way to start removing that carbon dioxide from the atmosphere before the end of the century, to stay below 2 degrees of warming.

The models don’t tell us which of these paths we should follow. They just tell us that if this is what we do, here’s what the climate will do in response. You could say that what the models do is take all the data and all the knowledge we have about the climate system and how it works, and put them into one neat package, and it’s our job to take that knowledge and turn it into wisdom. And to decide which future we would like.

My department is busy revising the set of milestones our PhD students need to meet in the course of their studies. The milestones are intended to ensure each student is making steady progress, and to identify (early!) any problems. At the moment they don’t really do this well, in part because the faculty all seem to have different ideas about what we should expect at each milestone. (This is probably a special case of the general rule that if you gather n professors together, they will express at least n+1 mutually incompatible opinions). As a result, the students don’t really know what’s expected of them, and hence spend far longer in the PhD program than they would need to if they received clear guidance.

Anyway, in order to be helpful, I wrote down what I think are the set of skills that a PhD student needs to demonstrate early in the program, as a prerequisite for becoming a successful researcher:

  1. The ability to select a small number of significant research contributions from a larger set of published papers, and justify that selection.
  2. The ability to articulate a rationale for selection of these papers, on the basis of significance of the results, novelty of the approach, etc.
  3. The ability to relate the papers to one another, and to other research in the literature.
  4. The ability to critique the research methods used in these papers, the strengths and weaknesses of these methods, and likely threats to validity, whether acknowledged in the papers or not.
  5. The ability to suggest alternative approaches to answering the research questions posed in these papers.
  6. The ability to identify limitations on the results reported in the papers, along with their implications.
  7. The ability to identify and prioritize lines of investigation for further research, based on limitations of the research described in the papers and/or important open problems that the papers fail to answer.

My suggestion is that at the end of the first year of the PhD program, each student should demonstrate development of these skills by writing a short report that selects and critiques a handful (4-6) of papers in a particular subfield. If a student can’t do this well, they’re probably not going to succeed in the PhD program.

My proposal has now gone to the relevant committee (“where good ideas go to die™”), so we’ll see what happens…

Imagine for a moment if Microsoft had 24 competitors around the world, each building their own version of Microsoft Word. Imagine further that every few years, they all agreed to run their software through the same set of very demanding tests of what a word processor ought to be able to do in a large variety of different conditions. And imagine that all these competing companies agreed that all the results from these tests would be freely available on the web, for anyone to see. Then, people who want to use a word processor can explore the data and decide for themselves which one best serves their purpose. People who have concerns about the reliability of word processors can analyze the strengths and weaknesses of each company’s software. Then think about what such a process would do to the reliability of word processors. Wouldn’t that be a great world to live in?

Well, that’s what climate modellers do, through a series of model inter-comparison projects. There are around 25 major climate modelling labs around the world developing fully integrated global climate models, and hundreds of smaller labs building specialized models of specific components of the earth system. The fully integrated models are compared in detail every few years through the Coupled Model Intercomparison Projects. And there are many other model inter-comparison projects for various specialist communities within climate science.

Have a look at how this process works, via this short paper on the planning process for CMIP6.

What’s the difference between forecasting the weather and predicting future climate change? A few years ago, I wrote a long post explaining that weather forecasting is an initial value problem, while climate is a boundary value problem. This is a much shorter explanation:

Imagine I were to throw a water balloon at you. If you could measure precisely how I threw it, and you understand the laws of physics correctly, you could predict precisely where it will go. If you could calculate it fast enough, you would know whether you’re going to get wet, or whether I’ll miss. That’s an initial value problem. The less precise your measurements of the initial value (how I throw it), the less accurate your prediction will be. Also, the longer the throw, the more the errors grow. This is how weather forecasting works – you measure the current conditions (temperature, humidity, wind speed, and so on) as accurately as possible, put them into a model that simulates the physics of the atmosphere, and run it to see how the weather will evolve. But the further into the future that you want to peer, the less accurate your forecast, because the errors on the initial value get bigger. It’s really hard to predict the weather more than about a week into the future:

Weather as an initial value problem
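The error growth can be sketched with a standard toy chaotic system (my own illustration, using the logistic map rather than any weather model): two starting states that differ by one part in a million soon diverge completely.

```python
# Toy illustration of sensitivity to initial conditions. The logistic map
# is a standard one-line chaotic system, standing in for the atmosphere.

def logistic(x, r=3.9):
    """One step of the logistic map (chaotic for r near 4)."""
    return r * x * (1 - x)

a, b = 0.500000, 0.500001   # two "measurements" of the same initial state
gap = []
for _ in range(50):
    a, b = logistic(a), logistic(b)
    gap.append(abs(a - b))

print(max(gap))  # the millionth-sized measurement error grows to order one
```

This is exactly the water balloon: the longer the throw, the more a tiny error in measuring the initial conditions matters.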

Now imagine I release a helium balloon into the air flow from a desk fan, and the balloon is on a string that’s tied to the fan casing. The balloon will reach the end of its string, and bob around in the stream of air. It doesn’t matter how exactly I throw the balloon into the airstream – it will keep on bobbing about in the same small area. I could leave it there for hours and it will do the same thing. This is a boundary value problem. I won’t be able to predict exactly where the balloon will be at any moment, but I will be able to tell you fairly precisely the boundaries of the space in which it will be bobbing. If anything affects these boundaries (e.g. because I move the fan a little), I should also be able to predict how this will shift the area in which the balloon will bob. This is how climate prediction works. You start off with any (reasonable) starting state, and run your model for as long as you like. If your model gets the physics right, it will simulate a stable climate indefinitely, no matter how you initialize it:

Climate as a boundary value problem

But if the boundary conditions change, because, for example, we alter the radiative balance of the planet, the model should also be able to predict fairly accurately how this will shift the boundaries on the climate:

Climate change as a change in boundary conditions


We cannot predict what the weather will do on any given day far into the future. But if we understand the boundary conditions and how they are altered, we can predict fairly accurately how the range of possible weather patterns will be affected. Climate change is a change in the boundary conditions on our weather systems.
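The boundary value regime can be sketched in a few lines of code (my own toy, not a climate model): a damped variable constantly nudged toward a level set by a forcing term stands in for the balloon on its string.

```python
# Toy boundary value problem: at each step the state relaxes toward an
# equilibrium set by the "boundary conditions" (the forcing term). The
# starting point is forgotten; change the forcing, and the long-run level
# shifts predictably. Purely illustrative.

def run(start, forcing, steps=100, damping=0.5):
    x = start
    for _ in range(steps):
        x = damping * x + forcing   # relaxes toward forcing / (1 - damping)
    return x

# Wildly different initial states end up in the same place:
print(round(run(-50.0, forcing=1.0), 6), round(run(80.0, forcing=1.0), 6))  # 2.0 2.0

# Changing the boundary conditions shifts the equilibrium:
print(round(run(0.0, forcing=1.5), 6))  # 3.0
```

Increasing the forcing here plays the role of altering the planet’s radiative balance: you can’t say where the state will be on any given step early in the run, but you can say precisely how the long-run level responds.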

A few weeks ago, Mark Higgins, from EUMETSAT, posted this wonderful video of satellite imagery of planet earth for the whole of the year 2013. The video superimposes the aggregated satellite data from multiple satellites on top of NASA’s ‘Blue Marble Next Generation’ ground maps, to give a consistent picture of large scale weather patterns (Original video here – be sure to listen to Mark’s commentary):

When I saw the video, it reminded me of something. Here’s the output from the CAM3, the atmospheric component of the global climate model CESM, run at very high resolution (Original video here):

I find it fascinating to play these two videos at the same time, and observe how the model captures the large scale weather patterns of the planet. The comparison isn’t perfect, because the satellite data measures the cloud temperature (the colder the clouds, the whiter they are shown), while the climate model output shows total water vapour & rain (i.e. warmer clouds are a lot more visible, and precipitation is shown in orange). This means the tropical regions look much drier in the satellite imagery than they do in the model output.

But even so, there are some remarkable similarities. For example, both videos clearly show the westerlies, the winds that flow from west to east at the top and bottom of the map (e.g. pushing rain across the North Atlantic to the UK), and they both show the trade winds, which flow from east to west, closer to the equator. Both videos also show how cyclones form in the regions between these wind patterns. For example, in both videos, you can see the typhoon season ramp up in the Western Pacific in August and September – the model has two hitting Japan in August, and the satellite data shows several hitting China in September. The curved tracks of these storms are similar in both videos. If you look closely, you can also see the daily cycle of evaporation and rain over South America and Central Africa in both videos – watch how these regions appear to pulse each day.

I find these similarities remarkable, because none of these patterns are coded into the climate model – they all emerge as a consequence of getting the basic thermodynamic properties of the atmosphere right. Remember also that a climate model is not intended to forecast the particular weather of any given year (that would be impossible, due to chaos theory). However, the model simulates a “typical” year on planet earth. So the specifics of where and when each storm forms do not correspond to anything that actually happened in any given year. But when the model gets the overall patterns about right, that’s a pretty impressive achievement.

I’ve been trawling through the final draft of the new IPCC assessment report that was released last week, to extract some highlights for a talk I gave yesterday. Here’s what I think are its key messages:

  1. The warming is unequivocal.
  2. Humans caused the majority of it.
  3. The warming is largely irreversible.
  4. Most of the heat is going into the oceans.
  5. Current rates of ocean acidification are unprecedented.
  6. We have to choose which future we want very soon.
  7. To stay below 2°C of warming, the world must become carbon negative.
  8. To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Before I elaborate on these, a little preamble. The IPCC was set up in 1988 as a UN intergovernmental body to provide an overview of the science. Its job is to assess what the peer-reviewed science says, in order to inform policymaking, but it is not tasked with making specific policy recommendations. The IPCC and its workings seem to be widely misunderstood in the media. The dwindling group of people who are still in denial about climate change particularly like to indulge in IPCC-bashing, which seems like a classic case of ‘blame the messenger’. The IPCC itself has a very small staff (no more than a dozen or so people). However, the assessment reports are written and reviewed by a very large team of scientists (several thousand), all of whom volunteer their time to work on the reports. The scientists are organised into three working groups: WG1 focuses on the physical science basis, WG2 focuses on impacts and climate adaptation, and WG3 focuses on how climate mitigation can be achieved.

Last week, just the WG1 report was released as a final draft, although it was accompanied by a bigger media event around the approval of the final wording of the WG1 “Summary for Policymakers”. The final version of the full WG1 report, plus the WG2 and WG3 reports, are not due out until spring next year. Until then, the draft is likely to be subject to minor editing and correcting, and some of the figures might end up re-drawn. Even so, most of the text is unlikely to change, and the major findings can be considered final. Here’s my take on the most important findings, along with a key figure to illustrate each.

(1) The warming is unequivocal

The text of the summary for policymakers says “Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased.”

(Fig SPM.1) Observed globally averaged combined land and ocean surface temperature anomaly 1850-2012. The top panel shows the annual values; the bottom panel shows decadal means. (Note: Anomalies are relative to the mean of 1961-1990).

Unfortunately, there has been much play in the press around a silly idea that the warming has “paused” in the last decade. If you squint at the last few years of the top graph, you might be able to convince yourself that the temperature has been nearly flat for a few years, but only if you cherry pick your starting date, and use a period that’s too short to count as climate. When you look at it in the context of an entire century and longer, such arguments are clearly just wishful thinking.

The other thing to point out here is that the rate of change is unprecedented. “With very high confidence, the current rates of CO2, CH4 and N2O rise in atmospheric concentrations and the associated radiative forcing are unprecedented with respect to the highest resolution ice core records of the last 22,000 years”, and there is “medium confidence that the rate of change of the observed greenhouse gas rise is also unprecedented compared with the lower resolution records of the past 800,000 years.” In other words, there is nothing in any of the ice core records that is comparable to what we have done to the atmosphere over the last century. The earth has warmed and cooled in the past due to natural cycles, but never anywhere near as fast as modern climate change.

(2) Humans caused the majority of it

The summary for policymakers says “It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century”.

(Box 13.1 fig 1) The Earth’s energy budget from 1970 to 2011. Cumulative energy flux (in zettajoules!) into the Earth system from well-mixed and short-lived greenhouse gases, solar forcing, changes in tropospheric aerosol forcing, volcanic forcing and surface albedo (relative to 1860–1879) are shown by the coloured lines, and these are added to give the cumulative energy inflow (black; including black carbon on snow and combined contrails and contrail induced cirrus, not shown separately).

This chart summarizes the impact of different drivers of warming and/or cooling, by showing the total cumulative energy added to the earth system since 1970 from each driver. Note that the chart is in zettajoules (1 ZJ = 10²¹ J). For comparison, one zettajoule is roughly the energy that would be released by 16 million bombs of the size of the one dropped on Hiroshima. The world’s total annual energy consumption is about 0.5 ZJ.
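That comparison is easy to check with back-of-envelope arithmetic (my own figures, not from the report, assuming the conventional ~15 kiloton yield for the Hiroshima bomb):

```python
# Back-of-envelope scale check for a zettajoule (my own figures, not
# from the IPCC report). Assumes the conventional ~15 kiloton yield.
KILOTON_TNT_J = 4.184e12          # joules per kiloton of TNT
hiroshima_j = 15 * KILOTON_TNT_J  # ~6.3e13 J
zettajoule = 1e21                 # 1 ZJ

bombs_per_zj = zettajoule / hiroshima_j
print(f"1 ZJ ≈ {bombs_per_zj / 1e6:.0f} million Hiroshima bombs")
```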

Long lived greenhouse gases, such as CO2, contribute the majority of the warming (the purple line). Aerosols, such as particles of industrial pollution, block out sunlight and cause some cooling (the dark blue line), but nowhere near enough to offset the warming from greenhouse gases. Note that aerosols have the largest uncertainty bar; much of the remaining uncertainty about the likely magnitude of future climate warming is due to uncertainty about how much of the warming might be offset by aerosols. The uncertainty on the aerosols curve is, in turn, responsible for most of the uncertainty on the black line, which shows the total effect if you add up all the individual contributions.

The graph also puts into perspective some of the other things that people like to blame for climate change, including changes in energy received from the sun (‘solar’), and the impact of volcanoes. Changes in the sun (shown in orange) are tiny compared to greenhouse gases, but do show a very slight warming effect. Volcanoes have a larger (cooling) effect, but it is short-lived. There were two major volcanic eruptions in this period, El Chichón in 1982 and Pinatubo in 1991. Each can be clearly seen in the graph as an immediate cooling effect, which then tapers off after a couple of years.

(3) The warming is largely irreversible

The summary for policymakers says “A large fraction of anthropogenic climate change resulting from CO2 emissions is irreversible on a multi-century to millennial time scale, except in the case of a large net removal of CO2 from the atmosphere over a sustained period. Surface temperatures will remain approximately constant at elevated levels for many centuries after a complete cessation of net anthropogenic CO2 emissions.”

(Fig 12.43) Results from 1,000 year simulations from EMICs on the 4 RCPs up to the year 2300, followed by constant composition until 3000.

The conclusions about irreversibility of climate change are greatly strengthened from the previous assessment report, as recent research has explored this in much more detail. The problem is that a significant fraction of our greenhouse gas emissions stay in the atmosphere for thousands of years, so even if we stop emitting them altogether, they hang around, contributing to more warming. In simple terms, whatever peak temperature we reach, we’re stuck at for millennia, unless we can figure out a way to artificially remove massive amounts of CO2 from the atmosphere.

The graph is the result of an experiment that runs (simplified) models for a thousand years into the future. The major climate models are generally too computationally expensive to be run for such a long simulation, so these experiments use simpler models, so-called EMICs (Earth system Models of Intermediate Complexity).

The four curves in this figure correspond to four “Representative Concentration Pathways”, which map out four ways in which the composition of the atmosphere is likely to change in the future. These four RCPs were picked to capture four possible futures: two in which there is little to no coordinated action on reducing global emissions (worst case – RCP8.5 and best case – RCP6) and two in which there is serious global action on climate change (worst case – RCP4.5 and best case – RCP2.6). A simple way to think about them is as follows. RCP8.5 represents ‘business as usual’ – strong economic development for the rest of this century, driven primarily by dependence on fossil fuels. RCP6 represents a world with no global coordinated climate policy, but where lots of localized clean energy initiatives do manage to stabilize emissions by the latter half of the century. RCP4.5 represents a world that implements strong limits on fossil fuel emissions, such that greenhouse gas emissions peak by mid-century and then start to fall. RCP2.6 is a world in which emissions peak in the next few years, and then fall dramatically, so that the world becomes carbon neutral by about mid-century.

Note that in RCP2.6 the temperature does fall, after reaching a peak just below 2°C of warming over pre-industrial levels. That’s because RCP2.6 is a scenario in which concentrations of greenhouse gases in the atmosphere start to fall before the end of the century. This is only possible if we reduce global emissions so fast that we achieve carbon neutrality soon after mid-century, and then go carbon negative. By carbon negative, I mean that globally, each year, we remove more CO2 from the atmosphere than we add. Whether this is possible is an interesting question. But even if it is, the model results show there is no time within the next thousand years when it is anywhere near as cool as it is today.

(4) Most of the heat is going into the oceans

The oceans have a huge thermal mass compared to the atmosphere and land surface. They act as the planet’s heat storage and transportation system, as the ocean currents redistribute the heat. This is important because if we look at the global surface temperature as an indication of warming, we’re only getting some of the picture. The oceans act as a huge storage heater, and will continue to warm up the lower atmosphere (no matter what changes we make to the atmosphere in the future).

(Box 3.1 Fig 1) Plot of energy accumulation in ZJ (1 ZJ = 10²¹ J) within distinct components of Earth’s climate system relative to 1971 and from 1971–2010 unless otherwise indicated. See text for data sources. Ocean warming (heat content change) dominates, with the upper ocean (light blue, above 700 m) contributing more than the deep ocean (dark blue, below 700 m; including below 2000 m estimates starting from 1992). Ice melt (light grey; for glaciers and ice caps, Greenland and Antarctic ice sheet estimates starting from 1992, and Arctic sea ice estimate from 1979–2008), continental (land) warming (orange), and atmospheric warming (purple; estimate starting from 1979) make smaller contributions. Uncertainty in the ocean estimate also dominates the total uncertainty (dot-dashed lines show the error from all five components at 90% confidence intervals).

Note the relationship between this figure (which shows where the heat goes) and the figure I showed above that shows the change in cumulative energy budget from different sources. Both graphs show zettajoules accumulating over about the same period (1970–2011). But the first graph has a cumulative total just short of 800 ZJ by the end of the period, while this one shows the earth storing “only” about 300 ZJ of this. Where did the remaining energy go? Because the earth’s temperature rose during this period, it radiated increasingly more energy back into space. When greenhouse gases trap heat, the earth’s temperature keeps rising until outgoing and incoming energy are in balance again.
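The bookkeeping here is simple enough to sketch; the 800 and 300 ZJ figures are my rough readings off the two charts, not exact IPCC numbers:

```python
# Rough energy bookkeeping for 1970-2011, using approximate values
# read off the two cumulative-energy figures (not exact IPCC numbers)
energy_added = 800.0   # ZJ: cumulative energy from all forcings
energy_stored = 300.0  # ZJ: accumulated in ocean, ice, land, atmosphere

# The difference must have left the planet as extra outgoing
# longwave radiation from a warmer earth
energy_radiated = energy_added - energy_stored
print(f"Radiated back to space: about {energy_radiated:.0f} ZJ")
```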

(5) Current rates of ocean acidification are unprecedented.

The IPCC report says “The pH of seawater has decreased by 0.1 since the beginning of the industrial era, corresponding to a 26% increase in hydrogen ion concentration. … It is virtually certain that the increased storage of carbon by the ocean will increase acidification in the future, continuing the observed trends of the past decades. … Estimates of future atmospheric and oceanic carbon dioxide concentrations indicate that, by the end of this century, the average surface ocean pH could be lower than it has been for more than 50 million years”.

(Fig SPM.7c) CMIP5 multi-model simulated time series from 1950 to 2100 for global mean ocean surface pH. Time series of projections and a measure of uncertainty (shading) are shown for scenarios RCP2.6 (blue) and RCP8.5 (red). Black (grey shading) is the modelled historical evolution using historical reconstructed forcings. [The numbers indicate the number of models used in each ensemble.]

Ocean acidification has sometimes been ignored in discussions about climate change, but it is a much simpler process, and is much easier to calculate (notice the uncertainty range on the graph above is much smaller than most of the other graphs). This graph shows the projected acidification in the best and worst case scenarios (RCP2.6 and RCP8.5). Recall that RCP8.5 is the “business as usual” future.

Note that this doesn’t mean the ocean will become acid. The ocean has always been slightly alkaline – well above the neutral value of pH 7. So “acidification” refers to a drop in pH, rather than a drop below pH 7. As this continues, the ocean becomes steadily less alkaline. Unfortunately, as the pH drops, the ocean stops being supersaturated with calcium carbonate. If it’s no longer supersaturated, anything made of calcium carbonate starts dissolving, and corals and shellfish can no longer form their shells and skeletons. If you kill these off, the entire ocean food chain is affected. Here’s what the IPCC report says: “Surface waters are projected to become seasonally corrosive to aragonite in parts of the Arctic and in some coastal upwelling systems within a decade, and in parts of the Southern Ocean within 1–3 decades in most scenarios. Aragonite, a less stable form of calcium carbonate, undersaturation becomes widespread in these regions at atmospheric CO2 levels of 500–600 ppm”.
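Incidentally, the report’s “26% increase in hydrogen ion concentration” for a 0.1 drop in pH follows directly from pH being a negative base-10 logarithm; a quick check:

```python
# pH = -log10([H+]), so a drop of 0.1 pH units multiplies the
# hydrogen ion concentration by 10**0.1
ph_drop = 0.1
ratio = 10 ** ph_drop          # ~1.259
increase = (ratio - 1) * 100   # percent increase in [H+]
print(f"{increase:.0f}% increase")  # 26% increase
```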

(6) We have to choose which future we want very soon.

In the previous IPCC reports, projections of future climate change were based on a set of scenarios that mapped out different ways in which human society might develop over the rest of this century, taking account of likely changes in population, economic development and technological innovation. However, none of the old scenarios took into account the impact of strong global efforts at climate mitigation. In other words, they all represented futures in which we don’t take serious action on climate change. For this report, the new “RCPs” have been chosen to allow us to explore the choice we face.

This chart sums it up nicely. If we do nothing about climate change, we’re choosing a path that will look most like RCP8.5. Recall that this is the one where emissions keep rising just as they have done throughout the 20th century. On the other hand, if we get serious about curbing emissions, we’ll end up in a future that’s probably somewhere between RCP2.6 and RCP4.5 (the two blue lines). All of these futures give us a much warmer planet. All of these futures will involve many challenges as we adapt to life on a warmer planet. But by curbing emissions soon, we can minimize this future warming.

(Fig 12.5) Time series of global annual mean surface air temperature anomalies (relative to 1986–2005) from CMIP5 concentration-driven experiments. Projections are shown for each RCP for the multi model mean (solid lines) and the 5–95% range (±1.64 standard deviation) across the distribution of individual models (shading). Discontinuities at 2100 are due to different numbers of models performing the extension runs beyond the 21st century and have no physical meaning. Only one ensemble member is used from each model and numbers in the figure indicate the number of different models contributing to the different time periods. No ranges are given for the RCP6.0 projections beyond 2100 as only two models are available.

Note also that the uncertainty range (the shaded region) is much bigger for RCP8.5 than it is for the other scenarios. The more the climate changes beyond what we’ve experienced in the recent past, the harder it is to predict what will happen. We tend to use the spread across different models as an indication of uncertainty (the coloured numbers show how many different models participated in each experiment). But there’s also the possibility of “unknown unknowns” – surprises that aren’t in the models – so the uncertainty range is likely to be even bigger than this graph shows.

(7) To stay below 2°C of warming, the world must become carbon negative.

Only one of the four future scenarios (RCP2.6) shows us staying below the UN’s commitment to no more than 2°C of warming. In RCP2.6, emissions peak soon (within the next decade or so), and then drop fast, under a stronger emissions reduction policy than anyone has ever proposed in international negotiations to date. For example, the post-Kyoto negotiations have looked at targets in the region of 80% reductions in emissions over, say, a 50-year period. In contrast, the chart below shows something far more ambitious: we need more than 100% emissions reductions. We need to become carbon negative:

(Figure 12.46) a) CO2 emissions for the RCP2.6 scenario (black) and three illustrative modified emission pathways leading to the same warming, b) global temperature change relative to preindustrial for the pathways shown in panel (a).

The graph on the left shows four possible CO2 emissions paths that would all deliver the RCP2.6 scenario, while the graph on the right shows the resulting temperature change for these four. They all give similar results for temperature change, but differ in how we go about reducing emissions. For example, the black curve shows CO2 emissions peaking by 2020 at a level barely above today’s, and then dropping steadily until emissions are below zero by about 2070. Two other curves show what happens if emissions peak higher and later: the eventual reduction has to happen much more steeply. The blue dashed curve offers an implausible scenario, so consider it a thought experiment: if we held emissions constant at today’s level, we have exactly 30 years left before we would have to instantly reduce emissions to zero forever.

Notice where the zero point is on the scale on that left-hand graph. Ignoring the unrealistic blue dashed curve, all of these pathways require the world to go net carbon negative sometime soon after mid-century. None of the emissions targets currently being discussed by any government anywhere in the world are sufficient to achieve this. We should be talking about how to become carbon negative.

One further detail. The graph above shows the temperature response staying well under 2°C for all four curves, although the uncertainty band reaches up to 2°C. But note that this analysis deals only with CO2. The other greenhouse gases have to be accounted for too, and together they push the temperature change right up to the 2°C threshold. There’s no margin for error.

(8) To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Perhaps the most profound advance since the previous IPCC report is a characterization of our global carbon budget. This is based on a finding that has emerged strongly from a number of studies in the last few years: the expected temperature change has a simple linear relationship with cumulative CO2 emissions since the beginning of the industrial era:

(Figure SPM.10) Global mean surface temperature increase as a function of cumulative total global CO2 emissions from various lines of evidence. Multi-model results from a hierarchy of climate-carbon cycle models for each RCP until 2100 are shown with coloured lines and decadal means (dots). Some decadal means are indicated for clarity (e.g., 2050 indicating the decade 2041−2050). Model results over the historical period (1860–2010) are indicated in black. The coloured plume illustrates the multi-model spread over the four RCP scenarios and fades with the decreasing number of available models in RCP8.5. The multi-model mean and range simulated by CMIP5 models, forced by a CO2 increase of 1% per year (1% per year CO2 simulations), is given by the thin black line and grey area. For a specific amount of cumulative CO2 emissions, the 1% per year CO2 simulations exhibit lower warming than those driven by RCPs, which include additional non-CO2 drivers. All values are given relative to the 1861−1880 base period. Decadal averages are connected by straight lines.

The chart is a little hard to follow, but the main idea should be clear: whichever experiment we carry out, the results tend to lie on a straight line on this graph. You do get a slightly different slope in one experiment, the “1%/yr” experiment, where only CO2 rises, and much more slowly than it has over the last few decades. All the more realistic scenarios lie in the orange band, and all have about the same slope.

This linear relationship is a useful insight, because it means that for any target ceiling for temperature rise (e.g. the UN’s commitment to not allow warming to rise more than 2°C above pre-industrial levels), we can easily determine a cumulative emissions budget that corresponds to that temperature. So that brings us to the most important paragraph in the entire report, which occurs towards the end of the summary for policymakers:

Limiting the warming caused by anthropogenic CO2 emissions alone with a probability of >33%, >50%, and >66% to less than 2°C since the period 1861–1880, will require cumulative CO2 emissions from all anthropogenic sources to stay between 0 and about 1560 GtC, 0 and about 1210 GtC, and 0 and about 1000 GtC since that period respectively. These upper amounts are reduced to about 880 GtC, 840 GtC, and 800 GtC respectively, when accounting for non-CO2 forcings as in RCP2.6. An amount of 531 [446 to 616] GtC, was already emitted by 2011.

Unfortunately, this paragraph is a little hard to follow, perhaps because there was a major battle over the exact wording of it in the final few hours of inter-governmental review of the “Summary for Policymakers”. Several oil states objected to any language that put a fixed limit on our total carbon budget. The compromise was to give several different targets for different levels of risk. Let’s unpick them. First notice that the targets in the first sentence are based on looking at CO2 emissions alone; the lower targets in the second sentence take into account other greenhouse gases, and other earth systems feedbacks (e.g. release of methane from melting permafrost), and so are much lower. It’s these targets that really matter:

  • To give us a one third (33%) chance of staying below 2°C of warming over pre-industrial levels, we cannot ever emit more than 880 gigatonnes of carbon.
  • To give us a 50% chance, we cannot ever emit more than 840 gigatonnes of carbon.
  • To give us a 66% chance, we cannot ever emit more than 800 gigatonnes of carbon.

Since the beginning of industrialization, we have already emitted a little more than 500 gigatonnes. So our remaining budget is somewhere between roughly 270 and 350 gigatonnes of carbon. Existing known fossil fuel reserves are enough to release at least 1000 gigatonnes. New discoveries and unconventional sources will likely more than double this. That leads to one inescapable conclusion:
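The arithmetic behind that conclusion is simple enough to sketch, using the numbers from the quoted Summary for Policymakers paragraph:

```python
# Total carbon budgets (GtC) for staying below 2°C, accounting for
# non-CO2 forcings, from the quoted Summary for Policymakers paragraph
budgets = {"33% chance": 880, "50% chance": 840, "66% chance": 800}
emitted_by_2011 = 531   # GtC, central estimate (range 446-616)
known_reserves = 1000   # GtC: a conservative lower bound

for chance, total in budgets.items():
    remaining = total - emitted_by_2011
    print(f"{chance}: {remaining} GtC left to emit")

# Even for the weakest (33%) target, most known reserves must stay buried
unburnable = 1 - (budgets["33% chance"] - emitted_by_2011) / known_reserves
print(f"At least {unburnable:.0%} of known reserves are unburnable")
```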

Most of the remaining fossil fuel reserves must stay buried in the ground.

We’ve never done that before. There is no political or economic system anywhere in the world currently that can persuade an energy company to leave a valuable fossil fuel resource untapped. There is no government in the world that has demonstrated the ability to forgo the economic wealth from natural resource extraction for the good of the planet as a whole. We lack both the political will and the political institutions to do this, and creating them presents us with a challenge far bigger than we ever imagined.

Update (10 Oct 2013): An earlier version of this post omitted the phrase “To stay below 2°C of warming” from the last key point.

Yesterday I talked about three reinforcing feedback loops in the earth system, each of which has the potential to accelerate a warming trend once it has started. I also suggested there are other similar feedback loops, some of which are known, and others perhaps yet to be discovered. For example, a paper published last month suggested a new feedback loop, to do with ocean acidification. In a nutshell, as the ocean absorbs more CO2, it becomes more acidic, which inhibits the growth of phytoplankton. These plankton are a major source of sulphur compounds that end up as aerosols in the atmosphere, which seed the formation of clouds. Fewer clouds mean lower albedo, which means more warming. Whether this feedback loop is important remains to be seen, but we do know that clouds have an important role to play in climate change.

I didn’t include clouds in my diagrams yet, because clouds deserve special treatment, in part because they are involved in two major feedback loops that have opposite effects:

Two opposing cloud feedback loops. An increase in temperature leads to an increase in moisture in the atmosphere. This leads to two new loops…

As the earth warms, we get more moisture in the atmosphere (simply because there is more evaporation from the surface, and warmer air can hold more moisture). Water vapour is a powerful greenhouse gas, so the more there is in the atmosphere, the more warming we get (greenhouse gases reduce the outgoing radiation). So this sets up a reinforcing feedback loop: more moisture causes more warming causes more moisture.

However, if there is more moisture in the atmosphere, there’s also likely to be more cloud formation. Clouds raise the albedo of the planet and reflect sunlight back into space before it can reach the surface. Hence, there is also a balancing loop: by blocking more sunlight, extra clouds will help to put the brakes on any warming. Note that I phrased this carefully: this balancing loop can slow a warming trend, but it does not create a cooling trend. Balancing loops tend to stop a change from occurring, but they do not create a change in the opposite direction. For example, if enough clouds form to completely counteract the warming, they also remove the mechanism (i.e. warming!) that causes growth in cloud cover in the first place. If we did end up with so many extra clouds that it cooled the planet, the cooling would then remove the extra clouds, so we’d be back where we started. In fact, this loop is nowhere near that strong anyway. [Note that under some circumstances, balancing loops can lead to oscillations, rather than gently converging on an equilibrium point, and the first wave of a very slow oscillation might be mistaken for a cooling trend. We have to be careful with our assumptions and timescales here!].
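The key property of a balancing loop – that it slows a trend without reversing it – can be illustrated with a toy iteration (purely illustrative numbers, not a climate model):

```python
# Toy illustration of a balancing feedback loop (NOT a climate model).
# Each step adds a fixed forcing, while the balancing loop pushes back
# in proportion to the current anomaly. Units are arbitrary.
def run(forcing, balancing_gain, steps=200):
    anomaly = 0.0
    for _ in range(steps):
        anomaly += forcing - balancing_gain * anomaly
    return anomaly

no_feedback = run(0.01, 0.0)   # anomaly just keeps growing
balanced = run(0.01, 0.02)     # converges toward forcing / gain = 0.5

print(f"no feedback: {no_feedback:.2f}, with balancing loop: {balanced:.2f}")
# The balancing loop slows the warming, but the anomaly never goes
# negative: it cannot create a cooling trend on its own.
```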

So now we have two new loops that set up opposite effects – one tends to accelerate warming, and the other tends to decelerate it. You can experience both these effects directly: cloudy days tend to be cooler than sunny days, because the clouds reflect away some of the sunlight. But cloudy nights tend to be warmer than clear nights because the water vapour traps more of the escaping heat from the surface. In the daytime, both effects are operating, and the cooling effect tends to dominate. During the night, there is no sunlight to block, so only the warming effect works.

If we average out the effects of these loops over many days, months, or years, which of the effects dominate? (i.e. which loop is stronger?) Does the extra moisture mean more warming or less warming? This is clearly an area where building a computer model and experimenting with it might help, as we need to quantify the effects to understand them better. We can build good computer models of how clouds form at the small scale, by simulating the interaction of dust and water vapour. But running such a model for the whole planet is not feasible with today’s computers.

To make things a little more complicated, these two feedback loops interact with other things. For example, another likely feedback loop comes from a change in the vertical temperature profile of the atmosphere. Current models indicate that, at least in the tropics, the upper atmosphere will warm faster than the surface (in technical terms, it will reduce the lapse rate – the rate at which temperature drops as you climb higher). This then increases the outgoing radiation, because it’s from the upper atmosphere that the earth loses its heat to space. This creates another (small) balancing feedback:

The lapse rate feedback - if the upper troposphere warms faster than the surface (i.e. a lower lapse rate), this increases outgoing radiation from the planet.

Note that this lapse rate feedback operates in the same way as the main energy balance loop – the two ‘-’ links have the same effect as the existing ‘+’ link from temperature to outgoing infra-red radiation. In other words, this new loop just strengthens the effect of the existing loop – for convenience we could fold both paths into a single link.

However, water vapour feedback can interact with this new feedback loop, because the warmer upper atmosphere will hold more water vapour in exactly the place where it’s most effective as a greenhouse gas. Not only that, but clouds themselves can change the vertical temperature profile, depending on their height. I said it was complicated!

The difficulty of simulating all these different interactions of clouds accurately leads to one of the biggest uncertainties in climate science. In 1979, the Charney report calculated that all these cloud and water vapour feedback loops roughly cancel out, but pointed out that there was a large uncertainty bound on this estimate. More than thirty years later, we understand much more about how cloud formation and distribution are altered in a warming world, but our margins of error for calculating cloud effects have barely narrowed, because of the difficulty of simulating them on a global scale. Our best guess is now that the (reinforcing) water vapour feedback loop is slightly stronger than the (balancing) cloud albedo and lapse rate loops. So the net effect of these three loops is an amplifying effect on the warming.
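
The way these three loops combine can be illustrated with standard feedback-gain arithmetic. The numbers below are made up purely for the sketch, not measured values:

```python
# Illustrative feedback arithmetic. If a no-feedback warming dT0 is
# modified by feedbacks of combined strength f (the fraction of the
# response fed back in), the equilibrium response is dT0 / (1 - f).
# All numbers here are hypothetical, chosen only to show the mechanics.

dT0 = 1.2              # warming with no feedbacks at all (hypothetical)
f_water_vapour = 0.5   # reinforcing loop: positive contribution
f_cloud_albedo = -0.2  # balancing loop: negative contribution
f_lapse_rate = -0.1    # balancing loop: negative contribution

f_net = f_water_vapour + f_cloud_albedo + f_lapse_rate  # net is positive
dT = dT0 / (1 - f_net)  # amplified: 1.2 / 0.8 = 1.5
```

Because the reinforcing term slightly outweighs the two balancing terms, the net effect is amplification (here, 1.5 versus 1.2), even though each individual balancing loop pulls the other way.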

Other posts in this series, so far:

At the start of this series, I argued that Climate Science is inherently a Systems Discipline. To develop that idea, I described two important systems as feedback loops – the earth’s temperature equilibrium loop, and economic growth and energy consumption – and then we put these two systems together.

The basic climate system now looks like this (leaving out, for now, the dynamics that drive economic development and energy use):

The basic planetary energy balancing loop, with the burning of fossil fuels forcing the temperature to change

Recall that the balancing loop (marked with a ‘B’) ensures that for each change to the input forcings (in this case greenhouse gases and aerosols in the atmosphere), the earth system will settle down to a new equilibrium point: a temperature at which the incoming and outgoing energy flows are balanced again. Each time we increase the concentration of greenhouse gases in the atmosphere, we can expect the earth to warm, slowly, until it reaches this new equilibrium. The economy-energy system (not shown above) ensures that we keep adding more greenhouse gases, so we’re continually pushing the system further and further out of balance. That means we’re continually increasing the eventual temperature rise the earth will experience before it reaches a new equilibrium.
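
The equilibrium-seeking behaviour of this balancing loop can be demonstrated with a zero-dimensional energy balance sketch. The constants are textbook-style illustrative values, and the simple yearly timestep is only for demonstration:

```python
# Minimal zero-dimensional energy balance model:
#   heat_capacity * dT/dt = absorbed_solar + forcing - outgoing_longwave
SIGMA = 5.67e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0         # solar constant (W m^-2)
ALBEDO = 0.30       # planetary albedo
EMISSIVITY = 0.612  # crude stand-in for the natural greenhouse effect
C = 2.0e8           # effective heat capacity (J m^-2 K^-1), illustrative

def equilibrate(forcing, T=288.0, years=200):
    dt = 365.25 * 24 * 3600.0  # one-year timestep
    for _ in range(years):
        absorbed = S0 * (1 - ALBEDO) / 4 + forcing
        outgoing = EMISSIVITY * SIGMA * T**4
        T += dt * (absorbed - outgoing) / C
    return T

baseline = equilibrate(forcing=0.0)  # settles near 288 K
warmed = equilibrate(forcing=3.7)    # a CO2-doubling-sized push: ~1 K higher
```

Each fixed forcing produces a new, stable equilibrium temperature; a bigger push simply moves the balance point higher. What this sketch leaves out, of course, is that in the real system the forcing keeps growing, and other loops modify the response.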

Meanwhile, the aerosols provide a slight cooling effect, but they wash out of the atmosphere fairly quickly, so their overall concentration isn’t rising much. Carbon dioxide does not wash out quickly – it can remain in the atmosphere for thousands of years. Hence the warming effect dominates.

Now, if that was the whole picture, climate change would be very predictable, using basic thermodynamic principles. Unfortunately, there are other feedback loops that we haven’t considered yet. Here’s one:

The basic climate system with the ice albedo feedback

As the temperature rises, the ice sheets start to melt and shrink. These include the Arctic sea ice, the ice sheets on Greenland and Antarctica, and mountain glaciers across the world. When sea ice melts, it leaves more sea exposed, which is much darker than the ice. When land ice melts, it uncovers rocks, soils, and (eventually) plants, all of which are also darker than ice. Because of this, loss of ice leads to a lower albedo for the planet. A lower albedo means less of the incoming sunlight is reflected straight back into space, so more reaches the surface. In other words, less albedo means more incoming solar radiation. And, as we already know, this leads to more energy retained and more warming. In short, it is a reinforcing feedback loop.

As a quick check, we can use the rule of thumb that reinforcing loops have an even number of ‘-’ links. Trace the path of this loop to check:

Ice albedo feedback loop on its own

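
The sign-counting rule can be written out mechanically. Here is a small sketch, with the link polarities read off the loops as described in the text:

```python
# Multiply the link polarities around a loop: an even number of '-' links
# gives an overall '+' (reinforcing), an odd number gives '-' (balancing).

def classify(link_signs):
    product = 1
    for sign in link_signs:
        product *= {'+': 1, '-': -1}[sign]
    return 'reinforcing' if product == 1 else 'balancing'

# Ice albedo loop: temperature -(-)-> ice cover -(+)-> albedo
#                  -(-)-> incoming solar absorbed -(+)-> temperature
ice_albedo = classify(['-', '+', '-', '+'])  # two '-' links: reinforcing

# Main loop: temperature -(+)-> outgoing radiation
#            -(-)-> energy retained -(+)-> temperature
energy_balance = classify(['+', '-', '+'])   # one '-' link: balancing
```
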
Because this is a reinforcing loop, it can modify the behaviour of the basic energy balancing loop. If a warming process starts, this loop can accelerate it, and cause more warming than we’d expect from just the main balancing loop. In extreme cases, a reinforcing loop can completely destabilize a system that is normally dominated by balancing loops. However, all reinforcing loops must also have limits (remember: nothing can grow forever). In this case, there is clearly a limit once all the ice sheets on the planet have melted. The loop can no longer operate at that point.
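
To see how a reinforcing loop amplifies the response of a balancing loop, we can bolt a crude ice-albedo term onto a toy energy balance model. All constants are illustrative, and the linear albedo response is a made-up placeholder, clamped to give the loop a limit:

```python
# Toy energy balance with an optional ice-albedo feedback: albedo falls
# slightly as temperature rises, until the ice is gone (the clamp).
SIGMA = 5.67e-8     # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 1361.0         # solar constant (W m^-2)
EMISSIVITY = 0.612  # crude stand-in for the natural greenhouse effect
C = 2.0e8           # effective heat capacity (J m^-2 K^-1), illustrative

def albedo(T, feedback_on):
    if not feedback_on:
        return 0.30
    # Hypothetical linear response; the max() clamp is the loop's limit:
    # once the ice has melted, albedo stops falling.
    return max(0.25, 0.30 - 0.005 * (T - 288.0))

def equilibrate(forcing, feedback_on, T=288.0, years=300):
    dt = 365.25 * 24 * 3600.0  # one-year timestep
    for _ in range(years):
        absorbed = S0 * (1 - albedo(T, feedback_on)) / 4 + forcing
        T += dt * (absorbed - EMISSIVITY * SIGMA * T**4) / C
    return T

warming_plain = equilibrate(3.7, False) - equilibrate(0.0, False)
warming_amped = equilibrate(3.7, True) - equilibrate(0.0, True)
# warming_amped comes out roughly twice warming_plain: the reinforcing
# loop doesn't cause warming by itself, but amplifies the forced warming.
```
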

Here’s another reinforcing loop:

Climate system with permafrost feedback

In this loop, as the temperature rises, it melts the permafrost across northern Canada and Russia, releasing methane from the frozen soils. Methane is a greenhouse gas, so this loop also accelerates the warming. Again, it’s a reinforcing loop, and again, there’s a limit: the loop must stop once all the permafrost has melted.

Here’s another:

Climate system with carbon sinks feedback

This loop occurs because the more greenhouse gases we put into the atmosphere, the more work the carbon sinks have to do. Carbon sinks include the ocean and soils – they slowly remove carbon dioxide from the atmosphere. But the more carbon they have to absorb, the less effective they are at taking more. There’s an additional effect for the ocean, because a warmer ocean is less able to absorb CO2. Some model studies even suggest that after a few degrees of warming, the ocean might stop being a carbon sink and start being a source.
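
The saturation effect can be sketched with a toy airborne-fraction calculation. The numbers are purely illustrative; real sink behaviour is far more complex:

```python
# Toy saturating carbon sink: the fraction of each year's emissions the
# sink absorbs declines as its cumulative uptake approaches a capacity.

def airborne_fraction(emissions_per_year, years, capacity=1000.0):
    absorbed_total = 0.0
    airborne_total = 0.0
    for _ in range(years):
        # Sink efficiency falls linearly from 0.5 toward 0 as it fills.
        efficiency = 0.5 * max(0.0, 1.0 - absorbed_total / capacity)
        absorbed = efficiency * emissions_per_year
        absorbed_total += absorbed
        airborne_total += emissions_per_year - absorbed
    return airborne_total / (emissions_per_year * years)

early = airborne_fraction(10.0, years=20)   # sink still effective (~0.52)
late = airborne_fraction(10.0, years=200)   # sink saturating (~0.68)
```

The longer the sinks have been working, the larger the share of each year’s emissions that stays in the atmosphere, which is precisely what makes this loop reinforcing.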

So, putting that all together, we have three reinforcing loops working to destabilize the main energy balance loop. The main loop tends to limit the amount of warming we might expect, and the reinforcing loops all tend to increase it:

All three reinforcing loops working together

Remember, all three reinforcing loops might operate at once. More likely, each will kick in at a different time as the planet warms. Predicting when that might occur is hard, as is calculating the likely size of each effect. We can calculate absolute limits for each of these reinforcing loops, but there are likely to be other reasons why a loop stops working before reaching its absolute limit.

One of the goals of climate modelling is to capture these kinds of feedbacks in a computational model, to attempt to quantify the effects, so that we can understand them better. We can use both basic physics and empirical observations to put numbers on each of the relationships in the diagram, and we can experiment with the model to test how sensitive it is to different kinds of perturbation, especially in areas where it’s hard to be sure about the numbers.

However, there’s also the possibility that we missed some important feedback loops. In the model above, we have missed an important one, to do with clouds. We’ll meet that in the next post…
