I’m heading off to Florence this week for the International Conference on Software Engineering (ICSE). The highlight of the week will be a panel session I’m chairing, on the Karlskrona Manifesto. The manifesto itself is something we’ve been working on since last summer, when a group of us wrote the first draft at the Requirements Engineering conference in Karlskrona, Sweden (hence the name). This week we’re launching a website for the manifesto, and we’ve published a longer technical paper about it at ICSE.

The idea of the manifesto is to inspire deeper analysis of the roles and responsibilities of technology designers (and especially software designers), given that software systems now shape so much of modern life. We rarely stop to think about the unintended consequences of very large numbers of people using our technologies, nor do we ask whether, on balance, an idea that looks cool on paper will merely help push us even further into unsustainable behaviours. The position we take in the manifesto is that, as designers, our responsibility for the consequences of our designs is much broader than most of us acknowledge, and it’s time to do something about it.

For the manifesto, we ended up thinking about sustainability in terms of five dimensions:

  • Environmental sustainability: the long term viability of natural systems, including ecosystems, resource consumption, climate, pollution, food, water, and waste.
  • Social sustainability: the quality of social relationships and the factors that tend to improve or erode trust in society, such as social equity, democracy, and justice.
  • Individual sustainability: the health and well-being of people as individuals, including mental and physical well-being, education, self-respect, skills, and mobility.
  • Economic sustainability: the long term viability of economic activities, such as businesses and nations, including issues such as investment, wealth creation and prosperity.
  • Technical sustainability: the ability to sustain technical systems and their infrastructures, including software maintenance, innovation, obsolescence, and data integrity.

There are, of course, plenty of other ways of defining sustainability (which we discuss in the paper), and some hard constraints in some dimensions – e.g. we cannot live beyond the resource limits of the planet, no matter how much progress we make towards sustainability in the other dimensions. But a key insight is that all five dimensions matter, and none of them can be treated in isolation. For example, we might think we’re doing fine in one dimension – economic, say, as we launch a software company with a sound business plan that can make a steady profit – but often we do so only by incurring a debt in other dimensions, perhaps harming the environment by contributing to the mountains of e-waste, or harming social sustainability by replacing skilled jobs with subsistence labour.

The manifesto characterizes a set of problems in how technologists normally think about sustainability (if they do), and ends with a set of principles for sustainability design:

  • Sustainability is systemic. Sustainability is never an isolated property. Systems thinking has to be the starting point for the transdisciplinary common ground of sustainability.
  • Sustainability has multiple dimensions. We have to include these dimensions in our analysis if we are to understand the nature of sustainability in any given situation.
  • Sustainability transcends multiple disciplines. Working in sustainability means working with people from across many disciplines, addressing the challenges from multiple perspectives.
  • Sustainability is a concern independent of the purpose of the system. Sustainability has to be considered even if the primary focus of the system under design is not sustainability.
  • Sustainability applies to both a system and its wider contexts. There are at least two spheres to consider in system design: the sustainability of the system itself and how it affects sustainability of the wider system of which it will be part.
  • Sustainability requires action on multiple levels. Some interventions have more leverage on a system than others. Whenever we take action towards sustainability, we should consider opportunity costs: action at other levels may offer more effective forms of intervention.
  • System visibility is a necessary precondition and enabler for sustainability design. The status of the system and its context should be visible at different levels of abstraction and perspectives to enable participation and informed responsible choice.
  • Sustainability requires long-term thinking. We should assess benefits and impacts on multiple timescales, and include longer-term indicators in assessment and decisions.
  • It is possible to meet the needs of future generations without sacrificing the prosperity of the current generation. Innovation in sustainability can play out as decoupling present and future needs. By moving away from the language of conflict and the trade-off mindset, we can identify and enact choices that benefit both present and future.

You can read the full manifesto at sustainabilitydesign.org, and follow the discussion on twitter. I’m looking forward to lots of constructive discussions this week.

For our course about the impacts of the internet, we developed an exercise to get our students thinking critically about the credibility of things they find on the web. As a number of colleagues have expressed an interest in this, I thought I would post it here. Feel free to use it and adapt it!

Near the beginning of the course, we set the students to read the chapter “Crap Detection 101: How to Find What You Need to Know, and How to Decide If It’s True” from Rheingold & Weeks’ book NetSmart. During the tutorial, we get them working in small groups, and give them several carefully selected web pages to test their skills on. We pick webpages that are neither too easy nor too hard, and use a mix of credible and misleading ones. It’s a real eye-opener for our students.

To guide them in the activity, we give them the following list of tips (originally distilled from the book by our TA, Matt King, who wrote the first draft of the worksheet).

Tactics for Detecting Crap on the Internet

Here’s a checklist of tactics to use to help you judge the credibility of web pages. Different tactics will be useful for different web pages – use your judgment to decide which tactics to try first. If you find some of these don’t apply, or don’t seem to give you useful information, think about why that is. Make notes about the credibility of each webpage you explored, and which tactics you used to determine its credibility.

  1. Authorship
    • Is the author of a given page named? Who is s/he?
    • What do others say about the author?
  2. Sources cited
    • Does the article include links (or at least references) to sources?
    • What do these sources tell us about credibility and/or bias?
  3. Ownership of the website
    • Can you find out who owns the site? (e.g. look it up using www.easywhois.com)
    • What is the domain name? Does the “pedigree” of a site convince us of its trustworthiness?
    • Who funds the owner’s activities? (e.g. look them up on http://www.sourcewatch.org)
  4. Connectedness
    • How much traffic does this site get? (e.g. use www.alexa.com for stats/demographics)
    • Do the demographics tell you anything about the website’s audience? (see alexa.com again)
    • Do other websites link to this page? (e.g. google with the search term “link: http://*paste URL here*”)? If so, who are the linkers?
    • Is the page ranked highly when searched for from at least two search engines?
  5. Design & Interactivity
    • Do the website’s design and other structural features (such as grammar) tell us anything about its credibility?
    • Does the page have an active comment section? If so, does the author respond to comments?
  6. Triangulation
    • Can you verify the content of a page by “triangulating” its claims with at least two or three other reliable sources?
    • Do fact-checking sites have anything useful on this topic? (e.g. try www.factcheck.org)
    • Are there topic-specific sites that do factchecking? (e.g. www.snopes.com for urban legends, www.skepticalscience.com for climate science). Note: How can you tell whether these sites are credible?
  7. Check your own biases
    • Overall, what’s your personal stake in the credibility of this page’s content?
    • How much time do you think you should allocate to verifying its reliability?

(Download the full worksheet)

It’s been a while since I’ve written about the question of climate model validation, but I regularly get asked about it when I talk about the work I’ve been doing studying how climate models are developed. There’s an upcoming conference organized by the Rotman Institute of Philosophy, in London, Ontario, on Knowledge and Models in Climate Science, at which many of my favourite thinkers on this topic will be speaking. So I thought it was a good time to get philosophical about this again, and define some terms that I think help frame the discussion (at least in the way I see it!).

Here’s my abstract for the conference:

Constructive and External Validity for Climate Modeling

Discussions of the validity of scientific computational models tend to treat “the model” as a unitary artifact, and ask questions about its fidelity with respect to observational data, and its predictive power with respect to future situations. For climate modeling, both of these questions are problematic, because of long timescales and inhomogeneities in the available data. Our ethnographic studies of the day-to-day practices of climate modelers suggest an alternative framework for model validity, focusing on a modeling system rather than any individual model. Any given climate model can be configured for a huge variety of different simulation runs, and only ever represents a single instance of a continually evolving body of program code. Furthermore, its execution is always embedded in a broader social system of scientific collaboration which selects suitable model configurations for specific experiments, and interprets the results of the simulations within the broader context of the current body of theory about earth system processes.

We propose that the validity of a climate modeling system should be assessed with respect to two criteria: Constructive Validity, which refers to the extent to which the day-to-day practices of climate model construction involve the continual testing of hypotheses about the ways in which earth system processes are coded into the models, and External Validity, which refers to the appropriateness of claims about how well model outputs ought to correspond to past or future states of the observed climate system. For example, a typical feature of the day-to-day practice of climate model construction is the incremental improvement of the representation of specific earth system processes in the program code, via a series of hypothesis-testing experiments. Each experiment begins with a hypothesis (drawn from current or emerging theories about the earth system) that a particular change to the model code ought to result in a predictable change to the climatology produced by various runs of the model. Such a hypothesis is then tested empirically, using the current version of the model as a control, and the modified version of the model as the experimental case. Such experiments are then replicated for various configurations of the model, and results are evaluated in a peer review process via the scientific working groups who are responsible for steering the ongoing model development effort.

Assessment of constructive validity for a modeling system would take account of how well the day-to-day practices in a climate modeling laboratory adhere to rigorous standards for such experiments, and how well they routinely test the assumptions that are built into the model in this way. Similarly, assessment of the external validity of the modeling system would take account of how well knowledge of the strengths and weaknesses of particular instances of the model is taken into account when making claims about the scope of applicability of model results. We argue that such an approach offers a more coherent treatment of questions of model validity, as it corresponds more directly with the way in which climate models are developed and used.

For more background, see:

I’ll be heading off to Stockholm in August to present a paper at the 2nd International Conference on Information and Communication Technologies for Sustainability (ICT4S’2014). The theme of the conference this year is “ICT and transformational change”, which got me thinking about how we think about change, and especially whether we equip students in computing with the right conceptual toolkit to think about change. I ended up writing a long critique of Computational Thinking, which has become popular lately as a way of describing what we teach in computing undergrad programs. I don’t think there’s anything wrong with computational thinking in small doses. But when an entire university program teaches nothing but computational thinking, we turn out generations of computing professionals who are ill-equipped to think about complex societal issues. This then makes them particularly vulnerable to technological solutionism. I hope the paper will provoke some interesting discussion!

Here’s the abstract for my paper (click here for the full paper):

From Computational Thinking to Systems Thinking: A conceptual toolkit for sustainability computing

Steve Easterbrook, University of Toronto

If information and communication technologies (ICT) are to bring about a transformational change to a sustainable society, then we need to transform our thinking. Computer professionals already have a conceptual toolkit for problem solving, sometimes known as computational thinking. However, computational thinking tends to see the world in terms of a series of problems (or problem types) that have computational solutions (or solution types). Sustainability, on the other hand, demands a more systemic approach, to avoid technological solutionism, and to acknowledge that technology, human behaviour and environmental impacts are tightly inter-related. In this paper, I argue that systems thinking provides the necessary bridge from computational thinking to sustainability practice, as it provides a domain ontology for reasoning about sustainability, a conceptual basis for reasoning about transformational change, and a set of methods for critical thinking about the social and environmental impacts of technology. I end the paper with a set of suggestions for how to build these ideas into the undergraduate curriculum for computer and information sciences.

At the beginning of March, I was invited to give a talk at TEDxUofT. Colleagues tell me the hardest part of giving these talks is deciding what to talk about. I decided to see if I could answer the question of whether we can trust climate models. It was a fascinating and nerve-wracking experience, quite unlike any talk I’ve given before. Of course, I’d love to do another one, as I now know more about what works and what doesn’t.

Here’s the video and a transcript of my talk. [The bits in square brackets are things I intended to say but forgot!]

Computing the Climate: How Can a Computer Model Forecast the Future? TEDxUofT, March 1, 2014.

Talking about the weather forecast is a great way to start a friendly conversation. The weather forecast matters to us. It tells us what to wear in the morning; it tells us what to pack for a trip. We also know that weather forecasts can sometimes be wrong, but we’d be foolish to ignore them when they tell us a major storm is heading our way.

[Unfortunately, talking about climate forecasts is often a great way to end a friendly conversation!] Climate models tell us that by the end of this century, if we carry on burning fossil fuels at the rate we have been doing, and we carry on cutting down forests at the rate we have been doing, the planet will warm by somewhere between 5 and 6 degrees centigrade. That might not seem much, but, to put it into context, in the entire history of human civilization, the average temperature of the planet has not varied by more than 1 degree. So that forecast tells us something major is coming, and we probably ought to pay attention to it.

But on the other hand, we know that weather forecasts don’t work so well the longer into the future we peer. Tomorrow’s forecast is usually pretty accurate. Three day and five day forecasts are reasonably good. But next week? They always change their minds before next week comes. So how can we peer 100 years into the future and look at what is coming with respect to the climate? Should we trust those forecasts? Should we trust the climate models that provide them to us?

Six years ago, I set out to find out. I’m a professor of computer science. I study how large teams of software developers can put together complex pieces of software. I’ve worked with NASA, studying how NASA builds the flight software for the Space Shuttle and the International Space Station. I’ve worked with large companies like Microsoft and IBM. My work focusses not so much on software errors, but on the reasons why people make those errors, and how programmers then figure out they’ve made an error, and how they know how to fix it.

To start my study, I visited four major climate modelling labs around the world: in the UK; in Paris; in Hamburg, Germany; and in Colorado. Each of these labs typically has somewhere between 50 and 100 scientists contributing code to their climate models. And although I only visited four of these labs, there are another twenty or so around the world, all doing similar things. They run these models on some of the fastest supercomputers in the world, and many of these models have been under continuous development for more than 20 years.

When I started this study, I asked one of my students to attempt to measure how many bugs there are in a typical climate model. We know from our experience with software that there are always bugs. Sooner or later the machine crashes. So how buggy are climate models? More specifically, what we set out to measure is what we call “defect density” – the number of errors per thousand lines of code. By this measure, it turns out climate models are remarkably high quality. In fact, they’re better than almost any commercial software that’s ever been studied. They’re about the same level of quality as the Space Shuttle flight software. Here are my results (for the actual results you’ll have to read the paper):

[Figure: defect density results]
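For readers who want to see the mechanics, defect density is just a ratio, and a toy calculation looks like this (the numbers below are made up for illustration – the real data is in the paper):

```python
# Toy illustration of the "defect density" measure: defects reported
# per thousand lines of code (KLOC). The numbers here are made up;
# see the paper for the real study data.

def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects reported per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000.0)

# A hypothetical 400,000-line model with 100 defects reported in a release:
print(defect_density(100, 400_000))   # -> 0.25 defects/KLOC
```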

We know it’s very hard to build a large complex piece of software without making mistakes.  Even the space shuttle’s software had errors in it. So the question is not “is the software perfect for predicting the future?”. The question is “Is it good enough?” Is it fit for purpose?

To answer that question, we’d better understand what the purpose of a climate model is. First of all, I’d better be clear what a climate model is not. A climate model is not a projection of trends we’ve seen in the past extrapolated into the future. If you did that, you’d be wrong, because you haven’t accounted for what actually causes the climate to change, and so the trend might not continue. They are also not decision-support tools. A climate model cannot tell us what to do about climate change. It cannot tell us whether we should be building more solar panels, or wind farms. It can’t tell us whether we should have a carbon tax. It can’t tell us what we ought to put into an international treaty.

What it does do is tell us how the physics of planet earth work, and what the consequences are of changing things, within that physics. I could describe it as “computational fluid dynamics on a rotating sphere”. But computational fluid dynamics is complex.

I went into my son’s fourth grade class recently, and I started to explain what a climate model is, and the first question they asked me was “is it like Minecraft?”. Well, that’s not a bad place to start. If you’re not familiar with Minecraft, it divides the world into blocks, and the blocks are made of stuff. They might be made of wood, or metal, or water, or whatever, and you can build things out of them. There’s no gravity in Minecraft, so you can build floating islands and it’s great fun.

Climate models are a bit like that. To build a climate model, you divide the world into a number of blocks. The difference is that in Minecraft, the blocks are made of stuff. In a climate model, the blocks are really blocks of space, through which stuff can flow. At each timestep, the program calculates how much water, air, or ice is flowing into or out of each block, and in which direction. It calculates changes in temperature, density, humidity, and so on. And whether stuff such as dust, salt, and pollutants is passing through or accumulating in each block. We have to account for the sunlight passing down through the block during the day. Some of what’s in each block might filter some of the incoming sunlight, for example if there are clouds or dust, so some of the sunlight doesn’t get down to the blocks below. There’s also heat escaping upwards through the blocks, and again, some of what is in the block might trap some of that heat — for example clouds and greenhouse gases.
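If you’re curious what “calculating the flows between blocks” looks like in code, here’s a toy sketch in Python. Real climate models are vastly more elaborate (and mostly written in Fortran), but the timestep-over-blocks structure is the same basic idea:

```python
# Toy sketch of a "blocks and timesteps" model: a row of blocks, each
# holding a temperature. At every timestep, heat flows between
# neighbouring blocks in proportion to their temperature difference.
import numpy as np

n_blocks = 100
temp = np.full(n_blocks, 15.0)   # start every block at 15°C
temp[40:60] = 25.0               # a warm patch in the middle
k = 0.1                          # exchange coefficient per timestep

for step in range(1000):
    flux = k * (temp[1:] - temp[:-1])   # flow between adjacent blocks
    temp[:-1] += flux                   # each block gains heat from a warmer neighbour...
    temp[1:] -= flux                    # ...which loses the same amount (heat is conserved)

print(temp.round(1))   # the warm patch has spread out and flattened
```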

As you can see from this diagram, the blocks can be pretty large. The upper figure shows blocks of 87km on a side. If you want more detail in the model, you have to make the blocks smaller. Some of the fastest climate models today look more like the lower figure:

[Figure: model grid resolutions]

Ideally, you want to make the blocks as small as possible, but then you have many more blocks to keep track of, and you get to the point where the computer just can’t run fast enough. For a typical run of a climate model, simulating a century’s worth of climate, you might have to wait a couple of weeks on some of the fastest supercomputers for that run to complete. So the speed of the computer limits how small we can make the blocks.
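To see why the cost rises so quickly (a rough rule of thumb, not a statement about any particular model): halving the size of the blocks in each horizontal direction means 2 × 2 = 4 times as many blocks, and keeping the simulation numerically stable then typically requires halving the timestep as well, so each halving of the block size makes a run roughly 2 × 2 × 2 = 8 times more expensive.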

Building models this way is remarkably successful. Here’s a video of what a climate model can do today. This simulation shows a year’s worth of weather from a climate model. What you’re seeing is clouds, and, in orange, where it’s raining. Compare that to a year’s worth of satellite data for the year 2013. If you put them side by side, you can see many of the same patterns. You can see the westerlies, the winds at the top and bottom of the globe, heading from west to east, and nearer the equator, you can see the trade winds flowing in the opposite direction. If you look very closely, you might even see a pulse over South America, and a similar one over Africa in both the model and the satellite data. That’s the daily cycle as the land warms up in the morning and the moisture evaporates from soils and plants, and then later on in the afternoon as it cools, it turns into rain.

Note that the bottom is an actual year, 2013, while the top, the model simulation, is not a real year at all – it’s a typical year. So the two don’t correspond exactly. You won’t get storms forming at the same time, because it’s not designed to be an exact simulation; the climate model is designed to get the patterns right. And by and large, it does. [These patterns aren’t coded into this model. They emerge as a consequence of getting the basic physics of the atmosphere right].

So how do you build a climate model like this? The answer is “very slowly”. It takes a lot of time, and a lot of failure. One of the things that surprised me when I visited these labs is that the scientists don’t build these models to try and predict the future. They build these models to try and understand the past. They know their models are only approximations, and they regularly quote the statistician, George Box, who said “All models are wrong, but some are useful”. What he meant is that any model of the world is only an approximation. You can’t get all the complexity of the real world into a model. But even so, even a simple model is a good way to test your theories about the world.

So the way that modellers work is that they spend their time focussing on places where the model isn’t quite right. For example, maybe the model isn’t getting the Indian monsoon right. Perhaps it’s getting the amount of rain right, but it’s falling in the wrong place. They then form a hypothesis. They’ll say, I think I can improve the model, because I think this particular process is responsible, and if I improve that process in a particular way, then that should fix the simulation of the monsoon cycle. And then they run a whole series of experiments, comparing the old version of the model, which is getting it wrong, with the new version, to test whether the hypothesis is correct. And if after a series of experiments, they believe their hypothesis is correct, they have to convince the rest of the modelling team that this really is an improvement to the model.

In other words, to build the models, they are doing science. They are developing hypotheses, they are running experiments, and using a peer review process to convince their colleagues that what they have done is correct:

[Figure: the model development process]

Climate modellers also have a few other weapons up their sleeves. Imagine for a moment if Microsoft had 25 competitors around the world, all of whom were attempting to build their own versions of Microsoft Word. Imagine further that every few years, those 25 companies all agreed to run their software on a very complex battery of tests, designed to test all the different conditions under which you might expect a word processor to work. And not only that, but they agreed to release all the results of those tests to the public, on the internet, so that anyone who wanted to use any of that software could pore over all the data and find out how well each version did, and decide which version they want to use for their own purposes. Well, that’s what climate modellers do. There is no other software in the world for which 25 independent teams are trying to build the same thing, and competing with each other.

Climate modellers also have some other advantages. In some sense, climate modelling is actually easier than weather forecasting. I can show you what I mean by that. Imagine I had a water balloon (actually, you don’t have to imagine – I have one here):

[Photo: about to throw the water balloon]

I’m going to throw it at the fifth row. Now, you might want to know who will get wet. You could measure everything about my throw: Will I throw underarm, or overarm? Which way am I facing when I let go of it? How much swing do I put in? If you could measure all of those aspects of my throw, and you understand the physics of how objects move, you could come up with a fairly accurate prediction of who is going to get wet.

That’s like weather forecasting. We have to measure the current conditions as accurately as possible, and then project forward to see what direction it’s moving in:

[Figure: weather forecasting]

If I make any small mistakes in measuring my throw, those mistakes will multiply as the balloon travels further. The further I attempt to throw it, the more room there is for inaccuracy in my estimate. That’s like weather forecasting. Any errors in the initial conditions multiply up rapidly, and the current limit appears to be about a week or so. Beyond that, the errors get so big that we just cannot make accurate forecasts.

In contrast, climate models would be more like releasing a balloon into the wind, and predicting where it will go by knowing about the wind patterns. I’ll make some wind here using a fan:

[Photo: balloon in the wind]

Now that balloon is going to bob about in the wind from the fan. I could go away and come back tomorrow and it will still be doing about the same thing. If the power stays on, I could leave it for a hundred years, and it might still be doing the same thing. I won’t be able to predict exactly where that balloon is going to be at any moment, but I can predict, very reliably, the space in which it will move. I can predict the boundaries of its movement. And if the things that shape those boundaries change, for example by moving the fan, and I know what the factors are that shape those boundaries, I can tell you how the patterns of its movements are going to change – how the boundaries are going to change. So we call that a boundary problem:

[Figure: climate as a boundary problem]

The initial conditions are almost irrelevant. It doesn’t matter where the balloon started, what matters is what’s shaping its boundary.

So can these models predict the future? Are they good enough to predict the future? The answer is “yes and no”. We know the models are better at some things than others. They’re better at simulating changes in temperature than they are at simulating changes in rainfall. We also know that each model tends to be stronger in some areas and weaker in others. If you take the average of a whole set of models, you get a much better simulation of how the planet’s climate works than if you look at any individual model on its own. What happens is that the weaknesses in any one model are compensated for by other models that don’t have those weaknesses.
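Here’s a toy numerical illustration of why the multi-model average helps, under the idealized assumption that each model’s errors are at least partly independent of the others’:

```python
# Toy illustration of the multi-model mean. Each fake "model" is the
# truth plus its own bias and noise; with partly independent errors,
# averaging N models shrinks the error roughly like 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(42)
truth = np.sin(np.linspace(0, 2 * np.pi, 360))   # a stand-in "true" climatology

n_models = 20
models = (truth
          + rng.normal(0, 0.3, (n_models, 1))            # per-model bias
          + rng.normal(0, 0.3, (n_models, truth.size)))  # per-model noise

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))

print("average error of individual models:", round(float(np.mean([rmse(m) for m in models])), 3))
print("error of the multi-model mean:     ", round(float(rmse(models.mean(axis=0))), 3))
```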

But the results of the models have to be interpreted very carefully, by someone who knows what the models are good at, and what they are not good at – you can’t just take the output of a model and say “that’s how it’s going to be”.

Also, you don’t actually need a computer model to predict climate change. The first predictions of what would happen if we keep on adding carbon dioxide to the atmosphere were produced over 120 years ago. That’s fifty years before the first digital computer was invented. And those predictions were pretty accurate – what has happened over the twentieth century has followed very closely what was predicted all those years ago. Scientists also predicted, for example, that the arctic would warm faster than the equatorial regions, and that’s what happened. They predicted night time temperatures would rise faster than day time temperatures, and that’s what happened.

So in many ways, the models only add detail to what we already know about the climate. They allow scientists to explore “what if” questions. For example, you could ask of a model, what would happen if we stopped burning all fossil fuels tomorrow. And the answer from the models is that the temperature of the planet will stay at whatever temperature it was when we stopped. For example, if we wait twenty years, and then stop, we’re stuck with whatever temperature we’re at for tens of thousands of years. You could ask a model what happens if we dig up all known reserves of fossil fuels, and burn them all at once, in one big party? Well, it gets very hot.

More interestingly, you could ask what if we tried blocking some of the incoming sunlight to cool the planet down, to compensate for some of the warming we’re getting from adding greenhouse gases to the atmosphere? There have been a number of very serious proposals to do that. There are some who say we should float giant space mirrors. That might be hard, but a simpler way of doing it is to put dust up in the stratosphere, and that blocks some of the incoming sunlight. It turns out that if you do that, you can very reliably bring the average temperature of the planet back down to whatever level you want, just by adjusting the amount of the dust. Unfortunately, some parts of the planet cool too much, and others not at all. The crops don’t grow so well, and everyone’s weather gets messed up. So it seems like that could be a solution, but when you study the model results in detail, there are too many problems.

Remember that we know fairly well what will happen to the climate if we keep adding CO2, even without using a computer model, and the computer models just add detail to what we already know. If the models are wrong, they could be wrong in either direction. They might under-estimate the warming just as much as they might over-estimate it. If you look at how well the models can simulate the past few decades, especially the last decade, you’ll see some of both. For example, the models have under-estimated how fast the arctic sea ice has melted. The models have underestimated how fast the sea levels have risen over the last decade. On the other hand, they over-estimated the rate of warming at the surface of the planet. But they underestimated the rate of warming in the deep oceans, so some of the warming ends up in a different place from where the models predicted. So they can under-estimate just as much as they can over-estimate. [The less certain we are about the results from the models, the bigger the risk that the warming might be much worse than we think.]

So when you see a graph like this, which comes from the latest IPCC report that just came out last month, it doesn’t tell us what to do about climate change, it just tells us the consequences of what we might choose to do. Remember, humans aren’t represented in the models at all, except in terms of us producing greenhouse gases and adding them to the atmosphere.

[Figure: IPCC AR5 WG1 Fig. 12.5]

If we keep on increasing our use of fossil fuels — finding more oil, building more pipelines, digging up more coal, we’ll follow the top path. And that takes us to a planet that by the end of this century, is somewhere between 4 and 6 degrees warmer, and it keeps on getting warmer over the next few centuries. On the other hand, the bottom path, in dark blue, shows what would happen if, year after year from now onwards, we use less fossil fuels than we did the previous year, until about mid-century, when we get down to zero emissions, and we invent some way to start removing that carbon dioxide from the atmosphere before the end of the century, to stay below 2 degrees of warming.

The models don’t tell us which of these paths we should follow. They just tell us that if this is what we do, here’s what the climate will do in response. You could say that what the models do is take all the data and all the knowledge we have about the climate system and how it works, and put them into one neat package, and it’s our job to take that knowledge and turn it into wisdom. And to decide which future we would like.

My department is busy revising the set of milestones our PhD students need to meet in the course of their studies. The milestones are intended to ensure each student is making steady progress, and to identify (early!) any problems. At the moment they don’t really do this well, in part because the faculty all seem to have different ideas about what we should expect at each milestone. (This is probably a special case of the general rule that if you gather n professors together, they will express at least n+1 mutually incompatible opinions). As a result, the students don’t really know what’s expected of them, and hence spend far longer in the PhD program than they would need to if they received clear guidance.

Anyway, in order to be helpful, I wrote down what I think are the set of skills that a PhD student needs to demonstrate early in the program, as a prerequisite for becoming a successful researcher:

  1. The ability to select a small number of significant research contributions from a larger set of published papers, and justify that selection.
  2. The ability to articulate a rationale for selection of these papers, on the basis of significance of the results, novelty of the approach, etc.
  3. The ability to relate the papers to one another, and to other research in the literature.
  4. The ability to critique the research methods used in these papers, the strengths and weaknesses of these methods, and likely threats to validity, whether acknowledged in the papers or not.
  5. The ability to suggest alternative approaches to answering the research questions posed in these papers.
  6. The ability to identify limitations on the results reported in the papers, along with their implications.
  7. The ability to identify and prioritize lines of investigation for further research, based on limitations of the research described in the papers and/or important open problems that the papers fail to answer.

My suggestion is that at the end of the first year of the PhD program, each student should demonstrate development of these skills by writing a short report that selects and critiques a handful (4-6) of papers in a particular subfield. If a student can’t do this well, they’re probably not going to succeed in the PhD program.

My proposal has now gone to the relevant committee (“where good ideas go to die™”), so we’ll see what happens…

Imagine for a moment if Microsoft had 24 competitors around the world, each building their own version of Microsoft Word. Imagine further that every few years, they all agreed to run their software through the same set of very demanding tests of what a word processor ought to be able to do in a large variety of different conditions. And imagine that all these competing companies agreed that all the results from these tests would be freely available on the web, for anyone to see. Then, people who want to use a word processor can explore the data and decide for themselves which one best serves their purpose. People who have concerns about the reliability of word processors can analyze the strengths and weaknesses of each company’s software. Then think about what such a process would do to the reliability of word processors. Wouldn’t that be a great world to live in?

Well, that’s what climate modellers do, through a series of model inter-comparison projects. There are around 25 major climate modelling labs around the world developing fully integrated global climate models, and hundreds of smaller labs building specialized models of specific components of the earth system. The fully integrated models are compared in detail every few years through the Coupled Model Intercomparison Projects. And there are many other model inter-comparison projects for various specialist communities within climate science.

Have a look at how this process works, via this short paper on the planning process for CMIP6.

What’s the difference between forecasting the weather and predicting future climate change? A few years ago, I wrote a long post explaining that weather forecasting is an initial value problem, while climate is a boundary value problem. This is a much shorter explanation:

Imagine I were to throw a water balloon at you. If you could measure precisely how I threw it, and you understand the laws of physics correctly, you could predict precisely where it will go. If you could calculate it fast enough, you would know whether you’re going to get wet, or whether I’ll miss. That’s an initial value problem. The less precise your measurements of the initial value (how I throw it), the less accurate your prediction will be. Also, the longer the throw, the more the errors grow. This is how weather forecasting works – you measure the current conditions (temperature, humidity, wind speed, and so on) as accurately as possible, put them into a model that simulates the physics of the atmosphere, and run it to see how the weather will evolve. But the further into the future that you want to peer, the less accurate your forecast, because the errors on the initial value get bigger. It’s really hard to predict the weather more than about a week into the future:

Weather as an initial value problem
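You can see this error growth in miniature in any chaotic system. Here’s a sketch using the classic Lorenz equations (a standard chaotic toy, not a real weather model): two runs that start with a difference of one part in a billion soon bear no relation to each other:

```python
# Sensitivity to initial conditions in the Lorenz '63 system: a tiny
# initial error grows exponentially until the two runs are unrelated.
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(s, dt=0.01):   # one fourth-order Runge-Kutta timestep
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-9, 0.0, 0.0])   # differs by one part in a billion

for i in range(1, 4001):
    a, b = step(a), step(b)
    if i % 800 == 0:
        print(f"t = {i * 0.01:4.0f}: separation = {np.linalg.norm(a - b):.1e}")
# The separation grows exponentially until it saturates at the size of
# the attractor -- at that point the "forecast" carries no information.
```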

Now imagine I release a helium balloon into the air flow from a desk fan, and the balloon is on a string that’s tied to the fan casing. The balloon will reach the end of its string, and bob around in the stream of air. It doesn’t matter how exactly I throw the balloon into the airstream – it will keep on bobbing about in the same small area. I could leave it there for hours and it will do the same thing. This is a boundary value problem. I won’t be able to predict exactly where the balloon will be at any moment, but I will be able to tell you fairly precisely the boundaries of the space in which it will be bobbing. If anything affects these boundaries (e.g. because I move the fan a little), I should also be able to predict how this will shift the area in which the balloon will bob. This is how climate prediction works. You start off with any (reasonable) starting state, and run your model for as long as you like. If your model gets the physics right, it will simulate a stable climate indefinitely, no matter how you initialize it:

Climate as a boundary value problem

But if the boundary conditions change, because, for example, we alter the radiative balance of the planet, the model should also be able to predict fairly accurately how this will shift the boundaries on the climate:

Climate change as a change in boundary conditions
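To round out the analogy, the same toy system from above also shows the boundary-value behaviour: start it anywhere (reasonable) and its long-run statistics come out the same; change a parameter – the analogue of changing the boundary conditions – and the statistics shift:

```python
# The boundary-value view of the same Lorenz system: the long-run
# statistics ("climate") don't depend on where you start, but they do
# shift if you change a parameter (the analogue of a boundary condition).
import numpy as np

def lorenz(s, rho, sigma=10.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def climate_stats(s0, rho, n_steps=50_000, dt=0.01, spinup=5_000):
    s, zs = np.array(s0, dtype=float), []
    for i in range(n_steps):
        k1 = lorenz(s, rho)
        k2 = lorenz(s + 0.5 * dt * k1, rho)
        k3 = lorenz(s + 0.5 * dt * k2, rho)
        k4 = lorenz(s + dt * k3, rho)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        if i >= spinup:            # discard the spin-up, then collect statistics
            zs.append(s[2])
    return round(float(np.mean(zs)), 1), round(float(np.std(zs)), 1)

print(climate_stats([1, 1, 20], rho=28))    # two very different starting points...
print(climate_stats([-8, 7, 35], rho=28))   # ...give (almost) the same statistics
print(climate_stats([1, 1, 20], rho=35))    # changing the "forcing" shifts them
```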


We cannot predict what the weather will do on any given day far into the future. But if we understand the boundary conditions and how they are altered, we can predict fairly accurately how the range of possible weather patterns will be affected. Climate change is a change in the boundary conditions on our weather systems.

A few weeks ago, Mark Higgins, from EUMETSAT, posted this wonderful video of satellite imagery of planet earth for the whole of the year 2013. The video superimposes the aggregated satellite data from multiple satellites on top of NASA’s ‘Blue Marble Next Generation’ ground maps, to give a consistent picture of large scale weather patterns (Original video here – be sure to listen to Mark’s commentary):

When I saw the video, it reminded me of something. Here’s the output from CAM3, the atmospheric component of the global climate model CESM, run at very high resolution (Original video here):

I find it fascinating to play these two videos at the same time, and observe how the model captures the large scale weather patterns of the planet. The comparison isn’t perfect, because the satellite data measures the cloud temperature (the colder the clouds, the whiter they are shown), while the climate model output shows total water vapour & rain (i.e. warmer clouds are a lot more visible, and precipitation is shown in orange). This means the tropical regions look much drier in the satellite imagery than they do in the model output.

But even so, there are some remarkable similarities. For example, both videos clearly show the westerlies, the winds that flow from west to east at the top and bottom of the map (e.g. pushing rain across the North Atlantic to the UK), and they both show the trade winds, which flow from east to west, closer to the equator. Both videos also show how cyclones form in the regions between these wind patterns. For example, in both videos, you can see the typhoon season ramp up in the Western Pacific in August and September – the model has two hitting Japan in August, and the satellite data shows several hitting China in September. The curved tracks of these storms are similar in both videos. If you look closely, you can also see the daily cycle of evaporation and rain over South America and Central Africa in both videos – watch how these regions appear to pulse each day.

I find these similarities remarkable, because none of these patterns are coded into the climate model – they all emerge as a consequence of getting the basic thermodynamic properties of the atmosphere right. Remember also that a climate model is not intended to forecast the particular weather of any given year (that would be impossible, due to chaos theory). However, the model simulates a “typical” year on planet earth. So the specifics of where and when each storm forms do not correspond to anything that actually happened in any given year. But when the model gets the overall patterns about right, that’s a pretty impressive achievement.

I’ve been trawling through the final draft of the new IPCC assessment report that was released last week, to extract some highlights for a talk I gave yesterday. Here’s what I think are its key messages:

  1. The warming is unequivocal.
  2. Humans caused the majority of it.
  3. The warming is largely irreversible.
  4. Most of the heat is going into the oceans.
  5. Current rates of ocean acidification are unprecedented.
  6. We have to choose which future we want very soon.
  7. To stay below 2°C of warming, the world must become carbon negative.
  8. To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Before I elaborate on these, a little preamble. The IPCC was set up in 1988 as a UN intergovernmental body to provide an overview of the science. Its job is to assess what the peer-reviewed science says, in order to inform policymaking, but it is not tasked with making specific policy recommendations. The IPCC and its workings seem to be widely misunderstood in the media. The dwindling group of people who are still in denial about climate change particularly like to indulge in IPCC-bashing, which seems like a classic case of ‘blame the messenger’. The IPCC itself has a very small staff (no more than a dozen or so people). However, the assessment reports are written and reviewed by a very large team of scientists (several thousand), all of whom volunteer their time to work on the reports. The scientists are organised into three working groups: WG1 focuses on the physical science basis, WG2 focuses on impacts and climate adaptation, and WG3 focuses on how climate mitigation can be achieved.

Last week, just the WG1 report was released as a final draft, although it was accompanied by a bigger media event around the approval of the final wording on the WG1 “Summary for Policymakers”. The final version of the full WG1 report, plus the WG2 and WG3 reports, are not due out until spring next year. That means it’s likely to be subject to minor editing/correcting, and some of the figures might end up re-drawn. Even so, most of the text is unlikely to change, and the major findings can be considered final. Here’s my take on the most important findings, along with a key figure to illustrate each.

(1) The warming is unequivocal

The text of the summary for policymakers says “Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased.”


(Fig SPM.1) Observed globally averaged combined land and ocean surface temperature anomaly 1850-2012. The top panel shows the annual values; the bottom panel shows decadal means. (Note: Anomalies are relative to the mean of 1961-1990).

Unfortunately, there has been much play in the press around a silly idea that the warming has “paused” in the last decade. If you squint at the last few years of the top graph, you might be able to convince yourself that the temperature has been nearly flat for a few years, but only if you cherry pick your starting date, and use a period that’s too short to count as climate. When you look at it in the context of an entire century and longer, such arguments are clearly just wishful thinking.

The other thing to point out here is that the rate of warming is unprecedented. “With very high confidence, the current rates of CO2, CH4 and N2O rise in atmospheric concentrations and the associated radiative forcing are unprecedented with respect to the highest resolution ice core records of the last 22,000 years”, and there is “medium confidence that the rate of change of the observed greenhouse gas rise is also unprecedented compared with the lower resolution records of the past 800,000 years.” In other words, there is nothing in any of the ice core records that is comparable to what we have done to the atmosphere over the last century. The earth has warmed and cooled in the past due to natural cycles, but never anywhere near as fast as modern climate change.

(2) Humans caused the majority of it

The summary for policymakers says “It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century”.


(Box 13.1 fig 1) The Earth’s energy budget from 1970 to 2011. Cumulative energy flux (in zettaJoules!) into the Earth system from well-mixed and short-lived greenhouse gases, solar forcing, changes in tropospheric aerosol forcing, volcanic forcing and surface albedo, (relative to 1860–1879) are shown by the coloured lines and these are added to give the cumulative energy inflow (black; including black carbon on snow and combined contrails and contrail induced cirrus, not shown separately).

This chart summarizes the impact of different drivers of warming and/or cooling, by showing the total cumulative energy added to the earth system since 1970 from each driver. Note that the chart is in zettajoules (10²¹ J). For comparison, one zettajoule is about the energy that would be released from 16 million bombs of the size of the one dropped on Hiroshima. The world’s total annual global energy consumption is about 0.5 ZJ.
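To sanity-check that comparison: the Hiroshima bomb released roughly 15 kilotons of TNT, which is about 6.3 × 10¹³ J, and 10²¹ J ÷ 6.3 × 10¹³ J ≈ 1.6 × 10⁷ – about 16 million bombs per zettajoule.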

Long lived greenhouse gases, such as CO2, contribute the majority of the warming (the purple line). Aerosols, such as particles of industrial pollution, block out sunlight and cause some cooling (the dark blue line), but nowhere near enough to offset the warming from greenhouse gases. Note that aerosols have the largest uncertainty bar; much of the remaining uncertainty about the likely magnitude of future climate warming is due to uncertainty about how much of the warming might be offset by aerosols. The uncertainty on the aerosols curve is, in turn, responsible for most of the uncertainty on the black line, which shows the total effect if you add up all the individual contributions.

The graph also puts into perspective some of the other things that people like to blame for climate change, including changes in energy received from the sun (‘solar’), and the impact of volcanoes. Changes in the sun (shown in orange) are tiny compared to greenhouse gases, but do show a very slight warming effect. Volcanoes have a larger (cooling) effect, but it is short-lived. There were two major volcanic eruptions in this period, El Chichón in 1982 and Pinatubo in 1991. Each can be clearly seen in the graph as an immediate cooling effect, which then tapers off after a couple of years.

(3) The warming is largely irreversible

The summary for policymakers says “A large fraction of anthropogenic climate change resulting from CO2 emissions is irreversible on a multi-century to millennial time scale, except in the case of a large net removal of CO2 from the atmosphere over a sustained period. Surface temperatures will remain approximately constant at elevated levels for many centuries after a complete cessation of net anthropogenic CO2 emissions.”


(Fig 12.43) Results from 1,000 year simulations from EMICs on the 4 RCPs up to the year 2300, followed by constant composition until 3000.

The conclusions about irreversibility of climate change are greatly strengthened from the previous assessment report, as recent research has explored this in much more detail. The problem is that a significant fraction of our greenhouse gas emissions stay in the atmosphere for thousands of years, so even if we stop emitting them altogether, they hang around, contributing to more warming. In simple terms, whatever peak temperature we reach, we’re stuck at for millennia, unless we can figure out a way to artificially remove massive amounts of CO2 from the atmosphere.

The graph is the result of an experiment that runs (simplified) models for a thousand years into the future. The major climate models are generally too computationally expensive to be run for such a long simulation, so these experiments use simpler models, so-called EMICs (Earth system Models of Intermediate Complexity).

The four curves in this figure correspond to four “Representative Concentration Pathways“, which map out four ways in which the composition of the atmosphere is likely to change in the future. These four RCPs were picked to capture four possible futures: two in which there is little to no coordinated action on reducing global emissions (worst case – RCP8.5 and best case – RCP6) and two in which there is serious global action on climate change (worst case – RCP4.5 and best case – RCP2.6). A simple way to think about them is as follows:

  • RCP8.5 represents ‘business as usual’ – strong economic development for the rest of this century, driven primarily by dependence on fossil fuels.
  • RCP6 represents a world with no global coordinated climate policy, but where lots of localized clean energy initiatives do manage to stabilize emissions by the latter half of the century.
  • RCP4.5 represents a world that implements strong limits on fossil fuel emissions, such that greenhouse gas emissions peak by mid-century and then start to fall.
  • RCP2.6 is a world in which emissions peak in the next few years, and then fall dramatically, so that the world becomes carbon neutral by about mid-century.

Note that in RCP2.6 the temperature does fall, after reaching a peak just below 2°C of warming over pre-industrial levels. That’s because RCP2.6 is a scenario in which concentrations of greenhouse gases in the atmosphere start to fall before the end of the century. This is only possible if we reduce global emissions so fast that we achieve carbon neutrality soon after mid-century, and then go carbon negative. By carbon negative, I mean that globally, each year, we remove more CO2 from the atmosphere than we add. Whether this is possible is an interesting question. But even if it is, the model results show there is no time within the next thousand years when it is anywhere near as cool as it is today.

(4) Most of the heat is going into the oceans

The oceans have a huge thermal mass compared to the atmosphere and land surface. They act as the planet’s heat storage and transportation system, as the ocean currents redistribute the heat. This is important because if we look at the global surface temperature as an indication of warming, we’re only getting some of the picture. The oceans act as a huge storage heater, and will continue to warm up the lower atmosphere (no matter what changes we make to the atmosphere in the future).


(Box 3.1 Fig 1) Plot of energy accumulation in ZJ (1 ZJ = 10²¹ J) within distinct components of Earth’s climate system relative to 1971 and from 1971–2010 unless otherwise indicated. Ocean warming (heat content change) dominates, with the upper ocean (light blue, above 700 m) contributing more than the deep ocean (dark blue, below 700 m; including below 2000 m estimates starting from 1992). Ice melt (light grey; for glaciers and ice caps, Greenland and Antarctic ice sheet estimates starting from 1992, and Arctic sea ice estimate from 1979–2008); continental (land) warming (orange); and atmospheric warming (purple; estimate starting from 1979) make smaller contributions. Uncertainty in the ocean estimate also dominates the total uncertainty (dot-dashed lines about the error from all five components at 90% confidence intervals).

Note the relationship between this figure (which shows where the heat goes) and the figure I showed above of the change in cumulative energy budget from different sources. Both graphs show zettajoules accumulating over about the same period (1970–2011). But the first graph has a cumulative total just short of 800 ZJ by the end of the period, while this one shows the earth storing “only” about 300 ZJ of this. Where did the remaining energy go? Because the earth’s temperature rose during this period, it also lost increasingly more energy back into space. When greenhouse gases trap heat, the earth’s temperature keeps rising until outgoing energy and incoming energy are in balance again.
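
To make the accounting concrete, here’s a quick Python sketch using the round numbers above (both are my rough readings of the two graphs, not precise values from the report):

```python
# Rough energy accounting for ~1970-2011, reconciling the two figures:
# energy trapped by the changed energy budget vs. energy actually
# stored in the oceans, ice, land, and atmosphere.

trapped_zj = 800   # ZJ: cumulative extra energy from the forcings (first figure)
stored_zj = 300    # ZJ: energy accumulated in the climate system (Box 3.1 Fig 1)

print(f"Re-radiated to space as the planet warmed: ~{trapped_zj - stored_zj} ZJ")
print(f"Fraction retained: {stored_zj / trapped_zj:.0%}")   # ~38%
```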

(5) Current rates of ocean acidification are unprecedented.

The IPCC report says “The pH of seawater has decreased by 0.1 since the beginning of the industrial era, corresponding to a 26% increase in hydrogen ion concentration. … It is virtually certain that the increased storage of carbon by the ocean will increase acidification in the future, continuing the observed trends of the past decades. … Estimates of future atmospheric and oceanic carbon dioxide concentrations indicate that, by the end of this century, the average surface ocean pH could be lower than it has been for more than 50 million years”.
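
That “26%” figure follows directly from the definition of pH, which is a logarithmic scale: a fixed drop in pH is a fixed ratio in hydrogen ion concentration. A quick Python check:

```python
# pH is defined as -log10 of the hydrogen ion concentration, so a
# 0.1 drop in pH multiplies the H+ concentration by 10**0.1.

delta_pH = 0.1
ratio = 10 ** delta_pH            # [H+] now / [H+] pre-industrial
print(f"Hydrogen ion concentration ratio: {ratio:.3f}")   # ~1.259
print(f"Increase: {ratio - 1:.0%}")                       # ~26%
```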

(Fig SPM.7c) CMIP5 multi-model simulated time series from 1950 to 2100 for global mean ocean surface pH. Time series of projections and a measure of uncertainty (shading) are shown for scenarios RCP2.6 (blue) and RCP8.5 (red). Black (grey shading) is the modelled historical evolution using historical reconstructed forcings. [The numbers indicate the number of models used in each ensemble.]

Ocean acidification has sometimes been ignored in discussions about climate change, but it is a much simpler process, and is much easier to calculate (notice the uncertainty range on the graph above is much smaller than most of the other graphs). This graph shows the projected acidification in the best and worst case scenarios (RCP2.6 and RCP8.5). Recall that RCP8.5 is the “business as usual” future.

Note that this doesn’t mean the ocean will become acidic. The ocean has always been slightly alkaline – well above the neutral value of pH 7. So “acidification” refers to a drop in pH, rather than a drop below pH 7. As this continues, the ocean becomes steadily less alkaline. Unfortunately, as the pH drops, the ocean stops being supersaturated with calcium carbonate. If it’s no longer supersaturated, anything made of calcium carbonate starts dissolving. Corals and shellfish can no longer form their shells and skeletons. If you kill these off, the entire ocean food chain is affected. Here’s what the IPCC report says: “Surface waters are projected to become seasonally corrosive to aragonite in parts of the Arctic and in some coastal upwelling systems within a decade, and in parts of the Southern Ocean within 1–3 decades in most scenarios. Aragonite, a less stable form of calcium carbonate, undersaturation becomes widespread in these regions at atmospheric CO2 levels of 500–600 ppm”.

(6) We have to choose which future we want very soon.

In the previous IPCC reports, projections of future climate change were based on a set of scenarios that mapped out different ways in which human society might develop over the rest of this century, taking account of likely changes in population, economic development and technological innovation. However, none of the old scenarios took into account the impact of strong global efforts at climate mitigation. In other words, they all represented futures in which we don’t take serious action on climate change. For this report, the new “RCPs” have been chosen to allow us to explore the choice we face.

This chart sums it up nicely. If we do nothing about climate change, we’re choosing a path that will look most like RCP8.5. Recall that this is the one where emissions keep rising just as they have done throughout the 20th century. On the other hand, if we get serious about curbing emissions, we’ll end up in a future that’s probably somewhere between RCP2.6 and RCP4.5 (the two blue lines). All of these futures give us a much warmer planet. All of these futures will involve many challenges as we adapt to life on a warmer planet. But by curbing emissions soon, we can minimize this future warming.

(Fig 12.5) Time series of global annual mean surface air temperature anomalies (relative to 1986–2005) from CMIP5 concentration-driven experiments. Projections are shown for each RCP for the multi model mean (solid lines) and the 5–95% range (±1.64 standard deviation) across the distribution of individual models (shading). Discontinuities at 2100 are due to different numbers of models performing the extension runs beyond the 21st century and have no physical meaning. Only one ensemble member is used from each model and numbers in the figure indicate the number of different models contributing to the different time periods. No ranges are given for the RCP6.0 projections beyond 2100 as only two models are available.

Note also that the uncertainty range (the shaded region) is much bigger for RCP8.5 than it is for the other scenarios. The more the climate changes beyond what we’ve experienced in the recent past, the harder it is to predict what will happen. We tend to use the spread across different models as an indication of uncertainty (the coloured numbers show how many different models participated in each experiment). But there’s also the possibility of “unknown unknowns” – surprises that aren’t in the models – so the uncertainty range is likely to be even bigger than this graph shows.

(7) To stay below 2°C of warming, the world must become carbon negative.

Only one of the four future scenarios (RCP2.6) shows us staying below the UN’s commitment to no more than 2ºC of warming. In RCP2.6, emissions peak soon (within the next decade or so), and then drop fast, under a stronger emissions reduction policy than anyone has ever proposed in international negotiations to date. For example, the post-Kyoto negotiations have looked at targets in the region of 80% reductions in emissions over, say, a 50-year period. In contrast, the chart below shows something far more ambitious: we need more than 100% emissions reductions. We need to become carbon negative:

(Figure 12.46) a) CO2 emissions for the RCP2.6 scenario (black) and three illustrative modified emission pathways leading to the same warming, b) global temperature change relative to preindustrial for the pathways shown in panel (a).

The graph on the left shows four possible CO2 emissions paths that would all deliver the RCP2.6 scenario, while the graph on the right shows the resulting temperature change for these four. They all give similar results for temperature change, but differ in how we go about reducing emissions. For example, the black curve shows CO2 emissions peaking by 2020 at a level barely above today’s, and then dropping steadily until emissions are below zero by about 2070. Two other curves show what happens if emissions peak higher and later: the eventual reduction has to happen much more steeply. The blue dashed curve offers an implausible scenario, so consider it a thought experiment: if we held emissions constant at today’s level, we would have exactly 30 years left before we would have to reduce emissions to zero instantly, and forever.

Notice where the zero point is on the scale on that left-hand graph. Ignoring the unrealistic blue dashed curve, all of these pathways require the world to go net carbon negative sometime soon after mid-century. None of the emissions targets currently being discussed by any government anywhere in the world are sufficient to achieve this. We should be talking about how to become carbon negative.

One further detail. The graph above shows the temperature response staying well under 2°C for all four curves, although the uncertainty band reaches up to 2°C. But note that this analysis deals only with CO2. The other greenhouse gases have to be accounted for too, and together they push the temperature change right up to the 2°C threshold. There’s no margin for error.

(8) To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Perhaps the most profound advance since the previous IPCC report is a characterization of our global carbon budget. This is based on a finding that has emerged strongly from a number of studies in the last few years: the expected temperature change has a simple linear relationship with cumulative CO2 emissions since the beginning of the industrial era:

(Figure SPM.10) Global mean surface temperature increase as a function of cumulative total global CO2 emissions from various lines of evidence. Multi-model results from a hierarchy of climate-carbon cycle models for each RCP until 2100 are shown with coloured lines and decadal means (dots). Some decadal means are indicated for clarity (e.g., 2050 indicating the decade 2041−2050). Model results over the historical period (1860–2010) are indicated in black. The coloured plume illustrates the multi-model spread over the four RCP scenarios and fades with the decreasing number of available models in RCP8.5. The multi-model mean and range simulated by CMIP5 models, forced by a CO2 increase of 1% per year (1% per year CO2 simulations), is given by the thin black line and grey area. For a specific amount of cumulative CO2 emissions, the 1% per year CO2 simulations exhibit lower warming than those driven by RCPs, which include additional non-CO2 drivers. All values are given relative to the 1861−1880 base period. Decadal averages are connected by straight lines.

The chart is a little hard to follow, but the main idea should be clear: whichever experiment we carry out, the results tend to lie on a straight line on this graph. You do get a slightly different slope in one experiment, the “1%/yr” experiment, in which only CO2 rises, with none of the other greenhouse gases – which is why it shows a little less warming for the same cumulative emissions. All the more realistic scenarios lie in the orange band, and all have about the same slope.

This linear relationship is a useful insight, because it means that for any target ceiling for temperature rise (e.g. the UN’s commitment to not allow warming to rise more than 2°C above pre-industrial levels), we can easily determine a cumulative emissions budget that corresponds to that temperature. So that brings us to the most important paragraph in the entire report, which occurs towards the end of the summary for policymakers:

Limiting the warming caused by anthropogenic CO2 emissions alone with a probability of >33%, >50%, and >66% to less than 2°C since the period 1861–1880, will require cumulative CO2 emissions from all anthropogenic sources to stay between 0 and about 1560 GtC, 0 and about 1210 GtC, and 0 and about 1000 GtC since that period respectively. These upper amounts are reduced to about 880 GtC, 840 GtC, and 800 GtC respectively, when accounting for non-CO2 forcings as in RCP2.6. An amount of 531 [446 to 616] GtC, was already emitted by 2011.

Unfortunately, this paragraph is a little hard to follow, perhaps because there was a major battle over its exact wording in the final few hours of inter-governmental review of the “Summary for Policymakers”. Several oil states objected to any language that put a fixed limit on our total carbon budget. The compromise was to give several different targets for different levels of risk. Let’s unpick them. First, notice that the targets in the first sentence are based on CO2 emissions alone; the targets in the second sentence also account for other greenhouse gases and other earth system feedbacks (e.g. release of methane from melting permafrost), and so are much lower. It’s these lower targets that really matter:

  • To give us a one third (33%) chance of staying below 2°C of warming over pre-industrial levels, we cannot ever emit more than 880 gigatonnes of carbon.
  • To give us a 50% chance, we cannot ever emit more than 840 gigatonnes of carbon.
  • To give us a 66% chance, we cannot ever emit more than 800 gigatonnes of carbon.

Since the beginning of industrialization, we have already emitted a little more than 500 gigatonnes. So our remaining budget is somewhere between 300 and 400 gigatonnes of carbon. Existing known fossil fuel reserves are enough to release at least 1000 gigatonnes. New discoveries and unconventional sources will likely more than double this. That leads to one inescapable conclusion:

Most of the remaining fossil fuel reserves must stay buried in the ground.

We’ve never done that before. There is no political or economic system anywhere in the world today that can persuade an energy company to leave a valuable fossil fuel resource untapped. There is no government in the world that has demonstrated the ability to forgo the economic wealth from natural resource extraction for the good of the planet as a whole. We lack both the political will and the political institutions to do this, and finding a way to build them presents us with a challenge far bigger than we ever imagined.
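
As a back-of-the-envelope check on the arithmetic above, here’s a small Python sketch. The budget and emitted-so-far numbers are from the report; the ~10 GtC/yr figure for current global emissions is my own rough assumption:

```python
# Remaining carbon budgets (GtC), accounting for non-CO2 forcings,
# from the AR5 Summary for Policymakers quoted above.

budgets = {"33% chance": 880, "50% chance": 840, "66% chance": 800}
emitted = 531        # GtC already emitted by 2011 (central estimate)
rate = 10.0          # GtC/yr: rough current emissions rate (my assumption)

for chance, total in budgets.items():
    remaining = total - emitted
    print(f"{chance} of staying below 2°C: {remaining} GtC left "
          f"(~{remaining / rate:.0f} years at ~{rate:.0f} GtC/yr)")
```

At current rates, even the most lenient of these budgets runs out within three to four decades.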

Update (10 Oct 2013): An earlier version of this post omitted the phrase “To stay below 2°C of warming” from the last key point.

Yesterday I talked about three reinforcing feedback loops in the earth system, each of which has the potential to accelerate a warming trend once it has started. I also suggested there are other similar feedback loops, some of which are known, and others perhaps yet to be discovered. For example, a paper published last month suggested a new feedback loop, to do with ocean acidification. In a nutshell, as the ocean absorbs more CO2, it becomes more acidic, which inhibits the growth of phytoplankton. These plankton are a major source of sulphur compounds that end up as aerosols in the atmosphere, which seed the formation of clouds. Fewer clouds mean a lower albedo, which means more warming. Whether this feedback loop is important remains to be seen, but we do know that clouds have an important role to play in climate change.

I didn’t include clouds in my diagrams yet, because clouds deserve a special treatment, in part because they are involved in two major feedback loops that have opposite effects:

Two opposing cloud feedback loops. An increase in temperature leads to an increase in moisture in the atmosphere. This leads to two new loops…

As the earth warms, we get more moisture in the atmosphere (simply because there is more evaporation from the surface, and warmer air can hold more moisture). Water vapour is a powerful greenhouse gas, so the more there is in the atmosphere, the more warming we get (greenhouse gases reduce the outgoing radiation). So this sets up a reinforcing feedback loop: more moisture causes more warming causes more moisture.

However, if there is more moisture in the atmosphere, there’s also likely to be more cloud formation. Clouds raise the albedo of the planet and reflect sunlight back into space before it can reach the surface. Hence, there is also a balancing loop: by blocking more sunlight, extra clouds will help to put the brakes on any warming. Note that I phrased this carefully: this balancing loop can slow a warming trend, but it does not create a cooling trend. Balancing loops tend to stop a change from occurring, but they do not create a change in the opposite direction. For example, if enough clouds form to completely counteract the warming, they also remove the mechanism (i.e. warming!) that causes growth in cloud cover in the first place. If we did end up with so many extra clouds that it cooled the planet, the cooling would then remove the extra clouds, so we’d be back where we started. In fact, this loop is nowhere near that strong anyway. [Note that under some circumstances, balancing loops can lead to oscillations, rather than gently converging on an equilibrium point, and the first wave of a very slow oscillation might be mistaken for a cooling trend. We have to be careful with our assumptions and timescales here!].

So now we have two new loops that set up opposite effects – one tends to accelerate warming, and the other tends to decelerate it. You can experience both these effects directly: cloudy days tend to be cooler than sunny days, because the clouds reflect away some of the sunlight. But cloudy nights tend to be warmer than clear nights because the water vapour traps more of the escaping heat from the surface. In the daytime, both effects are operating, and the cooling effect tends to dominate. During the night, there is no sunlight to block, so only the warming effect works.

If we average out the effects of these loops over many days, months, or years, which effect dominates (i.e. which loop is stronger)? Does the extra moisture mean more warming or less warming? This is clearly an area where building a computer model and experimenting with it might help, as we need to quantify the effects to understand them better. We can build good computer models of how clouds form at the small scale, by simulating the interaction of dust and water vapour. But running such a model for the whole planet is not feasible with today’s computers.

To make things a little more complicated, these two feedback loops interact with other things. For example, another likely feedback loop comes from a change in the vertical temperature profile of the atmosphere. Current models indicate that, at least in the tropics, the upper atmosphere will warm faster than the surface (in technical terms, it will reduce the lapse rate – the rate at which temperature drops as you climb higher). This then increases the outgoing radiation, because it’s from the upper atmosphere that the earth loses its heat to space. This creates another (small) balancing feedback:

The lapse rate feedback – if the upper troposphere warms faster than the surface (i.e. a lower lapse rate), this increases outgoing radiation from the planet.

Note that this lapse rate feedback operates in the same way as the main energy balance loop – the two ‘-’ links have the same effect as the existing ‘+’ link from temperature to outgoing infra-red radiation. In other words, this new loop just strengthens the effect of the existing loop – for convenience we could fold both paths into a single link.

However, water vapour feedback can interact with this new feedback loop, because the warmer upper atmosphere will hold more water vapour in exactly the place where it’s most effective as a greenhouse gas. Not only that, but clouds themselves can change the vertical temperature profile, depending on their height. I said it was complicated!

The difficulty of simulating all these different interactions of clouds accurately leads to one of the biggest uncertainties in climate science. In 1979, the Charney report calculated that all these cloud and water vapour feedback loops roughly cancel out, but pointed out that there was a large uncertainty bound on this estimate. More than thirty years later, we understand much more about how cloud formation and distribution are altered in a warming world, but our margins of error for calculating cloud effects have barely narrowed, because of the difficulty of simulating them on a global scale. Our best guess is now that the (reinforcing) water vapour feedback loop is slightly stronger than the (balancing) cloud albedo and lapse rate loops. So the net effect of these three loops is an amplifying effect on the warming.

At the start of this series, I argued that Climate Science is inherently a Systems Discipline. To develop that idea, I described two important systems as feedback loops – the earth’s temperature equilibrium loop, and the loop linking economic growth and energy consumption – and then we put these two systems together.

The basic climate system now looks like this (leaving out, for now, the dynamics that drive economic development and energy use):

The basic planetary energy balancing loop, with the burning of fossil fuels forcing the temperature to change

Recall that the balancing loop (marked with a ‘B’) ensures that for each change to the input forcings (in this case greenhouse gases and aerosols in the atmosphere), the earth system will settle down to a new equilibrium point: a temperature at which the incoming and outgoing energy flows are balanced again. Each time we increase the concentration of greenhouse gases in the atmosphere, we can expect the earth to warm, slowly, until it reaches this new equilibrium. The economy-energy system (not shown above) is ensuring that we keep on adding more greenhouse gases, so we’re continually pushing the system further and further out of balance. That means we’re continually increasing the eventual temperature rise that the earth will experience before it reaches a new equilibrium.

Meanwhile, the aerosols provide a slight cooling effect, but they wash out of the atmosphere fairly quickly, so their overall concentration isn’t rising much. Carbon dioxide does not wash out quickly – it can remain in the atmosphere for thousands of years. Hence the warming effect dominates.

Now, if that was the whole picture, climate change would be very predictable, using basic thermodynamic principles. Unfortunately, there are other feedback loops that we haven’t considered yet. Here’s one:

The basic climate system with the ice albedo feedback

As the temperature rises, the ice sheets start to melt and shrink. These include the Arctic sea ice, glaciers on Greenland and the Antarctic, and mountain glaciers across the world. When sea ice melts, it leaves more sea exposed, which is much darker than the ice. When land ice melts, it uncovers rocks, soils, and (eventually) plants, all of which are also darker than ice. Because of this, loss of ice leads to a lower albedo for the planet. A lower albedo means less of the incoming sunlight is reflected straight back into space, so more reaches the surface. In other words, a lower albedo means more incoming solar radiation. And, as we already know, this leads to more energy retained and more warming. So this is a reinforcing feedback loop.

As a quick check, we can use the rule of thumb that reinforcing loops have an even number of ‘-‘ links. Trace the path of this loop to check:

Ice albedo feedback loop on its own

Because this is a reinforcing loop, it can modify the behaviour of the basic energy balancing loop. If a warming process starts, this loop can accelerate it, and cause more warming than we’d expect from just the main balancing loop. In extreme cases, a reinforcing loop can completely destabilize a system that is normally dominated by balancing loops. However, all reinforcing loops also must have limits (remember: nothing can grow forever). In this case, there is clearly a limit once all the ice sheets on the planet have melted. The loop can no longer function at that point.

Here’s another reinforcing loop:

Climate system with permafrost feedback

In this loop, as the temperature rises, it thaws the permafrost across Northern Canada and Russia. This releases methane from the frozen soils. Methane is a greenhouse gas, so this loop also accelerates the warming. Again, it’s a reinforcing loop, and again, there’s a limit: the loop must stop once all the permafrost has thawed.

Here’s another:

Climate system with carbon sinks feedback

This loop occurs because the more greenhouse gases we put into the atmosphere, the more work the carbon sinks have to do. Carbon sinks include the ocean and soils – they slowly remove carbon dioxide from the atmosphere. But the more carbon they have to absorb, the less effective they are at taking more. There’s an additional effect for the ocean, because a warmer ocean is less able to absorb CO2. Some model studies even suggest that after a few degrees of warming, the ocean might stop being a carbon sink and start being a source.

So, put that all together and we have three reinforcing loops working to destabilize the main energy balance loop. The main loop tends to limit the amount of warming we might expect, and the reinforcing loops all tend to increase it:

All three reinforcing loops working together

Remember, all three reinforcing loops might operate at once. More likely, each will kick in at different times as the planet warms. Predicting when that might occur is hard, as is calculating the likely size of the effect. We can calculate absolute limits for each of these reinforcing loops, but there are likely to be other reasons why a loop stops working before reaching these absolute limits.

One of the goals of climate modelling is to capture these kinds of feedbacks in a computational model, to attempt to quantify the effects, so that we can understand them better. We can use both basic physics and empirical observations to put numbers on each of the relationships in the diagram, and we can experiment with the model to test how sensitive it is to different kinds of perturbation, especially in areas where it’s hard to be sure about the numbers.
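
To give a flavour of what this looks like in miniature, here’s a toy zero-dimensional energy balance model with a crude ice-albedo feedback bolted on. Every number in it is an illustrative choice of mine, not a value from any real climate model; the point is just to show how a reinforcing loop amplifies the warming produced by a given forcing:

```python
# Toy energy balance model: temperature adjusts until absorbed and
# outgoing energy balance. The ice-albedo feedback is modelled as a
# linear ramp in albedo, with hard limits (the loop stops once the
# ice is gone). All parameter values are illustrative.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)
S = 340.0         # mean incoming solar radiation (W m^-2)
EPS = 0.61        # effective emissivity (crudely stands in for greenhouse gases)
C = 4.0e8         # effective heat capacity (J m^-2 K^-1), mostly ocean
DT = 3.15e7       # one-year time step (seconds)

def albedo(T, feedback=True):
    # Reinforcing loop: warmer -> less ice -> lower albedo -> more
    # sunlight absorbed -> warmer still.
    if not feedback:
        return 0.30
    return min(0.35, max(0.25, 0.30 - 0.005 * (T - 288.0)))

def equilibrium(forcing, feedback=True, T=288.0, years=500):
    for _ in range(years):
        net = S * (1 - albedo(T, feedback)) + forcing - EPS * SIGMA * T**4
        T += net * DT / C   # energy imbalance drives temperature change
    return T

for fb in (False, True):
    warming = equilibrium(2.0, fb) - equilibrium(0.0, fb)
    print(f"2 W/m^2 forcing, ice-albedo feedback={fb}: {warming:+.2f} K")
```

With these invented numbers, the same 2 W/m² of forcing produces roughly twice the warming once the albedo loop is switched on – and if you make the albedo ramp steep enough, the reinforcing loop overwhelms the balancing loop entirely, and the model runs away until it hits the limit of the ramp (no ice left).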

However, there’s also the possibility that we missed some important feedback loops. In the model above, we have missed an important one, to do with clouds. We’ll meet that in the next post…

The story so far: First, I argued that Climate Science is inherently a Systems Discipline. To develop that idea, I described two important systems as feedback loops: the earth’s temperature equilibrium loop and economic growth and energy consumption. Now it’s time to put those two systems together…

First, we’ll need to capture the unintended consequences of burning fossil fuels for energy, in the form of two distinct kinds of pollution:

Effect of two different kinds of pollutant

Aerosols are tiny particles (smoke, dust, sulphates, etc) produced when dirtier fossil fuels are burnt; the biggest culprit is sulphur dioxide, which forms sulphate particles in the atmosphere. Coal is the worst for producing these, but oil produces them as well, especially from poorly tuned gasoline and diesel engines. The effect of aerosols is easy to understand, because we can see them. They hang around in the air and block out the light. They contribute to the clouds of smog that hang over our cities in the summer, and they react with water vapour to create sulphuric acid, leading to acid rain. It’s possible to greatly reduce the amount of aerosols produced when we burn fossil fuels, by processing the fuels first to remove the impurities that otherwise would end up as aerosols. For example, low-sulphur coal is much “cleaner” than regular coal, because it produces very few aerosols when you burn it. That’s good for our air quality.

Greenhouse gases include carbon dioxide, methane, water vapour, and a number of other gases such as Chlorofluorocarbons (CFCs). By volume, CO2 is by far the most common byproduct from fossil fuels, although some of the rarer gases actually have a larger “greenhouse effect”. Some greenhouse gases are “short-lived”, because they are chemically unstable, and break down fairly rapidly (for example, carbon monoxide). Others are “long-lived” because they are very stable. For example, carbon dioxide stays in the atmosphere for thousands of years. Unfortunately, we can’t remove these compounds before we burn fossil fuels, because fossil fuels are primarily made of carbon, and it is the carbon that makes them useful as fuels. So, unlike sulphur, you can’t “clean up” the fuel first. When the coal industry talks about “clean coal” these days, they don’t mean the coal itself is clean; they mean they’re working on technology to capture the CO2 after it is produced, but before it disappears up the chimney. Whether this can work cost-effectively on a large scale is an open question.

These two pollutants have opposite effects on the climate system, because each blocks a different part of the spectrum. Aerosols block visible light, and hence reduce the incoming sunlight (like adding a sunshade). Greenhouse gases block infrared radiation, and hence reduce the outgoing radiation from the planet (like adding an extra blanket):

The effect of these two different kinds of pollutant

Now when we look at these two effects in the context of all the feedback loops we’ve explored so far, we get the following:

The energy system interacting with the basic climate system

So aerosols reduce the net radiative forcing (causing cooling), and greenhouse gases increase it (causing warming). The earth’s energy balance loop means that each time the concentrations of aerosols and greenhouse gases in the atmosphere change, the earth will change its temperature until all the forces balance out again. Unfortunately, the reinforcing loop that drives energy consumption means that the concentrations of these pollutants are continually changing, and they’re changing faster than the earth’s balancing loop can cope with. We already noted that the earth’s balancing loop can take several decades to find a new equilibrium. So even if we were able to “turn off the tap”, adding no more of these pollutants (but leaving the ones that are already in the atmosphere), we’d find the earth’s temperature continues to change for decades, until it settles at a new equilibrium.

Which one is winning? Satellites allow us to measure the different effects fairly accurately, and observations from the pre-satellite era allow us to extrapolate backwards, so we can estimate the total effect of each from pre-industrial times to the present. Here’s a chart summarizing the effects:

Total Radiative Forcing from different sources for the period 1750 to 2005. From the IPCC Fourth Assessment Report (AR4).

Note that aerosols have two different effects. The direct effect is the one we described in the system diagram above – it blocks incoming sunlight. The indirect effect is because aerosols also interact with clouds. We’ll explore the indirect effect in a future post. However we look at it, the greenhouse gases are winning, by a large margin. That should mean the planet is warming. And it is:

Land-surface temperatures from 1750-2005, from the Berkeley Earth Surface Temperature project (click for original source)

Note the steep rise from the 1980s onwards, and compare it to the exponential curve of greenhouse gas emissions we saw earlier. More interestingly, note the slight fall in the immediate postwar period (1940s to 1970s). One hypothesis for this is that during this period the sulphate aerosols were winning. There’s some uncertainty about the exact size of the aerosol effect during this period (note the size of the ‘uncertainty whiskers’ on the bar graph above). However, it’s true that concern about acid rain led to legislation and international agreements in the 1980s to reduce sulphate emissions from fossil fuels.

The fact that sulphate aerosols have a cooling effect that can counteract the warming effect from greenhouse gases leads to an interesting proposal. If we can’t reduce the amount of greenhouse gases we emit, maybe instead we can increase the amount of sulphate aerosols. This has been studied as a serious geo-engineering proposal, and could be done quite cheaply, although we’d have to keep pumping up more of the stuff, as it washes out of the atmosphere fairly quickly. Alan Robock has identified 20 reasons why this is a bad idea. But really, we only need to know one reason why it’s a silly idea, and that comes directly from our analysis of the feedback loops in the economic growth and energy consumption system. As long as that loop is producing an exponential growth in greenhouse gas emissions, any attempt to counteract them would also have to grow exponentially to keep up. The dimming effect from sulphate aerosols will affect many things on earth, including crop production. Committing ourselves to a path of exponential growth in sulphate aerosols in the stratosphere is quite clearly ridiculous. So if we ever do try this, it can only ever be a short-term solution, perhaps to buy us a few years to get the growth in greenhouse gases under control.

One other comment about the system diagrams we’ve created so far. Energy is mentioned twice in the diagrams: once in the loop describing economic growth, and once in the earth’s energy balance loop. We can compare these two. In the top loop, the current worldwide energy consumption by humans is about 16 terawatts. In the bottom loop, the current amount of energy being added to the earth due to greenhouse gases is about 300 terawatts. So the earth is currently gaining about 18 times the amount of energy that the entire human race is actually using. (Here’s how this is calculated).
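
Here’s a rough reconstruction of that calculation. The ~0.6 W/m² figure I use for the planet’s current net energy imbalance is my own ballpark assumption; the rest is just geometry:

```python
import math

# Convert a net planetary energy imbalance (W per m^2) into total
# watts, by multiplying by the earth's surface area, then compare
# with total human energy consumption (~16 TW, from the text).

imbalance = 0.6                       # W/m^2 (assumed ballpark value)
area = 4 * math.pi * 6.371e6 ** 2     # earth's surface area, ~5.1e14 m^2

extra_tw = imbalance * area / 1e12
print(f"Planetary energy gain: ~{extra_tw:.0f} TW")        # ~306 TW
print(f"Ratio to human energy use: ~{extra_tw / 16:.0f}x") # ~19x
```

Which lands close to the “about 18 times” in the text, given all the rounding.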

Finally, note that although the diagram contains four different feedback loops, none of these are what climate scientists mean when they talk about feedbacks in the climate system. To understand why, we have to make a distinction between the basic operation of the system I’ve described so far (which drives global warming), and additional chains of cause-and-effect that respond to changes in the basic system. Climate scientists study these additional loops, because it’s these additional loops that determine how much warming we should expect from any specific amount of greenhouse gases. If you start warming the planet, using the system we’ve described so far, there are many other consequences. Some of those consequences can come back to bite you as reinforcing feedbacks. We’ll start looking at these in the next post.

In part 1, I described the central equilibrium loop that controls the earth’s temperature. Now it’s time to look at other loops that interact with the central loop, and tend to push it out of balance. The most important of these involves the burning of fossil fuels, which produces greenhouse gases and so changes the composition of the atmosphere. Let’s first have a look at energy consumption on its own. Here’s the basic loop:

Core economic growth and energy consumption loop

This reinforcing loop has driven the growth in the economy and energy use since the beginning of the industrial era. As we might expect from a reinforcing loop, this dynamic creates an exponential curve – in both the size of the global economy and the consumption of fossil fuels. For example, here’s the curve for carbon produced per year from fossil fuels (data from CDIAC):

The exponential rise in global carbon emissions (click for bigger version)

For the first century, the curve looks flat, but it’s not zero. In 1751 the world was producing about 3 million tonnes of carbon per year, and this rises to about 50 million tonnes per year by 1851. The growth really gets going in the postwar period. There are dips for the global recessions of the 1930s and 1980s, but these barely dent the overall rise. (For a slightly more detailed exploration of the dynamics that drive this exponential growth, see the Postcarbon Institute’s 300 years of fossil fuels in 300 seconds).
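
Those two endpoints are enough to estimate the underlying growth rate – exactly the kind of steady compounding a reinforcing loop produces:

```python
import math

# Average exponential growth rate implied by the CDIAC numbers quoted
# above: ~3 million tonnes of carbon per year in 1751, ~50 million by 1851.

c_1751, c_1851 = 3e6, 50e6        # tonnes of carbon per year
rate = math.log(c_1851 / c_1751) / 100

print(f"Average growth: {rate:.1%} per year")            # ~2.8%
print(f"Doubling time: {math.log(2) / rate:.0f} years")  # ~25 years
```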

Exponential growth cannot go on forever, so there must be a balancing loop somewhere that (eventually) brings this growth to a halt. The world’s supply of fossil fuels is finite, so if we keep climbing the exponential curve, we must eventually run out. But long before that happens, prices start to rise because of scarcity. So the actual balancing loop looks like this:

The "peak oil" balancing loop for economic growth

I call the left hand loop the “peak oil” balancing loop for economic growth

In this new loop, each link inverts the direction of change: as consumption of fossil fuels rises, the remaining reserves fall. As reserves fall, the price tends to go up. As the price goes up, the rate of consumption falls. It’s a balancing loop because (once it starts operating) each rise in consumption depletes the reserves, causing a price rise that then damps down further consumption.

If these two loops operate on their own, we might expect an initial exponential curve in the use of fossil fuels, until there are signs that reserves are being depleted, and then a gradual phasing out of fossil fuels, producing the classic bell-shaped curve of Hubbert’s peak theory. In the process, economic development would also grind to a halt, unless we manage to decouple the first loop fairly quickly, by switching to renewable sources of energy. In theory, the rising price of fossil fuels should cause this switch to happen gracefully, once prices start rising – a properly functioning economic system should guarantee this. Unfortunately, the switch is not easy, because we’ve built a massive energy infrastructure that is based exclusively on fossil fuels, and this locks us in to a dependency on fossil fuels. This lock-in, along with the exponential growth in demand, means that we cannot just switch to alternative energy as the prices rise – a more likely outcome is an overshoot, where the rate of consumption is stuck in an upwards trend, causing prices to shoot up, while the balancing loop is unable to do its stuff.
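
We can watch the idealized two-loop version of this story play out in a crude simulation (ignoring, for the moment, the lock-in problem). All the parameters here are invented for illustration; what matters is the shape of the curve, not the numbers:

```python
# Two loops: a reinforcing loop grows consumption ~3%/yr, while a
# balancing loop damps it as reserves run down and prices rise.
# Illustrative parameters only.

reserves = 1000.0        # arbitrary units of fossil fuel in the ground
consumption = 1.0        # units consumed per year
history = []

for year in range(300):
    scarcity = 1.0 - reserves / 1000.0     # 0 = plentiful, 1 = exhausted
    damping = 1.0 - scarcity ** 4          # price pressure: weak at first, then brutal
    consumption = min(consumption * 1.03 * damping, reserves)
    reserves -= consumption
    history.append(consumption)

peak = max(range(300), key=lambda y: history[y])
print(f"Consumption peaks in year {peak}, then tails off")   # a Hubbert-style bell
```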

However, there’s another complication. Conventional sources aren’t the only way to get fossil fuels. As the price rises, other sources become viable. The classic example is the Alberta oil sands. Twenty years ago, nobody could extract these because it was too expensive, compared to the price of oil. Today, the price of oil is high enough that exploiting the oil sands becomes profitable. So there’s another loop:

I call this new loop the “tar sands” balancing loop

This new loop balances the rising prices from the middle loop when reserves start to fall. So now we have a system that could keep the economy functioning well beyond the point at which we exhaust conventional sources of fossil fuels. At each new price point, there’s a stimulus to start tapping new sources of these fuels, and as these new sources come on stream, they allow the global economy to keep growing, and the consumption of fossil fuels to keep rising. To someone who just studies economics of energy, everything looks okay for the foreseeable future (except that cheap oil is never coming back). To someone who predicts doom because of peak oil, it complicates the picture (except that the resource depletion predictions were basically correct). But to someone who studies climate, it means the challenge just got harder…

In the next post, I’ll link this system with the basic climate system.

I wrote earlier this week that we should incorporate more of the key ideas from systems thinking into discussions about climate change and sustainability. Here’s an example: I think it’s very helpful to think about the climate as a set of interacting feedback loops. If you understand how those feedback loops work, you’ve captured the main reasons why climate change is such a massive challenge for humanity. So, this is the first in a series of posts where I attempt to substantiate my argument. I’ll describe the global climate in terms of a set of balancing and reinforcing feedback loops. (Note: This is a very elementary introduction. If you prefer a detailed mathematical treatment of feedbacks in the climate system, try this paper by Gerard Roe)

Before we start, we need some basic concepts. The first is the idea of a feedback loop. We’re used to thinking in terms of linear sequences of cause and effect: A causes B, which causes C, and so on. However, our interactions with the world are rarely like this. More often, change tends to feed back on itself. For example, we identify a problem that needs solving, we take some action to solve it, and that action ends up changing our perception of the problem. The feedback usually comes in one of two forms. The first is a balancing feedback: the more you try to change something, the more the world pushes back and makes it harder. Take dieting, for example: if we manage to lose a few pounds, the sense of achievement can make us complacent, and then we put all the weight back on again. The second form is a reinforcing feedback. This is where success feeds on itself. For example, perhaps we try a new exercise regime, and it makes us feel energized, so we end up exercising even more, and so on.

In physics and engineering, these are usually called ‘positive’ and ‘negative’ feedback loops. I prefer to call them ‘reinforcing’ and ‘balancing’ loops, because it’s a better description of what they do. People tend to think ‘positive’ means good and ‘negative’ means bad. In fact, both types of loop can be good or bad, depending on what you think the system ought to be doing. A reinforcing loop is good when you want to achieve a change (e.g. your protest movement goes viral), but is certainly not good when it’s driving a change you don’t want (a forest fire spreading towards your town, for example). Similarly, a balancing loop is good when it keeps a system stable that you depend on (prices in the marketplace, perhaps), but is bad when it defeats your attempts to bring about change (as in the dieting example above). Of course, what’s good to one person might be bad to someone else, so we’ll set aside such value judgements for the moment, and just focus on how the loops work in the climate system.

It helps to draw pictures. Here’s an example of how both types of loop affect a tech company trying to sell a new product (say, the iPhone):

The action of reinforcing and balancing feedback loops in selling iPhones

You can read the arrows labelled “+” as “more of A tends to cause more of B (than there otherwise would have been), while less of A tends to cause less of B (than there otherwise would have been)”. The arrows labelled “-” mean “more of A tends to cause less of B, and less of A tends to cause more of B”. [Note: there are some subtleties to this interpretation, but we can ignore them for now.]

On the left, we have a reinforcing loop (labelled with an ‘R’): the effect of word of mouth. The more iPhones we sell, the more people there are to spread the word, which in turn means more get sold. This tends to create an exponential growth in sales figures. However, this cannot go on forever. Sooner or later, the balancing loop on the right starts to matter (labelled with a ‘B’). The more iPhones sold, the fewer people there are left without one – we start to saturate the market. The more the market is saturated, the fewer iPhones we can sell. The growth in sales slows, and may even stop altogether. The resulting graph of sales over time might look like this:

How the sales of iPhones might look over time

When the reinforcing loop dominates, sales grow exponentially. When the balancing loop dominates, sales stagnate. In this case, the natural limit is when everyone who might ever want an iPhone has one. Of course, in real life, the curves are never this smooth – other feedback loops (that we haven’t mentioned yet) kick in, and temporarily push sales up or down. However, we could hypothesize that these two loops do explain most of the dynamic behaviour of the sales of a new product, and everything else is just noise. In many cases this is true – diffusion of innovation studies frequently reveal this type of S-shaped curve.

The structure of these two loops and the S-shaped curve they produce describe many real world phenomena: the spread of disease, growth of a population, the growth of a firm, the spread of a forest fire. In each case, there may well be other feedback loops that complicate the picture. But the underlying story about growth and its limits still captures a basic truth: exponential growth occurs when there is a reinforcing feedback loop, and as nothing can grow exponentially forever, there must always be a balancing loop somewhere that provides a limit to growth.
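
To see how little machinery is needed to produce that S-shaped curve, here’s a minimal simulation of the two loops from the iPhone example. All the numbers are invented:

```python
# Word of mouth (reinforcing) vs. market saturation (balancing).
# This is just the classic logistic growth model in disguise.

market = 1_000_000    # total people who might ever want one
owners = 100.0        # early adopters
contact_rate = 0.8    # sales per owner per quarter, in an empty market

for quarter in range(1, 25):
    potential = market - owners                              # balancing loop
    new_sales = contact_rate * owners * potential / market   # reinforcing loop
    owners += new_sales
    if quarter % 4 == 0:
        print(f"After year {quarter // 4}: {owners:,.0f} owners")
```

Early on, almost everyone is a potential customer, so sales grow exponentially; as the market saturates, the same saturation factor shrinks towards zero and growth stalls – the two phases of the S-curve.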

Okay, that’s enough background. Time to look at the first feedback loop in the global climate system. We’ll start with the global climate system in its equilibrium state – i.e. when the climate is not changing. The climate has been remarkably stable for the last 10,000 years, since the end of the last ice age. Over that time, it has varied by less than 1°C. That stability suggests there are likely to be balancing feedback loops keeping it stable. The most important of these is the basic energy balance loop:

The Earth’s energy balance as a balancing loop

The temperature of the planet is determined primarily by the balance between the incoming energy from the sun and the outgoing energy lost back into space. The incoming energy is in the form of shortwave radiation from the sun, and the amount we get is determined by the solar constant (which, of course, is not really constant, although the variations were too small to measure before the satellite era). The incoming energy from the sun, averaged out over the surface of the earth, is about 340 watts per square meter. If this is greater than the outgoing energy, the imbalance causes the earth to retain more energy, and so the temperature rises. As a warmer planet loses energy faster, this increases the outgoing radiation, which in turn reduces the imbalance again (i.e. this is a balancing loop).

Imagine there’s an overshoot – i.e. the outgoing radiation rises, but goes a little too far, so that it’s now more than the incoming solar radiation. This reduces the net radiative forcing so far that it becomes negative. But a decrease in net radiative forcing tends to cause a decrease in energy retained, which causes a decrease in temperature, which causes a decrease in outgoing radiation again. So the balancing loop also cancels out any overshoot sooner or later. In other words, the structure of this loop always pushes the planet to find a (roughly) stable equilibrium: essentially, if the incoming and outgoing energy ever get out of balance, the temperature of the planet rises or falls until they are balanced again.

Note that we could tell this is a balancing loop, without tracing the effects, just by counting up the number of “-” links. If it’s an odd number, it’s a balancing loop; if it’s even (or zero), it’s a reinforcing loop. In my systems thinking class, we play a game that simulates different kinds of loop, with each person acting as one link (some are “+” links, some are “-” links). The students usually find it hard to predict how loops of different structure will behave, but once we’ve played it a few times, everyone has a good intuition for the difference between reinforcing loops and balancing loops.
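
The parity rule is simple enough to write down as code – in effect, a one-line version of the game we play in class:

```python
# Rule of thumb from the text: trace around a loop counting the '-'
# links; an odd count means a balancing loop, an even count (or zero)
# means a reinforcing loop.

def classify(link_signs):
    """link_signs: the '+'/'-' labels on the arrows around one loop."""
    return "balancing" if link_signs.count('-') % 2 == 1 else "reinforcing"

print(classify(['+', '+', '+', '-']))   # one '-' link  -> balancing
print(classify(['+', '-', '-', '+']))   # two '-' links -> reinforcing
```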

There is one more complication for this loop. The net radiative forcing determines the rate at which energy is retained, rather than the total amount. If the net forcing is positive, the earth keeps on retaining energy. This leads to an increase in temperature and, if you follow the loop around, a decrease in the net radiative forcing – but that only reduces the rate at which energy is retained (and hence the rate of warming); the warming doesn’t actually stop until the net radiative balance falls to zero. And then, when the warming stops, it doesn’t cool off again – the loop ensures the planet stays at this new temperature. It’s a slow process because it takes time for the planet to warm up. For example, the oceans can absorb a huge amount of energy before you’ll notice any increase in temperature. This means the loop operates slowly. We know from simulations (and from studies of the distant past) that it can take many decades for the planet to find a new balance in response to a change in net radiative forcing.
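
We can even put a rough number on “slowly”: the response time of the loop is roughly the heat capacity of the system divided by how strongly the outgoing radiation responds to temperature. Both numbers below are ballpark assumptions of mine, not measured values:

```python
# e-folding response time of the balancing loop: heat capacity divided
# by the radiative "restoring force". Ballpark values only.

C = 8.4e8     # J/m^2/K: heat capacity of ~200 m of ocean mixed layer (assumed)
lam = 1.2     # W/m^2/K: extra outgoing radiation per degree of warming (assumed)

tau = C / lam
print(f"Response timescale: ~{tau / 3.15e7:.0f} years")   # ~22 years
```

And that’s just the initial response of the upper ocean; mixing heat down into the deep ocean stretches the approach to equilibrium out much further.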

There are, of course, other feedback loops to complicate the picture, and some of them are reinforcing loops. I’ll describe some of these in my next post. But from an understanding of this one loop, we can gain a number of insights:

  1. This loop, on its own, cannot produce a runaway global warming (or cooling) – the earth will eventually find a new equilibrium in response to a change in net radiative forcing. More precisely, for a runaway warming to occur, some other reinforcing loop must dominate this one. As I said, there are some reinforcing loops, and they complicate the picture, but nobody has managed to demonstrate that any of them are strong enough to overcome the balancing effect of this loop.
  2. The balancing loop has a delay, because it takes a lot of energy to warm the oceans. Hence, once a change starts in this loop, it takes many decades for the balancing effect to kick in. That’s the main reason why we have to take action on climate change many decades before we see the full effect. On human timescales, the earth’s natural balancing mechanism is a very slow process.
  3. If we make a one-time change to the radiative balance, the earth will slowly change its temperature until it reaches a new balance point, and then will stay there, because the balancing loop keeps it there. However, if there is some other force that keeps changing the radiative balance, despite this loop’s attempts to adjust, then the temperature will keep on changing. Our current dilemma with respect to climate change isn’t that we’ve made a one-time change to the amount of greenhouse gases in the atmosphere – the dilemma is that we’re continually changing them. This balancing loop only really helps once we stop changing the atmosphere.
