I’ve been trawling through the final draft of the new IPCC assessment report that was released last week, to extract some highlights for a talk I gave yesterday. Here’s what I think are its key messages:

  1. The warming is unequivocal.
  2. Humans caused the majority of it.
  3. The warming is largely irreversible.
  4. Most of the heat is going into the oceans.
  5. Current rates of ocean acidification are unprecedented.
  6. We have to choose which future we want very soon.
  7. To stay below 2°C of warming, the world must become carbon negative.
  8. To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Before I elaborate on these, a little preamble. The IPCC was set up in 1988 as a UN intergovernmental body to provide an overview of the science. Its job is to assess what the peer-reviewed science says, in order to inform policymaking, but it is not tasked with making specific policy recommendations. The IPCC and its workings seem to be widely misunderstood in the media. The dwindling group of people who are still in denial about climate change particularly like to indulge in IPCC-bashing, which seems like a classic case of ‘blame the messenger’. The IPCC itself has a very small staff (no more than a dozen or so people). However, the assessment reports are written and reviewed by a very large team of scientists (several thousands), all of whom volunteer their time to work on the reports. The scientists are organised into three working groups: WG1 focuses on the physical science basis, WG2 focuses on impacts and climate adaptation, and WG3 focuses on how climate mitigation can be achieved.

Last week, only the WG1 report was released as a final draft, although it was accompanied by a bigger media event around the approval of the final wording of the WG1 “Summary for Policymakers”. The final version of the full WG1 report, plus the WG2 and WG3 reports, are not due out until spring next year. That means the current draft is likely to be subject to minor editing and correcting, and some of the figures might end up re-drawn. Even so, most of the text is unlikely to change, and the major findings can be considered final. Here’s my take on the most important findings, along with a key figure to illustrate each.

(1) The warming is unequivocal

The text of the summary for policymakers says “Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased.”

(Fig SPM.1) Observed globally averaged combined land and ocean surface temperature anomaly 1850-2012. The top panel shows the annual values; the bottom panel shows decadal means. (Note: Anomalies are relative to the mean of 1961-1990).

Unfortunately, there has been much play in the press around a silly idea that the warming has “paused” in the last decade. If you squint at the last few years of the top graph, you might be able to convince yourself that the temperature has been nearly flat for a few years, but only if you cherry pick your starting date, and use a period that’s too short to count as climate. When you look at it in the context of an entire century and longer, such arguments are clearly just wishful thinking.

The other thing to point out here is that the rate of warming is unprecedented. “With very high confidence, the current rates of CO2, CH4 and N2O rise in atmospheric concentrations and the associated radiative forcing are unprecedented with respect to the highest resolution ice core records of the last 22,000 years”, and there is “medium confidence that the rate of change of the observed greenhouse gas rise is also unprecedented compared with the lower resolution records of the past 800,000 years.” In other words, there is nothing in any of the ice core records that is comparable to what we have done to the atmosphere over the last century. The earth has warmed and cooled in the past due to natural cycles, but never anywhere near as fast as modern climate change.

(2) Humans caused the majority of it

The summary for policymakers says “It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century”.

(Box 13.1 fig 1) The Earth’s energy budget from 1970 to 2011. Cumulative energy flux (in zettaJoules!) into the Earth system from well-mixed and short-lived greenhouse gases, solar forcing, changes in tropospheric aerosol forcing, volcanic forcing and surface albedo, (relative to 1860–1879) are shown by the coloured lines and these are added to give the cumulative energy inflow (black; including black carbon on snow and combined contrails and contrail induced cirrus, not shown separately).

This chart summarizes the impact of different drivers of warming and/or cooling, by showing the total cumulative energy added to the earth system since 1970 from each driver. Note that the chart is in zettajoules (10²¹ J). For comparison, one zettajoule is roughly the energy that would be released by 16 million bombs of the size of the one dropped on Hiroshima. The world’s total annual global energy consumption is about 0.5 ZJ.
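
As a back-of-envelope check (my own arithmetic, not a calculation from the report), here is how those comparisons work out, assuming the commonly quoted yield of about 15 kilotonnes of TNT for the Hiroshima bomb:

```python
# Back-of-envelope check of the zettajoule comparisons above (illustrative only).
ZJ = 1e21                        # joules in one zettajoule
HIROSHIMA_J = 15e3 * 4.184e9     # ~15 kilotonnes of TNT, at 4.184e9 J per tonne of TNT
WORLD_ENERGY_USE_J = 0.5 * ZJ    # ~0.5 ZJ: rough annual global energy consumption

print(f"1 ZJ ≈ {ZJ / HIROSHIMA_J / 1e6:.0f} million Hiroshima-sized bombs")  # ≈ 16 million
print(f"1 ZJ ≈ {ZJ / WORLD_ENERGY_USE_J:.0f} years of global energy use")    # ≈ 2 years
```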

Long lived greenhouse gases, such as CO2, contribute the majority of the warming (the purple line). Aerosols, such as particles of industrial pollution, block out sunlight and cause some cooling (the dark blue line), but nowhere near enough to offset the warming from greenhouse gases. Note that aerosols have the largest uncertainty bar; much of the remaining uncertainty about the likely magnitude of future climate warming is due to uncertainty about how much of the warming might be offset by aerosols. The uncertainty on the aerosols curve is, in turn, responsible for most of the uncertainty on the black line, which shows the total effect if you add up all the individual contributions.

The graph also puts into perspective some of the other things that people like to blame for climate change, including changes in energy received from the sun (‘solar’), and the impact of volcanoes. Changes in the sun (shown in orange) are tiny compared to greenhouse gases, but do show a very slight warming effect. Volcanoes have a larger (cooling) effect, but it is short-lived. There were two major volcanic eruptions in this period, El Chichón in 1982 and Pinatubo in 1991. Each can be clearly seen in the graph as an immediate cooling effect, which then tapers off after a couple of years.

(3) The warming is largely irreversible

The summary for policymakers says “A large fraction of anthropogenic climate change resulting from CO2 emissions is irreversible on a multi-century to millennial time scale, except in the case of a large net removal of CO2 from the atmosphere over a sustained period. Surface temperatures will remain approximately constant at elevated levels for many centuries after a complete cessation of net anthropogenic CO2 emissions.”

(Fig 12.43) Results from 1,000 year simulations from EMICs on the 4 RCPs up to the year 2300, followed by constant composition until 3000.

The conclusions about irreversibility of climate change are greatly strengthened from the previous assessment report, as recent research has explored this in much more detail. The problem is that a significant fraction of our greenhouse gas emissions stay in the atmosphere for thousands of years, so even if we stop emitting them altogether, they hang around, contributing to more warming. In simple terms, whatever peak temperature we reach, we’re stuck at for millennia, unless we can figure out a way to artificially remove massive amounts of CO2 from the atmosphere.

The graph is the result of an experiment that runs (simplified) models for a thousand years into the future. The major climate models are generally too computationally expensive to run for such a long simulation, so these experiments use simpler models, known as EMICs (Earth system Models of Intermediate Complexity).

The four curves in this figure correspond to four “Representative Concentration Pathways”, which map out four ways in which the composition of the atmosphere is likely to change in the future. These four RCPs were picked to capture four possible futures: two in which there is little to no coordinated action on reducing global emissions (worst case – RCP8.5 and best case – RCP6) and two in which there is serious global action on climate change (worst case – RCP4.5 and best case – RCP2.6). A simple way to think about them is as follows. RCP8.5 represents ‘business as usual’ – strong economic development for the rest of this century, driven primarily by dependence on fossil fuels. RCP6 represents a world with no global coordinated climate policy, but where lots of localized clean energy initiatives do manage to stabilize emissions by the latter half of the century. RCP4.5 represents a world that implements strong limits on fossil fuel emissions, such that greenhouse gas emissions peak by mid-century and then start to fall. RCP2.6 is a world in which emissions peak in the next few years, and then fall dramatically, so that the world becomes carbon neutral by about mid-century.

Note that in RCP2.6 the temperature does fall, after reaching a peak just below 2°C of warming over pre-industrial levels. That’s because RCP2.6 is a scenario in which concentrations of greenhouse gases in the atmosphere start to fall before the end of the century. This is only possible if we reduce global emissions so fast that we achieve carbon neutrality soon after mid-century, and then go carbon negative. By carbon negative, I mean that globally, each year, we remove more CO2 from the atmosphere than we add. Whether this is possible is an interesting question. But even if it is, the model results show there is no time within the next thousand years when it is anywhere near as cool as it is today.

(4) Most of the heat is going into the oceans

The oceans have a huge thermal mass compared to the atmosphere and land surface. They act as the planet’s heat storage and transportation system, as the ocean currents redistribute the heat. This is important because if we look at the global surface temperature as an indication of warming, we’re only getting some of the picture. The oceans act as a huge storage heater, and will continue to warm up the lower atmosphere (no matter what changes we make to the atmosphere in the future).

(Box 3.1 Fig 1) Plot of energy accumulation in ZJ (1 ZJ = 10²¹ J) within distinct components of Earth’s climate system relative to 1971 and from 1971–2010 unless otherwise indicated. Ocean warming (heat content change) dominates, with the upper ocean (light blue, above 700 m) contributing more than the deep ocean (dark blue, below 700 m; including below 2000 m estimates starting from 1992). Ice melt (light grey; for glaciers and ice caps, Greenland and Antarctic ice sheet estimates starting from 1992, and Arctic sea ice estimate from 1979–2008); continental (land) warming (orange); and atmospheric warming (purple; estimate starting from 1979) make smaller contributions. Uncertainty in the ocean estimate also dominates the total uncertainty (dot-dashed lines about the error from all five components at 90% confidence intervals).

Note the relationship between this figure (which shows where the heat goes) and the figure I showed above that shows the change in cumulative energy budget from different sources. Both graphs show zettajoules accumulating over about the same period (1970-2011). But the first graph has a cumulative total just short of 800 ZJ by the end of the period, while this one shows the earth storing “only” about 300 ZJ of this. Where did the remaining energy go? Because the earth’s temperature rose during this period, it also radiated more and more energy back into space. When greenhouse gases trap heat, the earth’s temperature keeps rising until outgoing energy and incoming energy are in balance again.
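
To get a feel for these magnitudes, here is a rough conversion of those cumulative totals into average rates over the period (my own back-of-envelope sketch, using approximate values read off the two figures):

```python
# Rough conversion of cumulative energy (ZJ) into average rates (illustrative only).
ZJ = 1e21
seconds = 40 * 3.156e7            # ~40 years, in seconds
earth_area = 5.1e14               # Earth's surface area in m^2

forcing_in = 800 * ZJ             # approx. cumulative energy inflow (black line, first figure)
stored     = 300 * ZJ             # approx. energy accumulated in the climate system (this figure)
radiated   = forcing_in - stored  # the rest was radiated back to space as the planet warmed

print(f"Stored:   {stored / (seconds * earth_area):.2f} W/m^2 averaged over the period")
print(f"Radiated: {radiated / (seconds * earth_area):.2f} W/m^2 averaged over the period")
```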

(5) Current rates of ocean acidification are unprecedented.

The IPCC report says ”The pH of seawater has decreased by 0.1 since the beginning of the industrial era, corresponding to a 26% increase in hydrogen ion concentration. … It is virtually certain that the increased storage of carbon by the ocean will increase acidification in the future, continuing the observed trends of the past decades. … Estimates of future atmospheric and oceanic carbon dioxide concentrations indicate that, by the end of this century, the average surface ocean pH could be lower than it has been for more than 50 million years”.
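
That 26% figure follows directly from the definition of pH as the negative base-10 logarithm of the hydrogen ion concentration; a one-line check:

```python
# pH is -log10 of the hydrogen ion concentration, so a drop of 0.1 pH units
# corresponds to a factor of 10**0.1 increase in [H+].
delta_pH = 0.1
increase = 10 ** delta_pH - 1
print(f"A pH drop of {delta_pH} means [H+] rises by {increase:.0%}")  # ~26%
```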

(Fig SPM.7c) CMIP5 multi-model simulated time series from 1950 to 2100 for global mean ocean surface pH. Time series of projections and a measure of uncertainty (shading) are shown for scenarios RCP2.6 (blue) and RCP8.5 (red). Black (grey shading) is the modelled historical evolution using historical reconstructed forcings. [The numbers indicate the number of models used in each ensemble.]

Ocean acidification has sometimes been ignored in discussions about climate change, but it is a much simpler process than warming, and much easier to calculate (notice that the uncertainty range on the graph above is much smaller than on most of the other graphs). This graph shows the projected acidification in the best and worst case scenarios (RCP2.6 and RCP8.5). Recall that RCP8.5 is the “business as usual” future.

Note that this doesn’t mean the ocean will become acid. The ocean has always been slightly alkaline – well above the neutral value of pH 7. So “acidification” refers to a drop in pH, rather than a drop below pH 7. As this continues, the ocean becomes steadily less alkaline. Unfortunately, as the pH drops, the ocean stops being supersaturated with calcium carbonate, and anything made of calcium carbonate starts dissolving, so corals and shellfish can no longer form their shells and skeletons. If you kill these off, the entire ocean foodchain is affected. Here’s what the IPCC report says: “Surface waters are projected to become seasonally corrosive to aragonite in parts of the Arctic and in some coastal upwelling systems within a decade, and in parts of the Southern Ocean within 1–3 decades in most scenarios. Aragonite, a less stable form of calcium carbonate, undersaturation becomes widespread in these regions at atmospheric CO2 levels of 500–600 ppm”.

(6) We have to choose which future we want very soon.

In the previous IPCC reports, projections of future climate change were based on a set of scenarios that mapped out different ways in which human society might develop over the rest of this century, taking account of likely changes in population, economic development and technological innovation. However, none of the old scenarios took into account the impact of strong global efforts at climate mitigation. In other words, they all represented futures in which we don’t take serious action on climate change. For this report, the new “RCPs” have been chosen to allow us to explore the choice we face.

This chart sums it up nicely. If we do nothing about climate change, we’re choosing a path that will look most like RCP8.5. Recall that this is the one where emissions keep rising just as they have done throughout the 20th century. On the other hand, if we get serious about curbing emissions, we’ll end up in a future that’s probably somewhere between RCP2.6 and RCP4.5 (the two blue lines). All of these futures give us a much warmer planet. All of these futures will involve many challenges as we adapt to life on a warmer planet. But by curbing emissions soon, we can minimize this future warming.

(Fig 12.5) Time series of global annual mean surface air temperature anomalies (relative to 1986–2005) from CMIP5 concentration-driven experiments. Projections are shown for each RCP for the multi model mean (solid lines) and the 5–95% range (±1.64 standard deviation) across the distribution of individual models (shading). Discontinuities at 2100 are due to different numbers of models performing the extension runs beyond the 21st century and have no physical meaning. Only one ensemble member is used from each model and numbers in the figure indicate the number of different models contributing to the different time periods. No ranges are given for the RCP6.0 projections beyond 2100 as only two models are available.

Note also that the uncertainty range (the shaded region) is much bigger for RCP8.5 than it is for the other scenarios. The more the climate changes beyond what we’ve experienced in the recent past, the harder it is to predict what will happen. We tend to use the spread across different models as an indication of uncertainty (the coloured numbers show how many different models participated in each experiment). But there’s also the possibility of “unknown unknowns” – surprises that aren’t in the models – so the uncertainty range is likely to be even bigger than this graph shows.

(7) To stay below 2°C of warming, the world must become carbon negative.

Only one of the four future scenarios (RCP2.6) shows us staying below the UN’s commitment to no more than 2°C of warming. In RCP2.6, emissions peak soon (within the next decade or so), and then drop fast, under a stronger emissions reduction policy than anyone has ever proposed in international negotiations to date. For example, the post-Kyoto negotiations have looked at targets in the region of 80% reductions in emissions over, say, a 50 year period. In contrast, the chart below shows something far more ambitious: we need more than 100% emissions reductions. We need to become carbon negative:

(Figure 12.46) a) CO2 emissions for the RCP2.6 scenario (black) and three illustrative modified emission pathways leading to the same warming, b) global temperature change relative to preindustrial for the pathways shown in panel (a).

The graph on the left shows four possible CO2 emissions paths that would all deliver the RCP2.6 scenario, while the graph on the right shows the resulting temperature change for these four. They all give similar results for temperature change, but differ in how we go about reducing emissions. For example, the black curve shows CO2 emissions peaking by 2020 at a level barely above today’s, and then dropping steadily until emissions are below zero by about 2070. Two other curves show what happens if emissions peak higher and later: the eventual reduction has to happen much more steeply. The blue dashed curve offers an implausible scenario, so consider it a thought experiment: if we held emissions constant at today’s level, we have exactly 30 years left before we would have to instantly reduce emissions to zero forever.

Notice where the zero point is on the scale on that left-hand graph. Ignoring the unrealistic blue dashed curve, all of these pathways require the world to go net carbon negative sometime soon after mid-century. None of the emissions targets currently being discussed by any government anywhere in the world are sufficient to achieve this. We should be talking about how to become carbon negative.

One further detail. The graph above shows the temperature response staying well under 2°C for all four curves, although the uncertainty band reaches up to 2°C. But note that this analysis deals only with CO2. The other greenhouse gases have to be accounted for too, and together they push the temperature change right up to the 2°C threshold. There’s no margin for error.

(8) To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Perhaps the most profound advance since the previous IPCC report is a characterization of our global carbon budget. This is based on a finding that has emerged strongly from a number of studies in the last few years: the expected temperature change has a simple linear relationship with cumulative CO2 emissions since the beginning of the industrial era:

(Figure SPM.10) Global mean surface temperature increase as a function of cumulative total global CO2 emissions from various lines of evidence. Multi-model results from a hierarchy of climate-carbon cycle models for each RCP until 2100 are shown with coloured lines and decadal means (dots). Some decadal means are indicated for clarity (e.g., 2050 indicating the decade 2041−2050). Model results over the historical period (1860–2010) are indicated in black. The coloured plume illustrates the multi-model spread over the four RCP scenarios and fades with the decreasing number of available models in RCP8.5. The multi-model mean and range simulated by CMIP5 models, forced by a CO2 increase of 1% per year (1% per year CO2 simulations), is given by the thin black line and grey area. For a specific amount of cumulative CO2 emissions, the 1% per year CO2 simulations exhibit lower warming than those driven by RCPs, which include additional non-CO2 drivers. All values are given relative to the 1861−1880 base period. Decadal averages are connected by straight lines.

The chart is a little hard to follow, but the main idea should be clear: whichever experiment we carry out, the results tend to lie on a straight line on this graph. You do get a slightly different slope in one experiment, the “1% per year” experiment, in which only CO2 rises and none of the other forcings are included. All the more realistic scenarios lie in the orange band, and all have about the same slope.

This linear relationship is a useful insight, because it means that for any target ceiling for temperature rise (e.g. the UN’s commitment to not allow warming to rise more than 2°C above pre-industrial levels), we can easily determine a cumulative emissions budget that corresponds to that temperature. So that brings us to the most important paragraph in the entire report, which occurs towards the end of the summary for policymakers:

Limiting the warming caused by anthropogenic CO2 emissions alone with a probability of >33%, >50%, and >66% to less than 2°C since the period 1861–1880, will require cumulative CO2 emissions from all anthropogenic sources to stay between 0 and about 1560 GtC, 0 and about 1210 GtC, and 0 and about 1000 GtC since that period respectively. These upper amounts are reduced to about 880 GtC, 840 GtC, and 800 GtC respectively, when accounting for non-CO2 forcings as in RCP2.6. An amount of 531 [446 to 616] GtC, was already emitted by 2011.

Unfortunately, this paragraph is a little hard to follow, perhaps because there was a major battle over the exact wording of it in the final few hours of inter-governmental review of the “Summary for Policymakers”. Several oil states objected to any language that put a fixed limit on our total carbon budget. The compromise was to give several different targets for different levels of risk. Let’s unpick them. First, notice that the targets in the first sentence are based on CO2 emissions alone; the targets in the second sentence also take into account other greenhouse gases and other earth system feedbacks (e.g. release of methane from melting permafrost), and so are much lower. It’s these lower targets that really matter:

  • To give us a one-third (33%) chance of staying below 2°C of warming over pre-industrial levels, we cannot ever emit more than 880 gigatonnes of carbon.
  • To give us a 50% chance, we cannot ever emit more than 840 gigatonnes of carbon.
  • To give us a 66% chance, we cannot ever emit more than 800 gigatonnes of carbon.
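
Putting these numbers together with the 531 GtC already emitted by 2011 (from the SPM paragraph quoted above), a quick calculation makes the remaining budget explicit:

```python
# Remaining carbon budget implied by the SPM figures quoted above (GtC = gigatonnes of carbon).
budgets_gtc = {"33% chance": 880, "50% chance": 840, "66% chance": 800}  # incl. non-CO2 forcings
already_emitted_gtc = 531   # central estimate of emissions up to 2011, from the SPM

for chance, total in budgets_gtc.items():
    print(f"{chance} of staying below 2°C: {total - already_emitted_gtc} GtC left to emit")
```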

Since the beginning of industrialization, we have already emitted a little more than 500 gigatonnes. So our remaining budget is somewhere in the region of 270–350 gigatonnes of carbon. Existing known fossil fuel reserves are enough to release at least 1000 gigatonnes. New discoveries and unconventional sources will likely more than double this. That leads to one inescapable conclusion:

Most of the remaining fossil fuel reserves must stay buried in the ground.

We’ve never done that before. There is no political or economic system anywhere in the world currently that can persuade an energy company to leave a valuable fossil fuel resource untapped. There is no government in the world that has demonstrated the ability to forgo the economic wealth from natural resource extraction, for the good of the planet as a whole. We’re lacking both the political will and the political institutions to achieve this. Finding a way to achieve this presents us with a challenge far bigger than we ever imagined.

Update (10 Oct 2013): An earlier version of this post omitted the phrase “To stay below 2°C of warming” from the last key point.

Yesterday I talked about three re-inforcing feedback loops in the earth system, each of which has the potential to accelerate a warming trend once it has started. I also suggested there are other similar feedback loops, some of which are known, and others perhaps yet to be discovered. For example, a paper published last month suggested a new feedback loop, to do with ocean acidification. In a nutshell, as the ocean absorbs more CO2, it becomes more acidic, which inhibits the growth of phytoplankton. These plankton are a major source of sulphur compounds that end up as aerosols in the atmosphere, which seed the formation of clouds. Fewer clouds mean lower albedo, which means more warming. Whether this feedback loop is important remains to be seen, but we do know that clouds have an important role to play in climate change.

I haven’t included clouds in my diagrams yet, because clouds deserve special treatment, in part because they are involved in two major feedback loops that have opposite effects:

Two opposing cloud feedback loops. An increase in temperature leads to an increase in moisture in the atmosphere. This leads to two new loops…

As the earth warms, we get more moisture in the atmosphere (simply because there is more evaporation from the surface, and warmer air can hold more moisture). Water vapour is a powerful greenhouse gas, so the more there is in the atmosphere, the more warming we get (greenhouse gases reduce the outgoing radiation). So this sets up a reinforcing feedback loop: more moisture causes more warming causes more moisture.

However, if there is more moisture in the atmosphere, there’s also likely to be more cloud formation. Clouds raise the albedo of the planet and reflect sunlight back into space before it can reach the surface. Hence, there is also a balancing loop: by blocking more sunlight, extra clouds will help to put the brakes on any warming. Note that I phrased this carefully: this balancing loop can slow a warming trend, but it does not create a cooling trend. Balancing loops tend to stop a change from occurring, but they do not create a change in the opposite direction. For example, if enough clouds form to completely counteract the warming, they also remove the mechanism (i.e. warming!) that causes growth in cloud cover in the first place. If we did end up with so many extra clouds that it cooled the planet, the cooling would then remove the extra clouds, so we’d be back where we started. In fact, this loop is nowhere near that strong anyway. [Note that under some circumstances, balancing loops can lead to oscillations, rather than gently converging on an equilibrium point, and the first wave of a very slow oscillation might be mistaken for a cooling trend. We have to be careful with our assumptions and timescales here!].

So now we have two new loops that set up opposite effects – one tends to accelerate warming, and the other tends to decelerate it. You can experience both these effects directly: cloudy days tend to be cooler than sunny days, because the clouds reflect away some of the sunlight. But cloudy nights tend to be warmer than clear nights because the water vapour traps more of the escaping heat from the surface. In the daytime, both effects are operating, and the cooling effect tends to dominate. During the night, there is no sunlight to block, so only the warming effect works.

If we average out the effects of these loops over many days, months, or years, which of the effects dominate? (i.e. which loop is stronger?) Does the extra moisture mean more warming or less warming? This is clearly an area where building a computer model and experimenting with it might help, as we need to quantify the effects to understand them better. We can build good computer models of how clouds form at the small scale, by simulating the interaction of dust and water vapour. But running such a model for the whole planet is not feasible with today’s computers.

To make things a little more complicated, these two feedback loops interact with other things. For example, another likely feedback loop comes from a change in the vertical temperature profile of the atmosphere. Current models indicate that, at least in the tropics, the upper atmosphere will warm faster than the surface (in technical terms, it will reduce the lapse rate – the rate at which temperature drops as you climb higher). This then increases the outgoing radiation, because it’s from the upper atmosphere that the earth loses its heat to space. This creates another (small) balancing feedback:

The lapse rate feedback – if the upper troposphere warms faster than the surface (i.e. a lower lapse rate), this increases outgoing radiation from the planet.

Note that this lapse rate feedback operates in the same way as the main energy balance loop – the two ‘-’ links have the same effect as the existing ‘+’ link from temperature to outgoing infra-red radiation. In other words this new loop just strengthens the effect of the existing loop – for convenience we could just fold both paths into the one link.

However, water vapour feedback can interact with this new feedback loop, because the warmer upper atmosphere will hold more water vapour in exactly the place where it’s most effective as a greenhouse gas. Not only that, but clouds themselves can change the vertical temperature profile, depending on their height. I said it was complicated!

The difficulty of simulating all these different interactions of clouds accurately leads to one of the biggest uncertainties in climate science. In 1979, the Charney report calculated that all these cloud and water vapour feedback loops roughly cancel out, but pointed out that there was a large uncertainty bound on this estimate. More than thirty years later, we understand much more about how cloud formation and distribution are altered in a warming world, but our margins of error for calculating cloud effects have barely reduced, because of the difficulty of simulating them on a global scale. Our best guess is now that the (reinforcing) water vapour feedback loop is slightly stronger than the (balancing) cloud albedo and lapse rate loops. So the net effect of these three loops is an amplifying effect on the warming.
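
For readers who want to see how statements like this get quantified, here is the standard feedback-factor arithmetic in miniature (the numbers below are purely illustrative, chosen only to match the qualitative story above; they are not values from the report): if the warming before feedbacks is ΔT₀ and the feedback factors sum to f, the warming after feedbacks is ΔT₀ / (1 − f).

```python
# Standard feedback-factor arithmetic (illustrative numbers, not IPCC values).
def amplified_warming(delta_T0, feedback_factors):
    """Warming after feedbacks: delta_T0 / (1 - sum of feedback factors)."""
    f = sum(feedback_factors)
    assert f < 1, "combined feedback factor must stay below 1 for a stable climate"
    return delta_T0 / (1 - f)

no_feedback_warming = 1.2   # °C of direct warming for doubled CO2, a commonly quoted ballpark
feedbacks = {"water vapour": 0.5, "lapse rate": -0.15, "cloud albedo": -0.1}   # illustrative
print(f"{amplified_warming(no_feedback_warming, feedbacks.values()):.1f} °C after feedbacks")
```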

At the start of this series, I argued that Climate Science is inherently a Systems Discipline. To develop that idea, I described two important systems as feedback loops: the earth’s temperature equilibrium loop and economic growth and energy consumption, and then we put these two systems together.

The basic climate system now looks like this (leaving out, for now, the dynamics that drive economic development and energy use):

The basic planetary energy balancing loop, with the burning of fossil fuels forcing the temperature to change

Recall that the balancing loop (marked with a ‘B’) ensures that for each change to the input forcings (in this case greenhouse gases and aerosols in the atmosphere), the earth system will settle down to a new equilibrium point: a temperature at which the incoming and outgoing energy flows are balanced again. Each time we increase the concentration of greenhouse gases in the atmosphere, we can expect the earth to warm, slowly, until it reaches this new equilibrium. The economy-energy system (not shown above) is ensuring that we keep on adding more greenhouse gases, so we’re continually pushing the system further and further out of balance. That means we’re continually increasing the eventual temperature rise that the earth will experience before it reaches a new equilibrium.

Meanwhile, the aerosols provide a slight cooling effect, but they wash out of the atmosphere fairly quickly, so their overall concentration isn’t rising much. Carbon dioxide does not wash out quickly – it can remain in the atmosphere for thousands of years. Hence the warming effect dominates.

Now, if that was the whole picture, climate change would be very predictable, using basic thermodynamic principles. Unfortunately, there are other feedback loops that we haven’t considered yet. Here’s one:

The basic climate system with the ice albedo feedback

As the temperature rises, the ice sheets start to melt and shrink. These include the Arctic sea ice, glaciers on Greenland and the Antarctic, and mountain glaciers across the world. When sea ice melts, it leaves more sea exposed, which is much darker than the ice. When land ice melts, it uncovers rocks, soils, and (eventually) plants, all of which are also darker than ice. Because of this, loss of ice leads to a lower albedo for the planet. A lower albedo means less of the incoming sunlight is reflected straight back into space, so more reaches the surface. In other words, less albedo means more incoming solar radiation. And, as we already know, this leads to more energy retained and more warming. In other words, it is a re-inforcing feedback loop.

As a quick check, we can use the rule of thumb that reinforcing loops have an even number of ‘-’ links. Trace the path of this loop to check:

Ice albedo feedback loop on its own
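
That sign-counting check is easy to mechanize. Here is a tiny sketch (the sign labels are my own reading of the loop, not copied from the diagram):

```python
# Classify a causal loop by the parity of its '-' links:
# an even number of '-' links => reinforcing (R), an odd number => balancing (B).
def classify_loop(link_signs):
    negatives = sum(1 for sign in link_signs if sign == "-")
    return "reinforcing" if negatives % 2 == 0 else "balancing"

# One plausible reading of the ice albedo loop:
# temperature -(-)-> ice cover -(+)-> albedo -(-)-> absorbed sunlight -(+)-> temperature
print(classify_loop(["-", "+", "-", "+"]))   # reinforcing (two '-' links)

# A loop with a single '-' link, like a simple balancing loop:
print(classify_loop(["+", "+", "-"]))        # balancing
```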

Because this is a reinforcing loop, it can modify the behaviour of the basic energy balancing loop. If a warming process starts, this loop can accelerate it, and cause more warming than we’d expect from just the main balancing loop. In extreme cases, a reinforcing loop can completely destabilize a system that is normally dominated by balancing loops. However, all reinforcing loops also must have limits (remember: nothing can grow forever). In this case, there is clearly a limit once all the ice sheets on the planet have melted. The loop can no longer function at that point.

Here’s another reinforcing loop:

Climate system with permafrost feedback

In this loop, as the temperature rises, it melts the permafrost across Northern Canada and Russia. This releases the methane from the frozen soils. Methane is a greenhouse gas, so this loop also accelerates the warming. Again, it’s a re-inforcing loop, and again, there’s a limit: the loop must stop once all the permafrost has melted.

Here’s another:

Climate system with carbon sinks feedback

This loop occurs because the more greenhouse gases we put into the atmosphere, the more work the carbon sinks have to do. Carbon sinks include the ocean and soils – they slowly remove carbon dioxide from the atmosphere. But the more carbon they have to absorb, the less effective they are at taking more. There’s an additional effect for the ocean, because a warmer ocean is less able to absorb CO2. Some model studies even suggest that after a few degrees of warming, the ocean might stop being a carbon sink and start being a source.

So, put that all together and we have three re-inforcing loops working to destabilize the main energy balance loop. The main loop tends to limit the amount of warming we might expect, and the reinforcing loops all tend to increase it:

All three reinforcing loops working together

Remember, all three re-inforcing loops might operate at once. More likely, each will kick in at different times as the planet warms. Predicting when that might occur is hard, as is calculating the likely size of the effect. We can calculate absolute limits to each of these reinforcing loops, but there are likely to be other reasons why the loop stops working before reaching these absolute limits.

One of the goals of climate modelling is to capture these kinds of feedbacks in a computational model, to attempt to quantify the effects, so that we can understand them better. We can use both basic physics and empirical observations to put numbers on each of the relationships in the diagram, and we can experiment with the model to test how sensitive it is to different kinds of perturbation, especially in areas where it’s hard to be sure about the numbers.

However, there’s also the possibility that we missed some important feedback loops. In the model above, we have missed an important one, to do with clouds. We’ll meet that in the next post…

The story so far: First, I argued that Climate Science is inherently a Systems Discipline. To develop that idea, I described two important systems as feedback loops: the earth’s temperature equilibrium loop and economic growth and energy consumption. Now it’s time to put those two systems together…

First, we’ll need to capture the unintended consequences of burning fossil fuels for energy, in the form of two distinct kinds of pollution:

Effect of two different kinds of pollutant

Aerosols are tiny particles (smoke, dust, sulphates, etc.) produced when dirtier fossil fuels are burnt, particularly from the sulphur dioxide they release. Coal is the worst for producing these, but oil produces them as well, especially from poorly tuned gasoline and diesel engines. The effect of aerosols is easy to understand, because we can see them. They hang around in the air and block out the light. They contribute to the clouds of smog that hang over our cities in the summer, and they react with water vapour to create sulphuric acid, leading to acid rain. It’s possible to greatly reduce the amount of aerosols produced when we burn fossil fuels, by processing the fuels first to remove the impurities that otherwise would end up as aerosols. For example, low-sulphur coal is much “cleaner” than regular coal, because it produces very few aerosols when you burn it. That’s good for our air quality.

Greenhouse gases include carbon dioxide, methane, water vapour, and a number of other gases such as Chlorofluorocarbons (CFCs). By volume, CO2 is by far the most common byproduct from fossil fuels, although some of the rarer gases actually have a larger “greenhouse effect”. Some greenhouse gases are “short-lived”, because they are chemically unstable, and break down fairly rapidly (for example, carbon monoxide). Others are “long-lived” because they are very stable. For example, carbon dioxide stays in the atmosphere for thousands of years. Unfortunately, we can’t remove these compounds before we burn fossil fuels, because fossil fuels are primarily made of carbon, and it is the carbon that makes them useful as fuels. So, unlike sulphur, you can’t “clean up” the fuel first. When the coal industry talks about “clean coal” these days, they don’t mean the coal itself is clean; they mean they’re working on technology to capture the CO2 after it is produced, but before it disappears up the chimney. Whether this can work cost-effectively on a large scale is an open question.

These two pollutants have opposite effects on the climate system, because each blocks a different part of the spectrum. Aerosols block visible light, and hence reduce the incoming sunlight (like adding a sunshade). Greenhouse gases block infrared radiation, and hence reduce the outgoing radiation from the planet (like adding an extra blanket):

The effect of these two different kinds of pollutant

Now when we look at these two effects in the context of all the feedback loops we’ve explored so far, we get the following:

The energy system interacting with the basic climate system

So aerosols reduce the net radiative forcing (causing cooling), and greenhouse gases increase it (causing warming). The earth’s energy balance loop means that each time the concentrations of aerosols and greenhouse gases in the atmosphere change, the earth will change its temperature until all the forces balance out again. Unfortunately, the reinforcing loop that drives energy consumption means that the concentrations of these pollutants are continually changing, and they’re changing at a rate that’s faster than the earth’s balancing loop can cope with. We already noted that the earth’s balancing loop can take several decades to find a new equilibrium. If we were able to “turn off the tap”, so that we’re not adding any more of these pollutants (but we leave the ones that are already in the atmosphere), we’d find the earth’s temperature continues to change.

Which one is winning? Satellites allow us to measure the different effects fairly accurately, and observations from the pre-satellite era allow us to extrapolate backwards, so we can estimate the total effect of each from pre-industrial times to the present. Here’s a chart summarizing the effects:

Total Radiative Forcing from different sources for the period 1750 to 2005. From the IPCC Fourth Assessment Report (AR4).

Note that aerosols have two different effects. The direct effect is the one we described in the system diagram above – it blocks incoming sunlight. The indirect effect is because aerosols also interact with clouds. We’ll explore the indirect effect in a future post. However we look at it, the greenhouse gases are winning, by a large margin. That should mean the planet is warming. And it is:

Land-surface temperatures from 1750-2005, from the Berkeley Earth Surface Temperature project.

Note the steep rise from the 1980s onwards, and compare it to the exponential curve of greenhouse gas emissions we saw earlier. More interestingly, note the slight fall in the immediate postwar period (1940s to 1970s). One hypothesis for this is that during this period the sulphate aerosols were winning. There’s some uncertainty about the exact size of the aerosol effect during this period (note the size of the ‘uncertainty whiskers’ on the bar graph above). However, it’s true that concern about acid rain led to legislation and international agreements in the 1980s to reduce sulphate emissions from fossil fuels.

The fact that sulphate aerosols have a cooling effect that can counteract the warming effect from greenhouse gases leads to an interesting proposal. If we can’t reduce the amount of greenhouse gases we emit, maybe instead we can increase the amount of sulphate aerosols. This has been studied as a serious geo-engineering proposal, and could be done quite cheaply, although we’d have to keep pumping up more of the stuff, as it washes out of the atmosphere fairly quickly. Alan Robock has identified 20 reasons why this is a bad idea. But really, we only need to know one reason why it’s a silly idea, and that comes directly from our analysis of the feedback loops in the economic growth and energy consumption system. As long as that loop is producing an exponential growth in greenhouse gas emissions, any attempt to counter-act them would also have to grow exponentially to keep up. The dimming effect from sulphate aerosols will affect many things on earth, including crop production. Committing ourselves to a path of exponential growth in sulphate aerosols in the stratosphere is quite clearly ridiculous. So if we ever do try this, it can only ever be a short-term solution, perhaps to buy us a few years to get the growth in greenhouse gases under control.

One other comment about the system diagrams we’ve created so far. Energy is mentioned twice in the diagrams: once in the loop describing economic growth, and once in the earth’s energy balance loop. We can compare these two. In the top loop, the current worldwide energy consumption by humans is about 16 terawatts. In the bottom loop, the current amount of energy being added to the earth due to greenhouse gases is about 300 terawatts. So the earth is currently gaining about 18 times the amount of energy that the entire human race is actually using. (Here’s how this is calculated)
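
The comparison is simple to reproduce (my own rough numbers, matching the figures mentioned above):

```python
# Comparing human energy use with the planet's current rate of energy gain.
human_power = 16e12      # ~16 TW: worldwide energy consumption by humans
planet_gain = 300e12     # ~300 TW: energy currently being added to the earth system
earth_area  = 5.1e14     # m^2: surface area of the earth

print(f"Imbalance per unit area: {planet_gain / earth_area:.2f} W/m^2")      # ~0.6 W/m^2
print(f"Planet gains ~{planet_gain / human_power:.1f}x what humanity uses")  # ~18.8x
```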

Finally, note that although the diagram contains four different feedback loops, none of these are what climate scientists mean when they talk about feedbacks in the climate system. To understand why, we have to make a distinction between the basic operation of the system I’ve described so far (which drives global warming), and additional chains of cause-and-effect that respond to changes in the basic system. If you start warming the planet, using the system we’ve described so far, there are many other consequences. Some of those consequences can come back to bite you as reinforcing feedbacks. We’ll start looking at these in the next post.

In part 1, I described the central equilibrium loop that controls the earth’s temperature. Now it’s time to look at other loops that interact with the central loop, and tend to push it out of balance. The most important of these is the use of fossil fuels that produce greenhouse gases, which change the composition of the atmosphere. Let’s first have a look at energy consumption on its own. Here’s the basic loop:

Core economic growth and energy consumption loop

This reinforcing loop has driven the growth in the economy and energy use since the beginning of the industrial era. As we might expect from a reinforcing loop, this dynamic creates an exponential curve – in both the size of the global economy and the consumption of fossil fuels. For example, here’s the curve for carbon produced per year from fossil fuels (data from CDIAC):

The exponential rise in global carbon emissions per year.

For the first century, the curve looks flat, but it’s not zero. In 1751 the world was producing about 3 million tonnes of carbon per year, and this had risen to about 50 million tonnes per year by 1851. The growth really gets going in the postwar period. There are dips for the global recessions of the 1930s and 1980s, but these barely dent the overall rise. (For a slightly more detailed exploration of the dynamics that drive this exponential growth, see the Postcarbon Institute’s 300 years of fossil fuels in 300 seconds).
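
Just to illustrate what “exponential” means here, a couple of lines of arithmetic show the average growth rate implied by those two early data points (my own calculation on the CDIAC figures quoted above, not something taken from the chart):

```python
# Average growth rate implied by the two CDIAC data points quoted above.
import math

c_1751, c_1851 = 3e6, 50e6     # tonnes of carbon per year (3 and 50 million tonnes)
years = 1851 - 1751

growth_rate = (c_1851 / c_1751) ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + growth_rate)
print(f"~{growth_rate:.1%} per year, doubling roughly every {doubling_time:.0f} years")
```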

Exponential growth cannot go on forever, so there must be a balancing loop somewhere that (eventually) brings this growth to a halt. The world’s supply of fossil fuels is finite, so if we keep climbing the exponential curve, we must eventually run out. But long before that happens, prices start to rise because of scarcity. So the actual balancing loop looks like this:

I call the left hand loop the “peak oil” balancing loop for economic growth

In this new loop, each link inverts the direction of change: as consumption of fossil fuels rises, the remaining reserves fall. As reserves fall, the price tends to go up. As the price goes up, the rate of consumption falls. It’s a balancing loop because (once it starts operating) each rise in consumption should eventually cause a price rise that then damps down further consumption.

If these two loops operate on their own, we might expect an initial exponential curve in the use of fossil fuels, until there are signs that reserves are being depleted, and then a gradual phasing out of fossil fuels, producing the classic bell-shaped curve of Hubbert’s peak theory. In the process, economic development would also grind to a halt, unless we manage to decouple the first loop fairly quickly, by switching to renewable sources of energy. In theory, the rising price of fossil fuels should cause this switch to happen gracefully, once prices start rising – a properly functioning economic system should guarantee this. Unfortunately, the switch is not easy, because we’ve built a massive energy infrastructure that is based exclusively on fossil fuels, and this locks us into a dependency on fossil fuels. This lock-in, along with the exponential growth in demand, means that we cannot just switch to alternative energy as the prices rise – a more likely outcome is an overshoot, where the rate of consumption is stuck in an upwards trend, causing prices to shoot up, while the balancing loop is unable to do its stuff.

However, there’s another complication. Conventional sources aren’t the only way to get fossil fuels. As the price rises, other sources become viable. The classic example is the Alberta oil sands. Twenty years ago, nobody could extract these because it was too expensive, compared to the price of oil. Today, the price of oil is high enough that exploiting the oil sands becomes profitable. So there’s another loop:

I call this new loop the “tar sands” balancing loop

This new loop balances the rising prices from the middle loop when reserves start to fall. So now we have a system that could keep the economy functioning well beyond the point at which we exhaust conventional sources of fossil fuels. At each new price point, there’s a stimulus to start tapping new sources of these fuels, and as these new sources come on stream, they allow the global economy to keep growing, and the consumption of fossil fuels to keep rising. To someone who just studies economics of energy, everything looks okay for the foreseeable future (except that cheap oil is never coming back). To someone who predicts doom because of peak oil, it complicates the picture (except that the resource depletion predictions were basically correct). But to someone who studies climate, it means the challenge just got harder…

In the next post, I’ll link this system with the basic climate system.

I wrote earlier this week that we should incorporate more of the key ideas from systems thinking into discussions about climate change and sustainability. Here’s an example: I think it’s very helpful to think about the climate as a set of interacting feedback loops. If you understand how those feedback loops work, you’ve captured the main reasons why climate change is such a massive challenge for humanity. So, this is the first in a series of posts where I attempt to substantiate my argument. I’ll describe the global climate in terms of a set of balancing and reinforcing feedback loops. (Note: This is a very elementary introduction. If you prefer a detailed mathematical treatment of feedbacks in the climate system, try this paper by Gerard Roe)

Before we start, we need some basic concepts. The first is the idea of a feedback loop. We’re used to thinking in terms of linear sequences of cause and effect: A causes B, which causes C, and so on. However, our interactions with the world are rarely like this. More often, change tends to feed back on itself. For example, we identify a problem that needs solving, we take some action to solve it, and that action ends up changing our perception of the problem. The feedback usually comes in one of two forms. The first is a balancing feedback: The more you try to change something, the more the world pushes back and makes it harder. Take dieting for example: if we manage to lose a few pounds, the sense of achievement can make us complacent, and then we put the weight all back on again. The second form is a reinforcing feedback. This is where success feeds on itself. For example, perhaps we try a new exercise regime, and it makes us feel energized, so we end up exercising even more, and so on.

In physics and engineering, these are usually called ‘positive’ and ‘negative’ feedback loops. I prefer to call them ‘reinforcing’ and ‘balancing’ loops, because it’s a better description of what they do. People tend to think ‘positive’ means good and ‘negative’ means bad. In fact, both types of loop can be good or bad, depending on what you think the system ought to be doing. A reinforcing loop is good when you want to achieve a change (e.g. your protest movement goes viral), but is certainly not good when it’s driving a change you don’t want (a forest fire spreading, for example). Similarly, a balancing loop is good when it keeps a system stable that you depend on (prices in the marketplace, perhaps), but is bad when it defeats your attempts to bring about change (as in the dieting example above).

It helps to draw pictures. Here’s an example of how both types of loop affect a tech company trying to sell a new product (say, the iPhone):


The action of reinforcing and balancing feedback loops in selling iPhones

You can read the arrows labelled “+” as “more of A tends to cause more of B than there otherwise would have been, while less of A tends to cause less of B than there otherwise would have been”. The arrows labelled “-” mean “more of A tends to cause less of B, and less of A tends to cause more of B”. [Note: there are some subtleties to this interpretation, but we can ignore them for now.]

On the left, we have a reinforcing loop (labelled with an ‘R’): the effect of word of mouth. The more iPhones we sell, the more people there are to spread the word, which in turn means more get sold. This tends to create an exponential growth in sales figures. However, this cannot go on forever. Sooner or later, the balancing loop on the right starts to matter (labelled with a ‘B’). The more iPhones sold, the fewer people there are left without one – we start to saturate the market. The more the market is saturated, the fewer iPhones we can sell. The growth in sales slows, and may even stop altogether. The resulting graph of sales over time might look like this:


How the sales of iPhones might look over time

When the reinforcing loop dominates, sales grow exponentially. When the balancing loop dominates, sales stagnate. In this case, the natural limit is when everyone who might ever want an iPhone has one. Of course, in real life, the curves are never this smooth – other feedback loops (that we haven’t mentioned yet) kick in, and temporarily push sales up or down. However, we could hypothesize that these two loops do explain most of the dynamic behaviour of the sales of a new product, and everything else is just noise. In many cases this is true – diffusion of innovation studies frequently reveal this type of S-shaped curve.
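
If you prefer to see this as code rather than pictures, here’s a minimal sketch in Python of the two loops acting together. The parameter values are invented purely for illustration:

```python
# Minimal sketch of the word-of-mouth (reinforcing) loop and the
# market-saturation (balancing) loop. Parameter values are made up.

market_size = 1_000_000    # everyone who might ever want the product
contact_rate = 4e-7        # chance per week that an owner converts a non-owner

owners = 100               # seed sales
sales_history = []
for week in range(200):
    # Reinforcing loop: more owners -> more word of mouth -> more sales.
    # Balancing loop: fewer people left without one -> fewer possible sales.
    new_sales = contact_rate * owners * (market_size - owners)
    owners += new_sales
    sales_history.append(owners)

# sales_history traces the S-shaped curve: near-exponential growth at first,
# then stagnation as the balancing loop comes to dominate.
print(round(sales_history[9]), round(sales_history[199]))
```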

The structure of these two loops and the S-shaped curve they produce describe many real world phenomena: the spread of disease, growth of a population, the growth of a firm, the spread of a forest fire. In each case, there may well be other feedback loops that complicate the picture. But the underlying story about growth and its limits still captures a basic truth: exponential growth occurs when there is a reinforcing feedback loop, and as nothing can grow exponentially forever, there must always be a balancing loop somewhere that provides a limit to growth.

Okay, that’s enough background. Time to look at the first feedback loop in the global climate system. We’ll start with the global climate system in its equilibrium state – i.e. when the climate is not changing. The climate has been remarkably stable for the last 10,000 years, since the end of the last ice age. Over that time, it has varied by less than 1°C. That stability suggests there are likely to be balancing feedback loops keeping it stable. The most important of these is the basic energy balance loop:


The Earth’s energy balance as a balancing loop

The temperature of the planet is determined primarily by the balance between the incoming energy from the sun and the outgoing energy lost back into space. The incoming energy is in the form of shortwave radiation from the sun, and the amount we get is determined by the solar constant (which, of course, is not really constant, although the variations were too small to measure before the satellite era). The incoming energy from the sun, averaged out over the surface of the earth, is about 340 watts per square meter. If this is greater than the outgoing energy, the imbalance causes the earth to retain more energy, and so the temperature rises. As a warmer planet loses energy faster, this increases the outgoing radiation, which in turn reduces the imbalance again (i.e. this is a balancing loop).

Imagine there’s an overshoot – i.e. the outgoing radiation rises, but goes a little too far, so that it’s now more than the incoming solar radiation. This reduces the net radiative forcing so far that it becomes negative. But a decrease in net radiative forcing tends to cause a decrease in energy retained, which causes a decrease in temperature, which causes a decrease in outgoing radiation again. So the balancing loop also cancels out any overshoot sooner or later. In other words, the structure of this loop always pushes the planet to find a (roughly) stable equilibrium: essentially, if the incoming and outgoing energy ever get out of balance, the temperature of the planet rises or falls until they are balanced again.
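
To put some rough numbers on this loop, here’s a back-of-the-envelope calculation in Python. The albedo and effective emissivity are approximate textbook values I’ve assumed for the sketch, not figures taken from anywhere in this post:

```python
# Rough equilibrium of the energy balance loop: the planet warms or cools
# until outgoing radiation matches the absorbed sunlight.
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W/m^2/K^4
INCOMING = 340.0      # incoming solar energy averaged over the surface, W/m^2
ALBEDO = 0.3          # fraction reflected straight back to space (assumed)
EMISSIVITY = 0.61     # crude stand-in for the greenhouse effect (assumed)

absorbed = (1 - ALBEDO) * INCOMING                      # about 238 W/m^2
# At equilibrium, EMISSIVITY * SIGMA * T^4 equals the absorbed energy:
T_equilibrium = (absorbed / (EMISSIVITY * SIGMA)) ** 0.25
print(round(T_equilibrium, 1), "K")                     # roughly 288 K, i.e. about 15°C
```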

Note that we could tell this is a balancing loop, without tracing the effects, just by counting up the number of “-” links. If it’s an odd number, it’s a balancing loop; if it’s even (or zero), it’s a reinforcing loop. In my systems thinking class, we play a game that simulates different kinds of loop, with each person acting as one link (some are “+” links, some are “-” links). The students usually find it hard to predict how loops of different structure will behave, but once we’ve played it a few times, everyone has a good intuition for the difference between reinforcing loops and balancing loops.
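
The counting rule is simple enough to write down as a little (hypothetical) helper function:

```python
def classify_loop(link_signs):
    """Classify a feedback loop from the signs of its links ('+' or '-').

    An odd number of '-' links gives a balancing loop; an even number
    (including zero) gives a reinforcing loop.
    """
    return "balancing" if link_signs.count('-') % 2 == 1 else "reinforcing"

# The energy balance loop above has a single '-' link in it:
print(classify_loop(['+', '+', '+', '-']))   # -> balancing
```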

There is one more complication for this loop. The net radiative forcing determines the rate at which energy is retained, rather than the total amount. If the net forcing is positive, the earth keeps on retaining energy. So although this leads to an increase in temperature and, if you follow the loop around, a decrease in the net radiative forcing, that decrease only reduces the rate at which energy is retained (and hence the rate of warming); it won’t actually stop the warming until the net radiative imbalance falls back to zero. And then, when the warming stops, it doesn’t cool off again – the loop ensures the planet stays at this new temperature. It’s a slow process because it takes time for the planet to warm up. For example, the oceans can absorb a huge amount of energy before you’ll notice any increase in temperature. This means the loop operates slowly. We know from simulations (and from studies of the distant past) that it can take many decades for the planet to find a new balance in response to a change in net radiative forcing.
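
Here’s the same back-of-the-envelope model stepped forward in time, to show how the imbalance sets the rate of warming rather than the amount. The heat capacity is a rough figure for an ocean mixed layer that I’ve assumed, so the adjustment here happens faster than it would for the real climate system, where deep-ocean mixing stretches it out to decades:

```python
# Stepping the energy balance forward in time: the net imbalance sets the
# *rate* at which energy is retained, so temperature creeps toward the
# equilibrium value instead of jumping there.
SIGMA, ALBEDO, EMISSIVITY = 5.67e-8, 0.3, 0.61
ABSORBED = (1 - ALBEDO) * 340.0          # W/m^2
HEAT_CAPACITY = 4.2e8                    # J/m^2/K, roughly a 100m ocean mixed layer (assumed)
SECONDS_PER_YEAR = 3.15e7

T = 278.0                                # start the planet 10 K colder than equilibrium
for year in range(50):
    outgoing = EMISSIVITY * SIGMA * T ** 4
    imbalance = ABSORBED - outgoing      # W/m^2; positive means energy is being retained
    T += imbalance * SECONDS_PER_YEAR / HEAT_CAPACITY
    # The imbalance shrinks each year, so the warming slows as balance returns.
print(round(T, 1), "K")                  # settles back at roughly 288 K
```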

There are, of course, other feedback loops to complicate the picture, and some of them are reinforcing loops. I’ll describe some of these in my next post. But from an understanding of this one loop, we can gain a number of insights:

  1. This loop, on its own, cannot produce a runaway global warming (or cooling) – the earth will eventually find a new equilibrium in response to a change in net radiative forcing. More precisely, for a runaway warming to occur, some other reinforcing loop must dominate this one. As I said, there are some reinforcing loops, and they complicate the picture, but nobody has managed to demonstrate that any of them are strong enough to overcome the balancing effect of this loop.
  2. The balancing loop has a delay, because it takes a lot of energy to warm the oceans. Hence, once a change starts in this loop, it takes many decades for the balancing effect to kick in. That’s the main reason why we have to take action on climate change many decades before we see the full effect. On human timescales, the earth’s natural balancing mechanism is a very slow process.
  3. If we make a one-time change to the radiative balance, the earth will slowly change its temperature until it reaches a new balance point, and then will stay there, because the balancing loop keeps it there. However, if there is some other force that keeps changing the radiative balance, despite this loop’s attempts to adjust, then the temperature will keep on changing. Our current dilemma with respect to climate change isn’t that we’ve made a one-time change to the amount of greenhouse gases in the atmosphere – the dilemma is that we’re continually changing them. This balancing loop only really helps once we stop changing the atmosphere. (There’s a small sketch of this difference right after this list.)
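
Here’s the sketch promised in point 3: a one-time nudge to the balance versus a forcing that keeps on growing, using the same toy model as above. The forcing values are arbitrary; it’s the shape of the two responses that matters:

```python
# One-time change vs. continually increasing forcing, using the same toy
# energy balance model as above. The forcing values are arbitrary.
SIGMA, ALBEDO, EMISSIVITY = 5.67e-8, 0.3, 0.61
ABSORBED = (1 - ALBEDO) * 340.0
HEAT_CAPACITY, SECONDS_PER_YEAR = 4.2e8, 3.15e7

def run(extra_forcing):
    """extra_forcing(year) -> additional forcing in W/m^2 added to the balance."""
    T = 288.0
    history = []
    for year in range(100):
        imbalance = ABSORBED + extra_forcing(year) - EMISSIVITY * SIGMA * T ** 4
        T += imbalance * SECONDS_PER_YEAR / HEAT_CAPACITY
        history.append(T)
    return history

one_time = run(lambda year: 4.0)        # a single step change in the balance
ongoing = run(lambda year: 0.1 * year)  # a forcing that keeps on growing

# The step change settles at a new, slightly warmer equilibrium; the growing
# forcing produces warming that never levels off within the run.
print(round(one_time[-1] - 288, 2), round(ongoing[-1] - 288, 2))
```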

Other posts in this series, so far:

Early in my career I trained as a systems analyst. My PhD was about the ability to identify and make use of multiple perspectives on a system when understanding people’s needs, and designing new information systems to meet them. I became a “systems thinker”, although I didn’t encounter the term until later.

I also didn’t really appreciate until recently how much systems thinking changes everything about how you perceive the world. Perhaps the best analogy is the scene in The Matrix, when Morpheus offers Neo the choice of the red pill or the blue pill. One of these choices will allow him to step outside of the system and see it in a new way. Once he has done that he can never go back to seeing the world the way he used to (although there’s an interesting subplot in the movie where one of the characters tries to do exactly that).

When I think about climate change, I approach it as a systems thinker. I look for parts of the problem that I can characterize as a system: where are the inputs and outputs, boundaries and control mechanisms, positive and negative feedbacks, interactions with other systems? I want to build systems dynamics models that capture a system as a set of stocks and flows, and explore how cycles and delays affect the overall behaviour of the system. And of course, I’m always looking out for emergent properties: things that arise as a result of interactions across a system but that cannot be studied through reductionism.

It’s not surprising then, that I’m fascinated by Earth System Models (ESMs). These capture some of the most complex systems interactions ever described in a computational model – on a planetary scale! ESMs can be used to explore how processes at small scales give rise to emergent properties on a global scale. They provide a test-bed for what-if questions, to explore whether our understanding of the physical systems makes sense. And fundamentally, they’re used to probe questions of stability of the system: the relationship between the size of a “forcing” (which tends to push the system out of equilibrium) and the size of its “effect” (e.g. how sensitive is the global average temperature to a doubling of CO2?). To connect the two, you have to explore the positive and negative feedbacks that amplify (or dampen) the effects. And of course, we’d like to understand the nature of tipping points, thresholds beyond which positive feedbacks can push the system towards entirely different equilibrium points.

People who don’t understand climate change tend to lack a grasp of how complex systems work, and that’s unfortunate because for any system of sufficient complexity, most of its behaviour is counter-intuitive. People ask how a gas that forms such a tiny fraction of the atmosphere can have such a large effect, because they don’t understand that the earth constantly receives and emits huge amounts of energy into space, and that it only takes a tiny imbalance between the input and output to disrupt the planet’s equilibrium. People assume the climate system will always tend to revert to the stable pattern it has exhibited in the past, because they don’t understand positive feedbacks and exponential change. People assume we can wait to fix the climate system once we’ve seen how bad it might get, because they don’t understand the ideas of inertia and overshoot when a system has a delayed response to a stimulus. And people wonder how we can predict anything at all about climate dynamics, because they confuse chaos with randomness.

Climate science (and especially climate modeling) is inherently a systems discipline. However, climate scientists tend to hail from the physical sciences, and hence sometimes seem to miss an important aspect of systems analysis. In the physical sciences, you learn how to observe and experiment with physical systems in order to understand and explain them. But you’re not trained to re-design them to work better – that’s generally left to the engineers. Unfortunately, most engineering disciplines don’t cover systems thinking either. They concern themselves with the properties of families of devices (e.g. electrical circuits), and how such devices can be applied to solve problems. Engineers are not usually trained to re-conceptualize systems in entirely new ways, to understand how they can be changed. (Systems Engineering would be the exception here, but it’s a very young discipline).

So systems thinkers are quite rare, both across the physical sciences and the engineering disciplines. You actually encounter more of them in the social sciences, because social systems tend to defy attempts at understanding them through reductionism, and because social scientists are often more comfortable with constructivism: the idea that the systems we describe as existing in the world are really only mental constructs, arrived at through social processes. My favourite definition of a system, from Gerald Weinberg is “a way of looking at the world”. In a sense, systems aren’t “out there” in the world, waiting to be studied. Systems are a convenient mental tool for making sense of how things in the world interact with one another. This means there’s no such thing as the “climate system”, just lots of interacting thermodynamic and chemical processes. That we choose to call it a ‘system’, name its parts, and treat it as a whole, is a convenience. But it’s a very useful one, because it offers rich insights for understanding, for example, how human activities alter the climate. Modelling the climate as a system means that we have to decide which clusters of things in the world to include in the models, and where we might usefully draw system boundaries. And if we’re doing this right, we ought to acknowledge that there are other ways of viewing these systems - no decision about where to draw system boundaries can ever be ‘correct’, but some decisions lead to more insights than others (compare with Box’s famous saying about models: “All models are wrong, but some are useful”).

While traditional branches of science offer tools and methods for understanding each of the pieces of the climate system, the study of the climate system as a whole requires a different approach. It is a trans-disciplinary field, because the interactions that matter include physical, chemical, biological, geographic, social, and economic processes. It goes beyond traditional methodologies of the physical sciences because it is anti-reductionist: it must grapple with understanding holistic properties of systems, even when the detailed behaviour of those systems is not sufficiently understood. In other words it’s a systems science, and climate modellers have to be systems thinkers.

All this leads me to argue that we should incorporate more of the key ideas from systems thinking into discussions about climate change and sustainability. I think that a better understanding of systems dynamics would help a lot in giving people the right intuitions about climate change. And I think a better understanding of critical systems approaches would give people a better understanding of how to improve collective decision-making around climate policy.

Note: This is the first of a series of posts exploring the systems dynamics of climate change. Here’s the rest of the series, so far:

I love working in a University. Every day I encounter new ideas, and I get to chat to some of the smartest people on the planet. But I see signs, almost every day, that universities are poorly equipped to face the complex challenges of the 21st Century. Challenges like poverty, climate change, resource depletion, sustainable agriculture, and so on. The problem is that universities are organized into departments that correspond to disciplines like physics, computer science, sociology, geography, etc. Most of the strategic decision-making is made in these departments – which faculty to hire, which degree programs to offer, what research to focus on. But the grand challenges of the 21st Century are trans-disciplinary. To address them, we need people who can transcend their own disciplinary background; people who are not only comfortable working with a range of experts from many different fields, but who actively go out and seek such interactions. In marketing speak, we need T-shaped people:

Jelinski, who is vice chancellor for research and graduate studies at Louisiana State University, talked about a new “T-shaped” person with disciplinary depth, in biology for example, but with the ability, or arms, to reach out to other disciplines. “We need to encourage this new breed of scientist,” she said. ["Researchers Seek Basics Of Nano Scale," The Scientist, August 21, 2000]

Universities don’t do this well because the entire reward structure for departments and professors is based purely on demonstrating disciplinary strength. Occasionally, we manage to establish inter-disciplinary centres and institutes, to act as places where faculty and students from different departments can come together and learn how to collaborate. A few of these prosper, but most of them get shut down fairly rapidly by the university. Here’s what happens. A new centre is set up with an initial research grant, perhaps for 3-5 years, which typically pays only for researchers’ salaries and equipment. The university agrees to provide space, administrative staff, and pay the utility bills for a limited time, because opening a new facility is good press, but then expects each centre to become “self-sufficient”. This is, of course, impossible, because no granting agency ever covers the full cost of running a research centre. The professors who want to make the centre succeed spend most of their time writing more grant proposals, most of which don’t get funded because competition for funding is tough. Nobody has much time to do the important inter-disciplinary work that the centre was established for. After five years, the university shuts it down because it didn’t become self-sufficient. A research centre at U of T that I’ve spent a lot of time at over the past few years is being shut down this month for these very reasons.

The same thing happens to inter-disciplinary graduate programs. While departments run graduate programs focusing on disciplinary strength, some enterprising professors do manage to set up “collaborative programs”, which students from a range of participating departments can sign up for. The collaborative programs are set up using seed money, some of which is donated by the participating departments, and some of which comes from the university teaching initiative funds, because they all agree the program is a good idea, and the students will benefit. However, after a few years, the seed money has been used up, and no unit within the university will kick in more, because the program is supposed to be “self-sufficient”. No such program can ever be self sufficient, because the students who participate are accounted for in their home departments. The collaborative program doesn’t generate any extra revenue, and the departments view it as ‘stealing their students’. Without funding, the program shuts down. A collaborative graduate program at U of T that I serve on the advisory board for is ending this month for these very reasons.

Not only does the university structure tend to squeeze out anything that does not fit into a neat disciplinary silo, it also generates rules that actively prevent students from acquiring the skills needed to be “T-shaped”. For example, my own department has “breadth requirements” that graduate students have to meet when selecting a set of courses. “Breadth” here means breadth across the discipline. So students have to demonstrate they’ve taken courses that cover several different subfields of computer science, and several different research methodologies within the field. But this is the opposite of T-shaped! A T-shaped student has disciplinary *depth* and *inter-disciplinary* breadth. This would mean deep expertise in a particular subfield of computer science, and the ability to apply that expertise in many different contexts outside of computer science. Instead, we prevent students from getting the depth by forcing them to take more introductory courses within computer science, and we prevent them from getting inter-disciplinary breadth for the same reason.

Working within a university sometimes feels like the intellectual equivalent of being at a lavish buffet but prevented from ever leaving the pasta section.

Next year, I’ll be teaching a new undergraduate course, as part of an initiative by the Faculty of Arts and Science known as Big Ideas courses. The idea is to offer trans-disciplinary courses, team taught by professors from across the physical sciences, social sciences, and humanities, that will probe important ideas about the world from different disciplinary perspectives. For the coming year, U of T is launching three Big Ideas courses:

  • BIG100: “The end of the world as we know it”;
  • BIG101: “Energy: From Fire to the Future”;
  • BIG102: “The Internet: Saving Civilization or Trashing the Planet?”

I’m delighted to be teaming up with Prof Miriam Diamond from Earth Sciences and Prof Pamela Klassen from Study of Religion to teach BIG102. Our aim is to give students some understanding of how the technologies that drive the internet work, and then to explore how the internet has reshaped the way we use information, our knowledge and beliefs about the world, and the impact that creating (and disposing of) internet technologies has on the environment, on the economy, and on the dynamics of innovation. A key goal is to foster critical thinking and information literacy skills, and especially to be able to think about and analyze a complex system-of-systems from different perspectives.

For the first term, we’re planning to cover a broad set of provocative questions, to get students thinking about the internet from different perspectives:

  1. What is a big idea? (A course introduction, and a primer on trans-disciplinary thinking)
  2. Who invented the internet? (Myths about the internet, and why they stick)
  3. How does the internet work? (An introduction to some of the key technologies)
  4. How new is the internet? (A short history of communications technologies, to put the internet in its historical context)
  5. Has the internet changed us? (We’ll explore, in particular, how the internet is transforming universities and learning)
  6. What is the environmental footprint of the internet? (An initial assessment of energy consumption, resource extraction, and waste disposal)
  7. Does the internet make us smarter? (An exploration of how internet search works, and how it affects our approaches to problem-solving)
  9. Is the internet a time-saver or time-waster? (How the internet offers endless distractions and blurs the line between work and leisure, and what its overall effect on productivity might be)
  9. Can you be anonymous on the internet? (The idea of your information footprint – who’s keeping track of data about you, how they do it, and why)
  10. Is the Internet a Cheater’s Paradise? (From plagiarism to adultery – how the internet facilitates cheating, new ways of discovering it, and virtual vigilante justice)
  11. Who’s Not Online? (The idea of the digital divide, and the demographic and socio-economic factors that limit people’s access)
  12. Gadgets as Gifts? (Just in time for the Christmas break, we’ll explore the environmental impact of our love of new gadgets, and whether there are sustainable alternatives)

In the second term, we plan to pick three themes to explore in more detail, so that we can explore inter-connections between some of these questions, and get the students engaged in independent research projects that synthesize what they’re learning:

  1. The Internet and the Innovation Imperative.
    • Is the Internet Innovative? How Moore’s law has driven innovation; the dotcom boom and bust; and the current hype around new technologies such as 3D printing, sensor networks, and the semantic web.
    • What are the Resource Implications of the Internet? We’ll use material flow analysis to explore extraction and disposal and likely shortages of strategic minerals, and the geo-political implications of attempting to feed an exponential growth in demand.
    • The Environmental and Human Health Burden of the Internet. Building on the discussion of resource implications, we’ll look at the health implications of mineral extraction and e-waste disposal, and the burden this places on people and ecosystems, especially in poorer countries.
    • What is the Opportunity Cost of the Internet? Does investment in internet innovation mean we’re underinvesting in other things (e.g. clean energy, transport, social innovation)? Have we developed an over-optimistic belief that IT technologies can solve all problems?
  2. The Internet, Democracy, and Security.
    • Censorship & Internet Governance. How much power do governments have to control what happens on the internet? Does the internet enhance or undermine democracy?
    • The Underbelly of the Internet: Hackers, Espionage, and Trolls. How internet systems can be exploited by different groups, for example by crime syndicates who break into secure systems, by political groups who use a web presence to spread misinformation, and by internet trolls who violate social norms to disrupt and intimidate online discussions.
    • Does the Internet make us a more open society? The open source movement and its successors (open government, creative commons, etc) are based on the idea that if everyone has access to the inner workings of systems, this removes barriers to participation, fosters creativity, and makes those systems better for everyone. But does it work?
    • Transnational Jurisdiction: Legal boundaries and the Internet. We’ll wrap up this theme with a question about who should police the internet.
  3. The Internet, Communities, and Interpersonal Relationships
    • Does your Google-Brain make you forget? How has instant access to vast amounts of information changed our memories and our perceptions of ourselves? For example, does GPS route-finding mean we lose our ability to navigate and our sense of place? And what are the implications of the kind of personal digital archives that technologies such as Google Glass might allow us to create?
    • Can you find love on the Internet? An exploration of how the internet changes personal relationships, from the role of dating sites and virtual social networks, to the way that online porn affects our perceptions of gender roles and body image.
    • Can you find God on the Internet? How the internet affects religious communities, tolerance of different worldviews, and the very nature of faith.

Of course, this outline is still a draft – we’ll refine it over the next few months as we prepare for the first group of students in September.

We’re still exploring which textbooks to use, and even whether ‘books’ makes sense for a course like this – we’re hoping to make this a constructivist learning experience by using a variety of different internet-based media and information access tools throughout the course.  However, we’re currently evaluating these books:

Feel free to suggest other books and material!

We’re taking the kids to see their favourite band: Muse are playing in Toronto tonight. I’m hoping they play my favourite track:

I find this song fascinating, partly because of the weird mix of progressive rock and dubstep. But more for the lyrics:

All natural and technological processes proceed in such a way that the availability of the remaining energy decreases. In all energy exchanges, if no energy enters or leaves an isolated system, the entropy of that system increases. Energy continuously flows from being concentrated to becoming dispersed, spread out, wasted and useless. New energy cannot be created and high grade energy is destroyed. An economy based on endless growth is unsustainable. The fundamental laws of thermodynamics will place fixed limits on technological innovation and human advancement. In an isolated system, the entropy can only increase. A species set on endless growth is unsustainable.

This summarizes, perhaps a little too succinctly, the core of the critique of our current economy, first articulated clearly in 1972 by the Club of Rome in the Limits to Growth Study. Unfortunately, that study was widely dismissed by economists and policymakers. As Jorgen Randers points out in a 2012 paper, the criticism of the Limits to Growth study was largely based on misunderstandings, and the key lessons are absolutely crucial to understanding the state of the global economy today, and the trends that are likely over the next few decades. In a nutshell, humans exceeded the carrying capacity of the planet sometime in the latter part of the 20th century. We’re now in the overshoot portion, where it’s only possible to feed the world and provide energy for economic growth by consuming irreplaceable resources and using up environmental capital. This cannot be sustained.

In general systems terms, there are three conditions for sustainability (I believe it was Herman Daly who first set them out in this way):

  1. We cannot use renewable resources faster than they can be replenished.
  2. We cannot generate wastes faster than they can be absorbed by the environment.
  3. We cannot use up any non-renewable resource.

We can and do violate all of these conditions all the time. Indeed, modern economic growth is based on systematically violating all three of them, but especially #3, as we rely on cheap fossil fuel energy. But any system that violates these rules cannot be sustained indefinitely, unless it is also able to import resources and export wastes to other (external) systems. The key problem for the 21st century is that we’re now violating all three conditions on a global scale, and there are no longer other systems that we can rely on to provide a cushion – the planet as a whole is an isolated system. There are really only two paths forward: either we figure out how to re-structure the global economy to meet Daly’s three conditions, or we face a global collapse (for an understanding of the latter, see Graham Turner’s 2012 paper).

A species set on endless growth is unsustainable.

We now have a fourth paper added to our special issue of the journal Geoscientific Model Development, on Community software to support the delivery of CMIP5. All papers are open access:

  • M. Stockhause, H. Höck, F. Toussaint, and M. Lautenschlager, Quality assessment concept of the World Data Center for Climate and its application to CMIP5 data, Geosci. Model Dev., 5, 1023-1032, 2012.
    Describes the distributed quality control concept that was developed for handling the terabytes of data generated from CMIP5, and the challenges in ensuring data integrity (also includes a useful glossary in an appendix).
  • B. N. Lawrence, V. Balaji, P. Bentley, S. Callaghan, C. DeLuca, S. Denvil, G. Devine, M. Elkington, R. W. Ford, E. Guilyardi, M. Lautenschlager, M. Morgan, M.-P. Moine, S. Murphy, C. Pascoe, H. Ramthun, P. Slavin, L. Steenman-Clark, F. Toussaint, A. Treshansky, and S. Valcke, Describing Earth system simulations with the Metafor CIM, Geosci. Model Dev., 5, 1493-1500, 2012.
    Explains the Common Information Model, which was developed to describe climate model experiments in a uniform way, including the model used, the experimental setup and the resulting simulation.
  • S. Valcke, V. Balaji, A. Craig, C. DeLuca, R. Dunlap, R. W. Ford, R. Jacob, J. Larson, R. O’Kuinghttons, G. D. Riley, and M. Vertenstein, Coupling technologies for Earth System Modelling, Geosci. Model Dev., 5, 1589-1596, 2012.
    An overview paper that compares different approaches to model coupling used by different earth system models in the CMIP5 ensemble.
  • S. Valcke, The OASIS3 coupler: a European climate modelling community software, Geosci. Model Dev., 6, 373-388, 2013 (See also the Supplement)
    A detailed description of the OASIS3 coupler, which is used in all the European models contributing to CMIP5. The OASIS User Guide is included as a supplement to this paper.

(Note: technically speaking, the call for papers for this issue is still open – if there are more software aspects of CMIP5 that you want to write about, feel free to submit them!)

Last week, Damon Matthews from Concordia visited, and gave a guest CGCS lecture, “Cumulative Carbon and the Climate Mitigation Challenge”. The key idea he addressed in his talk is the question of “committed warming” – i.e. how much warming are we “owed” because of carbon emissions in the past (irrespective of what we do with emissions in the future). But before I get into the content of Damon’s talk, here’s a little background.

The question of ‘owed’ or ‘committed’ warming arises because we know it takes some time for the planet to warm up in response to an increase in greenhouse gases in the atmosphere. You can calculate a first approximation of how much it will warm up from a simple energy balance model (like the ones I posted about last month). However, to calculate how long it takes to warm up you need to account for the thermal mass of the oceans, which absorb most of the extra energy and hence slow the rate of warming of surface temperatures. For this you need more than a simple energy balance model.

You can do a very simple experiment with a General Circulation Model, by setting CO2 concentrations at double their pre-industrial levels, and then leave them constant at this level, to see how long the earth takes to reach a new equilibrium temperature. Typically, this takes several decades, although the models differ on exactly how long. Here’s what it looks like if you try this with EdGCM (I ran it with doubled CO2 concentrations starting in 1958):

Global temperature response over time to an instant doubling of CO2 in EdGCM

Of course, the concentrations would never instantaneously double like that, so a more common model experiment is to increase CO2 levels gradually, say by 1% per year (that’s a little faster than how they have risen in the last few decades) until they reach double the pre-industrial concentrations (which takes approx 70 years), and then leave them constant at that level. This particular experiment is a standard way of estimating the Transient Climate Response - the expected warming at the moment we first reach a doubling of CO2 - and is included in the CMIP5 experiments. In these model experiments, it typically takes a few decades more of warming until a new equilibrium point is reached, and the models indicate that the transient response is expected to be a little over half of the eventual equilibrium warming.
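
Two of the numbers in this experiment are easy to check for yourself. Here’s a quick sketch; the 5.35 coefficient is the standard simplified expression for CO2 radiative forcing, which I’m assuming here rather than taking from Damon’s talk:

```python
import math

# How long a 1% per year rise takes to double CO2 concentrations:
years_to_double = math.log(2) / math.log(1.01)
print(round(years_to_double), "years")        # about 70

# Approximate extra forcing from doubled CO2, using the common simplified
# expression F = 5.35 * ln(C / C0) W/m^2 (my assumption, not from the talk):
forcing_2xCO2 = 5.35 * math.log(2)
print(round(forcing_2xCO2, 1), "W/m^2")       # about 3.7
```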

This leads to a (very rough) heuristic that as the planet warms, we’re always ‘owed’ almost as much warming again as we’ve already seen at any point, irrespective of future emissions, and it will take a few decades for all that ‘owed’ warming to materialize. But, as Damon argued in his talk, there are two problems with this heuristic. First, it confuses the issue when discussing the need for an immediate reduction in carbon emissions, because it suggests that no matter how fast we reduce them, the ‘owed’ warming means such reductions will make little difference to the expected warming in the next two decades. Second, and more importantly, the heuristic is wrong! How so? Read on!

For an initial analysis, we can view the climate problem just in terms of carbon dioxide, as the most important greenhouse gas. Increasing CO2 emissions leads to increasing CO2 concentrations in the atmosphere, which leads to temperature increases, which lead to climate impacts. And of course, there’s a feedback in the sense that our perceptions of the impacts (whether now or in the future) lead to changed climate policies that constrain CO2 emissions.

So, what happens if we were to stop all CO2 emissions instantly? The naive view is that temperatures would continue to rise, because of the ‘climate commitment’  - the ‘owed’ warming that I described above. However, most models show that the temperature stabilizes almost immediately. To understand why, we need to realize there are different ways of defining ‘climate commitment’:

  • Zero emissions commitment – How much warming do we get if we set CO2 emissions from human activities to be zero?
  • Constant composition commitment – How much warming do we get if we hold atmospheric concentrations constant? (in this case, we can still have some future CO2 emissions, as long as they balance the natural processes that remove CO2 from the atmosphere).

The difference between these two definitions is shown here. Note that in the zero emissions case, concentrations drop from an initial peak, and then settle down at a lower level:

Committed CO2 concentrations under the two definitions

Committed warming under the two definitions

The model experiments most people are familiar with are the constant composition experiments, in which there is continued warming. But in the zero emissions scenarios, there is almost no further warming. Why is this?

The relationship between carbon emissions and temperature change (the “Carbon Climate Response”) is complicated, because it depends on two factors, each of which is complicated by (different types of) inertia in the system:

  • Climate Sensitivity – how much temperature changes in response to different levels of CO2 in the atmosphere. The temperature response is slowed down by the thermal inertia of the oceans, which means it takes several decades for the earth’s surface temperatures to respond fully to a change in CO2 concentrations.
  • Carbon sensitivity – how much concentrations of CO2 in the atmosphere change in response to different levels of carbon emissions. A significant fraction (roughly half) of our CO2 emissions are absorbed by the oceans, but this also takes time. We can think of this as “carbon cycle inertia” – the delay in uptake of the extra CO2, which also takes several decades. [Note: there is a second kind of carbon system inertia, by which it takes tens of thousands of years for the rest of the CO2 to be removed, via very slow geological processes such as rock weathering.]

The carbon climate response: the combination of carbon sensitivity and climate sensitivity

It turns out that the two forms of inertia roughly balance out. The thermal inertia of the oceans slows the rate of warming, while the carbon cycle inertia accelerates it. Our naive view of the “owed” warming is based on an understanding of only one of these, the thermal inertia of the ocean, because much of the literature talks only about climate sensitivity, and ignores the question of carbon sensitivity.
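
Here’s a toy illustration of that balancing act. This isn’t a climate model, and every number in it is invented, but it shows how a falling equilibrium (as the ocean absorbs CO2) can roughly offset the warming still in the pipeline (as the ocean absorbs heat):

```python
import math

# Toy illustration only: every number here is invented. The point is the
# interplay of the two inertias, not a realistic simulation.
SENSITIVITY = 3.0      # eventual warming per doubling of CO2, deg C (assumed)
TAU_THERMAL = 30.0     # years for surface temperature to approach equilibrium (assumed)
TAU_CARBON = 15.0      # years for ocean uptake to draw down the excess CO2 (assumed)
LINGERING = 0.5        # fraction of the excess CO2 that lingers for centuries (assumed)

def simulate(zero_emissions):
    co2 = 1.5          # CO2 at 1.5x pre-industrial at the moment we "stop"
    temp = 1.0         # warming realized so far, still short of its equilibrium
    floor = LINGERING * (co2 - 1.0)
    for year in range(100):
        if zero_emissions:
            # Carbon cycle inertia: the excess CO2 is slowly absorbed, which
            # lowers the equilibrium temperature the planet is heading for.
            excess = co2 - 1.0
            excess += (floor - excess) / TAU_CARBON
            co2 = 1.0 + excess
        # (Otherwise: constant composition, i.e. co2 stays fixed at 1.5x.)
        # Thermal inertia: temperature relaxes toward the current equilibrium.
        equilibrium = SENSITIVITY * math.log2(co2)
        temp += (equilibrium - temp) / TAU_THERMAL
    return temp

print(round(simulate(zero_emissions=False), 2))  # constant composition: warming continues, to about 1.7 C
print(round(simulate(zero_emissions=True), 2))   # zero emissions: ends up close to 1 C, little extra warming
```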

The fact that these two forms of inertia tend to balance leads to another interesting observation. The models all show an approximately linear response to cumulative emissions. For example, here are the CMIP3 models, used in the IPCC AR4 report (the average of the models, indicated by the arrow, is around 1.6°C of warming per 1,000 gigatonnes of carbon):

Temperature change against cumulative carbon emissions for the CMIP3 models

The same relationship seems to hold for the CMIP5 models, many of which now include a dynamic carbon cycle:

Temperature change against cumulative carbon emissions for the CMIP5 models

This linear relationship isn’t determined by any physical properties of the climate system, and probably won’t hold in much warmer or cooler climates, nor when other feedback processes kick in. So we could say it’s a coincidental property of our current climate. However, it’s rather fortuitous for policy discussions.

Historically, we have emitted around 550 billion tonnes of carbon since the beginning of the industrial era, which gives us an expected temperature response of around 0.9°C. If we want to hold warming to no more than 2°C, total future emissions should not exceed a further 700 billion tonnes of carbon. In effect, this gives us a total worldwide carbon budget for the future. The hard policy question, of course, is then how to allocate this budget among the nations (or people) of the world in an equitable way.
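
The arithmetic behind that budget is worth making explicit, so here it is as a few lines of Python, using the rough figure of 1.6°C per 1,000 gigatonnes of carbon from the graphs above:

```python
# Back-of-the-envelope carbon budget, using the roughly linear response of
# about 1.6 C of warming per 1000 GtC of cumulative emissions quoted above.
TCRE = 1.6 / 1000.0          # deg C per gigatonne of carbon (GtC)
emitted_so_far = 550.0       # GtC since the start of the industrial era
target = 2.0                 # deg C warming threshold

warming_so_far = TCRE * emitted_so_far       # about 0.9 C
total_budget = target / TCRE                 # 1250 GtC in total, ever
remaining = total_budget - emitted_so_far    # 700 GtC left to "spend"
print(round(warming_so_far, 2), "C so far,", round(remaining), "GtC remaining")
```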

[A few years ago, I blogged about a similar analysis, which says that cumulative carbon emissions should not exceed 1 trillion tonnes in total, ever. That calculation gives us a smaller future budget of less than 500 billion tonnes. That result came from analysis using the Hadley model, which has one of the higher slopes on the graphs above. Which number we use for a global target then might depend on which model we believe gives the most accurate projections, and perhaps how we also factor in the uncertainties. If the uncertainty range across models is accurate, then picking the average would give us a 50:50 chance of staying within the temperature threshold of 2°C. We might want better odds than this, and hence a smaller budget.]

In the National Academies report in 2011, the cumulative carbon budgets for each temperature threshold were given as follows (note the size of the uncertainty whiskers on each bar):

Cumulative carbon budgets for each temperature threshold, from the 2011 National Academies report

[For a more detailed analysis see: Matthews, H. D., Solomon, S., & Pierrehumbert, R. (2012). Cumulative carbon as a policy framework for achieving climate stabilization. Philosophical transactions. Series A, Mathematical, physical, and engineering sciences, 370(1974), 4365–79. doi:10.1098/rsta.2012.0064]

So, this allows us to clear up some popular misconceptions:

The idea that there is some additional warming owed, no matter what emissions pathway we follow is incorrect. Zero future emissions means little to no future warming, so future warming depends entirely on future emissions. And while the idea of zero future emissions isn’t policy-relevant (because zero emissions is impossible, at least in the near future), it does have implications for how we discuss policy choices. In particular, it means the idea that CO2 emissions cuts will not have an effect on temperature change for several decades is also incorrect. Every tonne of CO2 emissions avoided has an immediate effect on reducing the temperature response.

Another source of confusion is the emissions scenarios used in the IPCC report. They don’t diverge significantly for the first few decades, largely because we’re unlikely (and to some extent unable) to make massive emissions reductions in the next 1-2 decades, because society is very slow to respond to the threat of climate change, and even when we do respond, the amount of existing energy infrastructure that has to be rebuilt is huge. In this sense, there is some inevitable future warming, but it comes from future emissions that we cannot or will not avoid. In other words, political, socio-economic and technological inertia are the primary causes of future climate warming, rather than any properties of the physical climate system.

Like most universities, U of T had a hiring freeze for new faculty for the last few years, as we struggled with budget cuts. Now, we’re starting to look at hiring again, to replace faculty we lost over that time, and to meet the needs of rapidly growing student enrolments. Our department (Computer Science) is just beginning the process of deciding what new faculty positions we wish to argue for, for next year. This means we get to engage in a fascinating process of exploring what we expect to be the future of our field, and where there are opportunities to build exciting new research and education programs. To get a new faculty position, our department has to make a compelling case to the Dean, and the Dean has to balance our request with those from 28 other departments and 46 interdisciplinary groups. So the pitch has to be good.

So here’s my draft pitch:

(1) Create a joint faculty position between the Department of Computer Science and the new School of Environment.

Last summer U of T’s Centre for Environment was relaunched as a School of Environment, housed wholly within the Faculty of Arts and Science. As a school, it can now make up to 49% faculty appointments. [The idea is that to do interdisciplinary research, you need a base in a home department/discipline, where your tenure and promotion will be evaluated, but would spend half your time engaged in inter-disciplinary research and teaching at the School. Hence, a joint position for us would be 51% CS and 49% in the School of Environment.]

A strong relationship between Computer Science and the School of Environment makes sense for a number of reasons. Most environmental science research makes extensive use of computational modelling as a core research tool, and the environmental sciences are one of the greatest producers of big data. As an example, the Earth System Grid currently stores more than 3 petabytes of data from climate models, and this is expected to grow to the point where by the end of the decade a single experiment with a climate model would generate an exabyte of data. This creates a number of exciting opportunities for application of CS tools and algorithms, in a domain that will challenge our capabilities. At the same time, this research is increasingly important to society, as we seek to find ways to feed 9 billion people, protect vital ecosystems, and develop strategies to combat climate change.

There are a number of directions we could go with such a collaboration. My suggestion is to pick one of:

  • Climate informatics. A small but growing community is applying machine learning and data mining techniques to climate datasets. Two international workshops have been held in the last two years, and the field has had a number of successes in knowledge discovery that have established its importance to climate science. For a taste of what the field covers, see the agenda of the last CI Workshop.
  • Computational Sustainability. Focuses on the decision-support needed for resource allocation to develop sustainable solutions in large-scale complex adaptive systems. This could be viewed as a field of applied artificial intelligence, but to do it properly requires strong interdisciplinary links with ecologists, economists, statisticians, and policy makers. This growing community has run an annual conference, CompSust, since 2009, as well as tracks at major AI conferences for the last few years.
  • Green Computing. Focuses on the large environmental footprint of computing technology, and how to reduce it. Energy efficient computing is a central concern, although I believe an even more interesting approach is when we take a systems approach to understand how and why we consume energy (whether in IT equipment directly, or in devices that IT can monitor and optimize). Again, a series of workshops in the last few years has brought together an active research community (see, for example, Greens’2013).

(2) Hire more software engineering professors!

Our software engineering group is now half the size it was a decade ago, as several of our colleagues retired. Here’s where we used to be, but that list of topics and faculty is now hopelessly out of date. A decade ago we had five faculty and plans to grow this to eight by now. Instead, because of the hiring freeze and the retirements, we’re down to three. There were a number of reasons we expected to grow the group, not least because for many years, software engineering was our most popular undergraduate specialist program and we had difficulty covering all the teaching, and also because the SE group had proved to be very successful in bringing in research funding, research prizes, and supervising large numbers of grad students.

Where do we go from here? Deans generally ignore arguments that we should just hire more faculty to replace losses, largely because when faculty retire or leave, that’s the only point at which a university can re-think its priorities. Furthermore, some of our arguments for a bigger software engineering group at U of T went away. Our department withdrew the specialist degree in software engineering, and reduced the number of SE undergrad courses, largely because we didn’t have the faculty to teach them, and finding qualified sessional instructors was always a struggle. In effect, our department has gradually walked away from having a strong software engineering group, due to resource constraints.

I believe very firmly that our department *does* need a strong software engineering group, for a number of reasons. First, it’s an important part of an undergrad CS education. The majority of our students go on to work in the software industry, and for this, it is vital that they have a thorough understanding of the engineering principles of software construction. Many of our competitors in N America run majors and/or specialist programs in software engineering, to feed the enormous demand from the software industry for more graduates. One could argue that this should be left to the engineering schools, but these schools tend to lack sufficient expertise in discrete math and computing theory. I believe that software engineering is rooted intellectually in computer science and that a strong software engineering program needs the participation (and probably the leadership) of a strong computer science department. This argument suggests we should be re-building the strength in software engineering that we used to have in our undergrad program, rather than quietly letting it wither.

Secondly, the complexity of modern software systems makes software engineering research ever more relevant to society. Our ability to invent new software technology continues to outpace our ability to understand the principles by which that software can be made safe and reliable. Software companies regularly come to us seeking to partner with us in joint research and to engage with our grad students. Currently, we have to walk away from most of these opportunities. That means research funding we’re missing out on.

I’ve been collecting examples of different types of climate model that students can use in the classroom to explore different aspects of climate science and climate policy. In the long run, I’d like to use these to make the teaching of climate literacy much more hands-on and discovery-based. My goal is to foster more critical thinking, by having students analyze the kinds of questions people ask about climate, figure out how to put together good answers using a combination of existing data, data analysis tools, simple computational models, and more sophisticated simulations. And of course, learn how to critique the answers based on the uncertainties in the lines of evidence they have used.

Anyway, as a start, here’s a collection of runnable and not-so-runnable models, some of which I’ve used in the classroom:

Simple Energy Balance Models (for exploring the basic physics)

General Circulation Models (for studying earth system interactions)

  • EdGCM – an educational version of the NASA GISS general circulation model (well, an older version of it). EdGCM provides a simplified user interface for setting up model runs, but allows for some fairly sophisticated experiments. You typically need to let the model run overnight for a century-long simulation.
  • Portable University Model of the Atmosphere (PUMA) – a planet Simulator designed by folks at the University of Hamburg for use in the classroom to help train students interested in becoming climate scientists.

Integrated Assessment Models (for policy analysis)

  • C-Learn, a simple policy analysis tool from Climate Interactive. Allows you to specify emissions trajectories for three groups of nations, and explore the impact on global temperature. This is a simplified version of the C-ROADS model, which is used to analyze proposals during international climate treaty negotiations.
  • Java Climate Model (JCM) – a desktop assessment model that offers detailed controls over different emissions scenarios and regional responses.

Systems Dynamics Models (to foster systems thinking)

  • Bathtub Dynamics and Climate Change from John Sterman at MIT. This simulation is intended to get students thinking about the relationship between emissions and concentrations, using the bathtub metaphor. It’s based on Sterman’s work on mental models of climate change.
  • The Climate Challenge: Our Choices, also from Sterman’s team at MIT. This one looks fancier, but gives you less control over the simulation – you can just pick one of three emissions paths: increasing, stabilized or reducing. On the other hand, it’s very effective at demonstrating the point about emissions vs. concentrations.
  • Carbon Cycle Model from Shodor, originally developed using Stella by folks at Cornell.
  • And while we’re on systems dynamics, I ought to mention toolkits for building your own systems dynamics models, such as Stella from ISEE Systems (here’s an example of it used to teach the global carbon cycle).

Other Related Models

  • A Kaya Identity Calculator, from David Archer at U Chicago. The Kaya identity is a way of expressing the interaction between the key drivers of carbon emissions: population growth, economic growth, energy efficiency, and the carbon intensity of our energy supply. Archer’s model allows you to play with these numbers. (There’s a minimal sketch of the identity just after this list.)
  • An Orbital Forcing Calculator, also from David Archer. This allows you to calculate what effect changes in the earth’s orbit and the wobble of its axis have on the solar energy that the earth receives, in any year in the past or future.
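
Here’s the sketch of the Kaya identity promised above: total emissions written as a product of the four drivers. The numbers are placeholders to show the structure, not real-world data:

```python
# Minimal sketch of the Kaya identity: emissions as a product of the four
# drivers. The numbers below are placeholders to show the structure only.
population = 7.0e9            # people
gdp_per_person = 10_000.0     # dollars of economic activity per person per year
energy_intensity = 6.0e6      # joules of energy used per dollar of GDP
carbon_intensity = 6.0e-11    # tonnes of CO2 emitted per joule of energy

emissions = population * gdp_per_person * energy_intensity * carbon_intensity
print(round(emissions / 1e9, 1), "billion tonnes of CO2 per year")
```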

Useful readings on the hierarchy of climate models

A high school student in Ottawa, Jin, writes to ask me for help with a theme on the question of whether global warming is caused by human activities. Here’s my answer:

The simple answer is ‘yes’, global warming is caused by human activities. In fact we’ve known this for over 100 years. Scientists in the 19th Century realized that some gases in the atmosphere help to keep the planet warm by stopping the earth from losing heat to outer space, just like a blanket keeps you warm by trapping heat near your body. The most important of these gases is Carbon Dioxide (CO2). If there were no CO2 in the atmosphere, the entire earth would be a frozen ball of ice. Luckily, that CO2 keeps the planet at temperatures suitable for human life. But as we dig up coal and oil and natural gas, and burn them for energy, we increase the amount of CO2 in the atmosphere and hence we increase the temperature of the planet. Now, while scientists have known this since the 19th century, it’s only in the last 30 years that scientists have been able to calculate precisely how fast the earth would warm up, and which parts of the planet would be affected the most.

Here are three really good explanations, which might help you for your theme:

  1. NASA’s Climate Kids website:
    http://climatekids.nasa.gov/big-questions/
    It’s probably written for kids younger than you, but has really simple explanations, in case anything isn’t clear.
  2. Climate Change in a Nutshell – a set of short videos that I really like:
    http://www.planetnutshell.com/climate
  3. The IPCC’s frequently asked question list. The IPCC is the international panel on climate change, whose job is to summarize what scientists know, so that politicians can make good decisions. Their reports can be a bit technical, but have a lot more detail than most other material:
    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faqs.html

Also, you might find this interesting. It’s a list of successful predictions by climate scientists. One of the best ways we know that science is right about something is that we are able to use our theories to predict what will happen in the future. When those predictions turn out to be correct, it gives us a lot more confidence that the theories are right: http://www.easterbrook.ca/steve/?p=3031

By the way, if you use google to search for information about global warming or climate change, you’ll find lots of confusing information, and different opinions. You might wonder why that is, if scientists are so sure about the causes of climate change. There’s a simple reason. Climate change is a really big problem, one that’s very hard to deal with. Most of our energy supply comes from fossil fuels, in one way or another. To prevent dangerous levels of warming, we have to stop using them. How we do that is hard for many people to think about. We really don’t want to stop using them, because the cheap energy from fossil fuels powers our cars, heats our homes, gives us cheap flights, powers our factories, and so on.

For many people it’s easier to choose not to believe in global warming than it is to think about how we would give up fossil fuels. Unfortunately, our climate doesn’t care what we believe – it’s changing anyway, and the warming is accelerating. Luckily, humans are very intelligent, and good at inventing things. If we can understand the problem, then we should be able to solve it. But it will require people to think clearly about it, and not to fool themselves by wishing the problem away.