At the start of this series, I argued that Climate Science is inherently a Systems Discipline. To develop that idea, I described two important systems as feedback loops: the earth’s temperature equilibrium loop, and the loop linking economic growth to energy consumption. We then put these two systems together.

The basic climate system now looks like this (leaving out, for now, the dynamics that drive economic development and energy use):

The basic planetary energy balancing loop, with the burning of fossil fuels forcing the temperature to change


Recall that the balancing loop (marked with a ‘B’) ensures that for each change to the input forcings (in this case greenhouse gases and aerosols in the atmosphere), the earth system will settle down to a new equilibrium point: a temperature at which the incoming and outgoing energy flows are balanced again. Each time we increase the concentration of greenhouse gases in the atmosphere, we can expect the earth to warm, slowly, until it reaches this new equilibrium. The economy-energy system (not shown above) is ensuring that we keep on adding more greenhouse gases, so we’re continually pushing the system further and further out of balance. That means we’re continually increasing the eventual temperature rise that the earth will experience before it reaches a new equilibrium.

Meanwhile, the aerosols provide a slight cooling effect, but they wash out of the atmosphere fairly quickly, so their overall concentration isn’t rising much. Carbon dioxide does not wash out quickly – it can remain in the atmosphere for thousands of years. Hence the warming effect dominates.

Now, if that was the whole picture, climate change would be very predictable, using basic thermodynamic principles. Unfortunately, there are other feedback loops that we haven’t considered yet. Here’s one:

The basic climate system with the ice albedo feedback


As the temperature rises, the ice sheets start to melt and shrink. These include the Arctic sea ice, the ice sheets on Greenland and Antarctica, and mountain glaciers across the world. When sea ice melts, it leaves more sea exposed, which is much darker than the ice. When land ice melts, it uncovers rocks, soils, and (eventually) plants, all of which are also darker than ice. Loss of ice therefore lowers the planet’s albedo. A lower albedo means less of the incoming sunlight is reflected straight back into space, so more reaches the surface. And, as we already know, more incoming solar radiation means more energy retained and more warming. In other words, it is a reinforcing feedback loop.

As a quick check, we can use the rule of thumb that reinforcing loops have an even number of ‘-’ links. Trace the path of this loop to check:

Ice albedo feedback loop on its own


Because this is a reinforcing loop, it can modify the behaviour of the basic energy balancing loop. If a warming process starts, this loop can accelerate it, and cause more warming than we’d expect from the main balancing loop alone. In extreme cases, a reinforcing loop can completely destabilize a system that is normally dominated by balancing loops. However, all reinforcing loops must also have limits (remember: nothing can grow forever). In this case, there is clearly a limit once all the ice sheets on the planet have melted. The loop can no longer function at that point.
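To see why a reinforcing loop amplifies warming without necessarily running away, here’s a minimal numerical sketch (all numbers are illustrative, not climate data): each round of warming feeds a fraction of itself back as further warming, so the total converges to the initial warming divided by (1 − gain) whenever the gain is below 1.

```python
# Toy model of a reinforcing loop layered on a balancing loop.
# 'gain' is the fraction of each round of warming that comes back
# as extra warming via the feedback. All values are illustrative.

def warming_with_feedback(initial_warming, gain, steps=200):
    """Sum the geometric series of feedback rounds.
    Converges to initial_warming / (1 - gain) when gain < 1;
    grows without limit when gain >= 1 (a runaway)."""
    total = 0.0
    increment = initial_warming
    for _ in range(steps):
        total += increment
        increment *= gain  # each round feeds back a fraction of itself
    return total

no_feedback = warming_with_feedback(1.0, 0.0)    # 1.0 degree
with_feedback = warming_with_feedback(1.0, 0.5)  # converges to 2.0 degrees
```

With a gain of 0.5, the feedback doubles the warming but the system still settles; only a gain of 1 or more would destabilize it entirely.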

Here’s another reinforcing loop:

Climate system with permafrost feedback


In this loop, as the temperature rises, it melts the permafrost across Northern Canada and Russia. This releases methane from the frozen soils. Methane is a greenhouse gas, so this loop also accelerates the warming. Again, it’s a reinforcing loop, and again, there’s a limit: the loop must stop once all the permafrost has melted.

Here’s another:

Climate system with carbon sinks feedback


This loop occurs because the more greenhouse gases we put into the atmosphere, the more work the carbon sinks have to do. Carbon sinks include the ocean and soils – they slowly remove carbon dioxide from the atmosphere. But the more carbon they have to absorb, the less effective they are at taking more. There’s an additional effect for the ocean, because a warmer ocean is less able to absorb CO2. Some model studies even suggest that after a few degrees of warming, the ocean might stop being a carbon sink and start being a source.

So, put that all together and we have three reinforcing loops working to destabilize the main energy balance loop. The main loop tends to limit the amount of warming we might expect, and the reinforcing loops all tend to increase it:

All three reinforcing loops working together


Remember, all three reinforcing loops might operate at once. More likely, each will kick in at a different time as the planet warms. Predicting when that might occur is hard, as is calculating the likely size of each effect. We can calculate absolute limits for each of these reinforcing loops, but there are likely to be other reasons why a loop stops working before reaching its absolute limit.

One of the goals of climate modelling is to capture these kinds of feedbacks in a computational model, to attempt to quantify the effects, so that we can understand them better. We can use both basic physics and empirical observations to put numbers on each of the relationships in the diagram, and we can experiment with the model to test how sensitive it is to different kinds of perturbation, especially in areas where it’s hard to be sure about the numbers.
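As a sketch of that kind of sensitivity experiment (toy numbers, not real model output): we can treat the strength of a feedback as an uncertain parameter, sample it repeatedly, and look at the spread of equilibrium warming it produces.

```python
import random

def equilibrium_warming(forcing, sensitivity, feedback_gain):
    """Toy equilibrium response: base warming amplified by the feedbacks.
    (Illustrative formula, not a real climate model.)"""
    return forcing * sensitivity / (1.0 - feedback_gain)

random.seed(1)
# Suppose the combined feedback gain is uncertain, somewhere in [0.2, 0.6]
samples = [equilibrium_warming(forcing=1.0, sensitivity=1.0,
                               feedback_gain=random.uniform(0.2, 0.6))
           for _ in range(10_000)]

low, high = min(samples), max(samples)  # spread from ~1.25x to ~2.5x the base warming
```

Even this crude sketch shows why uncertain feedbacks matter: a modest range of uncertainty in the gain roughly doubles the range of possible outcomes.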

However, there’s also the possibility that we missed some important feedback loops. In the model above, we have missed an important one, to do with clouds. We’ll meet that in the next post…

Other posts in this series, so far:

The story so far: First, I argued that Climate Science is inherently a Systems Discipline. To develop that idea, I described two important systems as feedback loops: the earth’s temperature equilibrium loop and economic growth and energy consumption. Now it’s time to put those two systems together…

First, we’ll need to capture the unintended consequences of burning fossil fuels for energy, in the form of two distinct kinds of pollution:

Effect of two different kinds of pollutant


Aerosols are tiny particles (smoke, dust, etc.) produced when dirtier fossil fuels are burnt – particularly sulphate particles formed from sulphur dioxide. Coal is the worst for producing these, but oil produces them as well, especially from poorly tuned gasoline and diesel engines. The effect of aerosols is easy to understand, because we can see them. They hang around in the air and block out the light. They contribute to the clouds of smog that hang over our cities in the summer, and they react with water vapour to create sulphuric acid, leading to acid rain. It’s possible to greatly reduce the amount of aerosols produced when we burn fossil fuels, by processing the fuels first to remove the impurities that would otherwise end up as aerosols. For example, low-sulphur coal is much “cleaner” than regular coal, because it produces very few aerosols when you burn it. That’s good for our air quality.

Greenhouse gases include carbon dioxide, methane, water vapour, and a number of other gases such as Chlorofluorocarbons (CFCs). By volume, CO2 is by far the most common byproduct from fossil fuels, although some of the rarer gases actually have a larger “greenhouse effect”. Some greenhouse gases are “short-lived”, because they are chemically unstable, and break down fairly rapidly (for example, carbon monoxide). Others are “long-lived” because they are very stable. For example, carbon dioxide stays in the atmosphere for thousands of years. Unfortunately, we can’t remove these compounds before we burn fossil fuels, because fossil fuels are primarily made of carbon, and it is the carbon that makes them useful as fuels. So, unlike sulphur, you can’t “clean up” the fuel first. When the coal industry talks about “clean coal” these days, they don’t mean the coal itself is clean; they mean they’re working on technology to capture the CO2 after it is produced, but before it disappears up the chimney. Whether this can work cost-effectively on a large scale is an open question.

These two pollutants have opposite effects on the climate system, because each blocks a different part of the spectrum. Aerosols block visible light, and hence reduce the incoming sunlight (like adding a sunshade). Greenhouse gases block infrared radiation, and hence reduce the outgoing radiation from the planet (like adding an extra blanket):

The effect of these two different kinds of pollutant


Now when we look at these two effects in the context of all the feedback loops we’ve explored so far, we get the following:

The energy system interacting with the basic climate system


So aerosols reduce the net radiative forcing (causing cooling), and greenhouse gases increase it (causing warming). The earth’s energy balance loop means that each time the concentrations of aerosols and greenhouse gases in the atmosphere change, the earth will change its temperature until all the forces balance out again. Unfortunately, the reinforcing loop that drives energy consumption means that the concentrations of these pollutants are continually changing, and they’re changing at a rate that’s faster than the earth’s balancing loop can cope with. We already noted that the earth’s balancing loop can take several decades to find a new equilibrium. If we were able to “turn off the tap”, so that we’re not adding any more of these pollutants (but we leave the ones that are already in the atmosphere), we’d find the earth’s temperature continues to change.

Which one is winning? Satellites allow us to measure the different effects fairly accurately, and observations from the pre-satellite era allow us to extrapolate backwards, so we can estimate the total effect of each from pre-industrial times to the present. Here’s a chart summarizing the effects:

Total Radiative Forcing from different sources for the period 1750 to 2005. From the IPCC Fourth Assessment Report (AR4). Click for bigger version and original IPCC caption.
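To get a feel for the totals, here’s the arithmetic with approximate central values read off that chart (rounded, and omitting a few of the smaller terms):

```python
# Approximate central values (W/m^2) for forcing since pre-industrial times,
# rounded from the AR4 chart above; several smaller terms are omitted.
forcings = {
    'CO2':                 1.66,
    'methane':             0.48,
    'nitrous oxide':       0.16,
    'halocarbons':         0.34,
    'ozone':               0.30,
    'aerosols (direct)':  -0.50,
    'aerosols (indirect)': -0.70,
}

warming_terms = sum(v for v in forcings.values() if v > 0)  # ~+2.9 W/m^2
cooling_terms = sum(v for v in forcings.values() if v < 0)  # ~-1.2 W/m^2
net = warming_terms + cooling_terms                         # ~+1.7 W/m^2
```

The positive terms outweigh the negative ones by more than a factor of two – that’s the “winning by a large margin” below.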


Note that aerosols have two different effects. The direct effect is the one we described in the system diagram above – it blocks incoming sunlight. The indirect effect is because aerosols also interact with clouds. We’ll explore the indirect effect in a future post. However we look at it, the greenhouse gases are winning, by a large margin. That should mean the planet is warming. And it is:

Land-surface temperatures from 1750-2005, from the Berkeley Earth Surface Temperature project (click for original source)

Note the steep rise from the 1980s onwards, and compare it to the exponential curve of greenhouse gas emissions we saw earlier. More interestingly, note the slight fall in the immediate postwar period (1940s to 1970s). One hypothesis for this is that during this period the sulphate aerosols were winning. There’s some uncertainty about the exact size of the aerosol effect during this period (note the size of the ‘uncertainty whiskers’ on the bar graph above). However, it’s true that concern about acid rain led to legislation and international agreements in the 1980s to reduce sulphate emissions from fossil fuels.

The fact that sulphate aerosols have a cooling effect that can counteract the warming effect from greenhouse gases leads to an interesting proposal. If we can’t reduce the amount of greenhouse gases we emit, maybe instead we can increase the amount of sulphate aerosols. This has been studied as a serious geo-engineering proposal, and could be done quite cheaply, although we’d have to keep pumping up more of the stuff, as it washes out of the atmosphere fairly quickly. Alan Robock has identified 20 reasons why this is a bad idea. But really, we only need to know one reason why it’s a silly idea, and that comes directly from our analysis of the feedback loops in the economic growth and energy consumption system. As long as that loop is producing an exponential growth in greenhouse gas emissions, any attempt to counteract them would also have to grow exponentially to keep up. The dimming effect from sulphate aerosols will affect many things on earth, including crop production. Committing ourselves to a path of exponential growth in sulphate aerosols in the stratosphere is quite clearly ridiculous. So if we ever do try this, it can only ever be a short-term solution, perhaps to buy us a few years to get the growth in greenhouse gases under control.

One other comment about the system diagrams we’ve created so far. Energy is mentioned twice in the diagrams: once in the loop describing economic growth, and once in the earth’s energy balance loop. We can compare these two. In the top loop, the current worldwide energy consumption by humans is about 16 terawatts. In the bottom loop, the current amount of energy being added to the earth due to greenhouse gases is about 300 terawatts. So the earth is currently gaining about 18 times the amount of energy that the entire human race is actually using. (Here’s how this is calculated)
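A rough version of that calculation (the ~0.6 W/m² planetary energy imbalance is an assumed figure, chosen to be consistent with the 300 terawatts quoted above):

```python
import math

radius_m = 6.371e6                          # mean radius of the Earth, metres
surface_area = 4 * math.pi * radius_m ** 2  # ~5.1e14 m^2
imbalance = 0.6                             # assumed planetary energy imbalance, W/m^2

extra_power_tw = imbalance * surface_area / 1e12  # ~306 TW gained by the earth
human_power_tw = 16                               # total human energy consumption
ratio = extra_power_tw / human_power_tw           # ~19x
```

Even a small per-square-metre imbalance, multiplied over the whole surface of the planet, dwarfs everything humanity does with energy.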

Finally, note that although the diagram contains four different feedback loops, none of these are what climate scientists mean when they talk about feedbacks in the climate system. To understand why, we have to make a distinction between the basic operation of the system I’ve described so far (which drives global warming), and additional chains of cause-and-effect that respond to changes in the basic system. If you start warming the planet, using the system we’ve described so far, there are many other consequences. Some of those consequences can come back to bite you as reinforcing feedbacks. We’ll start looking at these in the next post.


In part 1, I described the central equilibrium loop that controls the earth’s temperature. Now it’s time to look at other loops that interact with central loop, and tend to push it out of balance. The most important of these is the use of fossil fuels that produce greenhouse gases, which change the composition of the atmosphere. Let’s first have a look at energy consumption on its own. Here’s the basic loop:

Core economic growth and energy consumption loop


This reinforcing loop has driven the growth in the economy and energy use since the beginning of the industrial era. As we might expect from a reinforcing loop, this dynamic creates an exponential curve – in both the size of the global economy and the consumption of fossil fuels. For example, here’s the curve for carbon produced per year from fossil fuels (data from CDIAC):


The exponential rise in global carbon emissions (click for bigger version)

For the first century, the curve looks flat, but it’s not zero. In 1751 the world was producing about 3 million tonnes of carbon per year, and this rises to about 50 million tonnes per year by 1851. The growth really gets going in the postwar period. There are dips for the global recessions of the 1930s and 1980s, but these barely dent the overall rise. (For a slightly more detailed exploration of the dynamics that drive this exponential growth, see the Postcarbon Institute’s 300 years of fossil fuels in 300 seconds).
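Using just the two figures in that paragraph, we can back out the implied growth rate over that first century (a rough calculation, assuming smooth compound growth):

```python
import math

# Emissions grew by a factor of ~50/3 between 1751 and 1851 (figures from the text)
growth_factor = 50 / 3
years = 100

annual_rate = growth_factor ** (1 / years) - 1           # ~2.9% per year
doubling_time = math.log(2) / math.log(1 + annual_rate)  # ~25 years
```

Even in the “flat-looking” first century, emissions were doubling roughly every 25 years – the signature of a reinforcing loop.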

Exponential growth cannot go on forever, so there must be a balancing loop somewhere that (eventually) brings this growth to a halt. The world’s supply of fossil fuels is finite, so if we keep climbing the exponential curve, we must eventually run out. But long before that happens, prices start to rise because of scarcity. So the actual balancing loop looks like this:


I call the left hand loop the “peak oil” balancing loop for economic growth

In this new loop, each link inverts the direction of change: as consumption of fossil fuels rises, the remaining reserves fall. As reserves fall, the price tends to go up. As the price goes up, the rate of consumption falls. It’s a balancing loop because (once it starts operating) each rise in fossil fuel consumption should cause a price rise that then damps down further consumption.
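We can sketch these two loops as a crude stock-and-flow simulation (all parameters invented for illustration): consumption tries to grow exponentially, but is damped in proportion to remaining reserves, which stands in for the price signal.

```python
def peak_oil_curve(initial_reserves=500.0, consumption=1.0,
                   growth=0.05, steps=300):
    """Reinforcing loop: consumption tries to grow by `growth` each step.
    Balancing loop: scarcer reserves -> higher prices -> damped consumption,
    modelled here by scaling consumption by the fraction of reserves left.
    All parameter values are invented for illustration."""
    reserves = initial_reserves
    curve = []
    for _ in range(steps):
        consumption = min(consumption, reserves)  # can't burn what isn't there
        curve.append(consumption)
        reserves -= consumption
        consumption *= (1 + growth) * (reserves / initial_reserves)
    return curve

curve = peak_oil_curve()
peak_step = curve.index(max(curve))  # consumption rises, peaks, then declines
```

The result is the rise-peak-decline shape of Hubbert’s curve: the reinforcing loop dominates early on, the balancing loop dominates after the peak.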

If these two loops operate on their own, we might expect an initial exponential curve in the use of fossil fuels, until there are signs that reserves are being depleted, and then a gradual phasing out of fossil fuels, producing the classic bell-shaped curve of Hubbert’s peak theory. In the process, economic development would also grind to a halt, unless we manage to decouple the first loop fairly quickly, by switching to renewable sources of energy. In theory, the rising price of fossil fuels should cause this switch to happen gracefully – a properly functioning economic system should guarantee this. Unfortunately, the switch is not easy, because we’ve built a massive energy infrastructure based exclusively on fossil fuels, and this locks us in to a dependency on them. This lock-in, along with the exponential growth in demand, means that we cannot just switch to alternative energy as the prices rise – a more likely outcome is an overshoot, where the rate of consumption is stuck in an upwards trend, causing prices to shoot up, while the balancing loop is unable to do its stuff.

However, there’s another complication. Conventional sources aren’t the only way to get fossil fuels. As the price rises, other sources become viable. The classic example is the Alberta oil sands. Twenty years ago, nobody could extract these because it was too expensive, compared to the price of oil. Today, the price of oil is high enough that exploiting the oil sands becomes profitable. So there’s another loop:

I call this new loop the "tar sands" balancing loop


This new loop balances the rising prices from the middle loop when reserves start to fall. So now we have a system that could keep the economy functioning well beyond the point at which we exhaust conventional sources of fossil fuels. At each new price point, there’s a stimulus to start tapping new sources of these fuels, and as these new sources come on stream, they allow the global economy to keep growing, and the consumption of fossil fuels to keep rising. To someone who just studies economics of energy, everything looks okay for the foreseeable future (except that cheap oil is never coming back). To someone who predicts doom because of peak oil, it complicates the picture (except that the resource depletion predictions were basically correct). But to someone who studies climate, it means the challenge just got harder…

In the next post, I’ll link this system with the basic climate system.


I wrote earlier this week that we should incorporate more of the key ideas from systems thinking into discussions about climate change and sustainability. Here’s an example: I think it’s very helpful to think about the climate as a set of interacting feedback loops. If you understand how those feedback loops work, you’ve captured the main reasons why climate change is such a massive challenge for humanity. So, this is the first in a series of posts where I attempt to substantiate my argument. I’ll describe the global climate in terms of a set of balancing and reinforcing feedback loops. (Note: This is a very elementary introduction. If you prefer a detailed mathematical treatment of feedbacks in the climate system, try this paper by Gerard Roe)

Before we start, we need some basic concepts. The first is the idea of a feedback loop. We’re used to thinking in terms of linear sequences of cause and effect: A causes B, which causes C, and so on. However, our interactions with the world are rarely like this. More often, change tends to feed back on itself. For example, we identify a problem that needs solving, we take some action to solve it, and that action ends up changing our perception of the problem. The feedback usually comes in one of two forms. The first is a balancing feedback: The more you try to change something, the more the world pushes back and makes it harder. Take dieting for example: if we manage to lose a few pounds, the sense of achievement can make us complacent, and then we put the weight all back on again. The second form is a reinforcing feedback. This is where success feeds on itself. For example, perhaps we try a new exercise regime, and it makes us feel energized, so we end up exercising even more, and so on.

In physics and engineering, these are usually called ‘positive’ and ‘negative’ feedback loops. I prefer to call them ‘reinforcing’ and ‘balancing’ loops, because it’s a better description of what they do. People tend to think ‘positive’ means good and ‘negative’ means bad. In fact, both types of loop can be good or bad, depending on what you think the system ought to be doing. A reinforcing loop is good when you want to achieve a change (e.g. your protest movement goes viral), but is certainly not good when it’s driving a change you don’t want (a forest fire spreading towards your town, for example). Similarly, a balancing loop is good when it keeps a system stable that you depend on (prices in the marketplace, perhaps), but is bad when it defeats your attempts to bring about change (as in the dieting example above). Of course, what’s good to one person might be bad to someone else, so we’ll set aside such value judgements for the moment, and just focus on how the loops work in the climate system.

It helps to draw pictures. Here’s an example of how both types of loop affect a tech company trying to sell a new product (say, the iPhone):

The action of reinforcing and balancing feedback loops in selling iPhones


You can read the arrows labelled “+” as “more of A tends to cause more of B than there otherwise would have been, while less of A tends to cause less of B than there otherwise would have been”. The arrows labelled “-” mean “more of A tends to cause less of B, and less of A tends to cause more of B”. [Note: there are some subtleties to this interpretation, but we can ignore them for now.]

On the left, we have a reinforcing loop (labelled with an ‘R’): the effect of word of mouth. The more iPhones we sell, the more people there are to spread the word, which in turn means more get sold. This tends to create an exponential growth in sales figures. However, this cannot go on forever. Sooner or later, the balancing loop on the right (labelled with a ‘B’) starts to matter. The more iPhones sold, the fewer people there are left without one – we start to saturate the market. The more the market is saturated, the fewer iPhones we can sell. The growth in sales slows, and may even stop altogether. The resulting graph of sales over time might look like this:

How the sales of iPhones might look over time


When the reinforcing loop dominates, sales grow exponentially. When the balancing loop dominates, sales stagnate. In this case, the natural limit is when everyone who might ever want an iPhone has one. Of course, in real life, the curves are never this smooth – other feedback loops (that we haven’t mentioned yet) kick in, and temporarily push sales up or down. However, we could hypothesize that these two loops do explain most of the dynamic behaviour of the sales of a new product, and everything else is just noise. In many cases this is true – diffusion of innovation studies frequently reveal this type of S-shaped curve.
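Here’s a toy simulation of those two loops (invented parameters, not real sales data): new sales are proportional both to the number of owners (word of mouth) and to the number of people left without one (saturation).

```python
def product_sales(market_size=1000.0, contact_rate=0.001, steps=100):
    """Word-of-mouth reinforcing loop x market-saturation balancing loop.
    New sales per step ~ owners * remaining market, tracing an S-curve.
    All parameter values are invented for illustration."""
    total_sold = 1.0  # seed: the first few early adopters
    history = [total_sold]
    for _ in range(steps):
        new_sales = contact_rate * total_sold * (market_size - total_sold)
        total_sold += new_sales
        history.append(total_sold)
    return history

history = product_sales()
# Early on, growth accelerates (reinforcing loop dominates);
# later, it flattens out near the market size (balancing loop dominates).
```

This is essentially the logistic curve that diffusion-of-innovation studies keep finding.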

The structure of these two loops and the S-shaped curve they produce describe many real world phenomena: the spread of disease, growth of a population, the growth of a firm, the spread of a forest fire. In each case, there may well be other feedback loops that complicate the picture. But the underlying story about growth and its limits still captures a basic truth: exponential growth occurs when there is a reinforcing feedback loop, and as nothing can grow exponentially forever, there must always be a balancing loop somewhere that provides a limit to growth.

Okay, that’s enough background. Time to look at the first feedback loop in the global climate system. We’ll start with the global climate system in its equilibrium state – i.e. when the climate is not changing. The climate has been remarkably stable for the last 10,000 years, since the end of the last ice age. Over that time, it has varied by less than 1°C. That stability suggests there are likely to be balancing feedback loops keeping it stable. The most important of these is the basic energy balance loop:

The Earth's energy balance as a balancing loop


The temperature of the planet is determined primarily by the balance between the incoming energy from the sun and the outgoing energy lost back into space. The incoming energy is in the form of shortwave radiation from the sun, and the amount we get is determined by the solar constant (which, of course, is not really constant, although the variations were too small to measure before the satellite era). The incoming energy from the sun, averaged out over the surface of the earth, is about 340 watts per square meter. If this is greater than the outgoing energy, the imbalance causes the earth to retain more energy, and so the temperature rises. As a warmer planet loses energy faster, this increases the outgoing radiation, which in turn reduces the imbalance again (i.e. this is a balancing loop).
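As a quick plausibility check on this balance, we can compute the equilibrium temperature implied by the Stefan-Boltzmann law. The 340 W/m² figure is from the text; the albedo value of 0.3 is a commonly quoted approximation I’m assuming here.

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
incoming = 340.0  # average incoming solar radiation, W/m^2 (from the text)
albedo = 0.3      # fraction reflected straight back to space (assumed value)

absorbed = incoming * (1 - albedo)          # ~238 W/m^2 actually retained
t_equilibrium = (absorbed / SIGMA) ** 0.25  # ~255 K, i.e. about -18 C
```

That’s well below the observed average surface temperature (around 288 K); the difference is the natural greenhouse effect, which the pollutants discussed earlier in this series then strengthen.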

Imagine there’s an overshoot – i.e. the outgoing radiation rises, but goes a little too far, so that it’s now more than the incoming solar radiation. This reduces the net radiative forcing so far that it becomes negative. But a decrease in net radiative forcing tends to cause a decrease in energy retained, which causes a decrease in temperature, which causes a decrease in outgoing radiation again. So the balancing loop also cancels out any overshoot sooner or later. In other words, the structure of this loop always pushes the planet to find a (roughly) stable equilibrium: essentially, if the incoming and outgoing energy ever get out of balance, the temperature of the planet rises or falls until they are balanced again.

Note that we could tell this is a balancing loop, without tracing the effects, just by counting up the number of “-” links. If it’s an odd number, it’s a balancing loop; if it’s even (or zero), it’s a reinforcing loop. In my systems thinking class, we play a game that simulates different kinds of loop, with each person acting as one link (some are “+” links, some are “-” links). The students usually find it hard to predict how loops of different structure will behave, but once we’ve played it a few times, everyone has a good intuition for the difference between reinforcing loops and balancing loops.
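The counting rule is easy to mechanize. Here’s a tiny function, with the link lists written out by hand from the diagrams in this series:

```python
def loop_type(link_signs):
    """Classify a feedback loop from its link polarities:
    an odd number of '-' links makes it balancing,
    an even number (or zero) makes it reinforcing."""
    return 'balancing' if link_signs.count('-') % 2 == 1 else 'reinforcing'

# temperature -> outgoing radiation (+), outgoing radiation -> net forcing (-),
# net forcing -> energy retained (+), energy retained -> temperature (+)
energy_balance = ['+', '-', '+', '+']

# temperature -> ice (-), ice -> albedo (+), albedo -> incoming radiation (-),
# incoming radiation -> energy retained (+), energy retained -> temperature (+)
ice_albedo = ['-', '+', '-', '+', '+']
```

One ‘-’ link makes the energy balance loop balancing; the two ‘-’ links in the ice albedo loop cancel out, making it reinforcing.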

There is one more complication for this loop. The net radiative forcing determines the rate at which energy is retained, rather than the total amount. If the net forcing is positive, the earth keeps on retaining energy. So although the resulting rise in temperature (and, if you follow the loop around, the resulting decrease in the net radiative forcing) reduces the rate at which energy is retained, and hence the rate of warming, the warming won’t actually stop until the net radiative balance falls to zero. And then, when the warming stops, the planet doesn’t cool off again – the loop ensures it stays at this new temperature. It’s a slow process because it takes time for the planet to warm up. For example, the oceans can absorb a huge amount of energy before you’ll notice any increase in temperature. This means the loop operates slowly. We know from simulations (and from studies of the distant past) that it can take many decades for the planet to find a new balance in response to a change in net radiative forcing.
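A one-box sketch of that delay (the parameter values are rough, illustrative choices, not outputs of a real model): a heat capacity C, standing in mostly for the ocean, means the temperature can only approach its new equilibrium gradually after a step change in forcing.

```python
C = 8.0       # effective heat capacity, W*yr/m^2 per K (assumed, mostly ocean)
LAMBDA = 1.2  # extra outgoing radiation per degree of warming, W/m^2/K (assumed)
F = 3.7       # step change in forcing, W/m^2 (roughly a doubling of CO2)

steps_per_year = 10
temps = [0.0]
for _ in range(60 * steps_per_year):  # simulate 60 years
    # C * dT/dt = F - LAMBDA * T  (retained energy warms the box)
    temps.append(temps[-1] + (F - LAMBDA * temps[-1]) / (C * steps_per_year))

after_10_years = temps[10 * steps_per_year]  # ~2.4 K: most of the way there
after_60_years = temps[-1]                   # ~3.08 K, still creeping up
equilibrium = F / LAMBDA                     # ~3.1 K, only reached asymptotically
```

Even this crude box takes decades to close most of the gap – and the real ocean, with its deep layers, responds more slowly still.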

There are, of course, other feedback loops to complicate the picture, and some of them are reinforcing loops. I’ll describe some of these in my next post. But from an understanding of this one loop, we can gain a number of insights:

  1. This loop, on its own, cannot produce a runaway global warming (or cooling) – the earth will eventually find a new equilibrium in response to a change in net radiative forcing. More precisely, for a runaway warming to occur, some other reinforcing loop must dominate this one. As I said, there are some reinforcing loops, and they complicate the picture, but nobody has managed to demonstrate that any of them are strong enough to overcome the balancing effect of this loop.
  2. The balancing loop has a delay, because it takes a lot of energy to warm the oceans. Hence, once a change starts in this loop, it takes many decades for the balancing effect to kick in. That’s the main reason why we have to take action on climate change many decades before we see the full effect. On human timescales, the earth’s natural balancing mechanism is a very slow process.
  3. If we make a one-time change to the radiative balance, the earth will slowly change its temperature until it reaches a new balance point, and then will stay there, because the balancing loop keeps it there. However, if there is some other force that keeps changing the radiative balance, despite this loop’s attempts to adjust, then the temperature will keep on changing. Our current dilemma with respect to climate change isn’t that we’ve made a one-time change to the amount of greenhouse gases in the atmosphere – the dilemma is that we’re continually changing them. This balancing loop only really helps once we stop changing the atmosphere.


Early in my career I trained as a systems analyst. My PhD was about the ability to identify and make use of multiple perspectives on a system when understanding people’s needs, and designing new information systems to meet them. I became a “systems thinker”, although I didn’t encounter the term until later.

I also didn’t really appreciate until recently how much systems thinking changes everything about how you perceive the world. Perhaps the best analogy is the scene in The Matrix, when Morpheus offers Neo the choice of the red pill or the blue pill. One of these choices will allow him to step outside of the system and see it in a new way. Once he has done that he can never go back to seeing the world the way he used to (although there’s an interesting subplot in the movie where one of the characters tries to do exactly that).

When I think about climate change, I approach it as a systems thinker. I look for parts of the problem that I can characterize as a system: where are the inputs and outputs, boundaries and control mechanisms, positive and negative feedbacks, interactions with other systems? I want to build systems dynamics models that capture a system as a set of stocks and flows, and explore how cycles and delays affect the overall behaviour of the system. And of course, I’m always looking out for emergent properties: things that arise as a result of interactions across a system but that cannot be studied through reductionism.

It’s not surprising then, that I’m fascinated by Earth System Models (ESMs). These capture some of the most complex systems interactions ever described in a computational model – on a planetary scale! ESMs can be used to explore how processes at small scales give rise to emergent properties on a global scale. They provide a test-bed for what-if questions, to explore whether our understanding of the physical systems makes sense. And fundamentally, they’re used to probe questions of stability of the system: the relationship between the size of a “forcing” (which tends to push the system out of equilibrium) and the size of its “effect” (e.g. how sensitive is the global average temperature to a doubling of CO2?). To connect the two, you have to explore the positive and negative feedbacks that amplify (or dampen) the effects. And of course, we’d like to understand the nature of tipping points, thresholds beyond which positive feedbacks can push the system towards entirely different equilibrium points.

People who don’t understand climate change tend to lack a grasp of how complex systems work, and that’s unfortunate because for any system of sufficient complexity, most of its behaviour is counter-intuitive. People ask how a gas that forms such a tiny fraction of the atmosphere can have such a large effect, because they don’t understand that the earth constantly receives huge amounts of energy from the sun and radiates it back into space, and that it only takes a tiny imbalance between the input and output to disrupt the planet’s equilibrium. People assume the climate system will always tend to revert to the stable pattern it has exhibited in the past, because they don’t understand positive feedbacks and exponential change. People assume we can wait to fix the climate system once we’ve seen how bad it might get, because they don’t understand the ideas of inertia and overshoot when a system has a delayed response to a stimulus. And people wonder how we can predict anything at all about climate dynamics, because they confuse chaos with randomness.

Climate science (and especially climate modeling) is inherently a systems discipline. However, climate scientists tend to hail from the physical sciences, and hence sometimes seem to miss an important aspect of systems analysis. In the physical sciences, you learn how to observe and experiment with physical systems in order to understand and explain them. But you’re not trained to re-design them to work better – that’s generally left to the engineers. Unfortunately, most engineering disciplines don’t cover systems thinking either. They concern themselves with the properties of families of devices (e.g. electrical circuits), and how such devices can be applied to solve problems. Engineers are not usually trained to re-conceptualize systems in entirely new ways, to understand how they can be changed. (Systems Engineering would be the exception here, but it’s a very young discipline).

So systems thinkers are quite rare, both across the physical sciences and the engineering disciplines. You actually encounter more of them in the social sciences, because social systems tend to defy attempts at understanding them through reductionism, and because social scientists are often more comfortable with constructivism: the idea that the systems we describe as existing in the world are really only mental constructs, arrived at through social processes. My favourite definition of a system, from Gerald Weinberg is “a way of looking at the world”. In a sense, systems aren’t “out there” in the world, waiting to be studied. Systems are a convenient mental tool for making sense of how things in the world interact with one another. This means there’s no such thing as the “climate system”, just lots of interacting thermodynamic and chemical processes. That we choose to call it a ‘system’, name its parts, and treat it as a whole, is a convenience. But it’s a very useful one, because it offers rich insights for understanding, for example, how human activities alter the climate. Modelling the climate as a system means that we have to decide which clusters of things in the world to include in the models, and where we might usefully draw system boundaries. And if we’re doing this right, we ought to acknowledge that there are other ways of viewing these systems – no decision about where to draw system boundaries can ever be ‘correct’, but some decisions lead to more insights than others (compare with Box’s famous saying about models: “All models are wrong, but some are useful”).

While traditional branches of science offer tools and methods for understanding each of the pieces of the climate system, the study of the climate system as a whole requires a different approach. It is a trans-disciplinary field, because the interactions that matter include physical, chemical, biological, geographic, social, and economic processes. It goes beyond traditional methodologies of the physical sciences because it is anti-reductionist: it must grapple with understanding holistic properties of systems, even when the detailed behaviour of those systems is not sufficiently understood. In other words, it’s a systems science, and climate modellers have to be systems thinkers.

All this leads me to argue that we should incorporate more of the key ideas from systems thinking into discussions about climate change and sustainability. I think that a better understanding of systems dynamics would help a lot in giving people the right intuitions about climate change. And I think a better understanding of critical systems approaches would give people a better understanding of how to improve collective decision-making around climate policy.

Note: This is the first of a series of posts exploring the systems dynamics of climate change. Here’s the rest of the series, so far:

I love working in a University. Every day I encounter new ideas, and I get to chat to some of the smartest people on the planet. But I see signs, almost every day, that universities are poorly equipped to face the complex challenges of the 21st Century. Challenges like poverty, climate change, resource depletion, sustainable agriculture, and so on. The problem is that universities are organized into departments that correspond to disciplines like physics, computer science, sociology, geography, etc. Most of the strategic decision-making is made in these departments – which faculty to hire, which degree programs to offer, what research to focus on. But the grand challenges of the 21st Century are trans-disciplinary. To address them, we need people who can transcend their own disciplinary background; people who are not only comfortable working with a range of experts from many different fields, but who actively go out and seek such interactions. In marketing speak, we need T-shaped people:

Jelinski, who is vice chancellor for research and graduate studies at Louisiana State University, talked about a new “T-shaped” person with disciplinary depth, in biology for example, but with the ability, or arms, to reach out to other disciplines. “We need to encourage this new breed of scientist,” she said. [“Researchers Seek Basics Of Nano Scale,” The Scientist, August 21, 2000]

Universities don’t do this well because the entire reward structure for departments and professors is based purely on demonstrating disciplinary strength. Occasionally, we manage to establish inter-disciplinary centres and institutes, to act as places where faculty and students from different departments can come together and learn how to collaborate. A few of these prosper, but most of them get shut down fairly rapidly by the university. Here’s what happens. A new centre is set up with an initial research grant, perhaps for 3-5 years, which typically pays only for researchers’ salaries and equipment. The university agrees to provide space, administrative staff, and pay the utility bills for a limited time, because opening a new facility is good press, but then expects each centre to become “self-sufficient”. This is, of course, impossible, because no granting agency ever covers the full cost of running a research centre. The professors who want to make the centre succeed spend most of their time writing more grant proposals, most of which don’t get funded because competition for funding is tough. Nobody has much time to do the important inter-disciplinary work that the centre was established for. After five years, the university shuts it down because it didn’t become self-sufficient. A research centre at U of T that I’ve spent a lot of time at over the past few years is being shut down this month for these very reasons.

The same thing happens to inter-disciplinary graduate programs. While departments run graduate programs focusing on disciplinary strength, some enterprising professors do manage to set up “collaborative programs”, which students from a range of participating departments can sign up for. The collaborative programs are set up using seed money, some of which is donated by the participating departments, and some of which comes from the university teaching initiative funds, because they all agree the program is a good idea, and the students will benefit. However, after a few years, the seed money has been used up, and no unit within the university will kick in more, because the program is supposed to be “self-sufficient”. No such program can ever be self sufficient, because the students who participate are accounted for in their home departments. The collaborative program doesn’t generate any extra revenue, and the departments view it as ‘stealing their students’. Without funding, the program shuts down. A collaborative graduate program at U of T that I serve on the advisory board for is ending this month for these very reasons.

Not only does the university structure tend to squeeze out anything that does not fit into a neat disciplinary silo, it also generates rules that actively prevent students from acquiring the skills needed to be “T-shaped”. For example, my own department has “breadth requirements” that graduate students have to meet when selecting a set of courses. “Breadth” here means breadth across the discipline. So students have to demonstrate they’ve taken courses that cover several different subfields of computer science, and several different research methodologies within the field. But this is the opposite of T-shaped! A T-shaped student has disciplinary *depth* and *inter-disciplinary* breadth. This would mean deep expertise in a particular subfield of computer science, and the ability to apply that expertise in many different contexts outside of computer science. Instead, we prevent students from getting the depth by forcing them to take more introductory courses within computer science, and we prevent them from getting inter-disciplinary breadth for the same reason.

Working within a university sometimes feels like the intellectual equivalent of being at a lavish buffet but prevented from ever leaving the pasta section.

Next year, I’ll be teaching a new undergraduate course, as part of an initiative by the Faculty of Arts and Science known as Big Ideas courses. The idea is to offer trans-disciplinary courses, team taught by professors from across the physical sciences, social sciences, and humanities, that will probe important ideas about the world from different disciplinary perspectives. For the coming year, U of T is launching three Big Ideas courses:

  • BIG100: “The end of the world as we know it”;
  • BIG101: “Energy: From Fire to the Future”;
  • BIG102: “The Internet: Saving Civilization or Trashing the Planet?”

I’m delighted to be teaming up with Prof Miriam Diamond from Earth Sciences and Prof Pamela Klassen from Study of Religion to teach BIG102. Our aim is to give students some understanding of how the technologies that drive the internet work, and then to explore how the internet has reshaped the way we use information, our knowledge and beliefs about the world, and the impact that creating (and disposing of) internet technologies has on the environment, on the economy, and on the dynamics of innovation. A key goal is to foster critical thinking and information literacy skills, and especially to be able to think about and analyze a complex system-of-systems from different perspectives.

For the first term, we’re planning to cover a broad set of provocative questions, to get students thinking about the internet from different perspectives:

  1. What is a big idea? (A course introduction, and a primer on trans-disciplinary thinking)
  2. Who invented the internet? (Myths about the internet, and why they stick)
  3. How does the internet work? (An introduction to some of the key technologies)
  4. How new is the internet? (A short history of communications technologies, to put the internet in its historical context)
  5. Has the internet changed us? (We’ll explore in particular, how the internet is transforming universities and learning)
  6. What is the environmental footprint of the internet? (An initial assessment of energy consumption, resource extraction, and waste disposal)
  7. Does the internet make us smarter? (An exploration of how internet search works, and how it affects our approaches to problem-solving)
  8. Is the internet a time-saver or time-waster? (How the internet offers endless distractions, blurs distinctions between work and leisure, and its overall effect on productivity)
  9. Can you be anonymous on the internet? (The idea of your information footprint – who’s keeping track of data about you, how they do it, and why)
  10. Is the Internet a Cheater’s Paradise? (From plagiarism to adultery – how the internet facilitates cheating, new ways of discovering it, and virtual vigilante justice)
  11. Who’s Not Online? (The idea of the digital divide, and the demographic and socio-economic factors that limit people’s access)
  12. Gadgets as Gifts? (Just in time for the Christmas break, we’ll explore the environmental impact of our love of new gadgets, and whether there are sustainable alternatives)

In the second term, we plan to pick three themes to explore in more detail, so that we can explore inter-connections between some of these questions, and get the students engaged in independent research projects that synthesize what they’re learning:

  1. The Internet and the Innovation Imperative.
    • Is the Internet Innovative? How Moore’s law has driven innovation; the dotcom boom and bust; and the current hype around new technologies such as 3D printing, sensor networks, and the semantic web.
    • What are the Resource Implications of the Internet? We’ll use material flow analysis to explore extraction and disposal and likely shortages of strategic minerals, and the geo-political implications of attempting to feed an exponential growth in demand.
    • The Environmental and Human Health Burden of the Internet. Building on the discussion of resource implications, we’ll look at the health implications of mineral extraction and e-waste disposal, and the burden this places on people and ecosystems, especially in poorer countries.
    • What is the Opportunity Cost of the Internet? Does investment in internet innovation mean we’re underinvesting in other things (e.g. clean energy, transport, social innovation)? Have we developed an over-optimistic belief that IT technologies can solve all problems?
  2. The Internet, Democracy, and Security.
    • Censorship & Internet Governance. How much power do governments have to control what happens on the internet? Does the internet enhance or undermine democracy?
    • The Underbelly of the Internet: Hackers, Espionage, and Trolls. How internet systems can be exploited by different groups, for example by crime syndicates who break into secure systems, by political groups who use a web presence to spread misinformation, and by internet trolls who violate social norms to disrupt and intimidate online discussions.
    • Does the Internet make us a more open society? The open source movement and its successors (open government, creative commons, etc) are based on the idea that if everyone has access to the inner workings of systems, this removes barriers to participation, fosters creativity, and makes those systems better for everyone. But does it work?
    • Transnational Jurisdiction: Legal boundaries and the Internet. We’ll wrap up this theme with a question about who should police the internet.
  3. The Internet, Communities, and Interpersonal Relationships
    • Does your Google-Brain make you forget? How has instant access to vast amounts of information changed our memories and our perceptions of ourselves? For example, does GPS route-finding mean we lose our ability to navigate and our sense of place? And what are the implications of the kind of personal digital archives that technologies such as Google Glass might allow us to create?
    • Can you find love on the Internet? An exploration of how the internet changes personal relationships, from the role of dating sites and virtual social networks, to the way that online porn affects our perceptions of gender roles and body image.
    • Can you find God on the Internet? How the internet affects religious communities, tolerance of different worldviews, and the very nature of faith.

Of course, this outline is still a draft – we’ll refine it over the next few months as we prepare for the first group of students in September.

We’re still exploring which textbooks to use, and even whether ‘books’ makes sense for a course like this – we’re hoping to make this a constructivist learning experience by using a variety of different internet-based media and information access tools throughout the course. However, we’re currently evaluating these books:

Feel free to suggest other books and material!

We’re taking the kids to see their favourite band: Muse are playing in Toronto tonight. I’m hoping they play my favourite track:

I find this song fascinating, partly because of the weird mix of progressive rock and dubstep. But more for the lyrics:

All natural and technological processes proceed in such a way that the availability of the remaining energy decreases. In all energy exchanges, if no energy enters or leaves an isolated system, the entropy of that system increases. Energy continuously flows from being concentrated to becoming dispersed, spread out, wasted and useless. New energy cannot be created and high grade energy is destroyed. An economy based on endless growth is unsustainable. The fundamental laws of thermodynamics will place fixed limits on technological innovation and human advancement. In an isolated system, the entropy can only increase. A species set on endless growth is unsustainable.

This summarizes, perhaps a little too succinctly, the core of the critique of our current economy, first articulated clearly in 1972 by the Club of Rome in the Limits to Growth study. Unfortunately, that study was widely dismissed by economists and policymakers. As Jorgen Randers points out in a 2012 paper, the criticism of the Limits to Growth study was largely based on misunderstandings, and the key lessons are absolutely crucial to understanding the state of the global economy today, and the trends that are likely over the next few decades. In a nutshell, humans exceeded the carrying capacity of the planet sometime in the latter part of the 20th century. We’re now in the overshoot portion, where it’s only possible to feed the world and provide energy for economic growth by consuming irreplaceable resources and using up environmental capital. This cannot be sustained.

In general systems terms, there are three conditions for sustainability (I believe it was Herman Daly who first set them out in this way):

  1. We cannot use renewable resources faster than they can be replenished.
  2. We cannot generate wastes faster than they can be absorbed by the environment.
  3. We cannot use up any non-renewable resource.

We can and do violate all of these conditions all the time. Indeed, modern economic growth is based on systematically violating all three of them, but especially #3, as we rely on cheap fossil fuel energy. But any system that violates these rules cannot be sustained indefinitely, unless it is also able to import resources and export wastes to other (external) systems. The key problem for the 21st century is that we’re now violating all three conditions on a global scale, and there are no longer other systems that we can rely on to provide a cushion – the planet as a whole is an isolated system. There are really only two paths forward: either we figure out how to re-structure the global economy to meet Daly’s three conditions, or we face a global collapse (for an understanding of the latter, see Graham Turner’s 2012 paper).

A species set on endless growth is unsustainable.

We now have a fourth paper added to our special issue of the journal Geoscientific Model Development, on Community software to support the delivery of CMIP5. All papers are open access:

  • M. Stockhause, H. Höck, F. Toussaint, and M. Lautenschlager, Quality assessment concept of the World Data Center for Climate and its application to CMIP5 data, Geosci. Model Dev., 5, 1023-1032, 2012.
    Describes the distributed quality control concept that was developed for handling the terabytes of data generated from CMIP5, and the challenges in ensuring data integrity (also includes a useful glossary in an appendix).
  • B. N. Lawrence, V. Balaji, P. Bentley, S. Callaghan, C. DeLuca, S. Denvil, G. Devine, M. Elkington, R. W. Ford, E. Guilyardi, M. Lautenschlager, M. Morgan, M.-P. Moine, S. Murphy, C. Pascoe, H. Ramthun, P. Slavin, L. Steenman-Clark, F. Toussaint, A. Treshansky, and S. Valcke, Describing Earth system simulations with the Metafor CIM, Geosci. Model Dev., 5, 1493-1500, 2012.
    Explains the Common Information Model, which was developed to describe climate model experiments in a uniform way, including the model used, the experimental setup and the resulting simulation.
  • S. Valcke, V. Balaji, A. Craig, C. DeLuca, R. Dunlap, R. W. Ford, R. Jacob, J. Larson, R. O’Kuinghttons, G. D. Riley, and M. Vertenstein, Coupling technologies for Earth System Modelling, Geosci. Model Dev., 5, 1589-1596, 2012.
    An overview paper that compares different approaches to model coupling used by different earth system models in the CMIP5 ensemble.
  • S. Valcke, The OASIS3 coupler: a European climate modelling community software, Geosci. Model Dev., 6, 373-388, 2013 (See also the Supplement)
    A detailed description of the OASIS3 coupler, which is used in all the European models contributing to CMIP5. The OASIS User Guide is included as a supplement to this paper.

(Note: technically speaking, the call for papers for this issue is still open – if there are more software aspects of CMIP5 that you want to write about, feel free to submit them!)

Last week, Damon Matthews from Concordia visited, and gave a guest CGCS lecture, “Cumulative Carbon and the Climate Mitigation Challenge”. The key idea he addressed in his talk is the question of “committed warming” – i.e. how much warming are we “owed” because of carbon emissions in the past (irrespective of what we do with emissions in the future). But before I get into the content of Damon’s talk, here’s a little background.

The question of ‘owed’ or ‘committed’ warming arises because we know it takes some time for the planet to warm up in response to an increase in greenhouse gases in the atmosphere. You can calculate a first approximation of how much it will warm up from a simple energy balance model (like the ones I posted about last month). However, to calculate how long it takes to warm up you need to account for the thermal mass of the oceans, which absorb most of the extra energy and hence slow the rate of warming of surface temperatures. For this you need more than a simple energy balance model.
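A zero-dimensional energy balance model of this kind fits in a few lines. Here’s a minimal sketch, using standard textbook values for the solar constant and albedo, and an assumed effective emissivity standing in for the greenhouse effect (these numbers are illustrative, not taken from any model discussed here):

```python
# Zero-dimensional energy balance: absorbed solar = emitted infrared,
# i.e. (1 - albedo) * S / 4 = emissivity * sigma * T^4, solved for T.

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0    # solar constant, W m^-2
ALBEDO = 0.3      # planetary albedo

def equilibrium_temp(emissivity):
    """Temperature (K) at which outgoing infrared balances absorbed sunlight."""
    absorbed = (1 - ALBEDO) * SOLAR / 4
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(round(equilibrium_temp(1.0)))    # no greenhouse effect: ~255 K
print(round(equilibrium_temp(0.612)))  # assumed effective emissivity: ~288 K
```

Raising greenhouse gas concentrations corresponds to lowering the effective emissivity, and the model immediately gives the new equilibrium temperature – but, having no heat capacity, it says nothing about how long the transition takes.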

You can do a very simple experiment with a General Circulation Model, by setting CO2 concentrations to double their pre-industrial levels, and then leaving them constant at this level, to see how long the earth takes to reach a new equilibrium temperature. Typically, this takes several decades, although the models differ on exactly how long. Here’s what it looks like if you try this with EdGCM (I ran it with doubled CO2 concentrations starting in 1958):


Of course, the concentrations would never instantaneously double like that, so a more common model experiment is to increase CO2 levels gradually, say by 1% per year (that’s a little faster than they have risen in the last few decades) until they reach double the pre-industrial concentrations (which takes approximately 70 years), and then leave them constant at that level. This particular experiment is a standard way of estimating the Transient Climate Response – the expected warming at the moment we first reach a doubling of CO2 – and is included in the CMIP5 experiments. In these model experiments, it typically takes a few decades more of warming until a new equilibrium point is reached, and the models indicate that the transient response is expected to be a little over half of the eventual equilibrium warming.
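The shape of this experiment can be caricatured with a one-box model: CO2 grows 1% per year until it doubles (~70 years), then holds constant, while a single ocean heat reservoir delays the surface temperature response. Every parameter value below is an illustrative assumption, not a setting from any real GCM:

```python
import math

F2X = 3.7       # radiative forcing from doubled CO2, W m^-2
LAM = 1.2       # climate feedback parameter, W m^-2 K^-1 (~3.1 K equilibrium)
C_OCEAN = 40.0  # effective ocean heat capacity, W yr m^-2 K^-1 (assumed)

def run(years=500):
    conc, temp, temps = 1.0, 0.0, []   # CO2 relative to pre-industrial; K
    for _ in range(years):
        if conc < 2.0:
            conc *= 1.01                  # 1% per year growth until doubling
        forcing = F2X * math.log2(conc)   # forcing grows with the log of CO2
        temp += (forcing - LAM * temp) / C_OCEAN  # Euler step, dt = 1 year
        temps.append(temp)
    return temps

temps = run()
print(f"at doubling: {temps[69]:.2f} K; equilibrium: {F2X / LAM:.2f} K")
```

With these (assumed) numbers the equilibrium warming is about 3.1 K, but at the moment of doubling the model has realized only around 60% of it – the transient-versus-equilibrium pattern described above.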

This leads to a (very rough) heuristic that as the planet warms, we’re always ‘owed’ almost as much warming again as we’ve already seen at any point, irrespective of future emissions, and it will take a few decades for all that ‘owed’ warming to materialize. But, as Damon argued in his talk, there are two problems with this heuristic. First, it confuses the issue when discussing the need for an immediate reduction in carbon emissions, because it suggests that no matter how fast we reduce them, the ‘owed’ warming means such reductions will make little difference to the expected warming in the next two decades. Second, and more importantly, the heuristic is wrong! How so? Read on!

For an initial analysis, we can view the climate problem just in terms of carbon dioxide, as the most important greenhouse gas. Increasing CO2 emissions leads to increasing CO2 concentrations in the atmosphere, which leads to temperature increases, which lead to climate impacts. And of course, there’s a feedback in the sense that our perceptions of the impacts (whether now or in the future) lead to changed climate policies that constrain CO2 emissions.

So, what happens if we were to stop all CO2 emissions instantly? The naive view is that temperatures would continue to rise, because of the ‘climate commitment’ – the ‘owed’ warming that I described above. However, most models show that the temperature stabilizes almost immediately. To understand why, we need to realize there are different ways of defining ‘climate commitment’:

  • Zero emissions commitment – How much warming do we get if we set CO2 emissions from human activities to be zero?
  • Constant composition commitment – How much warming do we get if we hold atmospheric concentrations constant? (in this case, we can still have some future CO2 emissions, as long as they balance the natural processes that remove CO2 from the atmosphere).

The difference between these two definitions is shown here. Note that in the zero emissions case, concentrations drop from an initial peak, and then settle down at a lower level:



The model experiments most people are familiar with are the constant composition experiments, in which there is continued warming. But in the zero emissions scenarios, there is almost no further warming. Why is this?

The relationship between carbon emissions and temperature change (the “Carbon Climate Response”) is complicated, because it depends on two factors, each of which is affected by a (different type of) inertia in the system:

  • Climate Sensitivity – how much temperature changes in response to different levels of CO2 in the atmosphere. The temperature response is slowed down by the thermal inertia of the oceans, which means it takes several decades for the earth’s surface temperatures to respond fully to a change in CO2 concentrations.
  • Carbon sensitivity – how much concentrations of CO2 in the atmosphere change in response to different levels of carbon emissions. A significant fraction (roughly half) of our CO2 emissions are absorbed by the oceans, but this also takes time. We can think of this as “carbon cycle inertia” – the delay in uptake of the extra CO2, which also takes several decades. [Note: there is a second kind of carbon system inertia, by which it takes tens of thousands of years for the rest of the CO2 to be removed, via very slow geological processes such as rock weathering.]


It turns out that the two forms of inertia roughly balance out. The thermal inertia of the oceans slows the rate of warming, while the carbon cycle inertia accelerates it. Our naive view of the “owed” warming is based on an understanding of only one of these, the thermal inertia of the ocean, because much of the literature talks only about climate sensitivity, and ignores the question of carbon sensitivity.
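To see the two inertias interact, we can bolt a one-box carbon cycle onto the same kind of one-box heat model: emissions run for 70 years, then stop dead. Every parameter here is a made-up illustrative value, chosen only to show the qualitative behaviour, but the key result comes through: once emissions stop, the declining CO2 roughly offsets the ocean’s thermal lag, and there is little further warming:

```python
LAM = 1.2        # climate feedback parameter, W m^-2 K^-1
C_OCEAN = 40.0   # effective ocean heat capacity, W yr m^-2 K^-1
TAU_C = 50.0     # e-folding time for ocean carbon uptake, years
BETA = 0.01      # forcing per GtC of excess atmospheric carbon, W m^-2
EMIT = 10.0      # emissions while they last, GtC per year

def simulate(stop_year=70, years=200):
    excess, temp, record = 0.0, 0.0, []
    for year in range(years):
        emissions = EMIT if year < stop_year else 0.0
        excess += emissions - excess / TAU_C             # carbon-cycle inertia
        temp += (BETA * excess - LAM * temp) / C_OCEAN   # thermal inertia
        record.append(temp)
    return record

record = simulate()
print(f"at shutoff: {record[69]:.2f} K; 50 years later: {record[119]:.2f} K")
```

If you instead freeze `excess` at the shutoff year – the constant composition case – this toy model keeps warming by almost another degree, which is exactly the contrast between the two definitions of commitment described above.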

The fact that these two forms of inertia tend to balance leads to another interesting observation. The models all show an approximately linear response to cumulative emissions. For example, here are the CMIP3 models, used in the IPCC AR4 report (the average of the models, indicated by the arrow, is around 1.6°C of warming per 1,000 gigatonnes of carbon):


The same relationship seems to hold for the CMIP5 models, many of which now include a dynamic carbon cycle:


This linear relationship isn’t determined by any physical properties of the climate system, and probably won’t hold in much warmer or cooler climates, nor when other feedback processes kick in. So we could say it’s a coincidental property of our current climate. However, it’s rather fortuitous for policy discussions.

Historically, we have emitted around 550 billion tonnes of carbon since the beginning of the industrial era, which gives us an expected temperature response of around 0.9°C. If we want to hold temperature rises to no more than 2°C of warming, total future emissions should not exceed a further 700 billion tonnes of carbon. In effect, this gives us a total worldwide carbon budget for the future. The hard policy question, of course, is then how to allocate this budget among the nations (or people) of the world in an equitable way.
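The arithmetic behind these numbers is a one-liner, using the ~1.6°C per 1,000 GtC average response quoted above:

```python
# Back-of-envelope cumulative carbon budget, using the average
# ~1.6 C per 1,000 GtC response across the models.

TCRE = 1.6 / 1000        # deg C of warming per GtC of cumulative emissions
emitted = 550            # GtC emitted since the start of the industrial era
target = 2.0             # warming threshold, deg C

warming_so_far = TCRE * emitted          # ~0.9 C expected response
budget_total = target / TCRE             # 1,250 GtC in total, ever
budget_left = budget_total - emitted     # ~700 GtC still available

print(f"{warming_so_far:.2f} C committed; {budget_left:.0f} GtC remaining")
```

Picking a different model (and hence a different slope) changes `TCRE` and shrinks or grows the remaining budget accordingly, which is the point made in the note below.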

[A few years ago, I blogged about a similar analysis, which says that cumulative carbon emissions should not exceed 1 trillion tonnes in total, ever. That calculation gives us a smaller future budget of less than 500 billion tonnes. That result came from analysis using the Hadley model, which has one of the higher slopes on the graphs above. Which number we use for a global target then might depend on which model we believe gives the most accurate projections, and perhaps how we also factor in the uncertainties. If the uncertainty range across models is accurate, then picking the average would give us a 50:50 chance of staying within the temperature threshold of 2°C. We might want better odds than this, and hence a smaller budget.]

In the National Academies report in 2011, the cumulative carbon budgets for each temperature threshold were given as follows (note the size of the uncertainty whiskers on each bar):


[For a more detailed analysis see: Matthews, H. D., Solomon, S., & Pierrehumbert, R. (2012). Cumulative carbon as a policy framework for achieving climate stabilization. Philosophical transactions. Series A, Mathematical, physical, and engineering sciences, 370(1974), 4365–79. doi:10.1098/rsta.2012.0064]

So, this allows us to clear up some popular misconceptions:

The idea that there is some additional warming owed, no matter what emissions pathway we follow is incorrect. Zero future emissions means little to no future warming, so future warming depends entirely on future emissions. And while the idea of zero future emissions isn’t policy-relevant (because zero emissions is impossible, at least in the near future), it does have implications for how we discuss policy choices. In particular, it means the idea that CO2 emissions cuts will not have an effect on temperature change for several decades is also incorrect. Every tonne of CO2 emissions avoided has an immediate effect on reducing the temperature response.

Another source of confusion is the emissions scenarios used in the IPCC report. They don’t diverge significantly for the first few decades, largely because we’re unlikely (and to some extent unable) to make massive emissions reductions in the next 1-2 decades, because society is very slow to respond to the threat of climate change, and even when we do respond, the amount of existing energy infrastructure that has to be rebuilt is huge. In this sense, there is some inevitable future warming, but it comes from future emissions that we cannot or will not avoid. In other words, political, socio-economic and technological inertia are the primary causes of future climate warming, rather than any properties of the physical climate system.

Like most universities, U of T had a hiring freeze for new faculty for the last few years, as we struggled with budget cuts. Now, we’re starting to look at hiring again, to replace faculty we lost over that time, and to meet the needs of rapidly growing student enrolments. Our department (Computer Science) is just beginning the process of deciding what new faculty positions we wish to argue for, for next year. This means we get to engage in a fascinating process of exploring what we expect to be the future of our field, and where there are opportunities to build exciting new research and education programs. To get a new faculty position, our department has to make a compelling case to the Dean, and the Dean has to balance our request with those from 28 other departments and 46 interdisciplinary groups. So the pitch has to be good.

So here’s my draft pitch:

(1) Create a joint faculty position between the Department of Computer Science and the new School of Environment.

Last summer U of T’s Centre for Environment was relaunched as a School of Environment, housed wholly within the Faculty of Arts and Science. As a school, it can now make up to 49% faculty appointments. [The idea is that to do interdisciplinary research, you need a base in a home department/discipline, where your tenure and promotion will be evaluated, but would spend half your time engaged in inter-disciplinary research and teaching at the School. Hence, a joint position for us would be 51% CS and 49% in the School of Environment.]

A strong relationship between Computer Science and the School of Environment makes sense for a number of reasons. Most environmental science research makes extensive use of computational modelling as a core research tool, and the environmental sciences are one of the greatest producers of big data. As an example, the Earth System Grid currently stores more than 3 petabytes of data from climate models, and this is expected to grow to the point where by the end of the decade a single experiment with a climate model would generate an exabyte of data. This creates a number of exciting opportunities for application of CS tools and algorithms, in a domain that will challenge our capabilities. At the same time, this research is increasingly important to society, as we seek to find ways to feed 9 billion people, protect vital ecosystems, and develop strategies to combat climate change.

There are a number of directions we could go with such a collaboration. My suggestion is to pick one of:

  • Climate informatics. A small but growing community is applying machine learning and data mining techniques to climate datasets. Two international workshops have been held in the last two years, and the field has had a number of successes in knowledge discovery that have established its importance to climate science. For a taste of what the field covers, see the agenda of the last CI Workshop.
  • Computational Sustainability. Focuses on the decision support needed for resource allocation to develop sustainable solutions in large-scale complex adaptive systems. This could be viewed as a field of applied artificial intelligence, but doing it properly requires strong interdisciplinary links with ecologists, economists, statisticians, and policy makers. This growing community has run an annual conference, CompSust, since 2009, as well as tracks at major AI conferences for the last few years.
  • Green Computing. Focuses on the large environmental footprint of computing technology, and how to reduce it. Energy-efficient computing is a central concern, although I believe an even more interesting approach is to take a systems view of how and why we consume energy (whether in IT equipment directly, or in devices that IT can monitor and optimize). Again, a series of workshops in the last few years has brought together an active research community (see, for example, Greens’2013).

(2) Hire more software engineering professors!

Our software engineering group is now half the size it was a decade ago, as several of our colleagues retired. Here’s where we used to be, but that list of topics and faculty is now hopelessly out of date. A decade ago we had five faculty and plans to grow this to eight by now. Instead, because of the hiring freeze and the retirements, we’re down to three. There were a number of reasons we expected to grow the group, not least because for many years, software engineering was our most popular undergraduate specialist program and we had difficulty covering all the teaching, and also because the SE group had proved to be very successful in bringing in research funding, research prizes, and supervising large numbers of grad students.

Where do we go from here? Deans generally ignore arguments that we should just hire more faculty to replace losses, largely because when faculty retire or leave, that’s the only point at which a university can re-think its priorities. Furthermore, some of our arguments for a bigger software engineering group at U of T went away. Our department withdrew the specialist degree in software engineering, and reduced the number of SE undergrad courses, largely because we didn’t have the faculty to teach them, and finding qualified sessional instructors was always a struggle. In effect, our department has gradually walked away from having a strong software engineering group, due to resource constraints.

I believe very firmly that our department *does* need a strong software engineering group, for a number of reasons. First, it’s an important part of an undergrad CS education. The majority of our students go on to work in the software industry, and for this, it is vital that they have a thorough understanding of the engineering principles of software construction. Many of our competitors in N America run majors and/or specialist programs in software engineering, to feed the enormous demand from the software industry for more graduates. One could argue that this should be left to the engineering schools, but these schools tend to lack sufficient expertise in discrete math and computing theory. I believe that software engineering is rooted intellectually in computer science and that a strong software engineering program needs the participation (and probably the leadership) of a strong computer science department. This argument suggests we should be re-building the strength in software engineering that we used to have in our undergrad program, rather than quietly letting it wither.

Secondly, the complexity of modern software systems makes software engineering research ever more relevant to society. Our ability to invent new software technology continues to outpace our ability to understand the principles by which that software can be made safe and reliable. Software companies regularly come to us seeking to partner with us in joint research and to engage with our grad students. Currently, we have to walk away from most of these opportunities. That means research funding we’re missing out on.

I’ve been collecting examples of different types of climate model that students can use in the classroom to explore different aspects of climate science and climate policy. In the long run, I’d like to use these to make the teaching of climate literacy much more hands-on and discovery-based. My goal is to foster more critical thinking, by having students analyze the kinds of questions people ask about climate, figure out how to put together good answers using a combination of existing data, data analysis tools, simple computational models, and more sophisticated simulations. And of course, learn how to critique the answers based on the uncertainties in the lines of evidence they have used.

Anyway, as a start, here’s a collection of runnable and not-so-runnable models, some of which I’ve used in the classroom:

Simple Energy Balance Models (for exploring the basic physics)
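The simplest of these, a zero-dimensional energy balance model, fits in a few lines: the equilibrium temperature is the point at which absorbed sunlight balances outgoing blackbody radiation. This is a generic textbook sketch, not any particular classroom model:

```python
# Zero-dimensional energy balance model: find the temperature at which
# absorbed solar radiation equals outgoing blackbody radiation.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0       # solar constant, W m^-2
ALBEDO = 0.3      # planetary albedo (fraction of sunlight reflected)

def equilibrium_temperature(solar_constant=S0, albedo=ALBEDO):
    """Effective temperature (K) at which energy flows balance."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0  # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

print(round(equilibrium_temperature()))  # prints 255
```

The result, about 255 K, is the earth’s effective temperature without a greenhouse effect; the roughly 33 K gap to the observed ~288 K surface average is what the greenhouse gases in the balancing loop account for.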

General Circulation Models (for studying earth system interactions)

  • EdGCM – an educational version of the NASA GISS general circulation model (well, an older version of it). EdGCM provides a simplified user interface for setting up model runs, but allows for some fairly sophisticated experiments. You typically need to let the model run overnight for a century-long simulation.
  • Portable University Model of the Atmosphere (PUMA) – a planet simulator designed by folks at the University of Hamburg for use in the classroom, to help train students interested in becoming climate scientists.

Integrated Assessment Models (for policy analysis)

  • C-Learn, a simple policy analysis tool from Climate Interactive. Allows you to specify emissions trajectories for three groups of nations, and explore the impact on global temperature. This is a simplified version of the C-ROADS model, which is used to analyze proposals during international climate treaty negotiations.
  • Java Climate Model (JCM) – a detailed desktop assessment model that offers fine-grained control over different emissions scenarios and regional responses.

Systems Dynamics Models (to foster systems thinking)

  • Bathtub Dynamics and Climate Change from John Sterman at MIT. This simulation is intended to get students thinking about the relationship between emissions and concentrations, using the bathtub metaphor. It’s based on Sterman’s work on mental models of climate change.
  • The Climate Challenge: Our Choices, also from Sterman’s team at MIT. This one looks fancier, but gives you less control over the simulation – you can just pick one of three emissions paths: increasing, stabilized or reducing. On the other hand, it’s very effective at demonstrating the point about emissions vs. concentrations.
  • Carbon Cycle Model from Shodor, originally developed using Stella by folks at Cornell.
  • And while we’re on systems dynamics, I ought to mention toolkits for building your own systems dynamics models, such as Stella from ISEE Systems (here’s an example of it used to teach the global carbon cycle).
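The core insight of the bathtub metaphor can be captured in a tiny stock-and-flow simulation: the stock (atmospheric CO2) keeps rising as long as the inflow (emissions) exceeds the outflow (natural removal), even while emissions are falling. All the numbers below are made-up illustrative values, not calibrated to real data:

```python
# Minimal stock-and-flow sketch of the bathtub metaphor. The stock rises
# whenever inflow exceeds outflow, even if the inflow is shrinking.
# All values here are illustrative assumptions, not real CO2 data.

def simulate(stock, inflow, outflow, inflow_change, years):
    """Return the stock level at the end of each year."""
    history = []
    for _ in range(years):
        stock += inflow - outflow                   # tub level = inflow - drain
        inflow = max(0.0, inflow + inflow_change)   # emissions trajectory
        history.append(stock)
    return history

# Emissions fall every year, yet the stock still grows throughout,
# because inflow remains above outflow for the whole run.
levels = simulate(stock=400.0, inflow=10.0, outflow=5.0,
                  inflow_change=-0.5, years=8)
print(levels)
```

This is exactly the confusion Sterman’s studies document: people tend to assume that stabilizing or reducing emissions immediately stabilizes concentrations, when in fact the tub keeps filling until inflow drops below outflow.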

Other Related Models

  • A Kaya Identity Calculator, from David Archer at U Chicago. The Kaya identity is a way of expressing the interaction between the key drivers of carbon emissions: population growth, economic growth, energy efficiency, and the carbon intensity of our energy supply. Archer’s model allows you to play with these numbers.
  • An Orbital Forcing Calculator, also from David Archer. This allows you to calculate the effect that changes in the earth’s orbit and the wobble of its axis have on the solar energy the earth receives, in any year in the past or future.
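The Kaya identity itself is just a product of four factors, so the heart of such a calculator is a one-liner. The input figures below are rough round-number assumptions chosen for illustration, not real-world statistics:

```python
# The Kaya identity decomposes CO2 emissions into four drivers:
#   emissions = population * (GDP/person) * (energy/GDP) * (CO2/energy)
# The figures below are rough round-number assumptions for illustration.

def kaya_emissions(population, gdp_per_capita, energy_intensity, carbon_intensity):
    """CO2 emissions (kg/year) implied by the four Kaya factors."""
    return population * gdp_per_capita * energy_intensity * carbon_intensity

emissions = kaya_emissions(
    population=7e9,          # people
    gdp_per_capita=10_000,   # $ per person per year (assumed)
    energy_intensity=8.0,    # MJ per $ of GDP (assumed)
    carbon_intensity=0.06,   # kg CO2 per MJ of energy (assumed)
)
print(emissions / 1e12, "Gt CO2 per year")  # roughly 34 Gt with these inputs
```

Playing with the factors makes the policy trade-offs vivid: halving carbon intensity has exactly the same effect on emissions as halving energy intensity, which is the point of the identity.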

Useful readings on the hierarchy of climate models

A high school student in Ottawa, Jin, writes to ask me for help with a theme on the question of whether global warming is caused by human activities. Here’s my answer:

The simple answer is ‘yes’: global warming is caused by human activities. In fact, we’ve known this for over 100 years. Scientists in the 19th century realized that some gases in the atmosphere help to keep the planet warm by stopping the earth from losing heat to outer space, just like a blanket keeps you warm by trapping heat near your body. The most important of these gases is carbon dioxide (CO2). If there were no CO2 in the atmosphere, the entire earth would be a frozen ball of ice. Luckily, that CO2 keeps the planet at temperatures suitable for human life. But as we dig up coal, oil, and natural gas, and burn them for energy, we increase the amount of CO2 in the atmosphere, and hence we increase the temperature of the planet. Now, while scientists have known this since the 19th century, it’s only in the last 30 years that they have been able to calculate precisely how fast the earth will warm up, and which parts of the planet will be affected the most.

Here are three really good explanations, which might help you for your theme:

  1. NASA’s Climate Kids website:
    It’s probably written for kids younger than you, but has really simple explanations, in case anything isn’t clear.
  2. Climate Change in a Nutshell – a set of short videos that I really like:
  3. The IPCC’s frequently asked question list. The IPCC is the international panel on climate change, whose job is to summarize what scientists know, so that politicians can make good decisions. Their reports can be a bit technical, but have a lot more detail than most other material:

Also, you might find this interesting. It’s a list of successful predictions by climate scientists. One of the best ways we know that science is right about something is that we are able to use our theories to predict what will happen in the future. When those predictions turn out to be correct, it gives us a lot more confidence that the theories are right:

By the way, if you use google to search for information about global warming or climate change, you’ll find lots of confusing information, and different opinions. You might wonder why that is, if scientists are so sure about the causes of climate change. There’s a simple reason. Climate change is a really big problem, one that’s very hard to deal with. Most of our energy supply comes from fossil fuels, in one way or another. To prevent dangerous levels of warming, we have to stop using them. How we do that is hard for many people to think about. We really don’t want to stop using them, because the cheap energy from fossil fuels powers our cars, heats our homes, gives us cheap flights, powers our factories, and so on.

For many people it’s easier to choose not to believe in global warming than it is to think about how we would give up fossil fuels. Unfortunately, our climate doesn’t care what we believe – it’s changing anyway, and the warming is accelerating. Luckily, humans are very intelligent, and good at inventing things. If we can understand the problem, then we should be able to solve it. But it will require people to think clearly about it, and not to fool themselves by wishing the problem away.

A few weeks back, Randall Munroe (of XKCD fame) attempted to explain the parts of a Saturn V rocket (“Up Goer Five”) using only the most common one thousand words of English. I like the idea, but found many of his phrasings awkward, and some were far harder to understand than if he’d used the usual word.

Now there’s a web-based editor that lets everyone try their hand at this, and a tumblr of scientists trying to explain their work this way. Some of them are brilliant, but many are almost unreadable. It turns out this is much harder than it looks.

Here’s mine. I cheated once, by introducing one new word that’s not on the list, although it’s not really cheating, because the whole point of science education is to equip people with the right words and concepts to talk about important stuff:

If the world gets hotter or colder, we call that ‘climate’ change. I study how people use computers to understand such change, and to help them decide what we should do about it. The computers they use are very big and fast, but they are hard to work with. My job is to help them check that the computers are working right, and that the answers they get from the computers make sense. I also study what other things people want to know about how the world will change as it gets hotter, and how we can make the answers to their questions easier to understand.

[Update] And here’s a few others that I think are brilliant:

Emily S. Cassidy, Environmental Scientist at University of Minnesota:

In 50 years the world will need to grow two times as much food as we grow today. Meeting these growing needs for food will be hard because we need to make sure meeting these needs doesn’t lead to cutting down more trees or hurting living things. In the past when we wanted more food we cut down a lot of trees, so we could use the land. So how are we going to grow more food without cutting down more trees? One answer to this problem is looking at how we use the food we grow today. People eat food, but food is also used to make animals and run cars. In fact, animals eat over one-third of the food we grow. In some places, animals eat over two-thirds of the food grown! If the world used all of the food we grow for people, instead of animals and cars, we could have 70% more food and that would be enough food for a lot of people!

Anthony Finkelstein, at University College London, explaining requirements analysis:

I am interested in computers and how we can get them to do what we want. Sometimes they do not do what we expect because we got something wrong. I would like to know this before we use the computer to do something important and before we spend too much time and money. Sometimes they do something wrong because we did not ask the people who will be using them what they wanted the computer to do. This is not as easy as it sounds! Often these people do not agree with each other and do not understand what it is possible for the computer to do. When we know what they want the computer to do we must write it down in a way that people building the computer can also understand it.

This week, I start teaching a new grad course on computational models of climate change, aimed at computer science grad students with no prior background in climate science or meteorology. Here’s my brief blurb:

Detailed projections of future climate change are created using sophisticated computational models that simulate the physical dynamics of the atmosphere and oceans and their interaction with chemical and biological processes around the globe. These models have evolved over the last 60 years, along with scientists’ understanding of the climate system. This course provides an introduction to the computational techniques used in constructing global climate models, the engineering challenges in coupling and testing models of disparate earth system processes, and the scaling challenges involved in exploiting peta-scale computing architectures. The course will also provide a historical perspective on climate modelling, from the early ENIAC weather simulations created by von Neumann and Charney, through to today’s Earth System Models, and the role that these models play in the scientific assessments of the UN’s Intergovernmental Panel on Climate Change (IPCC). The course will also address the philosophical issues raised by the role of computational modelling in the discovery of scientific knowledge, the measurement of uncertainty, and a variety of techniques for model validation. Additional topics, based on interest, may include the use of multi-model ensembles for probabilistic forecasting, data assimilation techniques, and the use of models for re-analysis.

I’ve come up with a draft outline for the course, and some possible readings for each topic. Comments are very welcome:

  1. History of climate and weather modelling. Early climate science. Quick tour of range of current models. Overview of what we knew about climate change before computational modeling was possible.
  2. Calculating the weather. Bjerknes’ equations. ENIAC runs. What does a modern dynamical core do? [Includes basic introduction to thermodynamics of atmosphere and ocean]
  3. Chaos and complexity science. Key ideas: forcings, feedbacks, dynamic equilibrium, tipping points, regime shifts, systems thinking. Planetary boundaries. Potential for runaway feedbacks. Resilience & sustainability. (way too many readings this week. Have to think about how to address this – maybe this is two weeks worth of material?)
    • Liepert, B. G. (2010). The physical concept of climate forcing. Wiley Interdisciplinary Reviews: Climate Change, 1(6), 786-802.
    • Manson, S. M. (2001). Simplifying complexity: a review of complexity theory. Geoforum, 32(3), 405-414.
    • Rind, D. (1999). Complexity and Climate. Science, 284(5411), 105-107.
    • Randall, D. A. (2011). The Evolution of Complexity In General Circulation Models. In L. Donner, W. Schubert, & R. Somerville (Eds.), The Development of Atmospheric General Circulation Models: Complexity, Synthesis, and Computation. Cambridge University Press.
    • Meadows, D. H. (2008). Chapter One: The Basics. Thinking In Systems: A Primer (pp. 11-34). Chelsea Green Publishing.
    • Randers, J. (2012). The Real Message of Limits to Growth: A Plea for Forward-Looking Global Policy, 2, 102-105.
    • Rockström, J., Steffen, W., Noone, K., Persson, Å., Chapin, F. S., Lambin, E., Lenton, T. M., et al. (2009). Planetary boundaries: exploring the safe operating space for humanity. Ecology and Society, 14(2), 32.
    • Lenton, T. M., Held, H., Kriegler, E., Hall, J. W., Lucht, W., Rahmstorf, S., & Schellnhuber, H. J. (2008). Tipping elements in the Earth’s climate system. Proceedings of the National Academy of Sciences of the United States of America, 105(6), 1786-93.
  4. Typology of climate models. Basic energy balance models. Adding a layered atmosphere. 3-D models. Coupling in other earth systems. Exploring dynamics of the socio-economic system. Other types of model: EMICs; IAMs.
  5. Earth System Modeling. Using models to study interactions in the earth system. Overview of key systems (carbon cycle, hydrology, ice dynamics, biogeochemistry).
  6. Overcoming computational limits. Choice of grid resolution; grid geometry, online versus offline; regional models; ensembles of simpler models; perturbed ensembles. The challenge of very long simulations (e.g. for studying paleoclimate).
  7. Epistemic status of climate models. E.g. what does a future forecast actually mean? How are model runs interpreted? Relationship between model and theory. Reproducibility and open science.
    • Shackley, S. (2001). Epistemic Lifestyles in Climate Change Modeling. In P. N. Edwards (Ed.), Changing the Atmosphere: Expert Knowledge and Environmental Government (pp. 107-133). MIT Press.
    • Sterman, J. D., Jr, E. R., & Oreskes, N. (1994). The Meaning of Models. Science, 264(5157), 329-331.
    • Randall, D. A., & Wielicki, B. A. (1997). Measurement, Models, and Hypotheses in the Atmospheric Sciences. Bulletin of the American Meteorological Society, 78(3), 399-406.
    • Smith, L. A. (2002). What might we learn from climate forecasts? Proceedings of the National Academy of Sciences of the United States of America, 99 Suppl 1, 2487-92.
  8. Assessing model skill – comparing models against observations, forecast validation, hindcasting. Validation of the entire modelling system. Problems of uncertainty in the data. Re-analysis, data assimilation. Model intercomparison projects.
  9. Uncertainty. Three different types: initial state uncertainty, scenario uncertainty and structural uncertainty. How well are we doing? Assessing structural uncertainty in the models. How different are the models anyway?
  10. Current Research Challenges. Eg: Non-standard grids – e.g. non-rectangular, adaptive, etc; Probabilistic modelling – both fine grain (e.g. ECMWF work) and use of ensembles; Petascale datasets; Reusable couplers and software frameworks. (need some more readings on different research challenges for this topic)
  11. The future. Projecting future climates. Role of modelling in the IPCC assessments. What policymakers want versus what they get. Demands for actionable science and regional, decadal forecasting. The idea of climate services.
  12. Knowledge and wisdom. What the models tell us. Climate ethics. The politics of doubt. The understanding gap. Disconnect between our understanding of climate and our policy choices.