It’s the first week of term here in Toronto, and I’m busy launching a new course: a first-year undergraduate seminar, open to any student in the Faculty of Arts and Science, called Confronting the Climate Crisis. I’m running it as a small seminar this year as a pilot project, with the aim of going big next year, turning it into a much larger lecture course, open to hundreds (and maybe thousands) of students. I’ll have to think about how to make it scale.

The idea for this course arose in response to an initiative at the University of Barcelona to create a mandatory course on the climate crisis for all undergraduate students, to meet one of the demands of a large-scale student protest in the fall of 2022. The University of Barcelona expects to launch such a course later this year. Increasingly, our students are demanding that universities respond to declarations of a climate emergency (e.g. by the Federal Government and the City of Toronto) by rethinking how our programs are preparing them with the resilience and skills needed in a world that will be radically re-shaped by climate change in the coming decades.

The design of this course responds directly to the challenge posed at U Barcelona: if every student were required to take (at least!) one course on the climate crisis, what would such a course look like? Climate change is a complex, trans-disciplinary problem, and needs to be viewed through multiple lenses to create an integrated understanding of how we arrived at this moment in history, and what paths we now face as a society to stabilize the climate system and create a just transition to a sustainable society. The course needs to give the students a clear-eyed understanding of how serious and urgent the crisis is, but it also needs to give them the tools to deal with that understanding, psychologically, politically, and sociologically. So it needs to balance the big-picture view with a very personal response: what do you do to avoid falling into despair once you understand how serious it is?

It’s not clear to me that in any reasonable amount of time we could get the University of Toronto to agree to make such a course mandatory for every student, given the complex and devolved governance structure of the second largest university in North America. But we can make a lot of progress by starting bottom up: by launching the course now, we intend to provoke a much wider response across the University: How are we preparing all of our students for a climate changed world? What do different departments and programs need to do in response? If other departments want to add this course to their undergrad programs, I’ll be delighted. Or if they want to create versions of the course that are more specifically tailored to their own students’ needs, I’ll be equally delighted.

Alright, enough pre-amble. Here’s the syllabus entry:

This course is a comprehensive, interdisciplinary introduction to the climate crisis, suitable for any undergraduate student at U of T. The course examines the climate crisis from scientific, social, economic, political, and cultural perspectives, from the physical science basis through to the choices we now face to stabilize the climate system. The course uses a mixture of lectures, hands-on activities, group projects, online discussion, and guest speakers to give students a deeper understanding of climate change as a complex, interconnected set of problems, while equipping them with a framework to evaluate the choices we face as a society, and to cultivate a culture of hope in the face of a challenging future.

And here’s the outline I’ve developed for a 12-week course:

  1. How long have we known?
    • Course intro and brief overview of the history of climate science
  2. What causes climate change?
    • Greenhouse gases – where they come from and what they do
    • Sources of data about climate change
    • How scientists use models to assess climate sensitivity
  3. How bad is it?
    • Future projections of climate change
    • Understanding targets: 350ppm, 1.5°C & 2°C; Net Zero
    • Irreversibility, overshoot, long-term implications, and emergency measures (geoengineering)
  4. Who does it affect?
    • Key impacts: extreme weather, sea level rise, ocean acidification, ecosystem collapse, etc
    • Regional disparities in climate impacts and adaptation, and the rise of climate migrants
    • Inequities in responsibility and impacts – the role of climate justice.
  5. Do we have the technology to fix it?
    • Decarbonization pathways
    • Sectoral analysis: energy, buildings, transport, food systems, waste, etc
    • Interaction effects among climate solutions
  6. Can we agree to fix it?
    • International policymaking: UNFCCC, IPCC, Kyoto, Paris, etc.
    • Policy tools: carbon taxes, carbon trading, subsidies, direct investment, etc.
    • Barriers to political action
  7. What will it cost to fix it?
    • Intro to climate economics
    • Costs and benefits of adaptation and mitigation
    • Ecomodernism vs. Degrowth
  8. What’s stopping us?
    • Climate communication and climate disinformation
    • The role of political lobbying
    • How we talk about climate change and the role of framing
  9. What are we afraid of?
    • The psychology of climate change
    • Affective responses to climate change: ecoanxiety, doomerism, denial, etc.
    • Maintaining mental health in the climate crisis
  10. How can we make our voices heard?
    • Protest movements and climate activism
    • Theories of Change
    • Modes of activism and the ethics of disruptive protest
  11. What gives us hope?
    • Constructive hope as a response to eco-anxiety
    • The role of worldviews, culture, and language
    • Reconnecting with nature
  12. Where do we go from here?
    • Importance of systems thinking and multisolving.
    • The role of storytelling in creating a narrative of hope
    • Making your studies count: the role of universities in a climate emergency.

In honour of today’s announcement that Syukuro Manabe, Klaus Hasselmann and Giorgio Parisi have been awarded the Nobel prize in physics for their contributions to understanding and modeling complex systems, I’m posting here some extracts from my forthcoming book, “Computing the Climate”, describing Manabe’s early work on modeling the climate system. We’ll start the story with the breakthrough by Norman Phillips at Princeton University’s Institute for Advanced Study (IAS), which I wrote about in my last post.

The Birth of General Circulation Modeling

Phillips had built what we now acknowledge as the first general circulation model (GCM), in 1955. It was ridiculously simple, representing the earth as a cylinder rather than a globe, with the state of the atmosphere expressed using a single variable—air pressure—at two different heights, at each of 272 points around the planet (a grid of 16 x 17 points). Despite its simplicity, Phillips’ model did something remarkable. When started with a uniform atmosphere—the same values at every grid point—the model gradually developed its own stable jet stream, under the influence of the equations that describe the effect of heat from the sun and rotation of the earth. The model was hailed as a remarkable success, and inspired a generation of atmospheric scientists to develop their own global circulation models.

The idea of starting the model with the atmosphere at rest—and seeing what patterns emerge—is a key feature that makes this style of modelling radically different from how models are used in weather forecasting. Numerical weather forecasting had taken off rapidly, and by 1960, three countries—the United States, Sweden and Japan—had operational numerical weather forecasting services up and running. So there was plenty of expertise already in numerical methods and computational modelling among the meteorological community, especially in those three countries.

But whereas a weather model only simulates a few days starting from data about current conditions, a general circulation model has to simulate long-term stable patterns, which means many of the simplifications to the equations of motion that worked in early weather forecasting models don’t work in GCMs. The weather models of the 1950s all ignored fast-moving waves that are irrelevant in short-term weather forecasts. But these simplifications made the models unstable over longer runs: the simulated atmosphere would steadily lose energy—and sometimes air and moisture too—so realistic climatic patterns never emerged. The small group of scientists interested in general circulation modelling began to diverge from the larger numerical weather forecasting community, choosing to focus on versions of the equations and numerical algorithms with conservation of mass and energy built in, to give stable long-range simulations.

In 1955, the US Weather Bureau established a General Circulation Research Laboratory, specifically to build on Phillips’ success. It was headed by Joseph Smagorinsky, one of the original members of the ENIAC weather modelling team. Originally located just outside Washington DC, the lab has undergone several name changes and relocations, and is now the Geophysical Fluid Dynamics Lab (GFDL), housed at Princeton University, where it remains a major climate modelling lab today.

In 1959, Smagorinsky recruited a young Japanese meteorologist, Syukuro Manabe, from Tokyo, and they began work on a primitive equation model. Like Phillips, they began with a model that represented only one hemisphere. Manabe concentrated on the mathematical structure of the models, while Smagorinsky hired a large team of programmers to develop the code. By 1963, they had developed a nine-layer atmosphere model which exchanged water—but not heat—between the atmosphere and surface. The planet’s surface, however, was flat and featureless—a continuous swamp from which water could evaporate, but which had no internal dynamics of its own. The model could simulate radiation passing through the atmosphere, interacting with water vapour, ozone and CO2. Like most of the early GCMs, this model captured realistic global patterns, but had many of the details wrong.

Meanwhile, at the University of California, Los Angeles (UCLA), Yale Mintz, the associate director of the Department of Meteorology, recruited another young Japanese meteorologist, Akio Arakawa, to help him build their own general circulation model. From 1961, Mintz and Arakawa developed a series of models, with Mintz providing the theoretical direction, and Arakawa designing the model, with help from the department’s graduate students. By 1964, their model represented the entire globe with a 2-layer atmosphere and realistic geography.

Computational limitations dominated the choices these two teams had to make. For example, the GFDL team modelled only the northern hemisphere, with a featureless surface, so that they could put more layers into the atmosphere, while the UCLA team chose the opposite route: an entire global model with realistic layout of continents and oceans, but with only 2 layers of atmosphere.

Early Warming Signals

Meanwhile, in the early 1950s, oceanographers at the Scripps Institution of Oceanography in California, under the leadership of their new director, Roger Revelle, were investigating the spread of radioactive fallout in the oceans from nuclear weapons testing. Their work was funded by the US military, who needed to know how quickly the oceans would absorb these contaminants, to assess the risks to human health. But Revelle had many other research interests. He had read about the idea that carbon dioxide from fossil fuels could warm the planet, and realized radiocarbon dating could be used to measure how quickly the ocean absorbs CO2. Revelle understood the importance of a community effort, so he persuaded a number of colleagues to do similar analyses, and in a coordinated set of three papers published in 1957 [Craig, 1957; Revelle & Suess, 1957; and Arnold & Anderson, 1957], the group presented their results.

They all found a consistent pattern: the surface layer of the ocean continuously absorbs CO2 from the atmosphere, so on average, a molecule of CO2 stays in the atmosphere for only about 7 years before being dissolved into the ocean. But the surface waters also release CO2, especially when they warm up in the sun. So the atmosphere and surface waters exchange CO2 molecules continuously—any extra CO2 will end up shared between them.

All three papers also confirmed that the surface waters don’t mix much with the deeper ocean. So it takes hundreds of years for any extra carbon to pass down into deeper waters. The implications were clear—the oceans weren’t absorbing CO2 anywhere near as fast as we were producing it.
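To make that picture concrete, here is a minimal three-box sketch of it: the atmosphere and the ocean’s surface layer exchange carbon on the roughly seven-year timescale described above, while carbon drains into the deep ocean only over centuries. This is not a model from the 1957 papers; the box sizes, the 500-year deep-mixing timescale, and the 100 GtC pulse are all illustrative assumptions.

```python
# Minimal three-box carbon sketch (illustrative assumptions, not values from the 1957 papers).
ATM0, SURF0, DEEP0 = 600.0, 600.0, 38000.0    # assumed carbon stocks, GtC
atm, surf, deep = ATM0 + 100.0, SURF0, DEEP0  # add a one-off 100 GtC pulse to the air

k_exchange = 1.0 / 7.0    # atmosphere <-> surface exchange (7-year residence time)
k_deep = 1.0 / 500.0      # surface -> deep mixing (centuries; assumed value)

for year in range(1, 101):
    f_down = k_exchange * atm            # carbon dissolving into the surface layer
    f_up = k_exchange * surf             # carbon released back to the air
    f_deep = k_deep * (surf - SURF0)     # slow drain of *excess* surface carbon downwards
    atm += f_up - f_down
    surf += f_down - f_up - f_deep
    deep += f_deep
    if year in (10, 50, 100):
        print(f"after {year:3d} years, {atm - ATM0:5.1f} GtC of the pulse is still airborne")
```

The qualitative behaviour matches what the 1957 papers described: roughly half of a pulse of CO2 is quickly taken up by the surface ocean, but the rest lingers in the air because the route to the deep ocean is so slow.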

These findings set alarm bells ringing amongst the geosciences community. If this was correct, the effects of climate change would be noticeable within a few decades. But without data, it would be hard to test their prediction. At Scripps, Revelle hired a young chemist, Charles David Keeling, to begin detailed measurements. In 1958, Keeling set up an observing station on Mauna Loa in Hawaii, and a second station in the Antarctic, both far enough from any major sources of emissions to give reliable baseline measurements of CO2 in the atmosphere. Funding for the Antarctic station was cut a few years later, but Keeling managed to keep the recordings going at Mauna Loa, where they are still collected regularly today. Within two years, Keeling had enough data to confirm Bolin and Eriksson’s analysis: CO2 levels in the atmosphere were rising sharply.

Keeling’s data helped to spread awareness of the issue rapidly among the ocean and atmospheric science research communities, even as scientists in other fields remained unaware of it. Alarm at the implications of the speed at which CO2 levels were rising led some scientists to alert the country’s political leaders. When President Lyndon Johnson commissioned a report on the state of the environment in 1964, the president’s science advisory committee invited a small subcommittee—including Revelle, Keeling, and Smagorinsky—to write an appendix to the report, focusing on the threat of climate change. And so, on February 8th, 1965, President Johnson became the first major world leader to mention the threat of climate change in a speech to Congress: “This generation has altered the composition of the atmosphere on a global scale through…a steady increase in carbon dioxide from the burning of fossil fuels.”

Climate Modeling Takes Off

So awareness of the CO2 problem was spreading rapidly through the scientific community just as the general circulation modelling community was getting established. However, it wasn’t clear that global circulation models would be suited to this task. Computational power was limited, and it wasn’t yet possible to run the models long enough to simulate the decades or centuries over which climate change would occur. Besides, the first generation of GCMs had so many simplifications, it seemed unlikely they could simulate the effects of increasing CO2—that wasn’t what they were designed for.

To do this properly, the models would need to include all the relevant energy exchanges between the surface, atmosphere, and space. That would mean a model that accurately captured the vertical temperature profile of the atmosphere, along with the processes of radiation, convection, evaporation, and precipitation, all of which move energy vertically. None of these processes are adequately captured in the primitive equations, so they would all need to be added as parameterization schemes in the models.

Smagorinsky and Manabe at GFDL were the only group anywhere near ready to try running CO2 experiments in their global circulation model. Their nine-layer model already captured some of the vertical structure of the atmosphere, and Suki Manabe had built in a detailed radiation code from the start, with the help of a visiting German meteorologist, Fritz Möller. Manabe had a model of the relevant heat exchanges in the full height of the atmosphere working by 1967, and together with his colleague, Richard Wetherald, published what is now recognized as the first accurate computational experiment of climate change [Manabe and Wetherald, 1967].

Running the general circulation model for this experiment was still too computationally expensive, so they ignored all horizontal heat exchanges, and instead built a one dimensional model of just a single column of atmosphere. The model could be run with 9 or 18 layers, and included the effects of upwards and downwards radiation through the column, exchanges of heat through convection, and the latent heat of evaporation and condensation of water. Manabe and Wetherald first tested the model with current atmospheric conditions, to check it could reproduce the correct vertical distribution of temperatures in the atmosphere, which it did very well. They then doubled the amount of carbon dioxide in the model and ran it again. They found temperatures rose throughout the lower atmosphere, with a rise of about 2°C at the surface, while the stratosphere showed a corresponding cooling. This pattern—warming in the lower atmosphere and cooling in the stratosphere—shows up in all the modern global climate models, but wasn’t confirmed by satellite readings until the 2000s.

By the mid 1970s, a broad community of scientists were replicating Manabe and Wetherald’s experiment in a variety of simplified models, although it would take nearly a decade before anyone could run the experiment in a full 3-dimensional GCM. But the community was beginning to use the term climate modelling to describe their work—a term given much greater impetus when it was used as the title of a comprehensive survey of the field by two NCAR scientists, Stephen Schneider and Robert Dickinson, in 1974. Remarkably, their paper [Schneider and Dickinson, 1974] charts a massive growth of research, citing the work of over 150 authors who published work on climate modelling in the period from 1967 to 1974, after Manabe and Wetherald’s original experiment.

It took some time, however, to get the general circulation models to the point where they could also run a global climate change experiment. Perhaps unsurprisingly, Manabe and Wetherald were also the first to do this, in 1975. Their GCM produced a higher result for the doubled-CO2 experiment—an average surface warming of 3°C—which they attributed to the snow-albedo feedback, included in the GCM but not in their original single-column model. Their experiment [Manabe and Wetherald 1975] also showed an important effect first noted by Arrhenius: much greater warming at the poles than towards the equator, because polar temperatures are much more sensitive to changes in the rate at which heat escapes to space. And their model predicted another effect—global warming would speed up evaporation and precipitation, and hence produce more intense rainfall. This prediction has already been borne out in the rapid uptick of extreme weather events in the 2010s.

In hindsight, Manabe’s simplified models produced remarkably accurate predictions of future climate change. Manabe used his early experiments to predict a temperature rise of about 0.8°C by the year 2000, assuming a 25% increase in CO2 over the course of the twentieth century. His assumption about the rate at which CO2 would increase was almost spot on, and so was his calculation of the resulting temperature rise: CO2 levels rose from about 300ppm in 1900 to 370ppm in 2000, a rise of 23%, and the change in temperature over this period, calculated as the change in decadal means in the HadCRUT5 dataset, was 0.82°C [Hausfather et al 2020].

References

Arnold, J. R., & Anderson, E. C. (1957). The Distribution of Carbon-14 in Nature. Tellus, 9(1), 28–32.

Craig, H. (1957). The Natural Distribution of Radiocarbon and the Exchange Time of Carbon Dioxide Between Atmosphere and Sea. Tellus, 9(1), 1–17.

Hausfather, Z., Drake, H. F., Abbott, T., & Schmidt, G. A. (2020). Evaluating the Performance of Past Climate Model Projections. Geophysical Research Letters, 47(1), 2019GL085378.

Manabe, S., & Wetherald, R. T. (1967). Thermal Equilibrium of the Atmosphere with a Given Distribution of Relative Humidity. Journal of the Atmospheric Sciences, 24(3), 241–259.

Manabe, S., & Wetherald, R. T. (1975). The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model. Journal of the Atmospheric Sciences, 32(1), 3–15.

Revelle, R., & Suess, H. E. (1957). Carbon Dioxide Exchange Between Atmosphere and Ocean and the Question of an Increase of Atmospheric CO2 during the Past Decades. Tellus, 9(1), 18–27.

Schneider, S. H., & Dickinson, R. E. (1974). Climate modeling. Reviews of Geophysics, 12(3), 447–493.

One of the biggest challenges in understanding climate change is that the timescales involved are far longer than most people are used to thinking about. The philosopher James Garvey points out that this makes climate change different from any other ethical question, because both the causes and consequences are smeared out across time and space:

“There is a sense in which my actions and the actions of my present fellows join with the past actions of my parents, grandparents and great-grandparents, and the effects resulting from our actions will still be felt hundreds, even thousands of years in the future. It is also true that we are, in a way, stuck with the present we have because of our past. The little actions I undertake which keep me warm and dry and fed are what they are partly because of choices made by people long dead. Even if I didn’t want to burn fossil fuels, I’m embedded in a culture set up to do so.” (Garvey, 2008, p60)

Part of the problem is that the physical climate system is slow to respond to our additional greenhouse gas emissions, and similarly slow to respond to reductions in emissions. The first part of this is core to a basic understanding of climate change, as it’s built into the idea of equilibrium climate sensitivity (roughly speaking, the expected temperature rise for each doubling of CO2 concentrations in the atmosphere). The extra heat that’s trapped by the additional greenhouse gases builds up over time, and the planet warms slowly, but the oceans have such a large thermal mass that it takes decades for this warming process to complete.
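Since the radiative effect of CO2 grows roughly with the logarithm of its concentration, equilibrium climate sensitivity gives a quick back-of-the-envelope way to translate a concentration change into an eventual warming. Here’s a minimal sketch of that arithmetic; the sensitivity value and the concentrations used are illustrative assumptions, not measurements.

```python
import math

def equilibrium_warming(c_ppm, c0_ppm=280.0, ecs=3.0):
    """Eventual (equilibrium) warming in deg C, assuming warming scales with
    log2 of the CO2 concentration ratio, and an assumed ECS of 3 C per doubling."""
    return ecs * math.log2(c_ppm / c0_ppm)

print(round(equilibrium_warming(560), 2))  # a full doubling: 3.0 C, by definition of ECS
print(round(equilibrium_warming(420), 2))  # roughly recent concentrations: about 1.75 C
```

The thermal inertia described above is about how long it takes to actually arrive at that equilibrium value: the oceans delay it by decades.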

Unfortunately, the second part, that the planet takes a long time to respond to reductions in emissions, is harder to explain, largely because of the common assumption that CO2 will behave like other pollutants, which wash out of the atmosphere fairly quickly once we stop emitting them. This assumption underlies much of the common wait-and-see response to climate change, as it gives rise to the myth that once we get serious about climate change (e.g. because we start to see major impacts), we can fix the problem fairly quickly. Unfortunately, this is not true at all, because CO2 is a long-lived greenhouse gas. About half of human CO2 emissions are absorbed by the oceans and soils, over a period of several decades. The remainder stays in the atmosphere. There are natural processes that remove this remaining CO2, but they take thousands of years, which means that even with zero greenhouse gas emissions, we’re likely stuck with the consequences of life on a warmer planet for centuries.

So the physical climate system presents us with two forms of inertia, one that delays the warming due to greenhouse gas emissions, and one that delays the reduction in that warming in response to reduced emissions:

  1. The thermal inertia of the planet’s surface (largely due to the oceans), by which the planet can keep absorbing extra heat for years before it makes a substantial difference to surface temperatures. (scale: decades)
  2. The carbon cycle inertia by which CO2 is only removed from the atmosphere very slowly, and has a continued warming effect for as long as it’s there. (scale: decades to millennia)

For more on how these forms of inertia affect future warming scenarios, see my post on committed warming.
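To see how these two forms of inertia combine, here’s a minimal numerical sketch: a one-box carbon model in which excess CO2 decays only slowly once emissions stop, feeding a one-box energy balance whose temperature relaxes toward equilibrium over decades. Every parameter value here (emissions rate, uptake timescale, heat capacity, sensitivity, starting state) is an illustrative assumption, not a calibrated number, and the real carbon cycle has several removal timescales rather than one.

```python
import numpy as np

F2X = 3.7            # W/m^2 of forcing per CO2 doubling (standard approximation)
ECS = 3.0            # assumed equilibrium climate sensitivity, deg C per doubling
LAMBDA = F2X / ECS   # feedback parameter, W/m^2 per deg C
C_HEAT = 40.0        # assumed effective heat capacity, W*yr/m^2 per deg C (decades-long lag)
C0 = 280.0           # pre-industrial CO2, ppm
TAU_UPTAKE = 200.0   # assumed single e-folding time (years) for removal of excess CO2

co2, temp = 420.0, 1.2   # assumed present-day starting point
for year in range(2025, 2201):
    emissions = 2.5 if year < 2050 else 0.0        # ppm/yr until an abrupt stop (assumed)
    co2 += emissions - (co2 - C0) / TAU_UPTAKE     # carbon-cycle inertia
    forcing = F2X * np.log2(co2 / C0)
    temp += (forcing - LAMBDA * temp) / C_HEAT     # thermal inertia
    if year in (2050, 2100, 2200):
        print(f"{year}: CO2 = {co2:5.0f} ppm, warming = {temp:4.2f} C")
```

Even in this crude sketch, the warming keeps rising for decades after emissions stop in 2050, and 150 years later it still hasn’t fallen back to where it started.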

But these are not the only forms of inertia that matter. There are also various kinds of inertia in the socio-economic system that slow down our response to climate change. For example, Davis et al. attempt to quantify the emissions from all the existing energy infrastructure (power plants, factories, cars, buildings, etc. that already exist and are in use), because even under the most optimistic scenario, it will take decades to replace all this infrastructure with clean energy alternatives. Here’s an example of their analysis, under the assumption that things we’ve already built will not be retired early. This assumption is reasonable because (1) it’s rare that we’re willing to bear the cost of premature retirement of infrastructure, and (2) it will be hard enough to build new clean energy infrastructure fast enough to replace what wears out while also meeting increasing demand.

[Figure: infrastructural inertia]

Expected ongoing carbon dioxide emissions from existing infrastructure. Includes primary infrastructure only, i.e. infrastructure that directly releases CO2 (e.g. cars and trucks), but not infrastructure that encourages the continued production of devices that emit CO2 (e.g. the network of interstate highways in the US). From Davis et al., 2010.

So that gives us our third form of inertia:

  3. Infrastructural inertia from existing energy infrastructure, as emissions of greenhouse gases will continue from everything we’ve built in the past, until it can be replaced. (scale: decades)

We’ve known about the threat of climate change for decades, and various governments and international negotiations have attempted to deal with it, yet have made very little progress. That suggests there are more forms of inertia that we ought to be able to name and quantify. To do this, we need to look at the broader socio-economic system that ought to allow us, as a society, to respond to the threat of climate change. Here’s a schematic of that system, as a system dynamics model:

The socio-geophysical system. Arrows labelled ‘+’ are positive influence links (“A rise in X tends to cause a rise in Y, and a fall in X tends to cause a fall in Y”). Arrows labelled ‘-’ represent negative links, where a rise in X tends to cause a fall in Y, and vice versa. The arrow labelled with a tap (faucet) is an accumulation link: Y will continue to rise even while X is falling, until X reaches net zero.

Broadly speaking, decarbonization will require both changes in technology and changes in human behaviour. But before we can do that, we have to recognize and agree that there is a problem, develop an agreed set of coordinated actions to tackle it, and then implement the policy shifts and behaviour changes to get us there.

At first, this diagram looks promising: once we realise how serious climate change is, we’ll take the corresponding actions, and that will bring down emissions, solving the problem. In other words, the more carbon emissions go up, the more they should drive a societal response, which in turn (eventually) will reduce emissions again. But the diagram includes a subtle but important twist: the link from carbon emissions to atmospheric concentrations is an accumulation link. Even as emissions fall, the amount of greenhouse gases in the atmosphere continues to rise; it only stops rising when carbon emissions reach zero. Think of the tap on a bathtub: if you reduce the inflow of water, the level of water in the tub still rises, until you turn the tap off completely.
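The bathtub behaviour is easy to demonstrate numerically. In this toy sketch (all numbers are made up, and natural sinks are ignored), emissions fall steadily every year, yet the concentration keeps climbing right up until the year emissions reach zero.

```python
concentration = 420.0     # ppm, assumed starting level
emissions = 2.5           # ppm added per year, assumed
year = 2025
while emissions > 0:
    concentration += emissions                 # the stock (concentration) accumulates the flow
    emissions = round(emissions - 0.1, 1)      # emissions decline steadily (assumed rate)
    year += 1
    if year % 5 == 0:
        print(year, emissions, round(concentration, 1))
# the concentration only stops rising in 2050, the year emissions finally reach zero
```

That is all the accumulation link in the diagram is saying: cutting emissions slows the rise, but only reaching zero stops it.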

Worse still, there are plenty more forms of inertia hidden in the diagram, because each of the causal links takes time to operate. I’ve given these additional sources of inertia names:

[Figure: Sources of inertia in the socio-geophysical climate system]

For example, there are forms of inertia that delay the impacts of increased temperatures, both on ecosystems and on human society. Most of the systems that are impacted by climate change can absorb smaller changes in the climate without much noticeable difference, but then reach a threshold whereby they can no longer be sustained. I’ve characterized two forms of inertia here:

  4. Natural variability (or “signal to noise”) inertia, which arises because, initially, temperature increases due to climate change are much smaller than the internal variability in daily and seasonal weather patterns. Hence it takes a long time for the ‘signal’ of climate change to emerge from the noise of natural variability (a tiny numerical sketch follows this list). (scale: decades)
  5. Ecosystem resilience. We tend to think of resilience as a good thing – defined informally as the ability of a system to ‘bounce back’ after a shock. But resilience can also mask underlying changes that push a system closer and closer to a threshold beyond which it cannot recover. So this form of inertia acts by masking the effect of that change, sometimes until it’s too late to act. (scale: years to decades)
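Here’s the tiny calculation behind the “signal to noise” point in item 4: when the warming trend is much smaller than the year-to-year wobble of natural variability, it takes decades before the trend pokes above the noise. Both numbers below are illustrative assumptions, and “twice the noise level” is just a common rule of thumb for when a signal has emerged.

```python
import numpy as np

trend = 0.02    # assumed underlying warming, deg C per year
noise = 0.25    # assumed standard deviation of natural year-to-year variability, deg C

years = np.arange(1, 151)
signal = trend * years
# rule of thumb: the signal has "emerged" once it exceeds twice the noise level
emergence = years[signal > 2 * noise][0]
print(f"the warming signal emerges from the noise after about {emergence} years")
```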

Then, once we identify the impacts of climate change (whether in advance or after the fact), it takes time for these to feed into the kind of public concern needed to build agreement on the need for action:

  6. Societal resilience. Human society is very adaptable. When storms destroy our buildings, we just rebuild them a little stronger. When drought destroys our crops, we just invent new forms of irrigation. Just as with ecosystems, there is a limit to this kind of resilience, when subjected to a continual change. But our ability to shrug and get on with things causes a further delay in the development of public concern about climate change. (scale: decades?)
  7. Denial. Perhaps even stronger than human resilience is our ability to fool ourselves into thinking that something bad is not happening, and to look for other explanations than the ones that best fit the evidence. Denial is a pretty powerful form of inertia. Denial stops addicts from acknowledging they need to seek help to overcome addiction, and it stops all of us from acknowledging we have a fossil fuel addiction, and need help to deal with it. (scale: decades to generations?)

Even then, public concern doesn’t immediately translate into effective action because of:

  8. Individualism. A frequent response to discussions on climate change is to encourage people to make personal changes in their lives: change your lightbulbs, drive a little less, fly a little less. While these things are important in the process of personal discovery, by helping us understand our individual impact on the world, they are a form of voluntary action only available to the privileged, and hence do not constitute a systemic solution to climate change. When the systems we live in drive us towards certain consumption patterns, it takes a lot of time and effort to choose a low-carbon lifestyle. So the only way this scales is through collective political action: getting governments to change the regulations and price structures that shape what gets built and what we consume, and making governments and corporations accountable for cutting their greenhouse gas contributions. (scale: decades?)

When we get serious about the need for coordinated action, there are further forms of inertia that come into play:

  9. Missing governance structures. We simply don’t have the kind of governance at either the national or international level that can put in place meaningful policy instruments to tackle climate change. The Kyoto process failed because the short-term individual interests of the national governments who have the power to act always tend to outweigh the long-term collective threat of climate change. The Paris agreement is woefully inadequate for the same reason. Similarly, national governments are hampered by the need to respond to special interest groups (especially large corporations), which means legislative change is a slow, painful process. (scale: decades!)
  10. Bureaucracy, which hampers the implementation of new policy tools. It takes time to get legislation formulated and agreed, and it takes time to set up the necessary institutions to ensure it is implemented. (scale: years)
  11. Social Resistance. People don’t like change, and some groups fight hard to resist changes that conflict with their own immediate interests. Every change in social norms is accompanied by pushback. And even when we welcome change and believe in it, we often slip back into old habits. (scale: years? generations?)

Finally, development and deployment of clean energy solutions experience a large number of delays:

  12. R&D lag. It takes time to ramp up new research and development efforts, due to the lack of qualified personnel, the glacial speed at which research institutions such as universities operate, and the tendency, especially in academia, for researchers to keep working on what they’ve always worked on in the past, rather than addressing societally important issues. Research on climate solutions is inherently trans-disciplinary, and existing research institutions tend to be very bad at supporting work that crosses traditional boundaries. (scale: decades?)
  13. Investment lag: A wholesale switch from fossil fuels to clean energy and energy efficiency will require huge upfront investment. Agencies that have the funding to enable this switch (governments, investment portfolio managers, venture capitalists) tend to be very risk averse, and so prefer things that they know offer a return on investment – e.g. more oil wells and pipelines rather than new cleantech alternatives. (scale: years to decades)
  14. Diffusion of innovation: new technologies tend to take a long time to reach large scale deployment, following the classic s-shaped curve, with a small number of early adopters, and, if things go well, a steadily rising adoption curve, followed by a tailing off as laggards resist new technologies. Think about electric cars: while the technology has been available for years, they still constitute less than 1% of new car sales today. Here’s a study that predicts this will rise to 35% by 2040. Think about that for a moment – if we follow the expected diffusion of innovation pattern, two thirds of new cars in 2040 will still have internal combustion engines (a toy s-curve sketch follows below). (scale: decades)
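Here’s a toy version of that s-shaped adoption curve. The saturation level, midpoint year, and steepness are illustrative assumptions, loosely chosen so that adoption is under 1% of new sales in the mid-2010s and only around a third by 2040, echoing the study cited above; they are not fitted to any data.

```python
import math

def adoption_share(year, saturation=0.9, midpoint=2042, steepness=0.2):
    """Logistic (s-shaped) share of new sales captured by a new technology.
    All parameters are illustrative assumptions, not fitted values."""
    return saturation / (1.0 + math.exp(-steepness * (year - midpoint)))

for y in (2016, 2030, 2040, 2050):
    print(y, f"{adoption_share(y):.1%}")
```

The long, flat early tail is where we are now; even under these fairly optimistic assumptions, the bulk of the transition is still decades away.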

All of these forms of inertia slow the process of dealing with climate change, allowing the warming to steadily increase while we figure out how to overcome them. So the key problem isn’t how to address climate change by switching from the current fossil fuel economy to a carbon-neutral one – we probably have all the technologies to do this today. The problem is how to do it fast enough. To stay below 2°C of warming, the world needs to cut greenhouse gas emissions by 50% by 2030, and achieve carbon neutrality in the second half of the century. We’ll have to find a way of overcoming many different types of inertia if we are to make it.

I’ve been exploring how Canada’s commitments to reduce greenhouse gas emissions stack up against reality, especially in the light of the government’s recent decision to stick with the emissions targets set by the previous administration.

Once upon a time, Canada was considered a world leader on climate and environmental issues. The Montreal Protocol on Substances that Deplete the Ozone Layer, signed in 1987, is widely regarded as the most successful international agreement on environmental protection ever. A year later, Canada hosted a conference on The Changing Atmosphere: Implications for Global Security, which helped put climate change on the international political agenda. This conference was one of the first to identify specific targets to avoid dangerous climate change, recommending a global reduction in greenhouse gas emissions of 20% by 2005. It didn’t happen.

It took another ten years before an international agreement to cut emissions was reached: the Kyoto Protocol in 1997. Hailed as a success at the time, it became clear over the ensuing years that with non-binding targets, the agreement was pretty much a sham. Under Kyoto, Canada agreed to cut emissions to 6% below 1990 levels by the 2008-2012 period. It didn’t happen.

At the Copenhagen talks in 2009, Canada proposed an even weaker goal: 17% below 2005 levels (which corresponds to 1.5% above 1990 levels) by 2020. Given that emissions have risen steadily since then, it probably won’t happen. By 2011, facing an embarrassing gap between its Kyoto targets and reality, the Harper administration formally withdrew from Kyoto – the only country ever to do so.

Last year, in preparation for the Paris talks, the Harper administration submitted a new commitment: 30% below 2005 levels by 2030. At first sight it seems better than previous goals. But it includes a large slice of expected international credits and carbon sequestered in wood products, as Canada incorporates Land Use, Land Use Change and Forestry (LULUCF) into its carbon accounting. In terms of actual cuts in greenhouse gas emissions, the target represents approximately 8% above 1990 levels.

The new government, elected in October 2015, trumpeted a renewed approach to climate change, arguing that Canada should be a world leader again. At the Paris talks in 2015, the Trudeau administration proudly supported both the UN’s commitment to keep global temperatures below 2°C of warming (compared to the pre-industrial average), and voiced strong support for an even tougher limit of 1.5°C. However, the government has chosen to stick with the Harper administration’s original Paris targets.

It is clear that the global commitments under the Paris agreement fall a long way short of what is needed to stay below 2°C, and Canada’s commitment has been rated as one of the weakest. Based on IPCC assessments, to limit warming below 2°C, global greenhouse gas emissions will need to be cut by about 50% by 2030, and eventually reach zero net emissions globally (which will probably mean zero use of fossil fuels, as assumptions about negative emissions seem rather implausible). As Canada has much greater wealth and access to resources than most nations, much greater per capita emissions than all but a few nations, and much greater historical responsibility for emissions than most nations, a “fair” effort would have Canada cutting emissions much faster than the global average, to allow room for poorer nations to grow their emissions, at least initially, to alleviate poverty. Carbon Action Tracker suggests 67% below 1990 emissions by 2030 is a fair target for Canada.

Here’s what all of this looks like. Note: emissions data from the Government of Canada; the Toronto 1988 target was never formally adopted, but was Liberal party policy in the early 1990s. Global 2°C pathway 2030 target from SEI; emissions projection, LULUCF adjustment, and “fair” 2030 target from CAT.

[Figure: Canada's Climate Targets]

Several things jump out at me from this chart. First, the complete failure to implement policies that would have allowed us to meet any of these targets. The dip in emissions from 2008-2010, which looked promising for a while, was due to the financial crisis and economic downturn, rather than any actual climate policy. Second, the similar slope of the line to each target, which represents the expected rate of decline from when the target was proposed to when it ought to be attained. At no point has there been any attempt to make up lost ground after each failed target. Finally, in terms of absolute greenhouse gas emissions, each target is worse than the previous ones. Shifting the baseline from 1990 to 2005 masks much of this, and shows that successive governments are more interested in optics than serious action on climate change.

At no point has Canada ever adopted science-based targets capable of delivering on its commitment to keep warming below 2°C.

At the beginning of March, I was invited to give a talk at TEDxUofT. Colleagues tell me the hardest part of giving these talks is deciding what to talk about. I decided to see if I could answer the question of whether we can trust climate models. It was a fascinating and nerve-wracking experience, quite unlike any talk I’ve given before. Of course, I’d love to do another one, as I now know more about what works and what doesn’t.

Here’s the video and a transcript of my talk. [The bits in square brackets are things I intended to say but forgot!]

Computing the Climate: How Can a Computer Model Forecast the Future? TEDxUofT, March 1, 2014.

Talking about the weather forecast is a great way to start a friendly conversation. The weather forecast matters to us. It tells us what to wear in the morning; it tells us what to pack for a trip. We also know that weather forecasts can sometimes be wrong, but we’d be foolish to ignore them when they tell us a major storm is heading our way.

[Unfortunately, talking about climate forecasts is often a great way to end a friendly conversation!] Climate models tell us that by the end of this century, if we carry on burning fossil fuels at the rate we have been doing, and we carry on cutting down forests at the rate we have been doing, the planet will warm by somewhere between 5 and 6 degrees centigrade. That might not seem like much, but, to put it into context, in the entire history of human civilization, the average temperature of the planet has not varied by more than 1 degree. So that forecast tells us something major is coming, and we probably ought to pay attention to it.

But on the other hand, we know that weather forecasts don’t work so well the longer into the future we peer. Tomorrow’s forecast is usually pretty accurate. Three day and five day forecasts are reasonably good. But next week? They always change their minds before next week comes. So how can we peer 100 years into the future and look at what is coming with respect to the climate? Should we trust those forecasts? Should we trust the climate models that provide them to us?

Six years ago, I set out to find out. I’m a professor of computer science. I study how large teams of software developers can put together complex pieces of software. I’ve worked with NASA, studying how NASA builds the flight software for the Space Shuttle and the International Space Station. I’ve worked with large companies like Microsoft and IBM. My work focusses not so much on software errors, but on the reasons why people make those errors, and how programmers then figure out they’ve made an error, and how they know how to fix it.

To start my study, I visited four major climate modelling labs around the world: in the UK, in Paris, in Hamburg, Germany, and in Colorado. Each of these labs typically has somewhere between 50 and 100 scientists contributing code to its climate model. And although I only visited four of these labs, there are another twenty or so around the world, all doing similar things. They run these models on some of the fastest supercomputers in the world, and many of the models have been under continuous development, as the same evolving codebase, for more than 20 years.

When I started this study, I asked one of my students to attempt to measure how many bugs there are in a typical climate model. We know from our experience with software that there are always bugs. Sooner or later the machine crashes. So how buggy are climate models? More specifically, what we set out to measure is what we call “defect density”: how many errors there are per thousand lines of code. By this measure, it turns out climate models are remarkably high quality. In fact, they’re better than almost any commercial software that’s ever been studied. They’re about the same level of quality as the Space Shuttle flight software. Here are my results (for the actual results you’ll have to read the paper):

[Figure: Defect density results]
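For readers unfamiliar with the metric, defect density is just the number of confirmed defects per thousand lines of code (KLOC). A trivial sketch, with made-up numbers rather than results from the study:

```python
def defect_density(num_defects, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return num_defects / (lines_of_code / 1000.0)

# made-up example: 30 confirmed defects found in a 500,000-line model
print(defect_density(30, 500_000))   # 0.06 defects per KLOC
```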

We know it’s very hard to build a large complex piece of software without making mistakes.  Even the space shuttle’s software had errors in it. So the question is not “is the software perfect for predicting the future?”. The question is “Is it good enough?” Is it fit for purpose?

To answer that question, we’d better understand what the purpose of a climate model is. First of all, I’d better be clear what a climate model is not. A climate model is not a projection of trends we’ve seen in the past extrapolated into the future. If you did that, you’d be wrong, because you haven’t accounted for what actually causes the climate to change, and so the trend might not continue. They are also not decision-support tools. A climate model cannot tell us what to do about climate change. It cannot tell us whether we should be building more solar panels, or wind farms. It can’t tell us whether we should have a carbon tax. It can’t tell us what we ought to put into an international treaty.

What it does do is tell us how the physics of planet earth work, and what the consequences are of changing things, within that physics. I could describe it as “computational fluid dynamics on a rotating sphere”. But computational fluid dynamics is complex.

I went into my son’s fourth grade class recently, and I started to explain what a climate model is, and the first question they asked me was “is it like Minecraft?”. Well, that’s not a bad place to start. If you’re not familiar with Minecraft, it divides the world into blocks, and the blocks are made of stuff. They might be made of wood, or metal, or water, or whatever, and you can build things out of them. There’s no gravity in Minecraft, so you can build floating islands and it’s great fun.

Climate models are a bit like that. To build a climate model, you divide the world into a number of blocks. The difference is that in Minecraft, the blocks are made of stuff. In a climate model, the blocks are really blocks of space, through which stuff can flow. At each timestep, the program calculates how much water, or air, or ice is flowing into, or out of, each block, and in which directions. It calculates changes in temperature, density, humidity, and so on, and whether stuff such as dust, salt, and pollutants is passing through or accumulating in each block. We have to account for the sunlight passing down through each block during the day. Some of what’s in the block might filter some of the incoming sunlight, for example if there are clouds or dust, so some of the sunlight doesn’t get down to the blocks below. There’s also heat escaping upwards through the blocks, and again, some of what is in the block might trap some of that heat — for example clouds and greenhouse gases.

As you can see from this diagram, the blocks can be pretty large. The upper figure shows blocks of 87km on a side. If you want more detail in the model, you have to make the blocks smaller. Some of the fastest climate models today look more like the lower figure:

[Figure: Model resolution]

Ideally, you want to make the blocks as small as possible, but then you have many more blocks to keep track of, and you get to the point where the computer just can’t run fast enough. For a typical run of a climate model, simulating a century’s worth of climate, you might have to wait a couple of weeks on some of the fastest supercomputers for the run to complete. So the speed of the computer limits how small we can make the blocks.
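To make the flow of “stuff” between blocks concrete, here’s a minimal one-dimensional sketch: a row of grid cells through which a blob of tracer (think moisture or dust) is carried by a constant wind. At every timestep we work out the flux across each cell boundary and update each cell by what flows in minus what flows out. Real models do this in three dimensions, for many quantities at once, with far more sophisticated numerics; the wind speed, grid size, and timestep here are arbitrary choices for illustration.

```python
import numpy as np

n_cells = 20
tracer = np.zeros(n_cells)
tracer[2] = 1.0                  # a blob of tracer in one block

wind = 1.0                       # cells per unit time (assumed)
dt = 0.2                         # timestep, chosen so that wind * dt < 1 (stability)

for step in range(50):
    outflow = wind * tracer                  # what leaves each cell, per unit time
    inflow = np.roll(outflow, 1)             # what arrives from the cell upwind (periodic ends)
    tracer += dt * (inflow - outflow)        # update each cell: flux in minus flux out

print(np.round(tracer, 2))   # the blob has moved about 10 cells downwind (and spread a bit)
```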

Building models this way is remarkably successful. Here’s a video of what a climate model can do today. This simulation shows a year’s worth of weather from a climate model. What you’re seeing is clouds and, in orange, that’s where it’s raining. Compare that to a year’s worth of satellite data for the year 2013. If you put them side by side, you can see many of the same patterns. You can see the westerlies, the winds at the top and bottom of the globe, heading from west to east, and nearer the equator, you can see the trade winds flowing in the opposite direction. If you look very closely, you might even see a pulse over South America, and a similar one over Africa in both the model and the satellite data. That’s the daily cycle as the land warms up in the morning and the moisture evaporates from soils and plants, and then later on in the afternoon as it cools, it turns into rain.

Note that the bottom is an actual year, 2013, while the top, the model simulation, is not a real year at all – it’s a typical year. So the two don’t correspond exactly. You won’t get storms forming at the same time, because it’s not designed to be an exact simulation; the climate model is designed to get the patterns right. And by and large, it does. [These patterns aren’t coded into this model. They emerge as a consequence of getting the basic physics of the atmosphere right].

So how do you build a climate model like this? The answer is “very slowly”. It takes a lot of time, and a lot of failure. One of the things that surprised me when I visited these labs is that the scientists don’t build these models to try and predict the future. They build these models to try and understand the past. They know their models are only approximations, and they regularly quote the statistician, George Box, who said “All models are wrong, but some are useful”. What he meant is that any model of the world is only an approximation. You can’t get all the complexity of the real world into a model. But even so, even a simple model is a good way to test your theories about the world.

So the way modellers work is to spend their time focussing on places where the model isn’t quite right. For example, maybe the model isn’t getting the Indian monsoon right. Perhaps it’s getting the amount of rain right, but it’s falling in the wrong place. They then form a hypothesis. They’ll say: I think I can improve the model, because I think this particular process is responsible, and if I improve that process in a particular way, then that should fix the simulation of the monsoon cycle. And then they run a whole series of experiments, comparing the old version of the model, which is getting it wrong, with the new version, to test whether the hypothesis is correct. And if, after a series of experiments, they believe their hypothesis is correct, they have to convince the rest of the modelling team that this really is an improvement to the model.

In other words, to build the models, they are doing science. They are developing hypotheses, they are running experiments, and they are using a peer review process to convince their colleagues that what they have done is correct:

[Figure: Model development process]

Climate modellers also have a few other weapons up their sleeves. Imagine for a moment if Microsoft had 25 competitors around the world, all of whom were attempting to build their own versions of Microsoft Word. Imagine further that every few years, those 25 companies all agreed to run their software on a very complex battery of tests, designed to test all the different conditions under which you might expect a word processor to work. And not only that, but they agree to release all the results of those tests to the public, on the internet, so that anyone who wanted to use any of that software can pore over all the data and find out how well each version did, and decide which version they want to use for their own purposes. Well, that’s what climate modellers do. There is no other software in the world for which there are 25 teams around the world trying to build the same thing, and competing with each other.

Climate modellers also have some other advantages. In some sense, climate modelling is actually easier than weather forecasting. I can show you what I mean by that. Imagine I had a water balloon (actually, you don’t have to imagine – I have one here):

[Figure: About to throw the water balloon]

I’m going to throw it at the fifth row. Now, you might want to know who will get wet. You could measure everything about my throw: Will I throw underarm, or overarm? Which way am I facing when I let go of it? How much swing do I put in? If you could measure all of those aspects of my throw, and you understand the physics of how objects move, you could come up with a fairly accurate prediction of who is going to get wet.

That’s like weather forecasting. We have to measure the current conditions as accurately as possible, and then project forward to see what direction it’s moving in:

[Figure: Weather forecasting]

If I make any small mistakes in measuring my throw, those mistakes will multiply as the balloon travels further. The further I attempt to throw it, the more room there is for inaccuracy in my estimate. That’s like weather forecasting. Any errors in the initial conditions multiply up rapidly, and the current limit appears to be about a week or so. Beyond that, the errors get so big that we just cannot make accurate forecasts.

In contrast, climate models would be more like releasing a balloon into the wind, and predicting where it will go by knowing about the wind patterns. I’ll make some wind here using a fan:

[Figure: Balloon in the wind]

Now that balloon is going to bob about in the wind from the fan. I could go away and come back tomorrow and it will still be doing about the same thing. If the power stays on, I could leave it for a hundred years, and it might still be doing the same thing. I won’t be able to predict exactly where that balloon is going to be at any moment, but I can predict, very reliably, the space in which it will move. I can predict the boundaries of its movement. And if the things that shape those boundaries change, for example by moving the fan, and I know what the factors are that shape those boundaries, I can tell you how the patterns of its movements are going to change – how the boundaries are going to change. So we call that a boundary problem:

[Figure: Climate as a boundary problem]

The initial conditions are almost irrelevant. It doesn’t matter where the balloon started, what matters is what’s shaping its boundary.
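The balloon-in-the-wind idea can be illustrated with the classic Lorenz ’63 equations (a toy model of convection, not a climate model). Two runs that start almost identically soon diverge completely, which is why weather forecasts fail after a week or so; yet the statistics of where the trajectories wander stay very close, which is the sense in which the “boundaries” are predictable. The initial states, step size, and run length below are arbitrary choices for illustration.

```python
import numpy as np

def lorenz_run(x0, steps=500_000, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Integrate the Lorenz '63 system with a simple Euler scheme (fine for a toy demo)."""
    x, y, z = x0
    traj = np.empty((steps, 3))
    for i in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        traj[i] = (x, y, z)
    return traj

a = lorenz_run((1.0, 1.0, 20.0))
b = lorenz_run((1.000001, 1.0, 20.0))    # a one-in-a-million nudge to the starting point

print("end states differ by:", np.round(np.abs(a[-1] - b[-1]), 1))          # the paths diverge...
print("mean/std of z, run a:", round(a[:, 2].mean(), 1), round(a[:, 2].std(), 1))
print("mean/std of z, run b:", round(b[:, 2].mean(), 1), round(b[:, 2].std(), 1))
# ...but the long-run statistics (the "boundaries" of the motion) stay very close
```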

So can these models predict the future? Are they good enough to predict the future? The answer is “yes and no”. We know the models are better at some things than others. They’re better at simulating changes in temperature than they are at simulating changes in rainfall. We also know that each model tends to be stronger in some areas and weaker in others. If you take the average of a whole set of models, you get a much better simulation of how the planet’s climate works than if you look at any individual model on its own. What happens is that the weaknesses in any one model are compensated for by other models that don’t have those weaknesses.
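Here’s a purely statistical sketch of why the multi-model average tends to beat any single model: if each model captures the true pattern but adds its own partly independent errors, those errors partly cancel when you average. Real model errors are not fully independent, so the benefit in practice is smaller than this idealized example suggests; the “truth”, the number of models, and the error size are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 2 * np.pi, 100))       # a stand-in for the "true" pattern
models = [truth + rng.normal(0, 0.3, truth.size)     # 10 "models": right pattern,
          for _ in range(10)]                         # independent errors (assumption)

rmse = lambda sim: np.sqrt(np.mean((sim - truth) ** 2))
print("typical single-model error:", round(float(np.mean([rmse(m) for m in models])), 3))
print("multi-model-mean error:    ", round(float(rmse(np.mean(models, axis=0))), 3))
```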

But the results of the models have to be interpreted very carefully, by someone who knows what the models are good at, and what they are not good at – you can’t just take the output of a model and say “that’s how it’s going to be”.

Also, you don’t actually need a computer model to predict climate change. The first predictions of what would happen if we keep on adding carbon dioxide to the atmosphere were produced over 120 years ago. That’s fifty years before the first digital computer was invented. And those predictions were pretty accurate – what has happened over the twentieth century has followed very closely what was predicted all those years ago. Scientists also predicted, for example, that the arctic would warm faster than the equatorial regions, and that’s what happened. They predicted night time temperatures would rise faster than day time temperatures, and that’s what happened.

So in many ways, the models only add detail to what we already know about the climate. They allow scientists to explore “what if” questions. For example, you could ask of a model, what would happen if we stop burning all fossil fuels tomorrow. And the answer from the models is that the temperature of the planet will stay at whatever temperature it was when you stopped. For example, if we wait twenty years, and then stopped, we’re stuck with whatever temperature we’re at for tens of thousands of years. You could ask a model what happens if we dig up all known reserves of fossil fuels, and burn them all at once, in one big party? Well, it gets very hot.

More interestingly, you could ask what if we tried blocking some of the incoming sunlight to cool the planet down, to compensate for some of the warming we’re getting from adding greenhouse gases to the atmosphere? There have been a number of very serious proposals to do that. There are some who say we should float giant space mirrors. That might be hard, but a simpler way of doing it is to put dust up in the stratosphere, and that blocks some of the incoming sunlight. It turns out that if you do that, you can very reliably bring the average temperature of the planet back down to whatever level you want, just by adjusting the amount of the dust. Unfortunately, some parts of the planet cool too much, and others not at all. The crops don’t grow so well, and everyone’s weather gets messed up. So it seems like that could be a solution, but when you study the model results in detail, there are too many problems.

Remember that we know fairly well what will happen to the climate if we keep adding CO2, even without using a computer model – the computer models just add detail to what we already know. If the models are wrong, they could be wrong in either direction: they might under-estimate the warming just as much as they might over-estimate it. If you look at how well the models have simulated the past few decades, especially the last decade, you’ll see some of both. The models have under-estimated how fast the Arctic sea ice has melted. They have under-estimated how fast sea levels have risen over the last decade. On the other hand, they have over-estimated the rate of warming at the surface of the planet, while under-estimating the rate of warming in the deep oceans – so some of the warming ends up in a different place from where the models predicted. In short, they can under-estimate just as much as they can over-estimate. [The less certain we are about the results from the models, the bigger the risk that the warming might be much worse than we think.]

So when you see a graph like this, which comes from the latest IPCC report that just came out last month, it doesn’t tell us what to do about climate change, it just tells us the consequences of what we might choose to do. Remember, humans aren’t represented in the models at all, except in terms of us producing greenhouse gases and adding them to the atmosphere.

[Figure: IPCC AR5 WG1, Figure 12.5]

If we keep on increasing our use of fossil fuels – finding more oil, building more pipelines, digging up more coal – we’ll follow the top path. That takes us to a planet that, by the end of this century, is somewhere between 4 and 6 degrees warmer, and it keeps on getting warmer over the next few centuries. On the other hand, the bottom path, in dark blue, shows what would happen if, year after year from now onwards, we use less fossil fuel than we did the previous year, until we get down to zero emissions around mid-century, and then invent some way to start removing carbon dioxide from the atmosphere before the end of the century, to stay below 2 degrees of warming.

The models don’t tell us which of these paths we should follow. They just tell us that if this is what we do, here’s how the climate will respond. You could say that what the models do is take all the data and all the knowledge we have about the climate system and how it works, and put them into one neat package; it’s our job to take that knowledge and turn it into wisdom, and to decide which future we would like.

Yesterday I talked about three reinforcing feedback loops in the earth system, each of which has the potential to accelerate a warming trend once it has started. I also suggested there are other similar feedback loops, some of which are known, and others perhaps yet to be discovered. For example, a paper published last month suggested a new feedback loop, to do with ocean acidification. In a nutshell, as the ocean absorbs more CO2, it becomes more acidic, which inhibits the growth of phytoplankton. These plankton are a major source of the sulphur compounds that end up as aerosols in the atmosphere, seeding the formation of clouds. Fewer clouds mean lower albedo, which means more warming. Whether this feedback loop is important remains to be seen, but we do know that clouds have an important role to play in climate change.

I didn’t include clouds on my diagrams yet, because clouds deserve a special treatment, in part because they are involved in two major feedback loops that have opposite effects:

[Figure: Two opposing cloud feedback loops. An increase in temperature leads to an increase in moisture in the atmosphere. This leads to two new loops…]

As the earth warms, we get more moisture in the atmosphere (simply because there is more evaporation from the surface, and warmer air can hold more moisture). Water vapour is a powerful greenhouse gas, so the more there is in the atmosphere, the more warming we get (greenhouse gases reduce the outgoing radiation). So this sets up a reinforcing feedback loop: more moisture causes more warming causes more moisture.

However, if there is more moisture in the atmosphere, there’s also likely to be more cloud formation. Clouds raise the albedo of the planet and reflect sunlight back into space before it can reach the surface. Hence, there is also a balancing loop: by blocking more sunlight, extra clouds will help to put the brakes on any warming. Note that I phrased this carefully: this balancing loop can slow a warming trend, but it does not create a cooling trend. Balancing loops tend to stop a change from occurring, but they do not create a change in the opposite direction. For example, if enough clouds form to completely counteract the warming, they also remove the mechanism (i.e. warming!) that causes growth in cloud cover in the first place. If we did end up with so many extra clouds that it cooled the planet, the cooling would then remove the extra clouds, so we’d be back where we started. In fact, this loop is nowhere near that strong anyway. [Note that under some circumstances, balancing loops can lead to oscillations, rather than gently converging on an equilibrium point, and the first wave of a very slow oscillation might be mistaken for a cooling trend. We have to be careful with our assumptions and timescales here!].

So now we have two new loops that set up opposite effects – one tends to accelerate warming, and the other tends to decelerate it. You can experience both these effects directly: cloudy days tend to be cooler than sunny days, because the clouds reflect away some of the sunlight. But cloudy nights tend to be warmer than clear nights because the water vapour traps more of the escaping heat from the surface. In the daytime, both effects are operating, and the cooling effect tends to dominate. During the night, there is no sunlight to block, so only the warming effect works.

If we average out the effects of these loops over many days, months, or years, which effect dominates? (i.e. which loop is stronger?) Does the extra moisture mean more warming or less warming? This is clearly an area where building a computer model and experimenting with it might help, as we need to quantify the effects to understand them better. We can build good computer models of how clouds form at the small scale, by simulating the interaction of dust and water vapour, but running such a model for the whole planet is not feasible with today’s computers.
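
Here’s a toy iteration in Python that makes the structure of the question concrete. All the coefficients are made up purely for illustration (they are not estimates of the real feedback strengths): the point is just to show how a reinforcing loop and a balancing loop combine, and that even a dominant balancing loop damps the warming rather than reversing it.

```python
# A toy fixed-point iteration (all coefficients made up, purely for
# illustration) of how a reinforcing loop and a balancing loop combine. The
# response is repeatedly recomputed as the external push plus the net feedback
# on the current response, until it settles.

def equilibrate(forcing, reinforcing_gain, balancing_gain, steps=200):
    response = 0.0
    for _ in range(steps):
        response = forcing + (reinforcing_gain - balancing_gain) * response
    return response

no_feedbacks = equilibrate(1.0, 0.0, 0.0)
vapour_wins = equilibrate(1.0, 0.40, 0.25)   # reinforcing loop slightly stronger (assumed)
clouds_win = equilibrate(1.0, 0.25, 0.40)    # balancing loop slightly stronger (hypothetical)

print(f"No feedbacks:            {no_feedbacks:.2f}")
print(f"Water vapour loop wins:  {vapour_wins:.2f}")   # warming amplified (about 1.18x)
print(f"Cloud albedo loop wins:  {clouds_win:.2f}")    # warming damped (about 0.87x), not reversed
```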

To make things a little more complicated, these two feedback loops interact with other things. For example, another likely feedback loop comes from a change in the vertical temperature profile of the atmosphere. Current models indicate that, at least in the tropics, the upper atmosphere will warm faster than the surface (in technical terms, it will reduce the lapse rate – the rate at which temperature drops as you climb higher). This then increases the outgoing radiation, because it’s from the upper atmosphere that the earth loses its heat to space. This creates another (small) balancing feedback:

[Figure: The lapse rate feedback – if the upper troposphere warms faster than the surface (i.e. a lower lapse rate), this increases outgoing radiation from the planet.]

Note that this lapse rate feedback operates in the same way as the main energy balance loop – the two ‘-’ links have the same effect as the existing ‘+’ link from temperature to outgoing infra-red radiation. In other words, this new loop just strengthens the effect of the existing loop – for convenience we could fold both paths into a single link.

However, water vapour feedback can interact with this new feedback loop, because the warmer upper atmosphere will hold more water vapour in exactly the place where it’s most effective as a greenhouse gas. Not only that, but clouds themselves can change the vertical temperature profile, depending on their height. I said it was complicated!

The difficulty of simulating all these different interactions of clouds accurately leads to one of the biggest uncertainties in climate science. In 1979, the Charney report calculated that these cloud and water vapour feedback loops roughly cancel out, but pointed out that there was a large uncertainty bound on this estimate. More than thirty years later, we understand much more about how cloud formation and distribution are altered in a warming world, but our margins of error for calculating cloud effects have barely narrowed, because of the difficulty of simulating them on a global scale. Our best guess is now that the (reinforcing) water vapour feedback loop is slightly stronger than the (balancing) cloud albedo and lapse rate loops, so the net effect of these three loops is to amplify the warming.

We’re taking the kids to see their favourite band: Muse are playing in Toronto tonight. I’m hoping they play my favourite track:

I find this song fascinating, partly because of the weird mix of progressive rock and dubstep. But more for the lyrics:

All natural and technological processes proceed in such a way that the availability of the remaining energy decreases. In all energy exchanges, if no energy enters or leaves an isolated system, the entropy of that system increases. Energy continuously flows from being concentrated to becoming dispersed, spread out, wasted and useless. New energy cannot be created and high grade energy is destroyed. An economy based on endless growth is unsustainable. The fundamental laws of thermodynamics will place fixed limits on technological innovation and human advancement. In an isolated system, the entropy can only increase. A species set on endless growth is unsustainable.

This summarizes, perhaps a little too succinctly, the core of the critique of our current economy, first articulated clearly in 1972 by the Club of Rome in the Limits to Growth Study. Unfortunately, that study was widely dismissed by economists and policymakers. As Jorgen Randers points out in a 2012 paper, the criticism of the Limits to Growth study was largely based on misunderstandings, and the key lessons are absolutely crucial to understanding the state of the global economy today, and the trends that are likely over the next few decades. In a nutshell, humans exceeded the carrying capacity of the planet sometime in the latter part of the 20th century. We’re now in the overshoot portion, where it’s only possible to feed the world and provide energy for economic growth by consuming irreplaceable resources and using up environmental capital. This cannot be sustained.

In general systems terms, there are three conditions for sustainability (I believe it was Herman Daly who first set them out in this way):

  1. We cannot use renewable resources faster than they can be replenished.
  2. We cannot generate wastes faster than they can be absorbed by the environment.
  3. We cannot use up any non-renewable resource.

We can and do violate all of these conditions all the time. Indeed, modern economic growth is based on systematically violating all three of them, but especially #3, as we rely on cheap fossil fuel energy. But any system that violates these rules cannot be sustained indefinitely, unless it is also able to import resources and export wastes to other (external) systems. The key problem for the 21st century is that we’re now violating all three conditions on a global scale, and there are no longer other systems that we can rely on to provide a cushion – the planet as a whole is an isolated system. There are really only two paths forward: either we figure out how to re-structure the global economy to meet Daly’s three conditions, or we face a global collapse (for an understanding of the latter, see Graham Turner’s 2012 paper).

A species set on endless growth is unsustainable.

Last week, Damon Matthews from Concordia visited, and gave a guest CGCS lecture, “Cumulative Carbon and the Climate Mitigation Challenge”. The key idea he addressed in his talk is the question of “committed warming” – i.e. how much warming are we “owed” because of carbon emissions in the past (irrespective of what we do with emissions in the future). But before I get into the content of Damon’s talk, here’s a little background.

The question of ‘owed’ or ‘committed’ warming arises because we know it takes some time for the planet to warm up in response to an increase in greenhouse gases in the atmosphere. You can calculate a first approximation of how much it will warm up from a simple energy balance model (like the ones I posted about last month). However, to calculate how long it takes to warm up you need to account for the thermal mass of the oceans, which absorb most of the extra energy and hence slow the rate of warming of surface temperatures. For this you need more than a simple energy balance model.

You can do a very simple experiment with a General Circulation Model, by setting CO2 concentrations at double their pre-industrial levels and then leaving them constant at that level, to see how long the earth takes to reach a new equilibrium temperature. Typically, this takes several decades, although the models differ on exactly how long. Here’s what it looks like if you try this with EdGCM (I ran it with doubled CO2 concentrations starting in 1958):

[Figure: EdGCM global temperature response to an instantaneous doubling of CO2]

Of course, the concentrations would never instantaneously double like that, so a more common model experiment is to increase CO2 levels gradually, say by 1% per year (that’s a little faster than how they have risen in the last few decades) until they reach double the pre-industrial concentrations (which takes approx 70 years), and then leave them constant at that level. This particular experiment is a standard way of estimating the Transient Climate Response – the expected warming at the moment we first reach a doubling of CO2 – and is included in the CMIP5 experiments. In these model experiments, it typically takes a few decades more of warming until a new equilibrium point is reached, and the models indicate that the transient response is expected to be a little over half of the eventual equilibrium warming.
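
Here’s a minimal sketch of these two experiments, using a toy two-box energy balance model in Python (a surface/mixed-layer box coupled to a deep-ocean box). All of the parameter values are my own rough assumptions, not taken from any particular GCM, but the qualitative behaviour matches the description above: most of the surface response to a step doubling arrives within the first few decades (the rest waits on the slow deep ocean), and the transient response at the moment of doubling in the ramp experiment falls well short of the eventual equilibrium warming.

```python
import math

# A toy two-box energy balance sketch: a surface/mixed-layer box coupled to a
# deep-ocean box. All parameter values are rough assumptions for illustration,
# not taken from any particular GCM.
F2X    = 3.7    # radiative forcing from doubled CO2 (W/m^2)
LAMBDA = 1.1    # climate feedback parameter (W/m^2 per K) -> equilibrium warming ~3.4 K
GAMMA  = 0.75   # heat exchange coefficient with the deep ocean (W/m^2 per K)
C_SURF = 7.5    # effective heat capacity of the surface + mixed layer (W yr/m^2 per K)
C_DEEP = 100.0  # effective heat capacity of the deep ocean (W yr/m^2 per K)

def run(forcing_at, years, dt=0.1):
    """Integrate the two-box model; return surface warming at each time step."""
    t_surf, t_deep, history = 0.0, 0.0, []
    for i in range(int(years / dt)):
        f = forcing_at(i * dt)
        heat_to_deep = GAMMA * (t_surf - t_deep)
        t_surf += dt * (f - LAMBDA * t_surf - heat_to_deep) / C_SURF
        t_deep += dt * heat_to_deep / C_DEEP
        history.append(t_surf)
    return history

# Experiment 1: instantaneous doubling of CO2, then hold it constant.
step = run(lambda t: F2X, years=300)
# Experiment 2: CO2 rises by 1% per year, doubling after ~70 years, then held constant.
ramp = run(lambda t: F2X * math.log(1.01 ** min(t, 70)) / math.log(2), years=300)

at = lambda series, yr: series[int(round(yr / 0.1)) - 1]
print(f"Equilibrium warming for doubled CO2:   {F2X / LAMBDA:.2f} K")
print(f"Step experiment, warming at year 20:   {at(step, 20):.2f} K")
print(f"Step experiment, warming at year 70:   {at(step, 70):.2f} K")
print(f"Ramp experiment, warming at doubling:  {at(ramp, 70):.2f} K")
```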

This leads to a (very rough) heuristic that as the planet warms, we’re always ‘owed’ almost as much warming again as we’ve already seen at any point, irrespective of future emissions, and it will take a few decades for all that ‘owed’ warming to materialize. But, as Damon argued in his talk, there are two problems with this heuristic. First, it confuses the issue when discussing the need for an immediate reduction in carbon emissions, because it suggests that no matter how fast we reduce them, the ‘owed’ warming means such reductions will make little difference to the expected warming in the next two decades. Second, and more importantly, the heuristic is wrong! How so? Read on!

For an initial analysis, we can view the climate problem just in terms of carbon dioxide, as the most important greenhouse gas. Increasing CO2 emissions leads to increasing CO2 concentrations in the atmosphere, which leads to temperature increases, which lead to climate impacts. And of course, there’s a feedback in the sense that our perceptions of the impacts (whether now or in the future) lead to changed climate policies that constrain CO2 emissions.

So, what happens if we were to stop all CO2 emissions instantly? The naive view is that temperatures would continue to rise, because of the ‘climate commitment’  – the ‘owed’ warming that I described above. However, most models show that the temperature stabilizes almost immediately. To understand why, we need to realize there are different ways of defining ‘climate commitment’:

  • Zero emissions commitment – How much warming do we get if we set CO2 emissions from human activities to be zero?
  • Constant composition commitment – How much warming do we get if we hold atmospheric concentrations constant? (in this case, we can still have some future CO2 emissions, as long as they balance the natural processes that remove CO2 from the atmosphere).

The difference between these two definitions is shown below. Note that in the zero emissions case, concentrations drop from an initial peak, and then settle at a lower level:

[Figure: Committed CO2 concentrations under the two definitions]

[Figure: Committed warming under the two definitions]

The model experiments most people are familiar with are the constant composition experiments, in which there is continued warming. But in the zero emissions scenarios, there is almost no further warming. Why is this?

The relationship between carbon emissions and temperature change (the “Carbon Climate Response”) is complicated, because it depends on two factors, each of which is subject to a different kind of inertia in the system:

  • Climate sensitivity – how much the temperature changes in response to different levels of CO2 in the atmosphere. The temperature response is slowed down by the thermal inertia of the oceans, which means it takes several decades for the earth’s surface temperatures to respond fully to a change in CO2 concentrations.
  • Carbon sensitivity – how much concentrations of CO2 in the atmosphere change in response to different levels of carbon emissions. A significant fraction (roughly half) of our CO2 emissions is absorbed by the oceans, but this also takes time. We can think of this as “carbon cycle inertia” – the delay in uptake of the extra CO2, which also spans several decades. [Note: there is a second kind of carbon cycle inertia, by which it takes tens of thousands of years for the rest of the CO2 to be removed, via very slow geological processes such as rock weathering.]

[Figure: The carbon climate response]

It turns out that the two forms of inertia roughly balance out. The thermal inertia of the oceans slows the rate of warming, while the carbon cycle inertia accelerates it. Our naive view of the “owed” warming is based on an understanding of only one of these, the thermal inertia of the ocean, because much of the literature talks only about climate sensitivity, and ignores the question of carbon sensitivity.
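
Here’s a deliberately crude sketch of that balancing act in Python, bolting a toy carbon cycle onto the same kind of two-box energy balance used in the earlier sketch. Everything here is an assumption chosen for illustration (the excess CO2 is simply split into pools with different removal timescales), but it reproduces the qualitative contrast: with zero emissions the warming stays roughly where it was (drifting slightly downward in this crude version), while holding the composition constant produces the familiar continued warming.

```python
import math

# A deliberately crude coupled carbon-climate sketch, contrasting the two
# definitions of committed warming. Every number is an assumption chosen for
# illustration (not from any published model): the excess atmospheric CO2 is
# split into three pools with different removal timescales, and the temperature
# part is the same kind of two-box energy balance as the earlier sketch.

PPM_PER_GTC = 0.47                                # 1 GtC ~ 0.47 ppm if it all stayed airborne
POOLS = [(0.2, None), (0.3, 200.0), (0.5, 30.0)]  # (fraction of emissions, removal timescale in years)
F2X, LAMBDA, GAMMA = 3.7, 1.1, 0.75               # forcing and feedback parameters
C_SURF, C_DEEP = 7.5, 100.0                       # heat capacities (W yr/m^2 per K)

def simulate(years, emissions=10.0, stop_year=100, constant_composition=False):
    pools = [0.0, 0.0, 0.0]      # excess CO2 (ppm) held in each pool
    t_surf = t_deep = 0.0
    held_co2 = None
    for year in range(years):
        for i, (frac, tau) in enumerate(POOLS):
            if year < stop_year:
                pools[i] += frac * emissions * PPM_PER_GTC   # this year's emissions
            if tau is not None:
                pools[i] -= pools[i] / tau                   # natural removal of the excess
        co2 = 280.0 + sum(pools)
        if constant_composition and year >= stop_year:
            held_co2 = co2 if held_co2 is None else held_co2  # freeze concentration at the stop year
            co2 = held_co2
        forcing = F2X * math.log(co2 / 280.0) / math.log(2)
        heat_to_deep = GAMMA * (t_surf - t_deep)
        t_surf += (forcing - LAMBDA * t_surf - heat_to_deep) / C_SURF   # yearly time step
        t_deep += heat_to_deep / C_DEEP
    return co2, t_surf

_, warming_at_stop = simulate(100)
print(f"Warming when emissions stop (year 100): {warming_at_stop:.2f} K")
for label, const in [("Zero emissions thereafter   ", False),
                     ("Constant composition instead", True)]:
    co2, temp = simulate(300, constant_composition=const)
    print(f"{label} (year 300): CO2 = {co2:4.0f} ppm, warming = {temp:.2f} K")
```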

The fact that these two forms of inertia tend to balance leads to another interesting observation: the models all show an approximately linear temperature response to cumulative carbon emissions. For example, here are the CMIP3 models, used in the IPCC AR4 report (the average of the models, indicated by the arrow, is around 1.6°C of warming per 1,000 gigatonnes of carbon):

[Figure: Temperature change against cumulative carbon emissions, CMIP3 models]

The same relationship seems to hold for the CMIP5 models, many of which now include a dynamic carbon cycle:

[Figure: Temperature change against cumulative carbon emissions, CMIP5 models]

This linear relationship isn’t determined by any physical properties of the climate system, and probably won’t hold in much warmer or cooler climates, nor when other feedback processes kick in. So we could say it’s a coincidental property of our current climate. However, it’s rather fortuitous for policy discussions.

Historically, we have emitted around 550 billion tonnes of carbon since the beginning of the industrial era, which gives us an expected temperature response of around 0.9°C. If we want to hold the temperature rise to no more than 2°C of warming, total future emissions should not exceed a further 700 billion tonnes of carbon. In effect, this gives us a total worldwide carbon budget for the future. The hard policy question, of course, is then how to allocate this budget among the nations (or people) of the world in an equitable way.
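
The arithmetic behind these numbers is simple enough to spell out, using the roughly 1.6°C per 1,000 GtC slope quoted above (the exact slope varies from model to model):

```python
# Back-of-the-envelope cumulative carbon budget, using the ~1.6 C per 1,000 GtC
# slope quoted above (the slope itself varies across models).
SLOPE = 1.6 / 1000.0      # degrees C of warming per GtC of cumulative emissions
EMITTED_SO_FAR = 550.0    # GtC emitted since the start of the industrial era
TARGET = 2.0              # warming threshold, in degrees C

warming_so_far = SLOPE * EMITTED_SO_FAR             # ~0.9 C
total_budget = TARGET / SLOPE                       # 1,250 GtC in total, ever
remaining_budget = total_budget - EMITTED_SO_FAR    # ~700 GtC left

print(f"Expected warming from emissions to date: {warming_so_far:.1f} C")
print(f"Total budget for staying under {TARGET:.0f} C:    {total_budget:.0f} GtC")
print(f"Remaining budget:                        {remaining_budget:.0f} GtC")
```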

[A few years ago, I blogged about a similar analysis, which says that cumulative carbon emissions should not exceed 1 trillion tonnes in total, ever. That calculation gives us a smaller future budget of less than 500 billion tonnes. That result came from an analysis using the Hadley model, which has one of the higher slopes on the graphs above. Which number we use for a global target might then depend on which model we believe gives the most accurate projections, and perhaps on how we factor in the uncertainties. If the uncertainty range across the models is accurate, then picking the average would give us a 50:50 chance of staying within the 2°C threshold. We might want better odds than that, and hence a smaller budget.]

In the National Academies report in 2011, the cumulative carbon budgets for each temperature threshold were given as follows (note the size of the uncertainty whiskers on each bar):

[Figure: Cumulative carbon budgets for different temperature thresholds, from the 2011 National Academies report]

[For a more detailed analysis see: Matthews, H. D., Solomon, S., & Pierrehumbert, R. (2012). Cumulative carbon as a policy framework for achieving climate stabilization. Philosophical Transactions of the Royal Society A, 370(1974), 4365–4379. doi:10.1098/rsta.2012.0064]

So, this allows us to clear up some popular misconceptions:

The idea that there is some additional warming owed, no matter what emissions pathway we follow, is incorrect. Zero future emissions means little to no future warming, so future warming depends entirely on future emissions. And while the idea of zero future emissions isn’t policy-relevant (because zero emissions is impossible, at least in the near future), it does have implications for how we discuss policy choices. In particular, it means the idea that CO2 emissions cuts will have no effect on temperature change for several decades is also incorrect. Every tonne of CO2 emissions avoided has an immediate effect on reducing the temperature response.

Another source of confusion is the emissions scenarios used in the IPCC report. They don’t diverge significantly for the first few decades, largely because we’re unlikely (and to some extent unable) to make massive emissions reductions in the next 1-2 decades, because society is very slow to respond to the threat of climate change, and even when we do respond, the amount of existing energy infrastructure that has to be rebuilt is huge. In this sense, there is some inevitable future warming, but it comes from future emissions that we cannot or will not avoid. In other words, political, socio-economic and technological inertia are the primary causes of future climate warming, rather than any properties of the physical climate system.

I’ve been collecting examples of different types of climate model that students can use in the classroom to explore different aspects of climate science and climate policy. In the long run, I’d like to use these to make the teaching of climate literacy much more hands-on and discovery-based. My goal is to foster more critical thinking, by having students analyze the kinds of questions people ask about climate, figure out how to put together good answers using a combination of existing data, data analysis tools, simple computational models, and more sophisticated simulations. And of course, learn how to critique the answers based on the uncertainties in the lines of evidence they have used.

Anyway, as a start, here’s a collection of runnable and not-so-runnable models, some of which I’ve used in the classroom:

Simple Energy Balance Models (for exploring the basic physics)

General Circulation Models (for studying earth system interactions)

  • EdGCM – an educational version of the NASA GISS general circulation model (well, an older version of it). EdGCM provides a simplified user interface for setting up model runs, but allows for some fairly sophisticated experiments. You typically need to let the model run overnight for a century-long simulation.
  • Portable University Model of the Atmosphere (PUMA) – a planet Simulator designed by folks at the University of Hamburg for use in the classroom to help train students interested in becoming climate scientists.

Integrated Assessment Models (for policy analysis)

  • C-Learn, a simple policy analysis tool from Climate Interactive. Allows you to specify emissions trajectories for three groups of nations, and explore the impact on global temperature. This is a simplified version of the C-ROADS model, which is used to analyze proposals during international climate treaty negotiations.
  • Java Climate Model (JCM) – a desktop assessment model that offers detailed control over different emissions scenarios and regional responses.

Systems Dynamics Models (to foster systems thinking)

  • Bathtub Dynamics and Climate Change from John Sterman at MIT. This simulation is intended to get students thinking about the relationship between emissions and concentrations, using the bathtub metaphor. It’s based on Sterman’s work on mental models of climate change.
  • The Climate Challenge: Our Choices, also from Sterman’s team at MIT. This one looks fancier, but gives you less control over the simulation – you can just pick one of three emissions paths: increasing, stabilized or reducing. On the other hand, it’s very effective at demonstrating the point about emissions vs. concentrations.
  • Carbon Cycle Model from Shodor, originally developed using Stella by folks at Cornell.
  • And while we’re on systems dynamics, I ought to mention toolkits for building your own systems dynamics models, such as Stella from ISEE Systems (here’s an example of it used to teach the global carbon cycle).

Other Related Models

  • A Kaya Identity Calculator, from David Archer at U Chicago. The Kaya identity is a way of expressing the interaction between the key drivers of carbon emissions: population growth, economic growth, energy efficiency, and the carbon intensity of our energy supply. Archer’s model allows you to play with these numbers (see the sketch after this list).
  • An Orbital Forcing Calculator, also from David Archer. This allows you to calculate what effect changes in the earth’s orbit and the wobble of its axis have on the solar energy that the earth receives, in any year in the past or future.
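
Since the Kaya identity is just a product of four factors (CO2 emissions = population × GDP per person × energy per unit of GDP × carbon per unit of energy), it’s easy to play with even without Archer’s calculator. Here’s a hand-rolled sketch; the numbers are round illustrative values of my own choosing, not authoritative statistics, but they multiply out to roughly the right ballpark for global emissions around 2010.

```python
# A hand-rolled sketch of the Kaya identity (not Archer's calculator). The
# values below are round, illustrative guesses, not authoritative statistics.
population       = 7.0e9     # people
gdp_per_capita   = 10_000    # dollars of GDP per person per year
energy_intensity = 7.0       # megajoules of primary energy per dollar of GDP
carbon_intensity = 0.018     # kg of carbon emitted per megajoule of energy

emissions_kg_carbon = (population * gdp_per_capita *
                       energy_intensity * carbon_intensity)
print(f"Implied emissions: {emissions_kg_carbon / 1e12:.1f} GtC per year")

# The identity is handy for quick "what if" arithmetic: if GDP per capita
# doubles while energy intensity is cut in half, emissions are unchanged, so
# any absolute cut has to come from the carbon intensity of the energy supply
# (or from the other terms).
```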

This term, I’m running my first year seminar course, “Climate Change: Software Science and Society” again. The outline has changed a little since last year, but the overall goals of the course are the same: to take a small, cross-disciplinary group of first year undergrads through some of the key ideas in climate modeling.

As last year, we’re running a course blog, and the first assignment is to write a blog post for it. Please feel free to comment on the students’ posts, but remember to keep your comments constructive!

Oh I do hate seeing blog posts with titles like “Easterbrook’s Wrong (Again)”. Luckily, it’s not me they’re talking about. It’s some other dude who, as far as I know, is completely unrelated to me. And that’s a damn good thing, as this Don Easterbrook appears to be a serial liar. Apparently he’s an emeritus geology prof from some university in the US. And because he’s ready to stand up and spout scientific-sounding nonsense about “global cooling”, he gets invited to talk to journalists all the time. His misinformation then gets duly repeated on blog threads all over the internet, despite the efforts of a small group of bloggers trying to clean up the mess:

In a way, this is another instance of the kind of denial of service attack I talked about last year. One retired professor fakes a few graphs, and they spread so widely over the internet that many good honest science bloggers have to stop what they’re doing, research the fraud, and expose it. And still they can’t stop the nonsense from spreading (just google “Don Easterbrook” to see how widely he’s quoted, usually in glowing terms).

The depressing thing is that he’s not the only Easterbrook doing this. I appear to be out-numbered: ClimateProgress, Sept 13, 2008: Gregg Easterbrook still knows nothing about global warming — and less about clean energy.

William Connolley has written a detailed critique of our paper “Engineering the Software for Understanding Climate Change”, which follows on from a very interesting discussion about “Amateurish Supercomputing Codes?” in his previous post. One of the issues raised in that discussion is the reward structure in scientific labs for software engineers versus scientists. The funding in such labs is pretty much all devoted to “doing science”, which invariably means publishable climate science research. People who devote time and effort to improving the engineering of the model code might get a pat on the back, but inevitably it’s under-rewarded because it doesn’t lead directly to publishable science. The net result is that all the labs I’ve visited so far (UK Met Office, NCAR, MPI-M) have too few software engineers working on the model code.

Which brings up another point. Even if these labs decided to devote more budget to the software engineering effort (and it’s not clear how easy it would be to do this, without re-educating funding agencies), where will they recruit the necessary talent? They could try bringing in software professionals who don’t yet have the domain expertise in climate science, and see what happens. I can’t see this working out well on a large scale. The more I work with climate scientists, the more I appreciate how much domain expertise it takes to understand the science requirements, and to develop climate code. The potential culture clash is huge: software professionals (especially seasoned ones) tend to be very opinionated about “the right way to build software”, and insensitive to contextual factors that might make their previous experiences inapplicable. I envision lots of the requirements that scientists care about most (e.g. the scientific validity of the models) getting trampled on in the process of “fixing” the engineering processes. Right now the trade-off between getting the science right versus having beautifully engineered models is tipped firmly in favour of the former. Tipping it the other way might be a huge mistake for scientific progress, and very few people seem to understand how to get both right simultaneously.

The only realistic alternative is to invest in training scientists to become good software developers. Greg Wilson is pretty much the only person around who is covering this need, but his software carpentry course is desperately underfunded. We’re going to need a lot more like this to fix things…

Last week, I ran a workshop for high school kids from across Toronto on “What can computer models tell us about climate change?“. I already posted some of the material I used: the history of our knowledge about climate change. Jorge, Jon and Val ran another workshop after mine, entitled “Climate change and the call to action: How you can make a difference“. They have already blogged their reflections: See Jon’s summary of the workshop plan, and reflections on how to do it better next time, and the need for better metaphors. I think both workshops could have done with being longer, for more discussion and reflection (we were scheduled only 75 minutes for each). But I enjoyed both workshops a lot, as I find it very useful for my own thinking to consider how to talk about climate change with kids, in this case mainly from grade 10 (≈15 years old).

The main idea I wanted to get across in my workshop was the role of computer models: what they are, and how we can use them to test out hypotheses about how the climate works. I really wanted to do some live experiments, but of course, this is a little hard when a typical climate simulation run takes weeks of processing time on a supercomputer. There are some tools that high school kids could play with in the classroom, but none of them are particularly easy to use, and of course, they all sacrifice resolution for ability to run on a desktop machine. Here are some that I’ve played with:

  • EdGCM – This is the most powerful of the bunch. It’s a full General Circulation Model (GCM), based on one of NASA’s models, and does support many different types of experiment. The license isn’t cheap (personally, I think it ought to be free and open source, but I guess they need a rich sponsor for that), but I’ve been playing with the free 30-day license. A full century of simulation tied up my laptop for 24 hours, but I kinda liked that, as it’s a bit like how you have to wait for results on a full-scale model too (it even got hot, and I had to think about how to cool it, again just like a real supercomputer…). I do like the way that the documentation guides you through the process of creating an experiment, and the idea of then ‘publishing’ the results of your experiment to a community website.
  • JCM – This is (as far as I can tell) a box model, that allows you to experiment with outcomes of various emissions scenarios, based on the IPCC projections, which means it’s simple enough to give interactive outputs. It’s free and open source, but a little cumbersome to use – the interface doesn’t offer enough guidance for novice users. It might work well in a workshop, with lots of structured guidance for how to use it, but I’m not convinced such a simplistic model offers much value over just showing some of the IPCC graphs and talking about them.
  • Climate Interactive (and the C-ROADS model). C-ROADS is also a box model, but with the emissions of different countries/regions separated out, to allow exploration of the choices in international climate negotiations. I’ve played a little with C-ROADS, and found it frustrating because it ignores all the physics, and after all, my main goal in playing with climate models with kids is to explore how climate processes work, rather than the much narrower task of analyzing policy choices. It also seems to be hard to tell the difference between different policy choices – even when I try to run it with extreme choices (cease all emissions next year vs. business as usual), the outputs are all of a similar shape (“it gets steadily warmer”). This may well be the correct output, but the overall message is a little unfortunate: whatever policy path we choose, the results look pretty similar. Showing the results of different policies as a set of graphs of the warming response doesn’t seem very insightful; it would be better to explore different regional impacts, but for that we’re back to needing a full GCM.
  • CCCSN – the Canadian Climate Change Scenarios Network. This isn’t a model at all, but rather a front end to the IPCC climate simulation dataset. The web tool allows you to get the results from a number of experiments that were run for the IPCC assessments, selecting which model you want, which scenario you want, which output variables you want (temperature, precipitation, etc), and allows you to extract just a particular region, or the full global data. I think this is more useful than C-ROADS, because once you download the data, you can graph it in various ways, and explore how different regions are affected.
  • Some online models collected by David Archer, which I haven’t played with much, but which include some box models, some 1-dimensional models, and the outputs of NCAR’s GCM (which I think is one of the datasets included in CCCSN). Not much explanation is provided here though – you have to know what you’re doing…
  • John Sterman’s Bathtub simulation. Again, a box model (actually, a stocks-and-flows dynamics model), but this one is intended more to educate people about basic systems dynamics principles than to explore policy choices. So I already like it better than C-ROADS, except that I think the user interface could do with a serious make-over, and there’s way too much explanatory text – there must be a way to do this with more hands-on activity and less exposition. It also suffers from a problem similar to C-ROADS: it allows you to control emissions pathways, and explore the result on atmospheric concentrations and hence temperature. But the problem is, we can’t control emissions directly – we have to put in place a set of policies and deploy alternative energy technologies to indirectly affect emissions. So either we’d want to run the model backwards (to ask what emissions pathway we’d have to follow to keep below a specific temperature threshold), or we’d want as inputs the things we can affect: technology deployments, government investment, cap and trade policies, energy efficiency strategies, etc.

None of these support the full range of experiments I’d like to explore in a kids’ workshop, but I think EdGCM is an excellent start, and access to the IPCC datasets via the CCCSN site might be handy. But I don’t like the models that focus just on how different emissions pathways affect global temperature change, because I don’t think these offer any useful learning opportunities about the science and about how scientists work.

I guess headlines like “An error found in one paragraph of a 3000 page IPCC report; climate science unaffected” wouldn’t sell many newspapers. And so instead, the papers spin out the story that a few mistakes undermine the whole IPCC process. As if newspapers never ever make mistakes. Well, of course, scientists are supposed to be much more careful than sloppy journalists, so “shock horror, those clever scientists made a mistake. Now we can’t trust them” plays well to certain audiences.

And yet there are bound to be errors; the key question is whether any of them impact any important results in the field. The error with the Himalayan glaciers in the Working Group II report is interesting because Working Group I got it right. And the erroneous paragraph in WGII quite clearly contradicts itself – a stupid mistake that should be pretty obvious to anyone reading the paragraph carefully. There’s obviously room for improvement in the editing and review process. But does this tell us anything useful about the overall quality of the review process?

There are errors in just about every book, newspaper, and blog post I’ve ever read. People make mistakes. Editorial processes catch many of them. Some get through. But few of these things have the kind of systematic review that the IPCC reports went through. Indeed, as large, detailed, technical artifacts, with extensive expert review, the IPCC reports are much less like normal books, and much more like large software systems. So, how many errors get through a typical review process for software? Is the IPCC doing better than this?

Even the best software testing and review practices in the world let errors through. Some examples (expressed in number of faults experienced in operation, per thousand lines of code):

  • Worst military systems: 55 faults/KLoC
  • Best military systems: 5 faults/KLoC
  • Agile software development (XP): 1.4 faults/KLoC
  • The Apache web server (open source): 0.5 faults/KLoC
  • NASA Space shuttle:  0.1 faults/KLoC

Because of the extensive review processes, the shuttle flight software is purported to be the most expensive in the world, in terms of dollars per line of code. Yet still about 1 error every ten thousand lines of code gets through the review and testing process. Thankfully none of those errors have ever caused a serious accident. When I worked for NASA on the Shuttle software verification in the 1990’s, they were still getting reports of software anomalies with every shuttle flight, and releasing a software update every 18 months (this, for an operational vehicle that had been flying for two decades, with only 500,000 lines of flight code!).

The IPCC reports consist of around 3,000 pages, with approaching 100 lines of text per page. Let’s assume I can equate a line of text with a line of code (which seems reasonable, when you look at the information density of each line in the IPCC reports) – that would make them as complex as a 300,000-line software system. If the IPCC review process is as thorough as NASA’s, then we should still expect around 30 significant errors to have made it through the review process. We’ve heard of two recently – does this mean we have to endure another 28 stories, spread out over the next few months, as the drone army of denialists toils through trying to find more mistakes? Actually, it’s probably worse than that…
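
For the record, here’s that back-of-the-envelope calculation spelled out, reusing the fault densities from the list above:

```python
# Rough error estimate for the IPCC reports, treating lines of text like lines
# of code and reusing the fault densities quoted in the list above.
pages, lines_per_page = 3000, 100
report_lines = pages * lines_per_page            # roughly 300,000 "lines"

shuttle_rate = 0.1 / 1000     # faults per line: NASA shuttle, the best case quoted
xp_rate = 1.4 / 1000          # faults per line: good agile development practice

print(f"Errors expected at shuttle-quality review: {report_lines * shuttle_rate:.0f}")
print(f"Errors expected at agile-quality review:   {report_lines * xp_rate:.0f}")
```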

The IPCC writing, editing and review processes are carried out entirely by unpaid volunteers. They don’t have automated testing and static analysis tools to help – human reviewers are the only kind of review available. So they’re bound to do much worse than NASA’s flight software. I would expect there to be 100s of errors in the reports, even with the best possible review processes in the world. Somebody point me to a technical review process anywhere that can do better than this, and I’ll eat my hat. Now, what was the point of all those newspaper stories again? Oh, yes, sensationalism sells.