I was doing some research on Canada’s climate targets recently, and came across this chart, presented as part of Canada’s Intended Nationally Determined Contribution (INDC) under the Paris Agreement:

[Figure: Canada’s INDC pledge chart]

Looks good, right? It certainly conveys the message that Canada’s well on track, and that the target for 2030 is ambitious (compared to a business-as-usual pathway). Climate change solved, eh?

But the chart is an epic example of misdirection. Here’s another chart that pulls the same trick, this time from the Government’s Climate Change website, and apparently designed to make the 2030 target look bravely ambitious:

[Figure: GHG emissions trends chart from the Government of Canada’s climate change website, 2016]

So I downloaded the data and produced my own chart, with a little more perspective added. I wanted to address several ways in which the above charts represent propaganda, rather than evidence:

  • By cutting off the Y axis at 500 Mt, the chart hides the real long-term evidence-based goal for climate policy: zero emissions;
  • Canada has consistently failed to meet any of its climate targets in the past, while the chart seems to imply we’re doing rather well;
  • The chart conflates two different measures. The curves showing actual emissions exclude net removal from forestry (officially known as Land Use, Land Use Change, and Forestry, or LULUCF), while Canada fully intends to include this in its accounting for achieving the 2030 target. So if you plot the target on the same chart with emissions, honesty dictates you should adjust the target accordingly.

Here’s my “full perspective” chart. Note that the first target shown here in grey was once Liberal party policy in the early 1990s; the remainder were official federal government targets. Each is linked to the year it was first proposed. The “fair effort” for Canada comes from Climate Action Tracker’s analysis:

[Figure: Canada’s Climate Targets]

The correct long-term target for carbon emissions is, of course, zero. Every tonne of CO2 emitted makes the problem worse, and there’s no magic fairy that removes these greenhouse gases from the atmosphere once we’ve emitted them. So until we get to zero emissions, we’re making the problem worse, and the planet keeps warming. Worse still, the only plausible pathways to keep us below the UN’s upper limit of 2°C of warming require us to do even better than this: we have to go carbon negative before the end of the century.

Misleading charts from the government of Canada won’t help us get on the right track.

This is an excerpt from the draft manuscript of my forthcoming book, Computing the Climate.

While models are used throughout the sciences, the word ‘model’ can mean something very different to scientists from different fields. This can cause great confusion. I often encounter scientists from outside of climate science who think climate models are statistical models of observed data, and that future projections from these models must be just extrapolations of past trends. And just to confuse things further, some of the models used in climate policy analysis are like this. But the physical climate models that underpin our knowledge of why climate change occurs are fundamentally different from statistical models.

A useful distinction made by philosophers of science is between models of phenomena, and models of data. The former include models developed by physicists and engineers to capture cause-and-effect relationships. Such models are derived from theory and experimentation, and have explanatory power: the model captures the reasons why things happen. Models of data, on the other hand, describe patterns in observed data, such as correlations and trends over time, without reference to why they occur. Statistical models, for example, describe common patterns (distributions) in data, without saying anything about what caused them. This simplifies the job of describing and analyzing patterns: if you can find a statistical model that matches your data, you can reduce the data to a few parameters (sometimes just two: a mean and a standard deviation). For example, the heights of any large group of people tend to follow a normal distribution—the bell-shaped curve—but this model doesn’t explain why heights vary in that way, nor whether they always will in the future. New techniques from machine learning have extended the power of these kinds of models in recent years, allowing much more complex kinds of pattern to be discovered by “training” an algorithm on the data.
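
To make the idea of a “model of data” concrete, here is a minimal sketch in Python (with synthetic, made-up numbers): it reduces a whole dataset of heights to just the two parameters of a normal distribution, which is all a purely statistical model retains.

```python
import numpy as np

# Purely illustrative: synthetic "height" data standing in for real observations.
rng = np.random.default_rng(42)
heights_cm = rng.normal(loc=170, scale=8, size=10_000)

# A model *of the data*: assume a normal distribution, and summarize the
# whole dataset with just two parameters.
mean, std = heights_cm.mean(), heights_cm.std(ddof=1)
print(f"model of the data: Normal(mean={mean:.1f} cm, sd={std:.1f} cm)")

# The model describes the pattern, but says nothing about *why* heights are
# distributed this way, nor whether they will stay that way in the future.
```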

Statistical techniques and machine learning algorithms are good at discovering patterns in data (e.g. “A and B always seem to change together”), but hopeless at explaining why those patterns occur. To get around this, many branches of science use statistical methods together with controlled experiments, so that if we find a pattern in the data after we’ve carefully manipulated the conditions, we can argue that the changes we introduced in the experiment caused that pattern. The ability to identify a causal relationship in a controlled experiment has nothing to do with the statistical model used—it comes from the logic of the experimental design. Only if the experiment is designed properly will statistical analysis of the results provide any insights into cause and effect.

Unfortunately, for some scientific questions, experimentation is hard, or even impossible. Climate change is a good example. Even though it’s possible to manipulate the climate (as indeed we are currently doing, by adding more greenhouse gases), we can’t set up a carefully controlled experiment, because we only have one planet to work with. Instead, we use numerical models, which simulate the causal factors—a kind of virtual experiment. An experiment conducted in a causal model won’t necessarily tell us what will happen in the real world, but it often gives a very useful clue. If we run the virtual experiment many times in our causal model, under slightly varied conditions, we can then turn back to a statistical model to help analyze the results. But without the causal model to set up the experiment, a statistical analysis won’t tell us much.
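
As a toy illustration of that workflow (and nothing like a real climate model), here is a hedged sketch: a one-equation causal model is run hundreds of times under slightly varied assumptions, and a statistical summary is then used to describe the ensemble of results. Every parameter value below is invented for illustration.

```python
import numpy as np

def toy_energy_balance(forcing, feedback, heat_capacity=8.0, years=200, dt=0.1):
    """Cartoon causal model (zero-dimensional energy balance):
    heat_capacity * dT/dt = forcing - feedback * T. Not a real climate model."""
    temp = 0.0
    for _ in range(int(years / dt)):
        temp += dt * (forcing - feedback * temp) / heat_capacity
    return temp

# The virtual experiment: run the causal model many times under slightly
# varied conditions (here, an uncertain feedback strength).
rng = np.random.default_rng(0)
ensemble = [toy_energy_balance(forcing=3.7, feedback=f)
            for f in rng.uniform(0.9, 1.5, size=500)]

# A statistical model then summarizes the ensemble of causal-model runs.
print(f"simulated warming: {np.mean(ensemble):.1f} ± {np.std(ensemble):.1f} °C")
```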

Both traditional statistical models and modern machine learning techniques are brittle, in the sense that they struggle when confronted with new situations not captured in the data from which the models were derived. An observed statistical trend projected into the future is only useful as a predictor if the future is like the past; it will be a very poor predictor if the conditions that cause the trend change. Climate change in particular is likely to make a mess of all of our statistical models, because the future will be very unlike the past. In contrast, a causal model based on the laws of physics will continue to give good predictions, as long as the laws of physics still hold.

Modern climate models contain elements of both types of model. The core elements of a climate model capture cause-and-effect relationships from basic physics, such as the thermodynamics and radiative properties of the atmosphere. But these elements are supplemented by statistical models of phenomena such as clouds, which are less well understood. To a large degree, our confidence in future predictions from climate models comes from the parts that are causal models based on physical laws, and the uncertainties in these predictions derive from the parts that are statistical summaries of less well-understood phenomena. Over the years, many of the improvements in climate models have come from removing a component that was based on a statistical model, and replacing it with a causal model. And our confidence in the causal components in these models comes from our knowledge of the laws of physics, and from running a very large number of virtual experiments in the model to check whether we’ve captured these laws correctly in the model, and whether they really do explain climate patterns that have been observed in the past.

One of the biggest challenges in understanding climate change is that the timescales involved are far longer than most people are used to thinking about. Garvey points out that this makes climate change different from any other ethical question, because both the causes and consequences are smeared out across time and space:

“There is a sense in which my actions and the actions of my present fellows join with the past actions of my parents, grandparents and great-grandparents, and the effects resulting from our actions will still be felt hundreds, even thousands of years in the future. It is also true that we are, in a way, stuck with the present we have because of our past. The little actions I undertake which keep me warm and dry and fed are what they are partly because of choices made by people long dead. Even if I didn’t want to burn fossil fuels, I’m embedded in a culture set up to do so.” (Garvey, 2008, p60)

Part of the problem is that the physical climate system is slow to respond to our additional greenhouse gas emissions, and similarly slow to respond to reductions in emissions. The first part of this is core to a basic understanding of climate change, as it’s built into the idea of equilibrium climate sensitivity (roughly speaking, the expected temperature rise for each doubling of CO2 concentrations in the atmosphere). The extra heat that’s trapped by the additional greenhouse gases builds up over time, and the planet warms slowly, but the oceans have such a large thermal mass that it takes decades for this warming process to complete.
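
To see roughly what equilibrium climate sensitivity means in practice, here is a back-of-envelope sketch. The logarithmic rule of thumb and the sensitivity value of 3°C per doubling are standard textbook approximations, used here purely for illustration.

```python
import math

# Back-of-envelope: eventual warming ≈ ECS × log2(CO2 / pre-industrial CO2).
# The ECS value of 3 °C per doubling is a representative mid-range figure,
# not a precise number.
def equilibrium_warming(co2_ppm, co2_preindustrial=280.0, ecs=3.0):
    return ecs * math.log2(co2_ppm / co2_preindustrial)

print(equilibrium_warming(560))  # one doubling: about 3 °C, eventually
print(equilibrium_warming(400))  # ~400 ppm: about 1.5 °C once the oceans catch up
```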

Unfortunately, the second part, that the planet takes a long time to respond to reductions in emissions, is harder to explain, largely because of the common assumption that CO2 will behave like other pollutants, which wash out of the atmosphere fairly quickly once we stop emitting them. This assumption underlies much of the common wait-and-see response to climate change, as it gives rise to the myth that once we get serious about climate change (e.g. because we start to see major impacts), we can fix the problem fairly quickly. Unfortunately, this is not true at all, because CO2 is a long-lived greenhouse gas. About half of human CO2 emissions are absorbed by the oceans and soils, over a period of several decades. The remainder stays in the atmosphere. There are several natural processes that remove the remaining CO2 from the atmosphere, but they take thousands of years, which means that even with zero greenhouse gas emissions, we’re likely stuck with the consequences of life on a warmer planet for centuries.
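
Here is a minimal sketch of that long tail. The pool sizes and timescales are made-up stand-ins chosen only to illustrate the shape of the decay, not to reproduce the real carbon cycle.

```python
import math

# Cartoon of carbon cycle inertia: treat a pulse of emitted CO2 as a few
# pools drawn down on very different timescales. Shares and timescales are
# illustrative stand-ins, not the real (far more complicated) carbon cycle.
POOLS = [(0.5, 30.0), (0.3, 300.0), (0.2, float("inf"))]  # (share, e-folding years)

def airborne_fraction(years_after_pulse):
    return sum(share * math.exp(-years_after_pulse / tau) for share, tau in POOLS)

for years in (10, 100, 1000):
    print(f"{years:5d} years on: roughly {airborne_fraction(years):.0%} of the pulse remains")
```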

So the physical climate system presents us with two forms of inertia, one that delays the warming due to greenhouse gas emissions, and one that delays the reduction in that warming in response to reduced emissions:

  1. The thermal inertia of the planet’s surface (largely due to the oceans), by which the planet can keep absorbing extra heat for years before it makes a substantial difference to surface temperatures. (scale: decades)
  2. The carbon cycle inertia by which CO2 is only removed from the atmosphere very slowly, and has a continued warming effect for as long as it’s there. (scale: decades to millennia)

For more on how these forms of inertia affect future warming scenarios, see my post on committed warming.

But these are not the only forms of inertia that matter. There are also various kinds of inertia in the socio-economic system that slow down our response to climate change. For example, Davis et al. attempt to quantify the emissions from all the existing energy infrastructure (power plants, factories, cars, buildings, and so on, that already exist and are in use), because even under the most optimistic scenario, it will take decades to replace all this infrastructure with clean energy alternatives. Here’s an example of their analysis, under the assumption that things we’ve already built will not be retired early. This assumption is reasonable because (1) it’s rare that we’re willing to bear the cost of premature retirement of infrastructure and (2) it’s going to be hard enough building enough new clean energy infrastructure fast enough to replace stuff that has worn out while meeting increasing demand.

[Figure: Infrastructural inertia]

Expected ongoing carbon dioxide emissions from existing infrastructure. Includes primary infrastructure only – i.e. infrastructure that directly releases CO2 (e.g. cars & trucks), but not infrastructure that encourages the continued production of devices that emit CO2 (e.g. the network of interstate highways in the US). From Davis et al., 2010.

So that gives us our third form of inertia:

  3. Infrastructural inertia from existing energy infrastructure, as emissions of greenhouse gases will continue from everything we’ve built in the past, until it can be replaced. (scale: decades)

We’ve known about the threat of climate change for decades, and various governments and international negotiations have attempted to deal with it, and yet have made very little progress. Which suggests there are more forms of inertia that we ought to be able to name and quantify. To do this, we need to look at the broader socio-economic system that ought to allow us as a society to respond to the threat of climate change. Here’s a schematic of that system, as a systems dynamic model:

The socio-geophysical system. Arrows labelled ‘+’ are positive influence links (“A rise in X tends to cause a rise in Y, and a fall in X tends to cause a fall in Y”). Arrows labelled ‘-’ represent negative links, where a rise in X tends to cause a fall in Y, and vice versa. The arrow labelled with a tap (faucet) is an accumulation link: Y will continue to rise even while X is falling, until X reaches net zero.

Broadly speaking, decarbonization will require both changes in technology and changes in human behaviour. But before we can do that, we have to recognize and agree that there is a problem, develop an agreed set of coordinated actions to tackle it, and then implement the policy shifts and behaviour changes to get us there.

At first, this diagram looks promising: once we realise how serious climate change is, we’ll take the corresponding actions, and that will bring down emissions, solving the problem. In other words, the more carbon emissions go up, the more they should drive a societal response, which in turn (eventually) will reduce emissions again. But the diagram includes a subtle but important twist: the link from carbon emissions to atmospheric concentrations is an accumulation link. Even as emissions fall, the amount of greenhouse gases in the atmosphere continues to rise. The latter rise only stops when carbon emissions reach zero. Think of a tap filling a bathtub – if you reduce the inflow of water, the level of water in the tub still rises, until you turn the tap off completely.
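
Here is the bathtub logic in a few lines of Python, with arbitrary numbers, purely to show how an accumulation link behaves:

```python
# The accumulation ("bathtub") link in miniature: even while the annual
# inflow falls, the stock keeps rising until the inflow reaches zero.
# Units and numbers are arbitrary; this shows the diagram's logic, not data.
emissions = 40.0   # inflow per year
stock = 1000.0     # amount already accumulated in the atmosphere

for year in range(1, 11):
    emissions = max(emissions - 5.0, 0.0)   # steadily turn down the tap
    stock += emissions                      # the tub keeps filling...
    print(f"year {year:2d}: emissions = {emissions:4.1f}, stock = {stock:6.1f}")
# ...and only stops rising in the year the inflow finally reaches zero.
```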

Worse still, there are plenty more forms of inertia hidden in the diagram, because each of the causal links takes time to operate. I’ve given these additional sources of inertia names:

[Figure: Sources of inertia in the socio-geophysical climate system]

For example, there are forms of inertia that delay the impacts of increased temperatures, both on ecosystems and on human society. Most of the systems that are impacted by climate change can absorb smaller changes in the climate without much noticeable difference, but then reach a threshold whereby they can no longer be sustained. I’ve characterized two forms of inertia here:

  4. Natural variability (or “signal to noise”) inertia, which arises because initially, temperature increases due to climate change are much smaller than the internal variability of daily and seasonal weather patterns. Hence it takes a long time for the ‘signal’ of climate change to emerge from the noise of natural variability. (scale: decades)
  5. Ecosystem resilience. We tend to think of resilience as a good thing – defined informally as the ability of a system to ‘bounce back’ after a shock. But resilience can also mask underlying changes that push a system closer and closer to a threshold beyond which it cannot recover. So this form of inertia acts by masking the effect of that change, sometimes until it’s too late to act. (scale: years to decades)

Then, once we identify the impacts of climate change (whether in advance or after the fact), it takes time for these to feed into the kind of public concern needed to build agreement on the need for action:

  6. Societal resilience. Human society is very adaptable. When storms destroy our buildings, we just rebuild them a little stronger. When drought destroys our crops, we just invent new forms of irrigation. Just as with ecosystems, there is a limit to this kind of resilience, when subjected to a continual change. But our ability to shrug and get on with things causes a further delay in the development of public concern about climate change. (scale: decades?)
  7. Denial. Perhaps even stronger than human resilience is our ability to fool ourselves into thinking that something bad is not happening, and to look for other explanations than the ones that best fit the evidence. Denial is a pretty powerful form of inertia. Denial stops addicts from acknowledging they need to seek help to overcome addiction, and it stops all of us from acknowledging we have a fossil fuel addiction, and need help to deal with it. (scale: decades to generations?)

Even then, public concern doesn’t immediately translate into effective action because of:

  8. Individualism. A frequent response to discussions on climate change is to encourage people to make personal changes in their lives: change your lightbulbs, drive a little less, fly a little less. While these things are important in the process of personal discovery, by helping us understand our individual impact on the world, they are a form of voluntary action only available to the privileged, and hence do not constitute a systemic solution to climate change. When the systems we live in drive us towards certain consumption patterns, it takes a lot of time and effort to choose a low-carbon lifestyle. So the only way this scales is through collective political action: getting governments to change the regulations and price structures that shape what gets built and what we consume, and making governments and corporations accountable for cutting their greenhouse gas contributions. (scale: decades?)

When we get serious about the need for coordinated action, there are further forms of inertia that come into play:

  9. Missing governance structures. We simply don’t have the kind of governance at either the national or international level that can put in place meaningful policy instruments to tackle climate change. The Kyoto process failed because the short-term individual interests of the national governments who have the power to act always tend to outweigh the long-term collective threat of climate change. The Paris agreement is woefully inadequate for the same reason. Similarly, national governments are hampered by the need to respond to special interest groups (especially large corporations), which means legislative change is a slow, painful process. (scale: decades!)
  10. Bureaucracy. It hampers the implementation of new policy tools. It takes time to get legislation formulated and agreed, and it takes time to set up the necessary institutions to ensure it is implemented. (scale: years)
  11. Social Resistance. People don’t like change, and some groups fight hard to resist changes that conflict with their own immediate interests. Every change in social norms is accompanied by pushback. And even when we welcome change and believe in it, we often slip back into old habits. (scale: years? generations?)

Finally, development and deployment of clean energy solutions experience a large number of delays:

  12. R&D lag. It takes time to ramp up new research and development efforts, due to the lack of qualified personnel, the glacial speed at which research institutions such as universities operate, and the tendency, especially in academia, for researchers to keep working on what they’ve always worked on in the past, rather than addressing societally important issues. Research on climate solutions is inherently trans-disciplinary, and existing research institutions tend to be very bad at supporting work that crosses traditional boundaries. (scale: decades?)
  13. Investment lag: A wholesale switch from fossil fuels to clean energy and energy efficiency will require huge upfront investment. Agencies that have funding to enable this switch (governments, investment portfolio managers, venture capitalists) tend to be very risk averse, and so prefer things that they know offer a return on investment – e.g. more oil wells and pipelines rather than new cleantech alternatives. (scale: years to decades)
  14. Diffusion of innovation: new technologies tend to take a long time to reach large scale deployment, following the classic s-shaped curve, with a small number of early adopters, and, if things go well, a steadily rising adoption curve, followed by a tailing off as laggards resist new technologies. Think about electric cars: while the technology has been available for years, they still only constitute less than 1% of new car sales today. Here’s a study that predicts this will rise to 35% by 2040 (see the sketch after this list). Think about that for a moment – if we follow the expected diffusion of innovation pattern, two thirds of new cars in 2040 will still have internal combustion engines. (scale: decades)
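
Here is the sketch mentioned above: a logistic (s-shaped) adoption curve, with parameters invented purely to roughly match the numbers quoted in the last item (well under 1% of sales now, about a third of sales around 2040). It is not taken from the study.

```python
import math

# Illustrative logistic ("s-shaped") diffusion curve for a new technology.
# The steepness and midpoint are invented so that adoption is under 1% of
# sales at the start and roughly a third of sales around 2040.
def adoption_share(year, midpoint=2043, steepness=0.18):
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in (2016, 2030, 2040):
    print(year, f"{adoption_share(year):.1%} of new sales")
# Even on this curve, most new sales decades from now still go to the incumbent.
```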

All of these forms of inertia slow the process of dealing with climate change, allowing the warming to steadily increase while we figure out how to overcome them. So the key problem isn’t how to address climate change by switching from the current fossil fuel economy to a carbon-neutral one – we probably have all the technologies to do this today. The problem is how to do it fast enough. To stay below 2°C of warming, the world needs to cut greenhouse gas emissions by 50% by 2030, and achieve carbon neutrality in the second half of the century. We’ll have to find a way of overcoming many different types of inertia if we are to make it.

I’ve been exploring how Canada’s commitments to reduce greenhouse gas emissions stack up against reality, especially in the light of the government’s recent decision to stick with the emissions targets set by the previous administration.

Once upon a time, Canada was considered a world leader on climate and environmental issues. The Montreal Protocol on Substances that Deplete the Ozone Layer, signed in 1987, is widely regarded as the most successful international agreement on environmental protection ever. A year later, Canada hosted a conference on The Changing Atmosphere: Implications for Global Security, which helped put climate change on the international political agenda. This conference was one of the first to identify specific targets to avoid dangerous climate change, recommending a global reduction in greenhouse gas emissions of 20% by 2005. It didn’t happen.

It took another ten years before an international agreement to cut emissions was reached: the Kyoto Protocol in 1997. Hailed as a success at the time, it became clear over the ensuing years that with non-binding targets, the agreement was pretty much a sham. Under Kyoto, Canada agreed to cut emissions to 6% below 1990 levels by the 2008-2012 period. It didn’t happen.

At the Copenhagen talks in 2009, Canada proposed an even weaker goal: 17% below 2005 levels (which corresponds to 1.5% above 1990 levels) by 2020. Given that emissions have risen steadily since then, it probably won’t happen. By 2011, facing an embarrassing gap between its Kyoto targets and reality, the Harper administration formally withdrew from Kyoto – the only country ever to do so.

Last year, in preparation for the Paris talks, the Harper administration submitted a new commitment: 30% below 2005 levels by 2030. At first sight it seems better than previous goals. But it includes a large slice of expected international credits and carbon sequestered in wood products, as Canada incorporates Land Use, Land Use Change and Forestry (LULUCF) into its carbon accounting. In terms of actual cuts in greenhouse gas emissions, the target represents approximately 8% above 1990 levels.

The new government, elected in October 2015, trumpeted a renewed approach to climate change, arguing that Canada should be a world leader again. At the Paris talks in 2015, the Trudeau administration proudly supported both the UN’s commitment to keep global temperatures below 2°C of warming (compared to the pre-industrial average), and voiced strong support for an even tougher limit of 1.5°C. However, the government has chosen to stick with the Harper administration’s original Paris targets.

It is clear that the global commitments under the Paris agreement fall a long way short of what is needed to stay below 2°C, and Canada’s commitment has been rated as one of the weakest. Based on IPCC assessments, to limit warming below 2°C, global greenhouse gas emissions will need to be cut by about 50% by 2030, and eventually reach zero net emissions globally (which will probably mean zero use of fossil fuels, as assumptions about negative emissions seem rather implausible). As Canada has much greater wealth and access to resources than most nations, much greater per capita emissions than all but a few nations, and much greater historical responsibility for emissions than most nations, a “fair” effort would have Canada cutting emissions much faster than the global average, to allow room for poorer nations to grow their emissions, at least initially, to alleviate poverty. Climate Action Tracker suggests 67% below 1990 emissions by 2030 is a fair target for Canada.

Here’s what all of this looks like. Note: emissions data from Government of Canada; the Toronto 1988 target was never formally adopted, but was Liberal party policy in the early 1990s. Global 2°C pathway 2030 target from SEI; emissions projection, LULUCF adjustment, and “fair” 2030 target from CAT.

[Figure: Canada’s Climate Targets]

Several things jump out at me from this chart. First, the complete failure to implement policies that would have allowed us to meet any of these targets. The dip in emissions from 2008-2010, which looked promising for a while, was due to the financial crisis and economic downturn, rather than any actual climate policy. Second, the similar slope of the line to each target, which represents the expected rate of decline from when the target was proposed to when it ought to be attained. At no point has there been any attempt to make up lost ground after each failed target. Finally, in terms of absolute greenhouse gas emissions, each target is worse than the previous ones. Shifting the baseline from 1990 to 2005 masks much of this, and shows that successive governments are more interested in optics than serious action on climate change.

At no point has Canada ever adopted science-based targets capable of delivering on its commitment to keep warming below 2°C.

Today I’ve been tracking down the origin of the term “Greenhouse Effect”. The term itself is problematic, because it only works as a weak metaphor: both the atmosphere and a greenhouse let the sun’s rays through, and then trap some of the resulting heat. But the mechanisms are different. A greenhouse stays warm by preventing warm air from escaping. In other words, it blocks convection. The atmosphere keeps the planet warm by preventing (some wavelengths of) infra-red radiation from escaping. The “greenhouse effect” is really the result of many layers of air, each absorbing infra-red from the layer below, and then re-emitting it both up and down. The rate at which the planet then loses heat is determined by the average temperature of the topmost layer of air, where this infra-red finally escapes to space. So not really like a greenhouse at all.
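
To put rough numbers on that picture, here is a back-of-envelope sketch using standard textbook values: the planet’s effective emission temperature is fixed by the sunlight it absorbs, and the surface sits warmer than that by roughly the lapse rate times the average height from which the infra-red escapes.

```python
# Rough numbers behind the "not really a greenhouse" explanation above,
# using standard textbook approximations (not output from a climate model).
SOLAR_CONSTANT = 1361.0   # W/m^2 arriving at the top of the atmosphere
ALBEDO = 0.3              # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

# The planet as a whole must radiate away what it absorbs, which fixes an
# effective emission temperature of about 255 K.
effective_temp = ((1 - ALBEDO) * SOLAR_CONSTANT / (4 * SIGMA)) ** 0.25

# That radiation leaves from high in the atmosphere; the surface below is
# warmer by roughly the lapse rate times the average emission height.
lapse_rate_k_per_km = 6.5
emission_height_km = 5.0
surface_temp = effective_temp + lapse_rate_k_per_km * emission_height_km

print(f"effective emission temperature ≈ {effective_temp:.0f} K")  # ≈ 255 K
print(f"implied surface temperature    ≈ {surface_temp:.0f} K")    # ≈ 288 K
```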

So how did the effect acquire this name? The 19th century French mathematician Joseph Fourier is usually credited as the originator of the idea in the 1820s. However, it turns out he never used the term, and as James Fleming (1999) points out, most authors writing about the history of the greenhouse effect cite only secondary sources on this, without actually reading any of Fourier’s work. Fourier does mention greenhouses in his 1822 classic “Analytical Theory of Heat”, but not in connection with planetary temperatures. The book was published in French, so he uses the French “les serres”, but it appears only once, in a passage on properties of heat in enclosed spaces. The relevant paragraph translates as:

“In general the theorems concerning the heating of air in closed spaces extend to a great variety of problems. It would be useful to revert to them when we wish to foresee and regulate the temperature with precision, as in the case of green-houses, drying-houses, sheep-folds, work-shops, or in many civil establishments, such as hospitals, barracks, places of assembly” [Fourier, 1822; appears on p73 of the edition translated by Alexander Freeman, published 1878, Cambridge University Press]

In his other writings, Fourier did hypothesize that the atmosphere plays a role in slowing the rate of heat loss from the surface of the planet to space, hence keeping the ground warmer than it might otherwise be. However, he never identified a mechanism, as the properties of what we now call greenhouse gases weren’t established until John Tyndall’s experiments in the 1850s. In explaining his hypothesis, Fourier refers to a “hotbox”, a device invented by the explorer de Saussure, to measure the intensity of the sun’s rays. The hotbox had several layers of glass in the lid which allowed the sun’s rays to enter, but blocked the escape of the heated air via convection. But it was only a metaphor. Fourier understood that whatever the heat trapping mechanism in the atmosphere was, it didn’t actually block convection.

Svante Arrhenius was the first to attempt a detailed calculation of the effect of changing levels of carbon dioxide in the atmosphere, in 1896, in his quest to test a hypothesis that the ice ages were caused by a drop in CO2. Accordingly, he’s also sometimes credited with inventing the term. However, he also didn’t use the term “greenhouse” in his papers, although he did invoke a metaphor similar to Fourier’s, using the Swedish word “drivbänk”, which translates as hotbed (Update: or possibly “hothouse” – see comments).

So the term “greenhouse effect” wasn’t coined until the 20th Century. Several of the papers I’ve come across suggest that the first use of the term “greenhouse” in this connection in print was in 1909, in a paper by Wood. This seems rather implausible though, because the paper in question is really only a brief commentary explaining that the idea of a “greenhouse effect” makes no sense, as a simple experiment shows that greenhouses don’t work by trapping outgoing infra-red radiation. The paper is clearly reacting to something previously published on the greenhouse effect, and which Wood appears to take way too literally.

A little digging produces a 1901 paper by Nils Ekholm, a Swedish meteorologist who was a close colleague of Arrhenius, which does indeed use the term ‘greenhouse’. At first sight, he seems to use the term more literally than is warranted, although in subsequent paragraphs, he explains the key mechanism fairly clearly:

The atmosphere plays a very important part of a double character as to the temperature at the earth’s surface, of which the one was first pointed out by Fourier, the other by Tyndall. Firstly, the atmosphere may act like the glass of a green-house, letting through the light rays of the sun relatively easily, and absorbing a great part of the dark rays emitted from the ground, and it thereby may raise the mean temperature of the earth’s surface. Secondly, the atmosphere acts as a heat store placed between the relatively warm ground and the cold space, and thereby lessens in a high degree the annual, diurnal, and local variations of the temperature.

There are two qualities of the atmosphere that produce these effects. The one is that the temperature of the atmosphere generally decreases with the height above the ground or the sea-level, owing partly to the dynamical heating of descending air currents and the dynamical cooling of ascending ones, as is explained in the mechanical theory of heat. The other is that the atmosphere, absorbing but little of the insolation and the most of the radiation from the ground, receives a considerable part of its heat store from the ground by means of radiation, contact, convection, and conduction, whereas the earth’s surface is heated principally by direct radiation from the sun through the transparent air.

It follows from this that the radiation from the earth into space does not go on directly from the ground, but on the average from a layer of the atmosphere having a considerable height above sea-level. The height of that layer depends on the thermal quality of the atmosphere, and will vary with that quality. The greater is the absorbing power of the air for heat rays emitted from the ground, the higher will that layer be, But the higher the layer, the lower is its temperature relatively to that of the ground ; and as the radiation from the layer into space is the less the lower its temperature is, it follows that the ground will be hotter the higher the radiating layer is.” [Ekholm, 1901, p19-20]

At this point, it’s still not called the “greenhouse effect”, but this metaphor does appear to have become a standard way of introducing the concept. But in 1907, the English scientist John Henry Poynting confidently introduces the term “greenhouse effect”, in his criticism of Percival Lowell’s analysis of the temperature of the planets. He uses it in scare quotes throughout the paper, which suggests the term is newly minted:

“Prof. Lowell’s paper in the July number of the Philosophical Magazine marks an important advance in the evaluation of planetary temperatures, inasmuch as he takes into account the effect of planetary atmospheres in a much more detailed way than any previous writer. But he pays hardly any attention to the “blanketing effect,” or, as I prefer to call it, the “greenhouse effect” of the atmosphere.” [Poynting, 1907, p749]

And he goes on:

The “greenhouse effect” of the atmosphere may perhaps be understood more easily if we first consider the case of a greenhouse with horizontal roof of extent so large compared with its height above the ground that the effect of the edges may be neglected. Let us suppose that it is exposed to a vertical sun, and that the ground under the glass is “black” or a full absorber. We shall neglect the conduction and convection by the air in the greenhouse. [Poynting, 1907, p750]

He then goes on to explore the mathematics of heat transfer in this idealized greenhouse. Unfortunately, he ignores Ekholm’s crucial observation that it is the rate of heat loss at the upper atmosphere that matters, so his calculations are mostly useless. But his description of the mechanism does appear to have taken hold as the dominant explanation. The following year, Frank Very published a response (in the same journal), using the term “Greenhouse Theory” in the title of the paper. He criticizes Poynting’s idealized greenhouse as way too simplistic, but suggests a slightly better metaphor is a set of greenhouses stacked one above another, each of which traps a little of the heat from the one below:

It is true that Professor Lowell does not consider the greenhouse effect analytically and obviously, but it is nevertheless implicitly contained in his deduction of the heat retained, obtained by the method of day and night averages. The method does not specify whether the heat is lost by radiation or by some more circuitous process; and thus it would not be precise to label the retaining power of the atmosphere a “greenhouse effect” without giving a somewhat wider interpretation to this name. If it be permitted to extend the meaning of the term to cover a variety of processes which lead to identical results, the deduction of the loss of surface heat by comparison of day and night temperatures is directly concerned with this wider “greenhouse effect.” [Very, 1908, p477]

Between them, Poynting and Very are attempting to pin down whether the “greenhouse effect” is a useful metaphor, and how the heat transfer mechanisms of planetary atmospheres actually work. But in so doing, they help establish the name. Wood’s 1909 comment is clearly a reaction to this discussion, but one that fails to understand what is being discussed. It’s eerily reminiscent of any modern discussion of the greenhouse effect: whenever any two scientists discuss the details of how the greenhouse effect works, you can be sure someone will come along sooner or later claiming to debunk the idea by completely misunderstanding it.

In summary, I think it’s fair to credit Poynting as the originator of the term “greenhouse effect”, but with a special mention to Ekholm for both his prior use of the word “greenhouse”, and his much better explanation of the effect. (Unless I missed some others?)

References

Arrhenius, S. (1896). On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground. Philosophical Magazine and Journal of Science, 41(251). doi:10.1080/14786449608620846

Ekholm, N. (1901). On The Variations Of The Climate Of The Geological And Historical Past And Their Causes. Quarterly Journal of the Royal Meteorological Society, 27(117), 1–62. doi:10.1002/qj.49702711702

Fleming, J. R. (1999). Joseph Fourier, the “greenhouse effect”, and the quest for a universal theory of terrestrial temperatures. Endeavour, 23(2), 72–75. doi:10.1016/S0160-9327(99)01210-7

Fourier, J. (1822). Théorie Analytique de la Chaleur (“Analytical Theory of Heat”). Paris: Chez Firmin Didot, Pere et Fils.

Fourier, J. (1827). On the Temperatures of the Terrestrial Sphere and Interplanetary Space. Mémoires de l’Académie Royale Des Sciences, 7, 569–604. (translation by Ray Pierrehumbert)

Poynting, J. H. (1907). On Prof. Lowell’s Method for Evaluating the Surface-temperatures of the Planets; with an Attempt to Represent the Effect of Day and Night on the Temperature of the Earth. Philosophical Magazine, 14(84), 749–760.

Very, F. W. (1908). The Greenhouse Theory and Planetary Temperatures. Philosophical Magazine, 16(93), 462–480.

Wood, R. W. (1909). Note on the Theory of the Greenhouse. Philosophical Magazine, 17, 319–320. Retrieved from http://scienceblogs.com/stoat/2011/01/07/r-w-wood-note-on-the-theory-of/

This week I’m reading my way through three biographies, which neatly capture the work of three key scientists who laid the foundation for modern climate modeling: Arrhenius, Bjerknes and Callendar.

[Images: covers of the three books]

Crawford, E. (1996). Arrhenius: From Ionic Theory to the Greenhouse Effect. Science History Publications.
A biography of Svante Arrhenius, the Swedish scientist who, in 1895, created the first computational climate model, and spent almost a full year calculating by hand the likely temperature changes across the planet for increased and decreased levels of carbon dioxide. The term “greenhouse effect” hadn’t been coined back then, and Arrhenius was more interested in the question of whether the ice ages might have been caused by reduced levels of CO2. But nevertheless, his model was a remarkably good first attempt, and produced the first quantitative estimate of the warming expected from humanity’s ongoing use of fossil fuels.
Friedman, R. M. (1993). Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Cornell University Press.
A biography of Vilhelm Bjerknes, the Norwegian scientist, who, in 1904, identified the primitive equations, a set of differential equations that form the basis of modern computational weather forecasting and climate models. The equations are, in essence, an adaptation of the equations of fluid flow and thermodynamics to represent the atmosphere as a fluid on a rotating sphere in a gravitational field. At the time, the equations were little more than a theoretical exercise, and we had to wait half a century for the early digital computers, before it became possible to use them for quantitative weather forecasting.
Fleming, J. R. (2009). The Callendar Effect: The Life and Work of Guy Stewart Callendar (1898-1964). University of Chicago Press.
A biography of Guy S. Callendar, the British scientist, who, in 1938, first compared long term observations of temperatures with measurements of rising carbon dioxide in the atmosphere, to demonstrate a warming trend as predicted by Arrhenius’ theory. It was several decades before his work was taken seriously by the scientific community. Some now argue that we should use the term “Callendar Effect” to describe the warming from increased emissions of carbon dioxide, because the term “greenhouse effect” is too confusing – greenhouse gases were keeping the planet warm long before we started adding more, and anyway, the analogy with the way that glass traps heat in a greenhouse is a little inaccurate.

Not only do the three form a neat ABC, they also represent the three crucial elements you need for modern climate modelling: a theoretical framework to determine which physical processes are likely to matter, a set of detailed equations that allow you to quantify the effects, and comparison with observations as a first step in validating the calculations.

I’m heading off to Florence this week for the International Conference on Software Engineering (ICSE). The highlight of the week will be a panel session I’m chairing, on the Karlskrona Manifesto. The manifesto itself is something we’ve been working on since last summer, when a group of us wrote the first draft at the Requirements Engineering conference in Karlskrona, Sweden (hence the name). This week we’re launching a website for the manifesto, and we’ve published a longer technical paper about it at ICSE.

The idea of the manifesto is to inspire deeper analysis of the roles and responsibilities of technology designers (and especially software designers), given that software systems now shape so much of modern life. We rarely stop to think about the unintended consequences of very large numbers of people using our technologies, nor do we ask whether, on balance, an idea that looks cool on paper will merely help push us even further into unsustainable behaviours. The position we take in the manifesto is that, as designers, our responsibility for the consequences of our designs is much broader than most of us acknowledge, and it’s time to do something about it.

For the manifesto, we ended up thinking about sustainability in terms of five dimensions:

  • Environmental sustainability: the long term viability of natural systems, including ecosystems, resource consumption, climate, pollution, food, water, and waste.
  • Social sustainability: the quality of social relationships and the factors that tend to improve or erode trust in society, such as social equity, democracy, and justice.
  • Individual sustainability: the health and well-being of people as individuals, including mental and physical well-being, education, self-respect, skills, and mobility.
  • Economic sustainability: the long term viability of economic activities, such as businesses and nations, including issues such as investment, wealth creation and prosperity.
  • Technical sustainability: the ability to sustain technical systems and their infrastructures, including software maintenance, innovation, obsolescence, and data integrity.

There are, of course, plenty of other ways of defining sustainability (which we discuss in the paper), and some hard constraints in some dimensions – e.g. we cannot live beyond the resource limits of the planet, no matter how much progress we make towards sustainability in other dimensions. But a key insight is that all five dimensions matter, and none of them can be treated in isolation. For example, we might think we’re doing fine in one dimension – economic, say, as we launch a software company with a sound business plan that can make a steady profit – but often we do so only by incurring a debt in other dimensions, perhaps harming the environment by contributing to the mountains of e-waste, or harming social sustainability by replacing skilled jobs with subsistence labour.

The manifesto characterizes a set of problems in how technologists normally think about sustainability (if they do), and ends with a set of principles for sustainability design:

  • Sustainability is systemic. Sustainability is never an isolated property. Systems thinking has to be the starting point for the transdisciplinary common ground of sustainability.
  • Sustainability has multiple dimensions. We have to include those dimensions into our analysis if we are to understand the nature of sustainability in any given situation.
  • Sustainability transcends multiple disciplines. Working in sustainability means working with people from across many disciplines, addressing the challenges from multiple perspectives.
  • Sustainability is a concern independent of the purpose of the system. Sustainability has to be considered even if the primary focus of the system under design is not sustainability.
  • Sustainability applies to both a system and its wider contexts. There are at least two spheres to consider in system design: the sustainability of the system itself and how it affects sustainability of the wider system of which it will be part.
  • Sustainability requires action on multiple levels. Some interventions have more leverage on a system than others. Whenever we take action towards sustainability, we should consider opportunity costs: action at other levels may offer more effective forms of intervention.
  • System visibility is a necessary precondition and enabler for sustainability design. The status of the system and its context should be visible at different levels of abstraction and perspectives to enable participation and informed responsible choice.
  • Sustainability requires long-term thinking. We should assess benefits and impacts on multiple timescales, and include longer-term indicators in assessment and decisions.
  • It is possible to meet the needs of future generations without sacrificing the prosperity of the current generation. Innovation in sustainability can play out as decoupling present and future needs. By moving away from the language of conflict and the trade-off mindset, we can identify and enact choices that benefit both present and future.

You can read the full manifesto at sustainabilitydesign.org, and follow the discussion on Twitter. I’m looking forward to lots of constructive discussions this week.

For our course about the impacts of the internet, we developed an exercise to get our students thinking critically about the credibility of things they find on the web. As a number of colleagues have expressed an interest in this, I thought I would post it here. Feel free to use it and adapt it!

Near the beginning of the course, we set the students to read the chapter “Crap Detection 101: How to Find What You Need to Know, and How to Decide If It’s True” from Rheingold & Weeks’ book NetSmart. During the tutorial, we get them working in small groups, and give them several carefully selected web pages to test their skills on. We pick webpages that we think are neither too easy nor too hard, and use a mix of credible and misleading ones. It’s a real eye-opening exercise for our students.

To guide them in the activity, we give them the following list of tips (originally distilled from the book by our TA, Matt King, who wrote the first draft of the worksheet).

Tactics for Detecting Crap on the Internet

Here’s a checklist of tactics to use to help you judge the credibility of web pages. Different tactics will be useful for different web pages – use your judgment to decide which tactics to try first. If you find some of these don’t apply, or don’t seem to give you useful information, think about why that is. Make notes about the credibility of each webpage you explored, and which tactics you used to determine its credibility.

  1. Authorship
    • Is the author of a given page named? Who is s/he?
    • What do others say about the author?
  2. Sources cited
    • Does the article include links (or at least references) to sources?
    • What do these sources tell us about credibility and/or bias?
  3. Ownership of the website
    • Can you find out who owns the site? (e.g. look it up using www.easywhois.com)
    • What is the domain name? Does the “pedigree” of a site convince us of its trustworthiness?
    • Who funds the owner’s activities? (e.g. look them up on http://www.sourcewatch.org)
  4. Connectedness
    • How much traffic does this site get? (e.g. use www.alexa.com for stats/demographics)
    • Do the demographics tell you anything about the website’s audience? (see alexa.com again)
    • Do other websites link to this page (e.g. google with the search term “link: http://*paste URL here*”)? If so, who are the linkers?
    • Is the page ranked highly when searched for from at least two search engines?
  5. Design & Interactivity
    • Does the website’s design and other structural features (such as grammar) tell us anything about its credibility?
    • Does the page have an active comment section? If so, does the author respond to comments?
  6. Triangulation
    • Can you verify the content of a page by “triangulating” its claims with at least two or three other reliable sources?
    • Do fact-checking sites have anything useful on this topic? (e.g. try www.factcheck.org)
    • Are there topic-specific sites that do factchecking? (e.g. www.snopes.com for urban legends, www.skepticalscience.com for climate science). Note: How can you tell whether these sites are credible?
  7. Check your own biases
    • Overall, what’s your personal stake in the credibility of this page’s content?
    • How much time do you think you should allocate to verifying its reliability?

(Download the full worksheet)

It’s been a while since I’ve written about the question of climate model validation, but I regularly get asked about it when I talk about the work I’ve been doing studying how climate models are developed. There’s an upcoming conference organized by the Rotman Institute of Philosophy, in London, Ontario, on Knowledge and Models in Climate Science, at which many of my favourite thinkers on this topic will be speaking. So I thought it was a good time to get philosophical about this again, and define some terms that I think help frame the discussion (at least in the way I see it!).

Here’s my abstract for the conference:

Constructive and External Validity for Climate Modeling

Discussions of the validity of scientific computational models tend to treat “the model” as a unitary artifact, and ask questions about its fidelity with respect to observational data, and its predictive power with respect to future situations. For climate modeling, both of these questions are problematic, because of long timescales and inhomogeneities in the available data. Our ethnographic studies of the day-to-day practices of climate modelers suggest an alternative framework for model validity, focusing on a modeling system rather than any individual model. Any given climate model can be configured for a huge variety of different simulation runs, and only ever represents a single instance of a continually evolving body of program code. Furthermore, its execution is always embedded in a broader social system of scientific collaboration which selects suitable model configurations for specific experiments, and interprets the results of the simulations within the broader context of the current body of theory about earth system processes.

We propose that the validity of a climate modeling system should be assessed with respect to two criteria: Constructive Validity, which refers to the extent to which the day-to-day practices of climate model construction involve the continual testing of hypotheses about the ways in which earth system processes are coded into the models, and External Validity, which refers to the appropriateness of claims about how well model outputs ought to correspond to past or future states of the observed climate system. For example, a typical feature of the day-to-day practice of climate model construction is the incremental improvement of the representation of specific earth system processes in the program code, via a series of hypothesis-testing experiments. Each experiment begins with a hypothesis (drawn from current or emerging theories about the earth system) that a particular change to the model code ought to result in a predictable change to the climatology produced by various runs of the model. Such a hypothesis is then tested empirically, using the current version of the model as a control, and the modified version of the model as the experimental case. Such experiments are then replicated for various configurations of the model, and results are evaluated in a peer review process via the scientific working groups who are responsible for steering the ongoing model development effort.

Assessment of constructive validity for a modeling system would take account of how well the day-to-day practices in a climate modeling laboratory adhere to rigorous standards for such experiments, and how well they routinely test the assumptions that are built into the model in this way. Similarly, assessment of the external validity of the modeling system would take account of how well knowledge of the strengths and weaknesses of particular instances of the model are taken into account when making claims about the scope of applicability of model results. We argue that such an approach offers a more coherent approach to questions of model validity, as it corresponds more directly with the way in which climate models are developed and used.


I’ll be heading off to Stockholm in August to present a paper at the 2nd International Conference on Information and Communication Technologies for Sustainability (ICT4S’2014). The theme of the conference this year is “ICT and transformational change”, which got me thinking about how we think about change, and especially whether we equip students in computing with the right conceptual toolkit to think about change. I ended up writing a long critique of Computational Thinking, which has become popular lately as a way of describing what we teach in computing undergrad programs. I don’t think there’s anything wrong with computational thinking in small doses. But when an entire university program teaches nothing but computational thinking, we turn out generations of computing professionals who are ill-equipped to think about complex societal issues. This then makes them particularly vulnerable to technological solutionism. I hope the paper will provoke some interesting discussion!

Here’s the abstract for my paper (click here for the full paper):

From Computational Thinking to Systems Thinking: A conceptual toolkit for sustainability computing

Steve Easterbrook, University of Toronto

If information and communication technologies (ICT) are to bring about a transformational change to a sustainable society, then we need to transform our thinking. Computer professionals already have a conceptual toolkit for problem solving, sometimes known as computational thinking. However, computational thinking tends to see the world in terms of a series of problems (or problem types) that have computational solutions (or solution types). Sustainability, on the other hand, demands a more systemic approach, to avoid technological solutionism, and to acknowledge that technology, human behaviour and environmental impacts are tightly inter-related. In this paper, I argue that systems thinking provides the necessary bridge from computational thinking to sustainability practice, as it provides a domain ontology for reasoning about sustainability, a conceptual basis for reasoning about transformational change, and a set of methods for critical thinking about the social and environmental impacts of technology. I end the paper with a set of suggestions for how to build these ideas into the undergraduate curriculum for computer and information sciences.

At the beginning of March, I was invited to give a talk at TEDxUofT. Colleagues tell me the hardest part of giving these talks is deciding what to talk about. I decided to see if I could answer the question of whether we can trust climate models. It was a fascinating and nerve-wracking experience, quite unlike any talk I’ve given before. Of course, I’d love to do another one, as I now know more about what works and what doesn’t.

Here’s the video and a transcript of my talk. [The bits in square brackets are things I intended to say but forgot!]

Computing the Climate: How Can a Computer Model Forecast the Future? TEDxUofT, March 1, 2014.

Talking about the weather forecast is a great way to start a friendly conversation. The weather forecast matters to us. It tells us what to wear in the morning; it tells us what to pack for a trip. We also know that weather forecasts can sometimes be wrong, but we’d be foolish to ignore them when they tell us a major storm is heading our way.

[Unfortunately, talking about climate forecasts is often a great way to end a friendly conversation!] Climate models tell us that by the end of this century, if we carry on burning fossil fuels at the rate we have been doing, and we carry on cutting down forests at the rate we have been doing, the planet will warm by somewhere between 5 and 6 degrees centigrade. That might not seem like much, but, to put it into context, in the entire history of human civilization, the average temperature of the planet has not varied by more than 1 degree. So that forecast tells us something major is coming, and we probably ought to pay attention to it.

But on the other hand, we know that weather forecasts don’t work so well the longer into the future we peer. Tomorrow’s forecast is usually pretty accurate. Three day and five day forecasts are reasonably good. But next week? They always change their minds before next week comes. So how can we peer 100 years into the future and look at what is coming with respect to the climate? Should we trust those forecasts? Should we trust the climate models that provide them to us?

Six years ago, I set out to find out. I’m a professor of computer science. I study how large teams of software developers can put together complex pieces of software. I’ve worked with NASA, studying how NASA builds the flight software for the Space Shuttle and the International Space Station. I’ve worked with large companies like Microsoft and IBM. My work focusses not so much on software errors, but on the reasons why people make those errors, and how programmers then figure out they’ve made an error, and how they know how to fix it.

To start my study, I visited four major climate modelling labs around the world: in the UK, in Paris, in Hamburg, Germany, and in Colorado. Each of these labs typically has somewhere between 50 and 100 scientists contributing code to their climate models. And although I only visited four of these labs, there are another twenty or so around the world, all doing similar things. They run these models on some of the fastest supercomputers in the world, and many of these models have been under continuous development for more than 20 years.

When I started this study, I asked one of my students to attempt to measure how many bugs there are in a typical climate model. We know from our experience with software that there are always bugs. Sooner or later the machine crashes. So how buggy are climate models? More specifically, what we set out to measure is what we call “defect density” – how many errors there are per thousand lines of code. By this measure, it turns out climate models are remarkably high quality. In fact, they’re better than almost any commercial software that’s ever been studied. They’re about the same level of quality as the Space Shuttle flight software. Here are my results (for the actual results you’ll have to read the paper):

DefectDensityResults-sm
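As an aside, the metric itself is simple to compute. Here’s a minimal sketch, with made-up placeholder numbers standing in for the real counts (which are in the paper):

```python
def defect_density(defects_found, lines_of_code):
    """Defects per thousand lines of code (KLOC) -- the standard measure of defect density."""
    return defects_found / (lines_of_code / 1000.0)

# Purely illustrative placeholder numbers -- not the results of our study.
print(defect_density(defects_found=120, lines_of_code=400_000))  # 0.3 defects per KLOC
```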

We know it’s very hard to build a large complex piece of software without making mistakes.  Even the space shuttle’s software had errors in it. So the question is not “is the software perfect for predicting the future?”. The question is “Is it good enough?” Is it fit for purpose?

To answer that question, we’d better understand what the purpose of a climate model is. First of all, I’d better be clear what a climate model is not. A climate model is not a projection of trends we’ve seen in the past extrapolated into the future. If you did that, you’d be wrong, because you haven’t accounted for what actually causes the climate to change, and so the trend might not continue. They are also not decision-support tools. A climate model cannot tell us what to do about climate change. It cannot tell us whether we should be building more solar panels, or wind farms. It can’t tell us whether we should have a carbon tax. It can’t tell us what we ought to put into an international treaty.

What it does do is tell us how the physics of planet earth work, and what the consequences are of changing things, within that physics. I could describe it as “computational fluid dynamics on a rotating sphere”. But computational fluid dynamics is complex.

I went into my son’s fourth grade class recently, and I started to explain what a climate model is, and the first question they asked me was “is it like Minecraft?”. Well, that’s not a bad place to start. If you’re not familiar with Minecraft, it divides the world into blocks, and the blocks are made of stuff. They might be made of wood, or metal, or water, or whatever, and you can build things out of them. There’s no gravity in Minecraft, so you can build floating islands and it’s great fun.

Climate models are a bit like that. To build a climate model, you divide the world into a number of blocks. The difference is that in Minecraft, the blocks are made of stuff. In a climate model, the blocks are really blocks of space, through which stuff can flow. At each timestep, the program calculates how much water, or air, or ice is flowing into, or out of, each block, and in which direction. It calculates changes in temperature, density, humidity, and so on, and whether stuff such as dust, salt, and pollutants is passing through or accumulating in each block. We have to account for the sunlight passing down through the block during the day. Some of what’s in each block might filter some of the incoming sunlight, for example if there are clouds or dust, so some of the sunlight doesn’t get down to the blocks below. There’s also heat escaping upwards through the blocks, and again, some of what is in the block might trap some of that heat — for example clouds and greenhouse gases.
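For readers who think in code, here’s a deliberately toy sketch of that idea: a single row of blocks, with a prescribed wind moving moisture from block to block at each timestep. This is my own illustration, not code from any real climate model, which would do this in three dimensions for temperature, winds, humidity, radiation, and much more:

```python
import numpy as np

# A toy "climate model" grid: one row of blocks. Each block is a volume of space,
# and we track how much moisture is currently inside it.
n_blocks = 64
dx = 100_000.0    # width of each block, in metres (toy value)
dt = 600.0        # length of each timestep, in seconds
wind = 10.0       # a prescribed eastward wind, in m/s (a real model computes the winds too)

moisture = np.zeros(n_blocks)
moisture[10:20] = 1.0   # start with a moist patch of air

for step in range(1000):
    # At each timestep, work out how much moisture flows out of each block into
    # its eastern neighbour, and how much flows in from its western neighbour.
    fraction_moved = wind * dt / dx
    inflow = fraction_moved * np.roll(moisture, 1)   # arriving from the block to the west
    outflow = fraction_moved * moisture              # leaving towards the block to the east
    moisture = moisture + inflow - outflow
```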

As you can see from this diagram, the blocks can be pretty large. The upper figure shows blocks of 87km on a side. If you want more detail in the model, you have to make the blocks smaller. Some of the fastest climate models today look more like the lower figure:

ModelResolution

Ideally, you want to make the blocks as small as possible, but then you have many more blocks to keep track of, and you get to the point where the computer just can’t run fast enough. For a typical run of a climate model, simulating a century’s worth of climate, you might have to wait a couple of weeks on some of the fastest supercomputers for the run to complete. So the speed of the computer limits how small we can make the blocks.
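The arithmetic behind that limit is brutal. Roughly speaking (and glossing over a lot of detail), halving the size of the blocks means four times as many blocks to cover the globe, and numerical stability usually forces the timestep to halve as well, so each halving costs roughly a factor of eight in computation:

```python
# Rough back-of-envelope scaling, not a measurement of any real model.
def relative_cost(refinement):
    """Approximate cost multiplier when the horizontal block size shrinks by `refinement`."""
    blocks = refinement ** 2      # more blocks in each horizontal direction
    timesteps = refinement        # shorter timesteps needed for numerical stability
    return blocks * timesteps

print(relative_cost(2))   # ~8x the work for blocks half the size
print(relative_cost(4))   # ~64x the work for blocks a quarter of the size
```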

Building models this way is remarkably successful. Here’s video of what a climate model can do today. This simulation shows a year’s worth of weather from a climate model. What you’re seeing is clouds and, in orange, that’s where it’s raining. Compare that to a year’s worth of satellite data for the year 2013. If you put them side by side, you can see many of the same patterns. You can see the westerlies, the winds at the top and bottom of the globe, heading from west to east, and nearer the equator, you can see the trade winds flowing in the opposite direction. If you look very closely, you might even see a pulse over South America, and a similar one over Africa in both the model and the satellite data. That’s the daily cycle as the land warms up in the morning and the moisture evaporates from soils and plants, and then later on in the afternoon as it cools, it turns into rain.

Note that the bottom is an actual year, 2013, while the top, the model simulation is not a real year at all – it’s a typical year. So the two don’t correspond exactly. You won’t get storms forming at the same time, because it’s not designed to be an exact simulation; the climate model is designed to get the patterns right. And by and large, it does. [These patterns aren’t coded into this model. They emerge as a consequence of getting the basic physics of the atmosphere right].

So how do you build a climate model like this? The answer is “very slowly”. It takes a lot of time, and a lot of failure. One of the things that surprised me when I visited these labs is that the scientists don’t build these models to try and predict the future. They build these models to try and understand the past. They know their models are only approximations, and they regularly quote the statistician, George Box, who said “All models are wrong, but some are useful”. What he meant is that any model of the world is only an approximation. You can’t get all the complexity of the real world into a model. But even so, even a simple model is a good way to test your theories about the world.

So the way that modellers work is they spend their time focussing on places where the model isn’t quite right. For example, maybe the model isn’t getting the Indian monsoon right. Perhaps it’s getting the amount of rain right, but it’s falling in the wrong place. They then form a hypothesis. They’ll say: I think I can improve the model, because I think this particular process is responsible, and if I improve that process in a particular way, then that should fix the simulation of the monsoon cycle. And then they run a whole series of experiments, comparing the old version of the model, which is getting it wrong, with the new version, to test whether the hypothesis is correct. And if, after a series of experiments, they believe their hypothesis is correct, they have to convince the rest of the modelling team that this really is an improvement to the model.

In other words, to build the models, they are doing science. They are developing hypotheses, they are running experiments, and they are using a peer review process to convince their colleagues that what they have done is correct:

ModelDevelopmentProcess-sm

Climate modellers also have a few other weapons up their sleeves. Imagine for a moment if Microsoft had 24 competitors around the world, all of whom were attempting to build their own versions of Microsoft Word. Imagine further that every few years, those 24 companies all agreed to run their software on a very complex battery of tests, designed to test all the different conditions under which you might expect a word processor to work. And not only that, but they agree to release all the results of those tests to the public, on the internet, so that anyone who wants to use any of that software can pore over all the data and find out how well each version did, and decide which version they want to use for their own purposes. Well, that’s what climate modellers do. There is no other software in the world for which there are 25 teams around the world trying to build the same thing, and competing with each other.

Climate modellers also have some other advantages. In some sense, climate modelling is actually easier than weather forecasting. I can show you what I mean by that. Imagine I had a water balloon (actually, you don’t have to imagine – I have one here):

AboutToThrowTheWaterBalloon

I’m going to throw it at the fifth row. Now, you might want to know who will get wet. You could measure everything about my throw: Will I throw underarm, or overarm? Which way am I facing when I let go of it? How much swing do I put in? If you could measure all of those aspects of my throw, and you understand the physics of how objects move, you could come up with a fairly accurate prediction of who is going to get wet.

That’s like weather forecasting. We have to measure the current conditions as accurately as possible, and then project forward to see what direction it’s moving in:

WeatherForecasting

If I make any small mistakes in measuring my throw, those mistakes will multiply as the balloon travels further. The further I attempt to throw it, the more room there is for inaccuracy in my estimate. The same is true of weather forecasting: any errors in the initial conditions multiply rapidly, and the current limit appears to be about a week or so. Beyond that, the errors get so big that we just cannot make accurate forecasts.

In contrast, climate models would be more like releasing a balloon into the wind, and predicting where it will go by knowing about the wind patterns. I’ll make some wind here using a fan:

BalloonInTheWind

Now that balloon is going to bob about in the wind from the fan. I could go away and come back tomorrow and it will still be doing about the same thing. If the power stays on, I could leave it for a hundred years, and it might still be doing the same thing. I won’t be able to predict exactly where that balloon is going to be at any moment, but I can predict, very reliably, the space in which it will move. I can predict the boundaries of its movement. And if the things that shape those boundaries change, for example by moving the fan, and I know what the factors are that shape those boundaries, I can tell you how the patterns of its movements are going to change – how the boundaries are going to change. So we call that a boundary problem:

ClimateAsABoundaryProblem

The initial conditions are almost irrelevant. It doesn’t matter where the balloon started, what matters is what’s shaping its boundary.

So can these models predict the future? Are they good enough to predict the future? The answer is “yes and no”. We know the models are better at some things than others. They’re better at simulating changes in temperature than they are at simulating changes in rainfall. We also know that each model tends to be stronger in some areas and weaker in others. If you take the average of a whole set of models, you get a much better simulation of how the planet’s climate works than if you look at any individual model on its own. What happens is that the weaknesses in any one model are compensated for by other models that don’t have those weaknesses.
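A toy calculation shows why averaging helps, at least to the extent that the models’ errors are independent of one another (in reality they are partly correlated, so the benefit is smaller, but the principle is the same). The numbers here are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
truth = 1.0        # some quantity we'd like the models to get right (arbitrary units)
n_models = 25

# Pretend each model produces the true value plus its own independent error.
model_estimates = truth + rng.normal(loc=0.0, scale=0.5, size=n_models)

typical_single_model_error = np.mean(np.abs(model_estimates - truth))
multi_model_mean_error = abs(np.mean(model_estimates) - truth)

# The multi-model mean is usually much closer to the truth than a typical single
# model, because the individual errors partly cancel out.
print(typical_single_model_error, multi_model_mean_error)
```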

But the results of the models have to be interpreted very carefully, by someone who knows what the models are good at, and what they are not good at – you can’t just take the output of a model and say “that’s how it’s going to be”.

Also, you don’t actually need a computer model to predict climate change. The first predictions of what would happen if we keep on adding carbon dioxide to the atmosphere were produced nearly 120 years ago. That’s fifty years before the first digital computer was invented. And those predictions were pretty accurate – what has happened over the twentieth century has followed very closely what was predicted all those years ago. Scientists also predicted, for example, that the arctic would warm faster than the equatorial regions, and that’s what happened. They predicted night time temperatures would rise faster than day time temperatures, and that’s what happened.

So in many ways, the models only add detail to what we already know about the climate. They allow scientists to explore “what if” questions. For example, you could ask of a model what would happen if we stopped burning all fossil fuels tomorrow. And the answer from the models is that the temperature of the planet will stay at whatever temperature it was when we stopped. If we wait twenty years and then stop, we’re stuck with whatever temperature we’re at for tens of thousands of years. You could ask a model what would happen if we dug up all known reserves of fossil fuels and burned them all at once, in one big party. Well, it gets very hot.

More interestingly, you could ask what if we tried blocking some of the incoming sunlight to cool the planet down, to compensate for some of the warming we’re getting from adding greenhouse gases to the atmosphere? There have been a number of very serious proposals to do that. There are some who say we should float giant space mirrors. That might be hard, but a simpler way of doing it is to put dust up in the stratosphere, and that blocks some of the incoming sunlight. It turns out that if you do that, you can very reliably bring the average temperature of the planet back down to whatever level you want, just by adjusting the amount of the dust. Unfortunately, some parts of the planet cool too much, and others not at all. The crops don’t grow so well, and everyone’s weather gets messed up. So it seems like that could be a solution, but when you study the model results in detail, there are too many problems.

Remember that we know fairly well what will happen to the climate if we keep adding CO2, even without using a computer model, and the computer models just add detail to what we already know. If the models are wrong, they could be wrong in either direction. They might under-estimate the warming just as much as they might over-estimate it. If you look at how well the models can simulate the past few decades, especially the last decade, you’ll see some of both. For example, the models have under-estimated how fast the arctic sea ice has melted. The models have underestimated how fast the sea levels have risen over the last decade. On the other hand, they over-estimated the rate of warming at the surface of the planet. But they underestimated the rate of warming in the deep oceans, so some of the warming ends up in a different place from where the models predicted. So they can under-estimate just as much as they can over-estimate. [The less certain we are about the results from the models, the bigger the risk that the warming might be much worse than we think.]

So when you see a graph like this, which comes from the latest IPCC report that just came out last month, it doesn’t tell us what to do about climate change, it just tells us the consequences of what we might choose to do. Remember, humans aren’t represented in the models at all, except in terms of us producing greenhouse gases and adding them to the atmosphere.

IPCC-AR5-WG1-Fig12.5

If we keep on increasing our use of fossil fuels — finding more oil, building more pipelines, digging up more coal, we’ll follow the top path. And that takes us to a planet that by the end of this century, is somewhere between 4 and 6 degrees warmer, and it keeps on getting warmer over the next few centuries. On the other hand, the bottom path, in dark blue, shows what would happen if, year after year from now onwards, we use less fossil fuels than we did the previous year, until about mid-century, when we get down to zero emissions, and we invent some way to start removing that carbon dioxide from the atmosphere before the end of the century, to stay below 2 degrees of warming.

The models don’t tell us which of these paths we should follow. They just tell us that if this is what we do, here’s what the climate will do in response. You could say that what the models do is take all the data and all the knowledge we have about the climate system and how it works, and put them into one neat package, and its our job to take that knowledge and turn it into wisdom. And to decide which future we would like.

My department is busy revising the set of milestones our PhD students need to meet in the course of their studies. The milestones are intended to ensure each student is making steady progress, and to identify (early!) any problems. At the moment they don’t really do this well, in part because the faculty all seem to have different ideas about what we should expect at each milestone. (This is probably a special case of the general rule that if you gather n professors together, they will express at least n+1 mutually incompatible opinions). As a result, the students don’t really know what’s expected of them, and hence spend far longer in the PhD program than they would need to if they received clear guidance.

Anyway, in order to be helpful, I wrote down what I think are the set of skills that a PhD student needs to demonstrate early in the program, as a prerequisite for becoming a successful researcher:

  1. The ability to select a small number of significant research contributions from a larger set of published papers, and justify that selection.
  2. The ability to articulate a rationale for selection of these papers, on the basis of significance of the results, novelty of the approach, etc.
  3. The ability to relate the papers to one another, and to other research in the literature.
  4. The ability to critique the research methods used in these papers, the strengths and weaknesses of these methods, and likely threats to validity, whether acknowledged in the papers or not.
  5. The ability to suggest alternative approaches to answering the research questions posed in these papers.
  6. The ability to identify limitations on the results reported in the papers, along with their implications.
  7. The ability to identify and prioritize lines of investigation for further research, based on limitations of the research described in the papers and/or important open problems that the papers fail to answer.

My suggestion is that at the end of the first year of the PhD program, each student should demonstrate development of these skills by writing a short report that selects and critiques a handful (4-6) of papers in a particular subfield. If a student can’t do this well, they’re probably not going to succeed in the PhD program.

My proposal has now gone to the relevant committee (“where good ideas go to die™”), so we’ll see what happens…

Imagine for a moment if Microsoft had 24 competitors around the world, each building their own version of Microsoft Word. Imagine further that every few years, they all agreed to run their software through the same set of very demanding tests of what a word processor ought to be able to do in a large variety of different conditions. And imagine that all these competing  companies agreed that all the results from these tests would be freely available on the web, for anyone to see. Then, people who want to use a word processor can explore the data and decide for themselves which one best serves their purpose. People who have concerns about the reliability of word processors can analyze the strengths and weaknesses of each company’s software. Then think about what such a process would do to the reliability of word processors. Wouldn’t that be a great world to live in?

Well, that’s what climate modellers do, through a series of model inter-comparison projects. There are around 25 major climate modelling labs around the world developing fully integrated global climate models, and hundreds of smaller labs building specialized models of specific components of the earth system. The fully integrated models are compared in detail every few years through the Coupled Model Intercomparison Projects. And there are many other model inter-comparison projects for various specialist communities within climate science.

Have a look at how this process works, via this short paper on the planning process for CMIP6.

What’s the difference between forecasting the weather and predicting future climate change? A few years ago, I wrote a long post explaining that weather forecasting is an initial value problem, while climate is a boundary value problem. This is a much shorter explanation:

Imagine I were to throw a water balloon at you. If you could measure precisely how I threw it, and you understood the laws of physics correctly, you could predict precisely where it would go. If you could calculate it fast enough, you would know whether you’re going to get wet, or whether I’ll miss. That’s an initial value problem. The less precise your measurements of the initial value (how I throw it), the less accurate your prediction will be. Also, the longer the throw, the more the errors grow. This is how weather forecasting works – you measure the current conditions (temperature, humidity, wind speed, and so on) as accurately as possible, put them into a model that simulates the physics of the atmosphere, and run it to see how the weather will evolve. But the further into the future you want to peer, the less accurate your forecast, because the errors in the initial value get bigger. It’s really hard to predict the weather more than about a week into the future:

Weather as an initial value problem

Now imagine I release a helium balloon into the air flow from a desk fan, and the balloon is on a string that’s tied to the fan casing. The balloon will reach the end of its string, and bob around in the stream of air. It doesn’t matter how exactly I throw the balloon into the airstream – it will keep on bobbing about in the same small area. I could leave it there for hours and it will do the same thing. This is a boundary value problem. I won’t be able to predict exactly where the balloon will be at any moment, but I will be able to tell you fairly precisely the boundaries of the space in which it will be bobbing. If anything affects these boundaries (e.g. because I move the fan a little), I should also be able to predict how this will shift the area in which the balloon will bob. This is how climate prediction works. You start off with any (reasonable) starting state, and run your model for as long as you like. If your model gets the physics right, it will simulate a stable climate indefinitely, no matter how you initialize it:

Climate as a boundary value problem

But if the boundary conditions change, because, for example, we alter the radiative balance of the planet, the model should also be able to predict fairly accurately how this will shift the boundaries on the climate:

Climate change as a change in boundary conditions


We cannot predict what the weather will do on any given day far into the future. But if we understand the boundary conditions and how they are altered, we can predict fairly accurately how the range of possible weather patterns will be affected. Climate change is a change in the boundary conditions on our weather systems.
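If you’d like to see the distinction numerically, here’s a small sketch using the Lorenz system, the classic toy example of chaotic “weather” (it isn’t a climate model, but it captures the point). Two nearly identical starting states soon diverge completely, yet both stay within the same bounded attractor; and changing a parameter, the analogue of moving the fan, shifts the attractor itself:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """One forward-Euler step of the Lorenz system -- a toy stand-in for chaotic weather."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

def run(state, n_steps=50_000, **params):
    trajectory = np.empty((n_steps, 3))
    for i in range(n_steps):
        state = lorenz_step(state, **params)
        trajectory[i] = state
    return trajectory

# Initial value problem: a tiny error in the starting state...
a = run(np.array([1.0, 1.0, 1.0]))
b = run(np.array([1.0, 1.0, 1.000001]))
print(np.abs(a[-1] - b[-1]))          # ...and the two trajectories soon bear no resemblance.

# Boundary value problem: the *statistics* of the two runs are nevertheless very similar,
# because both trajectories are confined to the same attractor.
print(a.mean(axis=0), b.mean(axis=0))

# Change a parameter (like moving the fan) and the whole attractor shifts.
c = run(np.array([1.0, 1.0, 1.0]), rho=35.0)
print(c.mean(axis=0))
```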

A few weeks ago, Mark Higgins, from EUMETSAT, posted this wonderful video of satellite imagery of planet earth for the whole of the year 2013. The video superimposes the aggregated satellite data from multiple satellites on the top of NASA’s ‘Blue Marble Next Generation’ ground maps, to give a consistent picture of large scale weather patterns (Original video here – be sure to listen to Mark’s commentary):

When I saw the video, it reminded me of something. Here’s the output from CAM3, the atmospheric component of the global climate model CESM, run at very high resolution (Original video here):

I find it fascinating to play these two videos at the same time, and observe how the model captures the large scale weather patterns of the planet. The comparison isn’t perfect, because the satellite data measures the cloud temperature (the colder the clouds, the whiter they are shown), while the climate model output shows total water vapour & rain (i.e. warmer clouds are a lot more visible, and precipitation is shown in orange). This means the tropical regions look much drier in the satellite imagery than they do in the model output.

But even so, there are some remarkable similarities. For example, both videos clearly show the westerlies, the winds that flow from west to east at the top and bottom of the map (e.g. pushing rain across the North Atlantic to the UK), and they both show the trade winds, which flow from east to west, closer to the equator. Both videos also show how cyclones form in the regions between these wind patterns. For example, in both videos, you can see the typhoon season ramp up in the Western Pacific in August and September – the model has two hitting Japan in August, and the satellite data shows several hitting China in September. The curved tracks of these storms are similar in both videos. If you look closely, you can also see the daily cycle of evaporation and rain over South America and Central Africa in both videos – watch how these regions appear to pulse each day.

I find these similarities remarkable, because none of these patterns are coded into the climate model – they all emerge as a consequence of getting the basic thermodynamic properties of the atmosphere right. Remember also that a climate model is not intended to forecast the particular weather of any given year (that would be impossible, due to chaos theory). However, the model simulates a “typical” year on planet earth. So the specifics of where and when each storm forms do not correspond to anything that actually happened in any given year. But when the model gets the overall patterns about right, that’s a pretty impressive achievement.