Here’s another excerpt from the draft manuscript of my forthcoming book, Computing the Climate.

The idea that the temperature of the planet could be analyzed as a mathematical problem was first suggested by the French mathematician Joseph Fourier in the 1820s. Fourier had studied the up-and-down cycles of temperature between day and night, and between summer and winter, and had measured how deep into the ground these heating and cooling cycles reach. It turns out they don’t go very deep. At about 30 meters below the surface, temperatures remain constant all year round, showing no sign of daily or annual change. Today, Fourier is perhaps best remembered for his work on the mathematics of such cycles, and the Fourier transform, a technique for discovering cyclic waveforms in complex data series, was named in his honour.

The temperature of any object is due to the balance of heat entering and leaving it. If more heat is entering, the object warms up, and if more heat is leaving, it cools down. For the planet as a whole, Fourier pointed out there are only three possible sources of heat: the sun, the earth’s core, and background heat from space. His measurements showed that the heat at the earth’s core no longer warms the surface, because the diffusion of heat through layers of rock is too slow to make a noticeable difference. He thought that the temperature of space itself was probably about the same as the coldest temperatures on earth, as that would explain the temperature reached at the poles in the long polar winters. On this point, he was wrong—we now know space is close to absolute zero, a couple of hundred degrees colder than anywhere on earth. But he was correct about the sun being the main source of heat at the earth’s surface.

Fourier also realized there must be more to the story than that; otherwise, the heat from the sun would escape to space just as fast as it arrived, causing night-time temperatures to drop back down to the temperature of space—and yet they don’t. We now know this is what happens on the moon, where temperatures drop by hundreds of degrees after the lunar sunset. So why doesn’t this happen on Earth?

The solution lay in the behaviour of ‘dark heat’, an idea that was new and mysterious to the scientists of the early nineteenth century. Today we call it infra-red radiation. Fourier referred to it as ‘radiant heat’ or ‘dark rays’ to distinguish it from ‘light heat’, or visible light. But really, they’re just different parts of the electromagnetic spectrum. Any object that’s warmer than its surroundings continually radiates some of its heat to those surroundings. You can feel this ‘dark heat’ if you put your hand near a hot stove, although an object has to get pretty hot before we can feel the infra-red it gives off. As you heat up an object, the heat it radiates spreads up the spectrum from infra-red to visible light: it starts to glow red, and then, eventually, white hot.

Fourier’s theory was elegantly simple. Because the sun is so hot, much of its energy arrives in the form of visible light, which passes through the atmosphere relatively easily, and warms the earth’s surface. As the earth’s surface is warm, it also radiates energy. The earth is cooler than the sun, so the energy the earth radiates is in the form of dark heat. Dark heat doesn’t pass through the atmosphere anywhere near as easily as light heat, so this slows the loss of energy back to space.

The surface temperature of the earth is determined by the balance between the incoming heat from the sun (short-wave rays, mainly visible light and ultra-violet) and the outgoing infra-red, radiated in all directions from the earth. The incoming short-wave rays pass through the atmosphere much more easily than the outgoing long-wave infra-red.
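
To get a feel for the numbers involved in this balance, here’s a rough back-of-the-envelope sketch (my own illustration, not Fourier’s: it assumes a solar constant of about 1361 W/m2, a planetary albedo of 0.3, and treats the earth as a simple black body with no atmosphere):

```python
# Zero-dimensional energy balance: a rough sketch, not part of Fourier's analysis.
# Assumptions: solar constant ~1361 W/m^2, planetary albedo ~0.3, black-body earth.

SOLAR_CONSTANT = 1361.0   # incoming solar radiation at the top of the atmosphere (W/m^2)
ALBEDO = 0.3              # fraction of sunlight reflected straight back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant (W/m^2/K^4)

# Averaged over the whole spinning sphere, absorbed sunlight is S * (1 - albedo) / 4
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4

# In balance, outgoing infra-red equals absorbed sunlight: sigma * T^4 = absorbed
effective_temp = (absorbed / SIGMA) ** 0.25

print(f"Absorbed sunlight: {absorbed:.0f} W/m^2")
print(f"Effective radiating temperature: {effective_temp:.0f} K "
      f"({effective_temp - 273.15:.0f} C)")
# Prints roughly 255 K (about -18 C), some 33 C colder than the observed average
# surface temperature; that gap is what the atmosphere's trapping of 'dark heat' explains.
```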

To explain the idea, Fourier used an analogy with the hotbox, a kind of solar oven, invented by the explorer Horace-Bénédict de Saussure. The hotbox was a very well-insulated wooden box, painted black inside, with three layers of glass in the lid. De Saussure had demonstrated that the sun would heat the inside of the box to over 100°C, and that this temperature remained remarkably consistent, even at the top of Mont Blanc, where the outside air is much colder. The glass lets the sun’s rays through, but slows the rate at which the heat can escape. Fourier argued that layers of air in the atmosphere play a similar role to the panes of glass in the hotbox, by trapping the outgoing heat; like the air in the hotbox, the planet would stay warmer than its surroundings. A century later, Fourier’s theory came to be called the ‘greenhouse effect’, perhaps because a greenhouse is more familiar to most people than a hotbox.

While Fourier had observed that air does indeed trap some of the dark heat from the ground, it wasn’t clear why, until the English scientist John Tyndall conducted a series of experiments in the 1850s to measure how well this ‘dark heat’ passes through different gases. Tyndall’s experiments used a four-foot brass tube, sealed at both ends with transparent disks of salt crystal—glass was no good as it also blocks the dark heat. The tube could be filled with different kinds of gas. A tub of boiling water at one end provided a source of heat, and a galvanometer at the other compared the heat received through the tube with the heat from a second tub of boiling water.

Tyndall’s experimental equipment for testing the absorption properties of different gases. The brass tube was first evacuated, and the equipment calibrated by moving the screens until the temperature readings from the two heat sources were equal. Then the gas to be tested was pumped into the brass tube, and the change in deflection of the galvanometer noted. (Figure adapted from Tyndall, 1861)

When Tyndall filled the tube with dry air, or oxygen, or nitrogen, there was very little change. But when he filled it with the hydrocarbon gas ethene, the temperature at the end of the tube dropped dramatically. This was so surprising that he first suspected something had gone wrong with the equipment—perhaps the gas had reacted with the salt, making the ends opaque? After re-testing every aspect of the equipment, he finally concluded that it was the ethene gas itself that was blocking the heat. He went on to test dozens of other gases and vapours, and found that more complex chemicals such as vapours of alcohols and oils were the strongest heat absorbers, while pure elements such as oxygen and nitrogen had the least effect.

Why do some gases allow visible light through, but block infra-red? It turns out that the molecules of each gas react to different wavelengths of light, depending on the molecule’s shape, similar to the way sound waves of just the right wavelength can cause a wine glass to resonate. Each type of molecule will vibrate when certain wavelengths of light hit it, making it stretch, contract, or rotate. So the molecule gains a little energy, and the light rays lose some. Scientists use this to determine which gases are in distant stars, because each gas makes a distinct pattern of dark lines across the spectrum of white light that has passed through it.

Tyndall noticed that gases made of more than one element, such as water vapour (H2O) or carbon dioxide (CO2), tend to absorb more energy from the infra-red rays than gases made of a single type of element, such as hydrogen or oxygen. He argued this provides evidence of atomic bonding: it wouldn’t happen if water was just a mixture of oxygen and hydrogen atoms. On this, he was partially right. We now know that what matters isn’t just the existence of molecular bonds, but whether the molecules are asymmetric—after all, oxygen gas molecules (O2) are also pairs of atoms bonded together. The more complex the molecular structure, the more asymmetries it has, and the more modes of vibration and spin the bonds have, allowing them to absorb energy at a wider range of wavelengths. Today, we call any gas that absorbs parts of the infra-red spectrum a greenhouse gas. Compounds such as methane (CH4) and ethene (C2H4) absorb energy at more wavelengths than carbon dioxide, making them stronger greenhouse gases.

Tyndall’s experiments showed that greenhouse gases absorb infra-red even when the gases are only present in very small amounts. Increasing the concentration of the gas increases the amount of energy absorbed, but only up to a point. Once the concentration is high enough, adding more gas molecules has no further effect—all of the rays in that gas’s absorption bands have been blocked, while rays of other wavelengths pass through unaffected. Today, we call this saturation.
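
A rough way to picture saturation is the exponential absorption law for radiation passing through a column of gas (a minimal sketch of transmission through something like Tyndall’s tube, assuming a single absorption band and ignoring re-emission; the absorption coefficient is arbitrary, chosen only to show the shape of the curve):

```python
# Saturation of absorption in a single band: a minimal sketch using the exponential
# (Beer-Lambert) absorption law. The coefficient is arbitrary and illustrative only,
# not a property of any particular gas.
import math

def transmitted_fraction(concentration, absorption_coeff=1.0, path_length=1.0):
    """Fraction of radiation (within the absorbing band) that makes it through."""
    return math.exp(-absorption_coeff * concentration * path_length)

for conc in [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]:
    absorbed = 1.0 - transmitted_fraction(conc)
    print(f"concentration {conc:5.1f}: {absorbed:6.1%} of the band absorbed")

# The first increments of gas absorb a lot; beyond a few units of concentration,
# almost everything in the band is already blocked, so adding more gas makes little
# further difference. Wavelengths outside the band pass through unaffected.
```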

Tyndall concluded that, because of its abundance in the atmosphere, water vapour is responsible for most of the heat trapping effect, with carbon dioxide second. Some of the other vapours he tested have a much stronger absorption effect, but are so rare in the atmosphere they contribute little to the overall effect. Tyndall clearly understood the implications of his experiments for the earth’s climate, arguing that it explains why, for example, temperatures in dry regions such as deserts drop overnight far more than in more humid regions. In the 1861 paper describing his experimental results, Tyndall argued that any change in the levels of water vapour and carbon dioxide, “must produce a change of climate”. He speculated that “Such changes in fact may have produced all the mutations of climate which the researches of geologists reveal”.

Today I’ve been tracking down the origin of the term “Greenhouse Effect”. The term itself is problematic, because it only works as a weak metaphor: both the atmosphere and a greenhouse let the sun’s rays through, and then trap some of the resulting heat. But the mechanisms are different. A greenhouse stays warm by preventing warm air from escaping. In other words, it blocks convection. The atmosphere keeps the planet warm by preventing (some wavelengths of) infra-red radiation from escaping. The “greenhouse effect” is really the result of many layers of air, each absorbing infra-red from the layer below, and then re-emitting it both up and down. The rate at which the planet then loses heat is determined by the average temperature of the topmost layer of air, where this infra-red finally escapes to space. So not really like a greenhouse at all.

So how did the effect acquire this name? The 19th century French mathematician Joseph Fourier is usually credited as the originator of the idea in the 1820s. However, it turns out he never used the term, and as James Fleming (1999) points out, most authors writing about the history of the greenhouse effect cite only secondary sources on this, without actually reading any of Fourier’s work. Fourier does mention greenhouses in his 1822 classic “Analytical Theory of Heat”, but not in connection with planetary temperatures. The book was published in French, so he uses the French “les serres”, but it appears only once, in a passage on properties of heat in enclosed spaces. The relevant paragraph translates as:

In general the theorems concerning the heating of air in closed spaces extend to a great variety of problems. It would be useful to revert to them when we wish to foresee and regulate the temperature with precision, as in the case of green-houses, drying-houses, sheep-folds, work-shops, or in many civil establishments, such as hospitals, barracks, places of assembly. [Fourier, 1822; appears on p73 of the edition translated by Alexander Freeman, published 1878, Cambridge University Press]

In his other writings, Fourier did hypothesize that the atmosphere plays a role in slowing the rate of heat loss from the surface of the planet to space, hence keeping the ground warmer than it might otherwise be. However, he never identified a mechanism, as the properties of what we now call greenhouse gases weren’t established until John Tyndall’s experiments in the 1850s. In explaining his hypothesis, Fourier refers to a “hotbox”, a device invented by the explorer de Saussure, to measure the intensity of the sun’s rays. The hotbox had several layers of glass in the lid which allowed the sun’s rays to enter, but blocked the escape of the heated air via convection. But it was only a metaphor. Fourier understood that whatever the heat trapping mechanism in the atmosphere was, it didn’t actually block convection.

Svante Arrhenius was the first to attempt a detailed calculation of the effect of changing levels of carbon dioxide in the atmosphere, in 1896, in his quest to test a hypothesis that the ice ages were caused by a drop in CO2. Accordingly, he’s also sometimes credited with inventing the term. However, he also didn’t use the term “greenhouse” in his papers, although he did invoke a metaphor similar to Fourier’s, using the Swedish word “drivbänk”, which translates as hotbed (Update: or possibly “hothouse” – see comments).

So the term “greenhouse effect” wasn’t coined until the 20th Century. Several of the papers I’ve come across suggest that the first use of the term “greenhouse” in this connection in print was in 1909, in a paper by Wood. This seems rather implausible though, because the paper in question is really only a brief commentary explaining that the idea of a “greenhouse effect” makes no sense, as a simple experiment shows that greenhouses don’t work by trapping outgoing infra-red radiation. The paper is clearly reacting to something previously published on the greenhouse effect, which Wood appears to take way too literally.

A little digging produces a 1901 paper by Nils Ekholm, a Swedish meteorologist who was a close colleague of Arrhenius, which does indeed use the term ‘greenhouse’. At first sight, he seems to use the term more literally than is warranted, although in subsequent paragraphs, he explains the key mechanism fairly clearly:

The atmosphere plays a very important part of a double character as to the temperature at the earth’s surface, of which the one was first pointed out by Fourier, the other by Tyndall. Firstly, the atmosphere may act like the glass of a green-house, letting through the light rays of the sun relatively easily, and absorbing a great part of the dark rays emitted from the ground, and it thereby may raise the mean temperature of the earth’s surface. Secondly, the atmosphere acts as a heat store placed between the relatively warm ground and the cold space, and thereby lessens in a high degree the annual, diurnal, and local variations of the temperature.

There are two qualities of the atmosphere that produce these effects. The one is that the temperature of the atmosphere generally decreases with the height above the ground or the sea-level, owing partly to the dynamical heating of descending air currents and the dynamical cooling of ascending ones, as is explained in the mechanical theory of heat. The other is that the atmosphere, absorbing but little of the insolation and the most of the radiation from the ground, receives a considerable part of its heat store from the ground by means of radiation, contact, convection, and conduction, whereas the earth’s surface is heated principally by direct radiation from the sun through the transparent air.

It follows from this that the radiation from the earth into space does not go on directly from the ground, but on the average from a layer of the atmosphere having a considerable height above sea-level. The height of that layer depends on the thermal quality of the atmosphere, and will vary with that quality. The greater is the absorbing power of the air for heat rays emitted from the ground, the higher will that layer be. But the higher the layer, the lower is its temperature relatively to that of the ground; and as the radiation from the layer into space is the less the lower its temperature is, it follows that the ground will be hotter the higher the radiating layer is. [Ekholm, 1901, p19-20]

At this point, it’s still not called the “greenhouse effect”, but this metaphor does appear to have become a standard way of introducing the concept. But in 1907, the English scientist John Henry Poynting confidently introduces the term “greenhouse effect”, in his criticism of Percival Lowell’s analysis of the temperature of the planets. He uses it in scare quotes throughout the paper, which suggests the term is newly minted:

Prof. Lowell’s paper in the July number of the Philosophical Magazine marks an important advance in the evaluation of planetary temperatures, inasmuch as he takes into account the effect of planetary atmospheres in a much more detailed way than any previous writer. But he pays hardly any attention to the “blanketing effect,” or, as I prefer to call it, the “greenhouse effect” of the atmosphere. [Poynting, 1907, p749]

And he goes on:

The “greenhouse effect” of the atmosphere may perhaps be understood more easily if we first consider the case of a greenhouse with horizontal roof of extent so large compared with its height above the ground that the effect of the edges may be neglected. Let us suppose that it is exposed to a vertical sun, and that the ground under the glass is “black” or a full absorber. We shall neglect the conduction and convection by the air in the greenhouse. [Poynting, 1907, p750]

He then goes on to explore the mathematics of heat transfer in this idealized greenhouse. Unfortunately, he ignores Ekholm’s crucial observation that it is the rate of heat loss at the upper atmosphere that matters, so his calculations are mostly useless. But his description of the mechanism does appear to have taken hold as the dominant explanation. The following year, Frank Very published a response (in the same journal), using the term “Greenhouse Theory” in the title of the paper. He criticizes Poynting’s idealized greenhouse as way too simplistic, but suggests a slightly better metaphor is a set of greenhouses stacked one above another, each of which traps a little of the heat from the one below:

It is true that Professor Lowell does not consider the greenhouse effect analytically and obviously, but it is nevertheless implicitly contained in his deduction of the heat retained, obtained by the method of day and night averages. The method does not specify whether the heat is lost by radiation or by some more circuitous process; and thus it would not be precise to label the retaining power of the atmosphere a “greenhouse effect” without giving a somewhat wider interpretation to this name. If it be permitted to extend the meaning of the term to cover a variety of processes which lead to identical results, the deduction of the loss of surface heat by comparison of day and night temperatures is directly concerned with this wider “greenhouse effect.” [Very, 1908, p477]

Between them, Poynting and Very are attempting to pin down whether the “greenhouse effect” is a useful metaphor, and how the heat transfer mechanisms of planetary atmospheres actually work. But in so doing, they help establish the name. Wood’s 1909 comment is clearly a reaction to this discussion, but one that fails to understand what is being discussed. It’s eerily reminiscent of any modern discussion of the greenhouse effect: whenever any two scientists discuss the details of how the greenhouse effect works, you can be sure someone will come along sooner or later claiming to debunk the idea by completely misunderstanding it.

In summary, I think it’s fair to credit Poynting as the originator of the term “greenhouse effect”, but with a special mention to Ekholm for both his prior use of the word “greenhouse”, and his much better explanation of the effect. (Unless I missed some others?)

References

Arrhenius, S. (1896). On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground. Philosophical Magazine and Journal of Science, 41(251). doi:10.1080/14786449608620846

Ekholm, N. (1901). On The Variations Of The Climate Of The Geological And Historical Past And Their Causes. Quarterly Journal of the Royal Meteorological Society, 27(117), 1–62. doi:10.1002/qj.49702711702

Fleming, J. R. (1999). Joseph Fourier, the “greenhouse effect”, and the quest for a universal theory of terrestrial temperatures. Endeavour, 23(2), 72–75. doi:10.1016/S0160-9327(99)01210-7

Fourier, J. (1822). Théorie Analytique de la Chaleur (“Analytical Theory of Heat”). Paris: Chez Firmin Didot, Père et Fils.

Fourier, J. (1827). On the Temperatures of the Terrestrial Sphere and Interplanetary Space. Mémoires de l’Académie Royale Des Sciences, 7, 569–604. (translation by Ray Pierrehumbert)

Poynting, J. H. (1907). On Prof. Lowell’s Method for Evaluating the Surface-temperatures of the Planets; with an Attempt to Represent the Effect of Day and Night on the Temperature of the Earth. Philosophical Magazine, 14(84), 749–760.

Very, F. W. (1908). The Greenhouse Theory and Planetary Temperatures. Philosophical Magazine, 16(93), 462–480.

Wood, R. W. (1909). Note on the Theory of the Greenhouse. Philosophical Magazine, 17, 319–320. Retrieved from http://scienceblogs.com/stoat/2011/01/07/r-w-wood-note-on-the-theory-of/

I’ve been trawling through the final draft of the new IPCC assessment report that was released last week, to extract some highlights for a talk I gave yesterday. Here’s what I think are its key messages:

  1. The warming is unequivocal.
  2. Humans caused the majority of it.
  3. The warming is largely irreversible.
  4. Most of the heat is going into the oceans.
  5. Current rates of ocean acidification are unprecedented.
  6. We have to choose which future we want very soon.
  7. To stay below 2°C of warming, the world must become carbon negative.
  8. To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Before I elaborate on these, a little preamble. The IPCC was set up in 1988 as a UN intergovernmental body to provide an overview of the science. Its job is to assess what the peer-reviewed science says, in order to inform policymaking, but it is not tasked with making specific policy recommendations. The IPCC and its workings seem to be widely misunderstood in the media. The dwindling group of people who are still in denial about climate change particularly like to indulge in IPCC-bashing, which seems like a classic case of ‘blame the messenger’. The IPCC itself has a very small staff (no more than a dozen or so people). However, the assessment reports are written and reviewed by a very large team of scientists (several thousand), all of whom volunteer their time to work on the reports. The scientists are organised into three working groups: WG1 focuses on the physical science basis, WG2 focuses on impacts and climate adaptation, and WG3 focuses on how climate mitigation can be achieved.

Last week, just the WG1 report was released as a final draft, although it was accompanied by a bigger media event around the approval of the final wording of the WG1 “Summary for Policymakers”. The final version of the full WG1 report, along with the WG2 and WG3 reports, is not due out until spring next year. That means it’s likely to be subject to minor editing/correcting, and some of the figures might end up re-drawn. Even so, most of the text is unlikely to change, and the major findings can be considered final. Here’s my take on the most important findings, along with a key figure to illustrate each.

(1) The warming is unequivocal

The text of the summary for policymakers says “Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased.”

(Fig SPM.1) Observed globally averaged combined land and ocean surface temperature anomaly 1850-2012. The top panel shows the annual values; the bottom panel shows decadal means. (Note: Anomalies are relative to the mean of 1961-1990).

Unfortunately, there has been much play in the press around a silly idea that the warming has “paused” in the last decade. If you squint at the last few years of the top graph, you might be able to convince yourself that the temperature has been nearly flat for a few years, but only if you cherry pick your starting date, and use a period that’s too short to count as climate. When you look at it in the context of an entire century and longer, such arguments are clearly just wishful thinking.

The other thing to point out here is that the rate of warming is unprecedented. “With very high confidence, the current rates of CO2, CH4 and N2O rise in atmospheric concentrations and the associated radiative forcing are unprecedented with respect to the highest resolution ice core records of the last 22,000 years”, and there is “medium confidence that the rate of change of the observed greenhouse gas rise is also unprecedented compared with the lower resolution records of the past 800,000 years.” In other words, there is nothing in any of the ice core records that is comparable to what we have done to the atmosphere over the last century. The earth has warmed and cooled in the past due to natural cycles, but never anywhere near as fast as modern climate change.

(2) Humans caused the majority of it

The summary for policymakers says “It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century”.

(Box 13.1 fig 1) The Earth’s energy budget from 1970 to 2011. Cumulative energy flux (in zettaJoules!) into the Earth system from well-mixed and short-lived greenhouse gases, solar forcing, changes in tropospheric aerosol forcing, volcanic forcing and surface albedo, (relative to 1860–1879) are shown by the coloured lines and these are added to give the cumulative energy inflow (black; including black carbon on snow and combined contrails and contrail induced cirrus, not shown separately).

This chart summarizes the impact of different drivers of warming and/or cooling, by showing the total cumulative energy added to the earth system since 1970 from each driver. Note that the chart is in zettajoules (10^21 J). For comparison, one zettajoule is roughly the energy that would be released by 16 million bombs of the size of the one dropped on Hiroshima. The world’s total annual global energy consumption is about 0.5 ZJ.
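
Those comparisons are easy to sanity-check (a quick sketch, assuming a Hiroshima yield of about 15 kilotons of TNT and an average world power consumption of about 16 terawatts):

```python
# Back-of-the-envelope check on the zettajoule comparisons above.
# Assumptions: Hiroshima bomb yield ~15 kilotons of TNT, 1 ton TNT = 4.184e9 J,
# and average world power consumption ~16 TW.

ZETTAJOULE = 1e21                      # joules
hiroshima = 15_000 * 4.184e9           # ~6.3e13 J per bomb
world_power = 16e12                    # watts
seconds_per_year = 365.25 * 24 * 3600

print(f"Hiroshima bombs per ZJ: {ZETTAJOULE / hiroshima:,.0f}")   # roughly 16 million
print(f"World energy use per year: "
      f"{world_power * seconds_per_year / ZETTAJOULE:.2f} ZJ")    # roughly 0.5 ZJ
```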

Long lived greenhouse gases, such as CO2, contribute the majority of the warming (the purple line). Aerosols, such as particles of industrial pollution, block out sunlight and cause some cooling (the dark blue line), but nowhere near enough to offset the warming from greenhouse gases. Note that aerosols have the largest uncertainty bar; much of the remaining uncertainty about the likely magnitude of future climate warming is due to uncertainty about how much of the warming might be offset by aerosols. The uncertainty on the aerosols curve is, in turn, responsible for most of the uncertainty on the black line, which shows the total effect if you add up all the individual contributions.

The graph also puts into perspective some of other things that people like to blame for climate change, including changes in energy received from the sun (‘solar’), and the impact of volcanoes. Changes in the sun (shown in orange) are tiny compared to greenhouse gases, but do show a very slight warming effect. Volcanoes have a larger (cooling) effect, but it is short-lived. There were two major volcanic eruptions in this period, El Chichón in 1982 and Pinatubo in 1991. Each can be clearly seen in the graph as an immediate cooling effect, which then tapers off after a couple of years.

(3) The warming is largely irreversible

The summary for policymakers says “A large fraction of anthropogenic climate change resulting from CO2 emissions is irreversible on a multi-century to millennial time scale, except in the case of a large net removal of CO2 from the atmosphere over a sustained period. Surface temperatures will remain approximately constant at elevated levels for many centuries after a complete cessation of net anthropogenic CO2 emissions.”

(Fig 12.43) Results from 1,000 year simulations from EMICs on the 4 RCPs up to the year 2300, followed by constant composition until 3000.

The conclusions about irreversibility of climate change are greatly strengthened from the previous assessment report, as recent research has explored this in much more detail. The problem is that a significant fraction of our greenhouse gas emissions stay in the atmosphere for thousands of years, so even if we stop emitting them altogether, they hang around, contributing to more warming. In simple terms, whatever peak temperature we reach, we’re stuck at roughly that temperature for millennia, unless we can figure out a way to artificially remove massive amounts of CO2 from the atmosphere.

The graph is the result of an experiment that runs (simplified) models for a thousand years into the future. The major climate models are generally too computationally expensive to be run for such a long simulation, so these experiments use simpler models, so-called EMICs (Earth system Models of Intermediate Complexity).

The four curves in this figure correspond to four “Representative Concentration Pathways”, which map out four ways in which the composition of the atmosphere is likely to change in the future. These four RCPs were picked to capture four possible futures: two in which there is little to no coordinated action on reducing global emissions (worst case – RCP8.5 and best case – RCP6) and two in which there is serious global action on climate change (worst case – RCP4.5 and best case – RCP2.6). A simple way to think about them is as follows. RCP8.5 represents ‘business as usual’ – strong economic development for the rest of this century, driven primarily by dependence on fossil fuels. RCP6 represents a world with no global coordinated climate policy, but where lots of localized clean energy initiatives do manage to stabilize emissions by the latter half of the century. RCP4.5 represents a world that implements strong limits on fossil fuel emissions, such that greenhouse gas emissions peak by mid-century and then start to fall. RCP2.6 is a world in which emissions peak in the next few years, and then fall dramatically, so that the world becomes carbon neutral by about mid-century.

Note that in RCP2.6 the temperature does fall, after reaching a peak just below 2°C of warming over pre-industrial levels. That’s because RCP2.6 is a scenario in which concentrations of greenhouse gases in the atmosphere start to fall before the end of the century. This is only possible if we reduce global emissions so fast that we achieve carbon neutrality soon after mid-century, and then go carbon negative. By carbon negative, I mean that globally, each year, we remove more CO2 from the atmosphere than we add. Whether this is possible is an interesting question. But even if it is, the model results show there is no time within the next thousand years when it is anywhere near as cool as it is today.

(4) Most of the heat is going into the oceans

The oceans have a huge thermal mass compared to the atmosphere and land surface. They act as the planet’s heat storage and transportation system, as the ocean currents redistribute the heat. This is important because if we look at the global surface temperature as an indication of warming, we’re only getting some of the picture. The oceans act as a huge storage heater, and will continue to warm up the lower atmosphere (no matter what changes we make to the atmosphere in the future).

(Box 3.1 Fig 1) Plot of energy accumulation in ZJ (1 ZJ = 10^21 J) within distinct components of Earth’s climate system relative to 1971 and from 1971–2010 unless otherwise indicated. See text for data sources. Ocean warming (heat content change) dominates, with the upper ocean (light blue, above 700 m) contributing more than the deep ocean (dark blue, below 700 m; including below 2000 m estimates starting from 1992). Ice melt (light grey; for glaciers and ice caps, Greenland and Antarctic ice sheet estimates starting from 1992, and Arctic sea ice estimate from 1979–2008); continental (land) warming (orange); and atmospheric warming (purple; estimate starting from 1979) make smaller contributions. Uncertainty in the ocean estimate also dominates the total uncertainty (dot-dashed lines about the error from all five components at 90% confidence intervals).

Note the relationship between this figure (which shows where the heat goes) and the figure I showed above that shows the change in cumulative energy budget from different sources. Both graphs show zettajoules accumulating over about the same period (1970-2011). But the first graph has a cumulative total just short of 800 ZJ by the end of the period, while this one shows the earth storing “only” about 300 ZJ of this. Where did the remaining energy go? Because the earth’s temperature rose during this period, it also lost more and more energy back into space. When greenhouse gases trap heat, the earth’s temperature keeps rising until outgoing energy and incoming energy are in balance again.

(5) Current rates of ocean acidification are unprecedented.

The IPCC report says “The pH of seawater has decreased by 0.1 since the beginning of the industrial era, corresponding to a 26% increase in hydrogen ion concentration. … It is virtually certain that the increased storage of carbon by the ocean will increase acidification in the future, continuing the observed trends of the past decades. … Estimates of future atmospheric and oceanic carbon dioxide concentrations indicate that, by the end of this century, the average surface ocean pH could be lower than it has been for more than 50 million years”.

(Fig SPM.7c) CMIP5 multi-model simulated time series from 1950 to 2100 for global mean ocean surface pH. Time series of projections and a measure of uncertainty (shading) are shown for scenarios RCP2.6 (blue) and RCP8.5 (red). Black (grey shading) is the modelled historical evolution using historical reconstructed forcings. [The numbers indicate the number of models used in each ensemble.]

Ocean acidification has sometimes been ignored in discussions about climate change, but it is a much simpler process, and is much easier to calculate (notice that the uncertainty range on the graph above is much smaller than on most of the other graphs). This graph shows the projected acidification in the best and worst case scenarios (RCP2.6 and RCP8.5). Recall that RCP8.5 is the “business as usual” future.

Note that this doesn’t mean the ocean will become acidic. The ocean has always been slightly alkaline – well above the neutral value of pH 7. So “acidification” refers to a drop in pH, rather than a drop below pH 7. As this continues, the ocean becomes steadily less alkaline. Unfortunately, as the pH drops, the ocean stops being supersaturated with respect to calcium carbonate. If it’s no longer supersaturated, anything made of calcium carbonate starts dissolving, and corals and shellfish can no longer build their shells and skeletons. If you kill these off, the entire ocean foodchain is affected. Here’s what the IPCC report says: “Surface waters are projected to become seasonally corrosive to aragonite in parts of the Arctic and in some coastal upwelling systems within a decade, and in parts of the Southern Ocean within 1–3 decades in most scenarios. Aragonite, a less stable form of calcium carbonate, undersaturation becomes widespread in these regions at atmospheric CO2 levels of 500–600 ppm”.
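
The 26% figure quoted above follows directly from the 0.1 drop in pH, because pH is a logarithmic scale; a quick check:

```python
# pH is -log10 of the hydrogen ion concentration, so a drop of 0.1 pH units
# means the concentration has risen by a factor of 10**0.1.
ph_drop = 0.1
increase = 10 ** ph_drop - 1
print(f"A pH drop of {ph_drop} means H+ concentration rises by {increase:.0%}")  # ~26%
```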

(6) We have to choose which future we want very soon.

In the previous IPCC reports, projections of future climate change were based on a set of scenarios that mapped out different ways in which human society might develop over the rest of this century, taking account of likely changes in population, economic development and technological innovation. However, none of the old scenarios took into account the impact of strong global efforts at climate mitigation. In other words, they all represented futures in which we don’t take serious action on climate change. For this report, the new “RCPs” have been chosen to allow us to explore the choice we face.

This chart sums it up nicely. If we do nothing about climate change, we’re choosing a path that will look most like RCP8.5. Recall that this is the one where emissions keep rising just as they have done throughout the 20th century. On the other hand, if we get serious about curbing emissions, we’ll end up in a future that’s probably somewhere between RCP2.6 and RCP4.5 (the two blue lines). All of these futures give us a much warmer planet. All of these futures will involve many challenges as we adapt to life on a warmer planet. But by curbing emissions soon, we can minimize this future warming.

(Fig 12.5) Time series of global annual mean surface air temperature anomalies (relative to 1986–2005) from CMIP5 concentration-driven experiments. Projections are shown for each RCP for the multi model mean (solid lines) and the 5–95% range (±1.64 standard deviation) across the distribution of individual models (shading). Discontinuities at 2100 are due to different numbers of models performing the extension runs beyond the 21st century and have no physical meaning. Only one ensemble member is used from each model and numbers in the figure indicate the number of different models contributing to the different time periods. No ranges are given for the RCP6.0 projections beyond 2100 as only two models are available.

Note also that the uncertainty range (the shaded region) is much bigger for RCP8.5 than it is for the other scenarios. The more the climate changes beyond what we’ve experienced in the recent past, the harder it is to predict what will happen. We tend to use the difference across different models as an indication of uncertainty (the coloured numbers show how many different models participated in each experiment). But there’s also the possibility of “unknown unknowns” – surprises that aren’t in the models, so the uncertainty range is likely to be even bigger than this graph shows.

(7) To stay below 2°C of warming, the world must become carbon negative.

Only one of the four future scenarios (RCP2.6) shows us staying below the UN’s commitment to no more than 2°C of warming. In RCP2.6, emissions peak soon (within the next decade or so), and then drop fast, under a stronger emissions reduction policy than anyone has ever proposed in international negotiations to date. For example, the post-Kyoto negotiations have looked at targets in the region of 80% reductions in emissions over, say, a 50-year period. In contrast, the chart below shows something far more ambitious: we need more than 100% emissions reductions. We need to become carbon negative:

(Figure 12.46) a) CO2 emissions for the RCP2.6 scenario (black) and three illustrative modified emission pathways leading to the same warming, b) global temperature change relative to preindustrial for the pathways shown in panel (a).

The graph on the left shows four possible CO2 emissions paths that would all deliver the RCP2.6 scenario, while the graph on the right shows the resulting temperature change for these four. They all give similar results for temperature change, but differ in how we go about reducing emissions. For example, the black curve shows CO2 emissions peaking by 2020 at a level barely above today’s, and then dropping steadily until emissions are below zero by about 2070. Two other curves show what happens if emissions peak higher and later: the eventual reduction has to happen much more steeply. The blue dashed curve offers an implausible scenario, so consider it a thought experiment: if we held emissions constant at today’s level, we have exactly 30 years left before we would have to instantly reduce emissions to zero forever.

Notice where the zero point is on the scale on that left-hand graph. Ignoring the unrealistic blue dashed curve, all of these pathways require the world to go net carbon negative sometime soon after mid-century. None of the emissions targets currently being discussed by any government anywhere in the world are sufficient to achieve this. We should be talking about how to become carbon negative.

One further detail. The graph above shows the temperature response staying well under 2°C for all four curves, although the uncertainty band reaches up to 2°C. But note that this analysis deals only with CO2. The other greenhouse gases have to be accounted for too, and together they push the temperature change right up to the 2°C threshold. There’s no margin for error.

(8) To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Perhaps the most profound advance since the previous IPCC report is a characterization of our global carbon budget. This is based on a finding that has emerged strongly from a number of studies in the last few years: the expected temperature change has a simple linear relationship with cumulative CO2 emissions since the beginning of the industrial era:

(Figure SPM.10) Global mean surface temperature increase as a function of cumulative total global CO2 emissions from various lines of evidence. Multi-model results from a hierarchy of climate-carbon cycle models for each RCP until 2100 are shown with coloured lines and decadal means (dots). Some decadal means are indicated for clarity (e.g., 2050 indicating the decade 2041−2050). Model results over the historical period (1860–2010) are indicated in black. The coloured plume illustrates the multi-model spread over the four RCP scenarios and fades with the decreasing number of available models in RCP8.5. The multi-model mean and range simulated by CMIP5 models, forced by a CO2 increase of 1% per year (1% per year CO2 simulations), is given by the thin black line and grey area. For a specific amount of cumulative CO2 emissions, the 1% per year CO2 simulations exhibit lower warming than those driven by RCPs, which include additional non-CO2 drivers. All values are given relative to the 1861−1880 base period. Decadal averages are connected by straight lines.

The chart is a little hard to follow, but the main idea should be clear: whichever experiment we carry out, the results tend to lie on a straight line on this graph. You do get a slightly different slope in one idealized experiment, the “1% per year CO2” experiment, in which only CO2 rises and the other non-CO2 drivers are left out. All the more realistic scenarios lie in the orange band, and all have about the same slope.

This linear relationship is a useful insight, because it means that for any target ceiling for temperature rise (e.g. the UN’s commitment to not allow warming to rise more than 2°C above pre-industrial levels), we can easily determine a cumulative emissions budget that corresponds to that temperature. So that brings us to the most important paragraph in the entire report, which occurs towards the end of the summary for policymakers:

Limiting the warming caused by anthropogenic CO2 emissions alone with a probability of >33%, >50%, and >66% to less than 2°C since the period 1861–1880, will require cumulative CO2 emissions from all anthropogenic sources to stay between 0 and about 1560 GtC, 0 and about 1210 GtC, and 0 and about 1000 GtC since that period respectively. These upper amounts are reduced to about 880 GtC, 840 GtC, and 800 GtC respectively, when accounting for non-CO2 forcings as in RCP2.6. An amount of 531 [446 to 616] GtC, was already emitted by 2011.

Unfortunately, this paragraph is a little hard to follow, perhaps because there was a major battle over the exact wording of it in the final few hours of inter-governmental review of the “Summary for Policymakers”. Several oil states objected to any language that put a fixed limit on our total carbon budget. The compromise was to give several different targets for different levels of risk. Let’s unpick them. First notice that the targets in the first sentence are based on looking at CO2 emissions alone; the targets in the second sentence take into account other greenhouse gases and other earth system feedbacks (e.g. release of methane from melting permafrost), and so are much lower. It’s these targets that really matter:

  • To give us a one third (33%) chance of staying below 2°C of warming over pre-industrial levels, we cannot ever emit more than 880 gigatonnes of carbon.
  • To give us a 50% chance, we cannot ever emit more than 840 gigatonnes of carbon.
  • To give us a 66% chance, we cannot ever emit more than 800 gigatonnes of carbon.
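
Working out what’s left is simple arithmetic (a quick sketch, using the 531 GtC already emitted by 2011 from the quoted paragraph; all figures are gigatonnes of carbon, not CO2):

```python
# Remaining carbon budget, using the SPM numbers quoted above: total budgets of
# 880 / 840 / 800 GtC (for a >33% / >50% / >66% chance of staying below 2C,
# accounting for non-CO2 forcings), and 531 GtC already emitted by 2011.
budgets = {">33% chance of staying below 2C": 880,
           ">50% chance": 840,
           ">66% chance": 800}
already_emitted = 531   # GtC, central estimate for emissions up to 2011

for label, total in budgets.items():
    remaining = total - already_emitted
    print(f"{label}: about {remaining} GtC left to emit")

# Roughly 270-350 GtC remain, depending on the level of risk we accept.
```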

Since the beginning of industrialization, we have already emitted a little over 530 gigatonnes (as of 2011). So our remaining budget is somewhere between about 270 and 350 gigatonnes of carbon, depending on how much risk we’re willing to accept. Existing known fossil fuel reserves are enough to release at least 1000 gigatonnes. New discoveries and unconventional sources will likely more than double this. That leads to one inescapable conclusion:

Most of the remaining fossil fuel reserves must stay buried in the ground.

We’ve never done that before. There is no political or economic system anywhere in the world currently that can persuade an energy company to leave a valuable fossil fuel resource untapped. There is no government in the world that has demonstrated the ability to forgo the economic wealth from natural resource extraction, for the good of the planet as a whole. We’re lacking both the political will and the political institutions to achieve this. Finding a way to do so presents us with a challenge far bigger than we ever imagined.

Update (10 Oct 2013): An earlier version of this post omitted the phrase “To stay below 2°C of warming” from the last key point.

My first year seminar course, PMU199 Climate Change: Software, Science and Society is up and running again this term. The course looks at the role of computational models in both the science and the societal decision-making around climate change. The students taking the course come from many different departments across arts and science, and we get to explore key concepts in a small group setting, while developing our communication skills.

As an initial exercise, this year’s cohort of students have written their first posts for the course blog (assignment: write a blog post on any aspect of climate change that interests you). Feel free to comment on their posts, but please keep it constructive – the students get a chance to revise their posts before we grade them (and if you’re curious, here’s the rubric).

Incidentally, for the course this year, I’ve adopted Andrew Dessler’s new book, Introduction to Modern Climate Change as the course text. The book was just published earlier this year, and I must say, it’s by far the best introductory book on climate science that I’ve seen. My students tell me they really like the book (despite the price), as it explains concepts simply and clearly, and they especially like the fact that it covers policy and society issues as well as the science. I really like the discussion in chapter 1 on who to believe, in which the author explains that readers ought to be skeptical of anyone writing on this topic (including himself), and then lays out some suggestions for how to decide who to believe. Oh, and I love the fact that there’s an entire chapter later in the book devoted to the idea of exponential growth.

At the CMIP5 workshop earlier this week, one of Ed Hawkins’ charts caught my eye, because he changed how we look at model runs. We’re used to seeing climate models used to explore the range of likely global temperature responses under different future emissions scenarios, and the results presented as a graph of changing temperature over time. For example, this iconic figure from the last IPCC assessment report (click for the original figure and caption at the IPCC site):

These graphs tend to focus too much on the mean temperature response in each scenario (where ‘mean’ means ‘the multi-model mean’). I tend to think the variance is more interesting – both within each scenario (showing differences in the various CMIP3 models on the same scenarios), and across the different scenarios (showing how our future is likely to be affected by the energy choices implicit in each scenario). A few months ago, I blogged about the analysis that Hawkins and Sutton did on these variabilities, to explore how the different sources of uncertainty change as you move from near term to long term. The analysis shows that in the first few decades, the differences in the models dominate (which doesn’t bode well for decadal forecasting – the models are all over the place). But by the end of the century, the differences between the emissions scenarios dominates (i.e. the spread of projections from the different scenarios is significantly bigger than the  disagreements between models). Ed presented an update on this analysis for the CMIP5 models this week, which looks very similar.

But here’s the new thing that caught my eye: Ed included a graph of temperature responses tipped on its side, to answer a different question: how soon will the global temperature exceed the policymakers’ adopted “dangerous” threshold of 2°C, under each emissions scenario? And, again, how big is the uncertainty? This idea was used in a paper last year by Joshi et al., entitled Projections of when temperature change will exceed 2 °C above pre-industrial levels. Here’s their figure 1:

Figure 1 from Joshi et al., 2011

By putting the dates on the Y-axis and temperatures on the X-axis, and cutting off the graph at 2°C, we get a whole new perspective on what the model runs are telling us. For example, it’s now easy to see that in all these scenarios, we pass the 2°C threshold well before the end of the century (whereas the IPCC graph above completely obscures this point), and under the higher emissions scenarios, we get to 3°C by the end of the century.
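
If you want to try this transposition on other projections, the recipe is simple: temperature on the x-axis, year on the y-axis, truncated at the threshold. Here’s a minimal matplotlib sketch, using a purely made-up warming curve rather than real model output:

```python
# Transposing a warming projection: year on the y-axis, temperature on the x-axis,
# truncated at the 2C threshold. The "projection" here is synthetic, for illustration
# only; real use would substitute actual model output.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2000, 2101)
# A made-up smooth warming trajectory (degrees C above pre-industrial)
temperature = 0.8 + 2.5 * (years - 2000) / 100

fig, ax = plt.subplots()
ax.plot(temperature, years)            # note the swapped axes
ax.set_xlim(0.5, 2.0)                  # cut the graph off at the 2C threshold
ax.axvline(2.0, linestyle="--")        # the policy threshold
ax.set_xlabel("Warming above pre-industrial (C)")
ax.set_ylabel("Year")
ax.set_title("When does this (synthetic) trajectory reach 2C?")
plt.show()
```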

A wonderful example of how much difference the choice of presentation makes. I guess I should mention, however, that the idea of a 2°C threshold is completely arbitrary. I’ve asked many different scientists where the idea came from, and they all suggest it’s something the policymakers dreamt up, rather than anything arising out of scientific analysis. The full story is available in Randalls, 2011, “History of the 2°C climate target”.

This week, I’m featuring some of the best blog posts written by the students on my first year undergraduate course, PMU199 Climate Change: Software, Science and Society. The first is by Terry, and it first appeared on the course blog on January 28.

A couple of weeks ago, during one of our sessions (and on his blog), Professor Steve was talking about the extra energy that we are adding to the earth system. He showed us this chart from the last IPCC report in 2007 that summarizes the various radiative forcings from different sources:

Notice how aerosols account for most of the negative radiative forcing. But what are aerosols? What is their direct effect, their contribution in the cloud albedo effect, and do they have any other impact?

Update (Aug 15, 2013): After a discussion on Twitter with Gavin Schmidt, I realised I did the calculation wrong. The reason is interesting: I’d confused radiative forcing with the current energy imbalance at the top of the atmosphere. A rookie mistake, but it shows that climate science can be tricky to understand, and it *really* helps to be able to talk to experts when you’re learning it… [I’ve marked the edits in green]

I’ve been meaning to do this calculation for ages, and finally had an excuse today, as I need it for the first year course I’m teaching on climate change. The question is: how much energy are we currently adding to the earth system due to all those greenhouse gases we’ve added to the atmosphere?

In the literature, the key concept is anthropogenic forcing, by which is meant the extent to which human activities are affecting the energy balance of the earth. When the Earth’s climate is stable, it’s because the planet is in radiative balance, meaning the incoming radiation from the sun and the outgoing radiation from the earth back into space are equal. A planet that’s in radiative balance will generally stay at the same (average) temperature because it’s not gaining or losing energy. If we force it out of balance, then the global average temperature will change.

Physicists express radiative forcing in watts per square meter (W/m2), meaning the number of extra watts of power that the earth is receiving, for each square meter of the earth’s surface. Figure 2.4 from the last IPCC report summarizes the various radiative forcings from different sources. The numbers show best estimates of the overall change from 1750 to 2005 (note the whiskers, which express uncertainty – some of these values are known much better than others):

If you add up the radiative forcing from greenhouse gases, you get a little over 2.5 W/m2. Of course, you also have to subtract the negative forcings from clouds and aerosols (tiny particles of pollution, such as sulphur dioxide), as these have a cooling effect because they block some of the incoming radiation from the sun. So we can look at the forcing that’s just due to greenhouse gases (about 2.5 W/m2), or we can look at the total net anthropogenic forcing that takes into account all the different effects (which is about 1.6 W/m2).

Over the period covered by the chart, 1750-2005, the earth warmed somewhat in response to this radiative forcing. The total incoming energy has increased by about +1.6 W/m2, but the total outgoing energy lost to space has also risen – a warmer planet loses energy faster. The current imbalance between incoming and outgoing energy at the top of the atmosphere is therefore smaller than the total change in forcing over time. Hansen et al. give an estimate of the energy imbalance of 0.58 ± 0.15 W/m2 for the period 2005-2010.

The problem I have with these numbers is that they don’t mean much to most people. Some people try to explain it by asking people to imagine adding a 2 watt light bulb (the kind you get in Christmas lights) over each square meter of the planet, which is on continuously day and night. But I don’t think this really helps much, as most people (including me) do not have a good intuition for how many square meters the Earth’s surface has, and anyway, we tend to think of a Christmas tree light bulb as using a trivially small amount of power. According to Wikipedia, the Earth’s surface area is 510 million square kilometers, which is 510 trillion square meters.

So, doing the maths, that gives us a change in incoming energy of about 1,200 trillion watts (1.2 petawatts) for just the anthropogenic greenhouse gases, or about 0.8 petawatts overall when we subtract the cooling effect of changes in clouds and aerosols. But some of this extra energy is being lost back into space. Using the current energy imbalance of 0.58 W/m2, the planet is gaining about 0.3 petawatts at the moment.

But how big is a petawatt? A petawatt is 10^15 watts. Wikipedia tells us that the average total global power consumption of the human world in 2010 was about 16 terawatts (1 petawatt = 1000 terawatts). So, human energy consumption is dwarfed by the extra energy currently being absorbed by the planet due to climate change: the planet is currently gaining about 18 watts of extra power for each 1 watt of power humans actually use.

Note: Before anyone complains, I’ve deliberately conflated energy and power above, because the difference doesn’t really matter for my main point. Power is work per unit of time, and is measured in watts; Energy is better expressed in joules, calories, or kilowatt hours (kWh). To be technically correct, I should say that the earth is getting about 300 terawatt hours of energy per hour due to anthropogenic climate change, and humans use about 16 terawatt hours of energy per hour. The ratio is still approximately 18.

Out of interest, you can also convert it to calories. 1 kWh is about 0.8 million calories. So, we’re force-feeding the earth about 2 x 10^17 (200,000,000,000,000,000) calories every hour. Yikes.
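For anyone who wants to check my arithmetic, here’s a minimal sketch of the whole calculation (the forcing and imbalance values are the ones quoted above; the rest is just unit conversion, using the slightly more precise 0.86 million calories per kWh):

```python
# Back-of-the-envelope check of the numbers in the post above.
EARTH_SURFACE_M2 = 510e12   # 510 million km^2 = 510 trillion m^2

ghg_forcing = 2.5           # W/m^2, greenhouse gases only
net_forcing = 1.6           # W/m^2, net anthropogenic forcing
imbalance   = 0.58          # W/m^2, current top-of-atmosphere imbalance
human_power = 16e12         # W, total global human power consumption, 2010

def to_petawatts(w_per_m2):
    """Convert a global-mean forcing in W/m^2 to total power in petawatts."""
    return w_per_m2 * EARTH_SURFACE_M2 / 1e15

print(f"GHG forcing:  {to_petawatts(ghg_forcing):.1f} PW")      # ~1.2-1.3 PW
print(f"Net forcing:  {to_petawatts(net_forcing):.1f} PW")      # ~0.8 PW
print(f"Current gain: {to_petawatts(imbalance):.1f} PW")        # ~0.3 PW

# Ratio of planetary energy gain to human energy use
ratio = (imbalance * EARTH_SURFACE_M2) / human_power
print(f"Planet gains ~{ratio:.0f} W for every 1 W humans use")  # ~18

# Hourly energy gain, converted to calories.
# W sustained over one hour = Wh; divide by 1000 to get kWh.
kwh_per_hour = imbalance * EARTH_SURFACE_M2 / 1000
calories_per_hour = kwh_per_hour * 0.86e6       # ~0.86 million cal per kWh
print(f"~{calories_per_hour:.1e} calories force-fed to the earth per hour")
```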

I went to a talk yesterday by Mark Pagani (Yale University), on the role of methane hydrates in the Paleocene-Eocene Thermal Maximum (PETM). The talk was focussed on how to explain the dramatic warming seen at the end of the Paleocene, 56 million years ago. During the Paleocene, the world was already much warmer than it is today (by around 5°C), and had been ice free for millions of years. But at the end of the Paleocene, the temperature shot up by at least another 5°C, over the course of a few thousand years, giving us a world with palm trees and crocodiles in the arctic, and this “thermal maximum” lasted around 100,000 years. The era brought a dramatic reduction in animal body size (although note: the dinosaurs had already been wiped out at the beginning of the Paleocene), and saw the emergence of small mammals.

But what explains the dramatic warming? The story is fascinating, involving many different lines of evidence, and I doubt I can do it justice without a lot more background reading. I’ll do a brief summary here, as I want to go on to talk about something that came up in the questions about climate sensitivity.

First, we know that the warming at the PETM coincided with a massive influx of carbon, and the fossil record shows a significant shift in carbon isotopes, so it was a new and different source of carbon. The resulting increase in CO2 warmed the planet in the way we would expect. But where did the carbon come from? The dominant hypothesis has been that it came from a sudden melting of undersea methane hydrates, triggered by tectonic shifts. But Mark explained that this hypothesis doesn’t add up, because there isn’t enough carbon to account for the observed shift in carbon isotopes, and it also requires a very high value for climate sensitivity (in the range 9-11°C), which is inconsistent with the IPCC estimates of 2-4.5°C. Some have argued this is evidence that climate sensitivity really is much higher, or perhaps that our models are missing some significant amplifiers of warming (see for instance, the 2008 paper by Zeebe et al., which caused a ruckus in the media). But, as Mark pointed out, this really misses the key point. If the numbers are inconsistent with all the other evidence about climate sensitivity, then it’s more likely that the methane hydrates hypothesis itself is wrong. Mark’s preferred explanation is a melting of the Antarctic permafrost, caused by a shift in orbital cycles, and indeed he demonstrates that the orbital pattern leads to similar spikes (of decreasing amplitude) throughout the Eocene. Prior to the PETM, Antarctica would have been ice free for so long that a substantial permafrost would have built up, and even conservative estimates based on today’s permafrost in the sub-arctic regions would have enough carbon to explain the observed changes. (Mark has a paper on this coming out soon).

That was very interesting, but for me the most interesting part was in the discussion at the end of the talk. Mark had used the term “earth system sensitivity” instead of “climate sensitivity”, and Dick Peltier suggested he should explain the distinction for the benefit of the audience.

Mark began by pointing out that the real scientific debate about climate change (after you discount the crazies) is around the actual value of climate sensitivity, which is shorthand for the relationship between changes in atmospheric concentrations of CO2 and the resulting change in global temperature:

Key relationships in the climate system. Adapted from a flickr image by ClimateSafety (click image for the original)

The term climate sensitivity was popularized in 1979 by the Charney report, and refers to the eventual temperature response to a doubling of CO2 concentrations, taking into account fast feedbacks such as water vapour, but not the slow feedbacks such as geological changes. Charney sensitivity also assumes everything else about the earth system (e.g. ice sheets, vegetation, ocean biogeochemistry, atmospheric chemistry, aerosols, etc) is held constant. The reason the definition refers to warming per doubling of CO2 is that the radiative effect of CO2 is roughly logarithmic, so you get about the same warming each time you double atmospheric concentrations. Charney calculated climate sensitivity to be 3°C (±1.5), a value that was first worked out in the 1950s, and hasn’t really changed, despite decades of research since then. Note: equilibrium climate sensitivity is also not the same as the transient response.
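The logarithmic relationship is easy to play with yourself. Here’s a minimal sketch, using the widely used simplified expression for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m², and a Charney sensitivity of 3°C per doubling (treat the numbers as illustrative; a full model run is the proper way to do this):

```python
import math

CHARNEY_SENSITIVITY = 3.0   # °C of eventual warming per doubling of CO2

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) relative to a pre-industrial baseline,
    using the simplified logarithmic expression dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(c_ppm, c0_ppm=280.0):
    """Eventual (Charney) warming: sensitivity times the number of doublings."""
    return CHARNEY_SENSITIVITY * math.log2(c_ppm / c0_ppm)

# Each doubling gives roughly the same extra forcing, hence the same warming:
for c in (280, 560, 1120):
    print(f"{c:5d} ppm: forcing {co2_forcing(c):4.1f} W/m^2, "
          f"equilibrium warming {equilibrium_warming(c):3.1f} °C")
```

Note this gives only the Charney (equilibrium) response; it says nothing about how quickly that warming arrives, which is the transient response mentioned above.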

Earth System Sensitivity is then the expected change in global temperature in response to a doubling of CO2 when we do take into account all the other aspects of the earth system. This is much harder to estimate, because there is a lot more uncertainty around different kinds of interactions in the earth system. However, many scientists expect it to be higher than the Charney sensitivity, because, on balance, most of the known earth system feedbacks are positive (i.e. they amplify the basic greenhouse gas warming).

Mark put it this way: Earth System Sensitivity is like an accordion. It stretches out or contracts, depending on the current state of the earth system. For example, if you melt the arctic sea ice, this causes an amplifying feedback because white ice has a higher albedo than the dark sea water that replaces it. So if there’s a lot of ice to melt, it would increase earth system sensitivity. But if you’ve already melted all the sea ice, the effect is gone. Similarly, if the warming leads to a massive drying out and burning of vegetation, that’s another temporary amplification that will cease once you’ve burned off most of the forests. If you start the doubling in a warmer world, in which these feedbacks are no longer available, earth system sensitivity might be lower.

The key point is that, unlike Charney sensitivity, earth system sensitivity depends on where you start from. In the case of the PETM, the starting point for the sudden warming was a world that was already ice free. So we shouldn’t expect the earth system sensitivity to be the same as it is in the 21st century. Which certainly complicates the job of comparing climate changes in the distant past with those of today.

But, more relevantly for current thinking about climate policy, thinking in terms of Charney sensitivity is likely to be misleading. If earth system sensitivity is significantly bigger in today’s earth system, which seems likely, then calculations of expected warming based on Charney sensitivity will underestimate the warming, and hence underestimate the size of the necessary policy responses.

I’ve been invited to give a guest seminar to the Dynamics of Global Change core course, which is being run this year by Prof Robert Vipond, of the Munk School of Global Affairs. The course is an inter-disciplinary exploration of globalization (and especially global capitalism) as a transformative change to the world we live in. (One of the core texts is Jan Aart Scholte’s Globalization: A Critical Introduction).

My guest seminar, which I’ve titled “Climate Change as a Global Challenge”, comes near the middle of the course, among a series of different aspects of globalization, including international relations, global mortality, humanitarianism, and human security. I had to provide some readings for the students, and had an interesting time whittling them down to a manageable set (they’ll only get one week in which to read them). Here’s what I came up with, and some rationale for why I picked them:

  1. Kartha S, Siebert CK, Mathur R, et al. A Copenhagen Prognosis: Towards a Safe Climate Future.
    I picked this as a short (12 page) overview of the latest science and policy challenges. I was going to use the much longer Copenhagen Diagnosis, but at 64 pages, I thought it was probably a bit much, and anyway, it’s missing the discussion about emissions allocations (see fig 11 of the Prognosis report), which is a nice tie in to the globalization and international politics themes of the course…
  2. Rockström J, Steffen W, Noone K, et al. A Safe Operating Space for Humanity. Nature. 2009;461(7263):472–475.
    This one’s very short (4 pages) and gives a great overview of the concept of planetary boundaries. It also connects up climate change with a set of related boundary challenges. And it’s rapidly become a classic.
  3. Müller P. Constructing climate knowledge with computer models. Wiley Interdisciplinary Reviews: Climate Change. 2010.
    A little long, but it’s one of the best overviews of the role of modeling in climate science that I’ve ever seen. As part of the aim of the course is to examine the theoretical perspectives and methodologies of different disciplines, I want to spend some time in the seminar talking about what’s in a climate model, and how they’re used. I picked Müller over and above another great paper, Moss et al on the next generation of scenarios, which is an excellent discussion of how scenarios are developed and used. However, I think Müller is a little more readable, and covers more aspects of the modeling process.
  4. Jamieson D. The Moral and Political Challenges of Climate Change. In: Moser SC, Dilling L, eds. Creating A Climate for Change. Cambridge University Press; 2006:475-482.
    Nice short, readable piece on climate ethics, as an introduction to issues of equity and international justice…

So that’s the readings. What do you all think of my choice?

I had to sacrifice another set of readings I’d picked out on Systems Thinking and Cybernetics, for which I was hoping to use at least the first chapter of Donella Meadows’ book, because it offers another perspective on how to link up global problems and our understanding of the climate system. But that will have to wait for a future seminar…

This is brilliant:

There’s a whole series. Each video is less than three minutes, but manages to pack in some of the clearest, most informative account of climate change I’ve ever seen:

(I’m not sure what happened to #1)

While preparing for my class this morning, I was looking for graphs that show model projections for likely warming over the coming century, and I ended up putting these two graphs side by side:

What’s interesting is that they are both based on the same dataset (namely, from the model runs for projections for the coming century from the CMIP3 dataset used in the IPCC AR4). But they present the information in radically different ways. Some of the differences are obvious, and some are subtle:
  • The second graph puts the projections into the context of the last 1500 years of relatively stable climate, while the first graph only goes back to the beginning of the 20th Century, so you don’t see the contrast with the pre-industrial context;
  • The first graph gives some projections longer than the 21st century – for selected scenarios, the temperature response is shown out to three centuries.
  • The two graphs show different selections of scenarios: A1B, A2, and B1 in the first graph, and A1FI, A2 and B1 in the second.
  • The baseline for temperature anomalies is different. The first graph uses the IPCC standard of the average global temperature from 1961-1990 as the zero point; the second graph uses the 18th century average as the zero point. As best I can tell, the difference is a little over 0.5°C, so the first graph shows a temperature anomaly for the end of the 20th Century as less than +0.5°C, while the second graph has this anomaly closer to +1°C.

These choices are interesting for a number of reasons. Most obviously, the second graph is much scarier. The extra context from the pre-industrial era emphasizes how unusual the warming is, and the compressed timescale emphasizes the rapidity of the warming. The pre-industrial baseline shifts the Y axis slightly, so the warming from the shared scenarios, A2 and B1, looks a little worse. And by cutting the graph off at 2100, you don’t see the eventual stabilization for the B1 scenario.
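Re-baselining is a purely mechanical operation – the same curve just slides up or down – but it changes the visual impression considerably. Here’s a minimal sketch with made-up data (the 1850-1900 “pre-industrial” period is an illustrative stand-in; the real graphs use the baselines described above):

```python
import numpy as np

# Made-up annual global mean temperatures (°C), 1850-2000, for illustration only
rng = np.random.default_rng(1)
years = np.arange(1850, 2001)
temps = 13.6 + 0.005 * (years - 1850) + rng.normal(0, 0.1, years.size)

def anomalies(temps, years, baseline):
    """Express temperatures as anomalies relative to the mean of a baseline period."""
    start, end = baseline
    ref = temps[(years >= start) & (years <= end)].mean()
    return temps - ref

# Same data, two different zero points:
late_baseline  = anomalies(temps, years, baseline=(1961, 1990))
early_baseline = anomalies(temps, years, baseline=(1850, 1900))

# The curves are identical in shape; only the offset differs
offset = np.mean(early_baseline - late_baseline)
print(f"Offset between the two presentations: about {offset:.2f} °C")
```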

The selection of which scenarios to show is important too. The SRES scenarios are projections for future emissions of greenhouse gases, based on different assumptions about economic development, globalization, and how quickly we switch to cleaner energy sources. The A scenarios represent worlds in which economic growth is emphasized over environmental protection, while the B scenarios represent a future world in which environmental measures are prioritized over economic growth. These scenarios define the emissions profiles used as input to the models, which then calculate the temperature response (because the models can only compute the earth systems’ responses to emissions levels; they can’t predict what humans will actually do!).

The choice to include A1FI in the second graph is important:

  • A1FI represents strong economic growth, a strong globalization trend, and aggressive exploitation of fossil fuels (the FI stands for “fossil fuel intensive”). That’s basically the world preferred by the oil industry and the US Republican party, i.e. “Drill, baby, drill”.
  • In contrast, A1B represents similar economic trends, but more of a balance of energy sources – i.e. something closer to what Obama was advocating in his state-of-the union address.
  • A2 is an intermediate scenario, with less globalization, less technological development, and slower growth in the developing world.
  • B1 is what we might get if the world gets its act together and agrees tough new targets to reduce global emissions, and then actually follows through and implements them – i.e. something dramatically different to the Kyoto experience.

The data comes directly from the science (i.e. from the models for future projections, and from observations for the years prior to 2000). But the choices of how to present the information are not scientific choices, they are value choices. The choices made in the first graph all tend to play down the seriousness of climate change, while the choices in the second graph all tend to emphasize it. In particular, the choice not to include A1FI, the business-as-usual path, in the first graph could be argued to be a very serious omission – a failure to warn the world how bad it could get on our current path. Similarly, the decision to extend only the lower scenarios into future centuries conveys an overall message that we get to choose between two paths, one that stabilizes around +2°C and one that stabilizes around the +3°C level. This is not a fair representation of today’s policy choices.

Okay, now I should say where I got the two graphs from. The first might be very familiar – it’s from the IPCC 2007 assessment. The second is from the Copenhagen Diagnosis in 2009, a document put together by a respectable group of scientists (many of them are IPCC lead authors), intended as an update on the last IPCC report, taking on board developments in the science, and in particular, a growing body of evidence that the IPCC projections have tended to underestimate the trends.

The question of which graph better represents the prognosis is clearly a value judgment. Having compared the two, I now feel that the IPCC graph is missing a major part of the story, and hence is misleading. I think there are weaknesses in the second graph too, as the compressed timescale for the 21st century makes it really hard to discern the three trends. But it certainly seems a lot more appropriate to include more of the pre-industrial context, and to choose a pre-industrial temperature baseline. These graphs have the potential to take on an iconic status, and to directly affect people’s thinking about climate change. We really ought to examine more closely the choices that were made in presenting them.

Update, Feb 3 2011: Bart does a similar comparison with a third graph, which does a better job of the pre-industrial reconstruction, but still suffers from all the other problems of the IPCC graph.

I spent some time this week explaining to my undergraduate class the ideas of thermal equilibrium (loosely speaking, the point at which the planet’s incoming solar radiation and outgoing blackbody radiation are in balance) and climate sensitivity (loosely speaking, how much warmer the earth will get per doubling of CO2, until it reaches a new equilibrium). I think some of my students might prefer me to skip the basic physics, and get on quicker to the tough questions of what solutions there are to climate change, whether geo-engineering will work, and the likely impacts around the world.

So it’s nice to be reminded that a good grasp of the basic science is important. A study produced by the Argentinean group Federacion Ecologia Universal, and published on the American Association for the Advancement of Science website, looked at the likely impact of climate change on global food supplies by the year 2020, concluding that global food prices will rise by up to 20%, and some countries, such as India, will see crop yields drop by as much as 30%. The study claims to have used IPCC temperature projections of up to a 2.4°C rise in global average temperatures by 2020 on a business-as-usual scenario.

The trouble is the IPCC doesn’t have temperature projections anywhere near this high for 2020. As Scott Mandia explains, it looks like the author of the report made a critical (but understandable) mistake, confusing the two ways of understanding ‘climate sensitivity’:

  • Equilibrium Climate Sensitivity (ECS), which means the overall eventual global temperature rise that would result if we double the level of CO2 concentrations in the atmosphere.
  • Transient Climate Response (TCR), which means the actual temperature rise the planet will have experienced at the time this doubling happens.

These are different quantities because of lags in the system. It takes many years (perhaps decades) for the earth to reach a new equilibrium whenever we increase the concentrations of greenhouse gases, because most of the extra energy is initially absorbed by the oceans, and it takes a long time for the oceans and atmosphere to settle into a new balance. By global temperature, scientists normally mean the average air temperature measured just above the surface (which is probably where temperature matters most to humans).

BTW, calculating the temperature rise “per doubling of CO2” makes sense because the greenhouse effect is roughly logarithmic – each doubling produces about the same temperature rise. For example, the pre-industrial concentration of CO2 was about 280ppm (parts per million), so a doubling would take us to 560ppm (we’re currently at 390ppm).

To estimate how quickly the earth will warm, and where the heat might go, we need good models of how the earth systems (ocean, atmosphere, ice sheets, land surfaces) move heat around. In earth system models, the two temperature responses are estimated from two different types of experiment:

  • equilibrium climate sensitivity is calculated by letting CO2 concentrations rise steadily over a number of years, until they reach double the pre-industrial levels. They are then held steady after this point, and the run continues until the global temperature stops changing.
  • transient climate response is calculated by increasing CO2 concentrations by 1% per year, until they reach double the pre-industrial levels, and taking the average temperature at that point.

Both experiments are somewhat unrealistic, and should be thought of as thought experiments rather than predictions. For example, in the equilibrium experiment, it’s unlikely that CO2 concentrations would stop rising and then remain constant from that point on. In the transient experiment, the annual rise of 1% is a little unrealistic – CO2 concentrations rose by less than 1% per year over the last decade. On the other hand, knowing the IPCC figures for equilibrium sensitivity tells you very little about the eventual temperature change if (when) we do reach 560ppm, because when we reach that level, it’s unlikely we’ll be able to prevent CO2 concentrations going even higher still.
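To make the two experimental designs concrete, here’s a minimal sketch of the concentration pathways they prescribe (just the forcing scenarios, not the model response; 280ppm is the standard pre-industrial baseline):

```python
import math

C0 = 280.0          # pre-industrial CO2 concentration (ppm)
TARGET = 2 * C0     # doubling = 560 ppm

# Transient experiment: concentrations compound at 1% per year until doubling,
# and the TCR is the warming at that moment.
years_to_double = math.log(2) / math.log(1.01)
print(f"A 1%/year rise reaches 2xCO2 after ~{years_to_double:.0f} years")  # ~70

# Equilibrium experiment: ramp up to 560 ppm, then hold concentrations fixed
# and keep running (possibly for centuries of model time) until the global
# temperature stops changing -- that final warming is the ECS.
concentration, year = C0, 0
while concentration < TARGET:
    concentration *= 1.01
    year += 1
print(f"Year {year}: {concentration:.0f} ppm -- hold constant from here on")
```

Because the oceans are still catching up at the moment of doubling, the transient response is always smaller than the equilibrium response.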

Understanding all this matters for many reasons. If people confuse the two types of sensitivity, they’ll misunderstand what temperature changes are likely to happen when. More importantly, failure to understand these ideas means a failure to understand the lags in the system:

  • there’s a lag of decades between increasing greenhouse gas concentrations and the eventual temperature response. In other words, we’re always owed more warming than we’ve had. Even if we stopped using fossil fuels immediately, temperatures would still rise for a while.
  • there’s another lag also decades long, between peak emissions and peak concentrations. If we get greenhouse gas emissions under control and then start to reduce them, atmospheric concentrations will continue to rise for as long as the emissions exceed the rate of natural removal of CO2 from the atmosphere.
  • there’s another lag (and evidence shows it’s also decades long) between humans realising climate change is a serious problem, and any coordinated attempts to do something about it.
  • and yet another lag (probably also decades long, hopefully shorter) between the time we implement any serious international climate policies and the point at which we reach peak emissions, because it will take a long time to re-engineer the world’s energy infrastructure to run on non-fossil fuel energy.

Add up these lags, and it becomes apparent that climate change is a problem that will stretch most people’s imaginations. We’re not used to having to plan decades ahead, and we’re not used to the idea that any solution will take decades before it starts to make a difference.

And of course, if people who lie about climate change for a living merely say “ha, ha, a scientist made a mistake so global warming must be a myth!” we’ll never get anywhere. Indeed, we may even have already caused the impacts on food supply described in the withdrawn report. It’s just that it’s likely to take longer than 2020 before we see them played out.

Last week I attended the workshop in Exeter to lay out the groundwork for building a new surface temperature record. My head is still buzzing with all the ideas we kicked around, and it was a steep learning curve for me because I wasn’t familiar with many of the details (and difficulties) of research in this area. In many ways it epitomizes what Paul Edwards terms “Data Friction” – the sheer complexity of moving data around in the global observing system means there are many points where it needs to be transformed from one form to another, each of which requires people’s energy and time, and, just like real friction, generates waste and slows down the system. (Oh, and some of these data transformations seem to generate a lot of heat too, which rather excites the atoms of the blogosphere).

Which brings us to the reasons the workshop existed in the first place. In many ways, it’s a necessary reaction to the media frenzy over the last year or so around alleged scandals in climate science, in which scientists are supposed to be hiding or fabricating data, which has allowed the ignoranti to pretend that the whole of climate science is discredited. However, while the nature and pace of the surface temperatures initiative has clearly been given a shot in the arm by this media frenzy, the roots of the workshop go back several years, and have a strong scientific foundation. Quite simply, scientists have recognized for years that we need a more complete and consistent surface temperature record with a much higher temporal resolution than currently exists. Current long term climatological records are mainly based on monthly summary data, which is inadequate to meet the needs of current climate assessment, particularly the need for better understanding of the impact of climate change on extreme weather. Most weather extremes don’t show up in the monthly data, because they are shorter term – lasting for a few days or even just a few hours. This is not always true of course; Albert Klein Tank pointed out in his talk that this summer’s heatwave in Moscow occurred mainly in a single calendar month, and hence shows up strongly in the monthly record. But in general, that is unusual, and so the worry is that monthly records tend to mask the occurrence of extremes (and hence may conceal trends in extremes).
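Here’s a toy illustration of why monthly summaries can hide short-lived extremes (all the numbers are invented): a six-day heatwave that straddles a month boundary barely moves either monthly mean, even though the daily record shows nearly a week of extreme heat.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two consecutive 30-day months of synthetic daily mean temperatures (°C)
month1 = 20 + rng.normal(0, 2, 30)
month2 = 20 + rng.normal(0, 2, 30)

# A six-day heatwave (+8°C) straddling the month boundary
month1[-3:] += 8
month2[:3] += 8

for label, days in (("month 1", month1), ("month 2", month2)):
    print(f"{label}: monthly mean {days.mean():.1f} °C, "
          f"days above 25°C: {int(np.sum(days > 25))}")
# Each monthly mean shifts by less than 1°C, so the event is nearly
# invisible in monthly data, but obvious in the daily record.
```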

The opening talks at the workshop also pointed out that the intense public scrutiny puts us in a whole new world, and one that many of the workshop attendees are clearly still struggling to come to terms with. It’s now clear that any new temperature record needs to be entirely open and transparent, so that every piece of research based on it could (in principle) be traced all the way back to basic observational records. To echo the way John Christy put it at the workshop: every step of the research now has to be available as admissible evidence that could stand up in a court of law, because that’s the kind of scrutiny we’re being subjected to. Of course, the problem is that not only isn’t science ready for this (no field of science is anywhere near that transparent), it’s also not currently feasible, given the huge array of data sources being drawn on, the complexities of ownership and access rights, and the expectation that much of the data will have high commercial value.

I’ll attempt a summary, but it will be rather long, as I don’t have time to make it any shorter. The slides from the workshop are now all available, and the outcomes from the workshop will be posted soon. The main goals were summarized in Peter Thorne’s opening talk: to create a (longish) list of principles, a roadmap for how to proceed, an identification of any overlapping initiatives so that synergies can be exploited, an agreed method to engage with broader audiences (including the general public), and an initial governance model.

Did we achieve that? Well, you can skip to the end and see the summary slides, and judge for yourself. Personally, I thought the results were mixed. One obvious problem is that there is no funding on the table for this initiative, and it’s being launched at a time when everyone is cutting budgets, especially in the UK. Which meant that occasionally it felt like we were putting together a Heath Robinson device (Rube Goldberg to you Americans) – cobbling it together out of whatever we could find lying around. Which is ironic really, given that the major international bodies (e.g. WMO) seem to fully appreciate the importance of this initiative – and the fact that it will be a vital part of our ability to assess the impacts of climate change over the next few decades.

Another problem is that the workshop attendees struggled to reach consensus on some of the most important principles. For example, should the databank be entirely open, or does it need a restricted section? The argument for the latter is that large parts of the source data are not currently open, as the various national weather services that collect it charge a fee on a cost recovery basis, and wish to restrict access to non-commercial uses as commercial applications are (in some cases) a significant portion of their operating budgets. The problem is that while the monthly data has been shared freely with international partners for many years, the daily and sub-daily records have not, because these are the basis for commercial weather forecasting services. So an insistence on full openness might mean a very incomplete dataset, which then defeats the purpose, as researchers will continue to use other (private) sources for more complete records.

And what about an appropriate licensing model? Some people argued that the data must be restricted to non-commercial uses, because that’s likely to make negotiations with national weather services easier. But others argued that unrestricted licenses should be used, so that the databank can help to lay the foundation for the development of a climate services industry (which would create jobs, and therefore please governments). [Personally, I felt that if governments really want to foster the creation of such an industry, then they ought to show more willingness to invest in this initiative, and until they do, we shouldn’t pander to them. I’d go for a cc by-nc-sa license myself, but I think I was outvoted]. Again, existing agreements are likely to get in the way: 70% of the European data would not be available if the research-only clause was removed.

There was also some serious disagreement about timelines. Peter outlined a cautious roadmap that focussed on building momentum, and delivering the occasional reports and white papers over the next year or so. The few industrial folks in the audience (most notably, Amy Luers from Google) nearly choked on their cookies – they’d be rolling out a beta version of the software within a couple of weeks if they were running the project. Quite clearly, as Amy urged in her talk, the project needs to plan for software needs right from the start, release early, prepare for iteration and flexibility, and invest in good visualizations.

Oh, and there wasn’t much agreement on open source software either. The more software oriented participants (most notably, Nick Barnes, from the Climate Code Foundation) argued strongly that all software, including every tool used to process the data every step of the way should be available as open source. But for many of the scientists, this represented a huge culture change. There was even some confusion about what open source means (e.g. that ‘open’ and ‘free’ aren’t necessarily the same thing).

On the other hand, some great progress was made in many areas, including identifying many important data services, building on lessons learnt from other large climate and weather data curation efforts, and offers of help from many of the international partners (including offers of data from NCDC, NCAR, EURO4M, from across Europe and North America, as well as Russia, China, Indonesia, and Argentina). There was clear agreement that version control and good metadata are vital, and need to be planned for right from the start, and that providing full provenance for each data item is an important long term goal, but cannot be a rule from the start, as we will have to build on existing data sources that come with little or no provenance information. Oh, and I was very impressed with the deep thinking and planning around benchmarking for homogenization tools (I’ll blog more on this soon, as it fascinates me).

Oh, and on the size of the task. Estimates of the number of undigitized paper records in the basements of various weather services ran to hundreds of millions of pages. But I still didn’t get a sense of the overall size of the planned databank…

Things I learnt:

  • Steve Worley from NCAR, reflecting on lessons from running ICOADS, pointed out that no matter how careful you think you’ve been, people will end up mis-using the data because they ignore or don’t understand the flags in the metadata.
  • Steve also pointed out that a drawback with open datasets is the proliferation of secondary archives, which then tend to get out of date and mislead users (as they rarely direct users back to the authoritative source).
  • Oh, and the scope of the uses of such data is usually surprisingly large and diverse.
  • Jay Lawrimore, reflecting on lessons from NCDC, pointed out that monthly data and daily and sub-daily data are collected and curated along independent routes, which then makes it hard to reconcile them. The station names sometimes don’t match, the lat/long coords don’t match (e.g. because of differences in rounding), and the summarized data are similar but not exact.
  • Another problem is that it’s not always clear exactly which 24-hour period a daily summary refers to (e.g. did they use a local or UTC midnight?). Oh, and this also means that 3- and 6-hour synoptic readings might not match the daily summaries either.
  • Some data doesn’t get transmitted, and so has to be obtained later, even to the point of having to re-key it from emails. Long delays in obtaining some of the data mean the datasets frequently have to be re-released.
  • Personal contacts and workshops in different parts of the world play a surprisingly important role in tracking down some of the harder to obtain data.
  • NCDC runs a service called Datzilla (similar to Bugzilla for software) for recording and tracking reported defects in the dataset.
  • Albert Klein Tank, describing the challenges in regional assessment of climate change and extremes, pointed out that the data requirements for analyzing extreme events are much higher than for assessing global temperature change. For example, we might need to know not just how many days were above 25°C compared to normal, but also how much did it cool off overnight (because heat stress and human health depend much more on overnight relief from the heat).
  • John Christy, introducing the breakout group on data provenance, had some nice examples in his slides of the kinds of paper records they have to deal with, and a fascinating example of a surface station that’s now under a lake, and hence old maps are needed to pinpoint its location.
  • From Michael de Podesta, who insisted on a healthy dose of serious metrology (not to be confused with meteorology): All measurements ought to come with an estimation of uncertainty, and people usually make a mess of this because they confuse accuracy and precision.
  • Uncertainty information isn’t metadata, it’s data. [Oh, and for that matter anything that’s metadata to one community is likely to be data to another. But that’s probably confusing things too much]
  • Oh, and of course, we have to distinguish Type A and Type B uncertainty. Type A is where the uncertainty is describable using statistics, so that collecting bigger samples will reduce it. Type B is where you just don’t know, so that collecting more data cannot reduce the uncertainty.
  • From Matt Menne, reflecting on lessons from the GHCN dataset, an explanation of the need for homogenization (which is climatology jargon for getting rid of errors in the observational data that arise because of changes over time in the way the data was measured). Some of the inhomogeneities are abrupt changes (e.g. because a recording station was moved, or got a new instrument), and others are gradual changes (e.g. because the environment for a recording station slowly changes, such as gradual urbanization of its location). A toy sketch of the basic neighbour-comparison idea appears after this list.
  • Matt has lots of interesting examples of inhomogeneities in his slides, including some really nasty ones. For example, a station in Reno, Nevada, that was originally in town, and then moved to the airport. There’s a gradual upwards trend in the early part of the record, from an urban heat island effect, and another similar trend in the latter part, after it moved to the airport, as the airport was also eventually encroached by urbanisation. But if you correct for both of these, as well as the step change when the station moved, you’re probably over-correcting….
  • which led Matt to suggest the Climate Scientist’s version of the Hippocratic Oath: First, do not flag good data as bad; Then do not make bias adjustments where none are warranted.
  • While criticism from non-standard sources (that’s polite-speak for crazy denialists) is coming faster than any small group can respond to (that’s code for the CRU), useful allies are beginning to emerge, also from the blogosphere, in the form of serious citizen scientists (such as Zeke Hausfather) who do their own careful reconstructions, and help address some of the crazier accusations from denialists. So there’s an important role in building community with such contributors.
  • John Kennedy, talking about homogenization for Sea Surface Temperatures, pointed out that Sea Surface and Land Surface data are entirely different beasts, requiring totally different approaches to homogenization. Why? because SSTs are collected from buckets on ships, engine intakes on ships, drifting buoys, fixed buoys, and so on. Which means you don’t have long series of observations from a fixed site like you do with land data – every observation might be from a different location!
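Since homogenization came up so often, here’s a toy sketch of the flavour of the neighbour-comparison idea behind it: subtract a reference series (built from nearby stations) from the candidate series, so the shared climate signal cancels out, then look for a step in the difference. (This is only a cartoon – the real algorithms are far more careful about significance testing, multiple breakpoints, and gradual drifts.)

```python
import numpy as np

def detect_step(candidate, reference, min_segment=10):
    """Toy breakpoint detector: find the index where splitting the difference
    series gives the largest jump between the two segment means."""
    diff = candidate - reference          # shared climate signal mostly cancels
    best_idx, best_jump = None, 0.0
    for i in range(min_segment, len(diff) - min_segment):
        jump = abs(diff[i:].mean() - diff[:i].mean())
        if jump > best_jump:
            best_idx, best_jump = i, jump
    return best_idx, best_jump

# Synthetic example: two neighbouring stations share the same climate signal,
# but the candidate gets a +1.0°C shift at index 60 (e.g. a station move).
rng = np.random.default_rng(42)
signal = np.linspace(0, 0.8, 120) + rng.normal(0, 0.3, 120)
reference = signal + rng.normal(0, 0.2, 120)
candidate = signal + rng.normal(0, 0.2, 120)
candidate[60:] += 1.0

idx, size = detect_step(candidate, reference)
print(f"Suspected break at index {idx}, size ~{size:.2f} °C")   # expect ~60, ~1.0
```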

Things I hope I managed to inject into the discussion:

  • “solicitation of input from the community at large” is entirely the wrong set of terms for white paper #14. It should be about community building and engagement. It’s never a one-way communication process.
  • Part of the community building should be the support for a shared set of open source software tools for analysis and visualization, contributed by the various users of the data. The aim would be for people to share their tools, and help build on what’s in the collection, rather than having everyone re-invent their own software tools. This could be as big a service to the research community as the data itself.
  • We desperately need a clear set of use cases for the planned data service (e.g. who wants access to which data product, and what other information will they be needing and why?). Such use cases should illustrate what kinds of transparency and traceability will be needed by users.
  • Nobody seems to understand just how much user support will need to be supplied (I think it will be easy for whatever resources are put into this to be overwhelmed, given the scrutiny that temperature records are subjected to these days)…
  • The rate of change in this dataset is likely to be much higher than has been seen in past data curation efforts, given the diversity of sources, and the difficulty of recovering complete data records.
  • Nobody (other than Bryan) seemed to understand that version control will need to be done at a much finer level of granularity than whole datasets, and that really every single data item needs to have a unique label so that it can be referred to in bug reports, updates, etc. Oh and that the version management plan should allow for major and minor releases, given how often even the lowest data products will change, as more data and provenance information is gradually recovered.
  • And of course, the change process itself will be subjected to ridiculous levels of public scrutiny, so the rationale for accepting/rejecting changes and scheduling new releases needs to be clear and transparent. Which means far more attention to procedures and formal change control boards than past efforts have used.
  • I had lots of suggestions about how to manage the benchmarking effort, including planning for the full lifecycle: making sure the creation of the benchmark is a really community consensus building effort, and planning for retirement of each benchmark, to avoid the problems of overfitting. Susan Sim wrote an entire PhD on this.
  • I think the databank will need to come with a regularly updated blog, to provide news about what’s happening with the data releases, highlight examples of how it’s being used, explain interesting anomalies, interpret published papers based on the data, etc. A bit like RealClimate. Oh, and with serious moderation of the comment threads to weed out the crazies. Which implies some serious effort is needed.
  • …and I almost but not quite entirely learned how to pronounce the word ‘inhomogeneities’ without tripping over my tongue. I’m just going to call them ‘bugs’.

Update Sept 21, 2010: Some other reports from the workshop.

Here’s an appalling article by Andy Revkin on dotEarth which epitomizes everything that is wrong with media coverage of climate change. Far from using his position to educate and influence the public by seeking the truth, journalists like Revkin now seem to have taken to just making shit up, reporting what they read in blogs as the truth, rather than investigating for themselves what scientists actually do.

Revkin kicks off by citing a Harvard cognitive scientist found guilty of academic misconduct, and connecting it with “assertions that climate research suffered far too much from group think, protective tribalism and willingness to spin findings to suit an environmental agenda”. Note the juxtaposition. On the one hand, a story of a lone scientist who turned out to be corrupt (which is rare, but does happen from time to time). On the other hand, a set of insinuations about thousands of climate scientists, with no evidence whatsoever. Groupthink? Tribalism? Spin? Can Revkin substantiate these allegations? Does he even try? Of course not. He just repeats a lot of gossip from a bunch of politically motivated blogs, and demonstrates his own total ignorance of how scientists work.

He does offer two pieces of evidence to back up his assertion of bias. The first is the well-publicized mistake in the IPCC report on the retreat of the Himalayan glaciers. Unfortunately, the quotes from the IPCC authors in the very article Revkin points to, show it was the result of an honest mistake, despite an entire cadre of journalists and bloggers trying to spin it into some vast conspiracy theory. The second is about a paper on the connection between vanishing frogs and climate change, cited in the IPCC report. The IPCC report quite correctly cites the paper, and gives a one sentence summary of it. Somehow or other, Revkin seems to think this is bias or spin. It must have entirely escaped his notice that the IPCC report is supposed to summarize the literature in order to assess our current understanding of the science. Some of that literature is tentative, and some less so. Now, maybe Revkin has evidence that there is absolutely no connection between the vanishing frogs and climate change. If so, he completely fails to mention it. Which means that the IPCC is merely reporting on the best information we have on the subject. Come on Andy, if you want to demonstrate a pattern of bias in the IPCC reports, you’re gonna have to work damn harder than that. Oh, but I forgot. You’re just repeating a bunch of conspiracy theories to pretend you have something useful to say, rather than actually, say, investigating a story.

From here, Revkin weaves a picture of climate science as “done by very small tribes (sea ice folks, glacier folks, modelers, climate-ecologists, etc)”, and hence suggests they must be guilty of groupthink and confirmation bias. Does he offer any evidence for this tribalism? No he does not, for there is none. He merely repeats the allegations of a bunch of people like Steve McIntyre, who, working on the fringes of science, clearly do belong to a minor tribe, one that does not interact in any meaningful way with real climate scientists. So, I guess we’re meant to conclude that because McIntyre and a few others have formed a little insular tribe, this must mean mainstream climate scientists are tribal too? Such reasoning would be laughable, if this wasn’t such a serious subject.

Revkin claims to have been “following the global warming saga – science and policy – for nearly a quarter century”. Unfortunately, in all that time, he doesn’t appear to have actually educated himself about how the science is done. If he’d spent any time in a climate science research institute, he’d know this allegation of tribalism is about as far from the truth as it’s possible to get. Oh, but of course, actually going and observing scientists in action would require some effort. That seems to be just a little too much to ask.

So, to educate Andy, and to save him the trouble of finding out for himself, let me explain. First, a little bit of history. The modern concern about the potential impacts of climate change probably dates back to the 1957 Revelle and Suess paper, in which they reported that the oceans absorb far less anthropogenic carbon emissions than was previously thought. Revelle was trained in geology and oceanography. Suess was a nuclear physicist, who studied the distribution of carbon-14 in the atmosphere. Their collaboration was inspired by discussions with Libby, a physical chemist famous for the development of radio-carbon dating. As head of the Scripps Institute, Revelle brought together oceanographers with atmospheric physicists (including initiating the Mauna Loa measurements of carbon dioxide concentrations in the atmosphere), atomic physicists studying dispersal of radioactive particles, and biologists studying the biological impacts of radiation. Tribalism? How about some truly remarkable inter-disciplinary research?

I suppose Revkin might argue that those were the old days, and maybe things have gone downhill since then. But again, the evidence says otherwise. In the 1970s, the idea of earth system science began to emerge, and in the last decade, it has become central to the efforts to build climate simulation models to improve our understanding of the connections between the various earth subsystems: atmosphere, ocean, atmospheric chemistry, ocean biogeochemistry, biology, hydrology, glaciology and meteorology. If you visit any of the major climate research labs today, you’ll find a collection of scientists from many of these different disciplines working alongside one another, collaborating on the development of integrated models, and discussing the connections between the different earth subsystems. For example, when I visited the UK Met Office two years ago, I was struck by their use of cross-disciplinary teams to investigate specific problems in the simulation models. When I visited, they had just formed such a cross-disciplinary team to investigate how to improve the simulation of the Indian monsoons in their earth system models. This week, I’m just wrapping up a month long visit to the Max Planck Institute for Meteorology in Hamburg, where I’ve also regularly sat in on meetings between scientists from the various disciplines, sharing ideas about, for example, the relationships between atmospheric radiative transfer and ocean plankton models.

The folks in Hamburg have been kind enough to allow me to sit in on their summer school this week, in which they’re training the next generation of earth science PhD students how to work with earth system models. The students are from a wide variety of disciplines: some study glaciers, some clouds, some oceanography, some biology, and so on. The set of experiments we’ve been given to try out the model include: changing the cloud top mass flux, altering the rate of decomposition in soils, changing the ocean mixing ratio, altering the ocean albedo, and changing the shape of the earth. Oh, and they’ve mixed up the students, so they have to work in pairs with people from another discipline. Tribalism? No, right from the get go, PhD training includes the encouragement of cross-disciplinary thinking and cross-disciplinary working.

Of course, if Revkin ever did wander into a climate science research institute he would see this for himself. But no, he prefers pontificating from the comfort of his armchair, repeating nonsense allegations he reads on the internet. And this is the standard that journalists hold for themselves? No wonder the general public is confused about climate change. Instead of trying to pick holes in a science they clearly don’t understand, maybe people like Revkin ought to do some soul searching and investigate the gaping holes in journalistic coverage of climate change. Then finally we might find out where the real biases lie.

So, here’s a challenge for Andy Revkin: Do not write another word about climate science until you have spent one whole month as a visitor in a climate research institute. Attend the seminars, talk to the PhD students, sit in on meetings, find out what actually goes on in these places. If you can’t be bothered to do that, then please STFU [about this whole bias, groupthink and tribalism meme].

Update: On reflection, I think I was too generous to Revkin when I accused him of making stuff up, so I deleted that bit. He’s really just parroting other people who make stuff up.

Update #2: Oh, did I mention that I’m a computer scientist? I’ve been welcomed into various climate research labs, invited to sit in on meetings and observe their working practices, and to spend my time hanging out with all sorts of scientists from all sorts of disciplines. Because obviously they’re a bunch of tribalists who are trying to hide what they do. NOT.

Update #3: I’ve added a clarifying rider to my last paragraph  – I don’t mean to suggest Andy should shut up altogether, just specifically about these ridiculous memes about tribalism and so on.

Nearly everything we ever do depends on vast social and technical infrastructures, which, when they work, are largely invisible. Science is no exception – modern science is only possible because we have built the infrastructure to support it: classification systems, international standards, peer-review, funding agencies, and, most importantly, systems for the collection and curation of vast quantities of data about the world. Star and Ruhleder point out that the infrastructure that supports scientific work is embedded inside other social and technical systems, and becomes invisible when we come to rely on it. Indeed, the process of learning how to make use of a particular infrastructure is, to a large extent, what defines membership in a particular community of practice. They also observe that our infrastructures are closely intertwined with our conventions and standards. As a simple example, they point to the QWERTY keyboard, which despite its limitations, shapes much of our interaction with computers (even the design of office furniture!), such that learning to use the keyboard is a crucial part of learning to use a computer. And once you can type, you cease to be aware of the keyboard itself, except when it breaks down. This invisibility-in-use is similar to Heidegger’s notion of tools that are ready-to-hand; the key difference is that tools are local to the user, while infrastructures have vast spatial and/or temporal extent.

A crucial point is that what counts as infrastructure depends on the nature of the work that it supports. What is invisible infrastructure for one community might not be for another. The internet is a good example – most users just accept it exists and make use of it, without asking how it works. However, to computer scientists, a detailed understanding of its inner workings is vital. A refusal to treat the internet as invisible infrastructure is a condition to entry into certain geek cultures.

In their book Sorting Things Out, Star and Bowker introduced the term infrastructural inversion, for a process of focusing explicitly on the infrastructure itself, in order to expose and study its inner workings. It’s a rather cumbersome phrase for a very interesting process, kind of like a switch of figure and ground. In their case, infrastructural inversion is a research strategy that allows them to explore how things like classification systems and standards are embedded in so much of scientific practice, and to understand how these things evolve with the science itself.

Paul Edwards applies infrastructural inversion to climate science in his book A Vast Machine, where he examines the history of attempts by meteorologists to create a system for collecting global weather data, and for sharing that data with the international weather forecasting community. He points out that climate scientists also come to rely on that same infrastructure, but that it doesn’t serve their needs so well, and hence there is a difference between weather data and climate data. As an example, meteorologists tolerate changes in the nature and location of a particular surface temperature station over time, because they are only interested in forecasting over the short term (days or weeks). But to a climate scientist trying to study long-term trends in climate, such changes (known as inhomogeneities) are crucial. In this case, the infrastructure breaks down, as it fails to serve the needs of this particular community of scientists.

Hence, as Edwards points out, climate scientists also perform infrastructural inversion regularly themselves, as they dive into the details of the data collection system, trying to find and correct inhomogeneities. In the process, almost any aspect of how this vast infrastructure works might become important, revealing clues about what parts of the data can be used and which parts must be re-considered. One of the key messages in Paul’s book is that the usual distinction between data and models is now almost completely irrelevant in meteorology and climate science. The data collection depends on a vast array of models to turn raw instrumental readings into useful data, while the models themselves can be thought of as sophisticated data reconstructions. Even GCMs, which now have the ability to do data assimilation and re-analysis, can be thought of as large amounts of data made executable through a set of equations that define spatial and temporal relationships within that data.

As an example, Edwards describes the analysis performed by Christy and Spencer at UAH on the MSU satellite data, from which they extracted measurements of the temperature of the upper atmosphere. In various congressional hearings, Spencer and Christy frequently touted their work, which showed a slight cooling trend in the upper atmosphere, as superior to other work that showed a warming trend, because they were able to “actually measure the temperature of the free atmosphere” whereas other work was merely “estimation” from models (Edwards, p414). However, this completely neglects the fact that the MSU data doesn’t measure temperature in the lower troposphere directly at all; it measures radiance at the top of the atmosphere. Temperature readings for the lower troposphere are constructed from these readings via a complex set of models that take into account the chemical composition of the atmosphere, the trajectory of the satellite, and the position of the sun, among other factors. More importantly, a series of corrections in these models over several years gradually removed the apparent cooling trend, finally revealing a warming trend, as predicted by the theory (see Karl et al for a more complete account). The key point is that the data needed for meteorology and climate science is so vast and so complex that it’s no longer possible to disentangle models from data. The data depends on models to make it useful, and the models are sophisticated tools for turning one kind of data into another.

The vast infrastructure for collecting and sharing data has become largely invisible to most working meteorologists, but it must be continually inverted by climate scientists in order to use it for the analysis of longer-term trends. The project to develop a new global surface temperature record that I described yesterday is one example of such an inversion: it will involve a painstaking process of search and rescue on original data records dating back more than a century, driven by the need for a more complete, higher-resolution temperature record than is currently available.

So far, I've only described constructive uses of infrastructural inversion, performed in the pursuit of science, to improve our understanding of how things work, and to allow us to re-adapt an infrastructure for new purposes. But there's another use of infrastructural inversion, applied as a rhetorical technique to undermine scientific research. It has been applied increasingly in recent years in an attempt to slow down progress on enacting climate change mitigation policies, by sowing doubt and confusion about the validity of our knowledge of climate change. The technique is to dig down into the vast infrastructure that supports climate science, identify weaknesses in that infrastructure, and tout them as reasons to mistrust scientists' current understanding of the climate system. And it's an easy game to play, for two reasons: (1) all infrastructures are constructed through a series of compromises (e.g. standards are never followed exactly), and communities of practice develop workarounds that naturally correct for infrastructural weaknesses; and (2) as described above, the data collection for weather forecasting frequently does fail to serve the needs of climate scientists. Climate scientists are painfully aware of these infrastructural weaknesses and have to deal with them every day, while those playing this rhetorical game ignore this, and pretend instead that there's a vast conspiracy to lie about the science.

The problem is that, at first sight, many of these attempts at infrastructural inversion look like honest citizen-science attempts to increase transparency and improve the quality of the science (e.g. see Edwards, p421-427). For example, Anthony Watts' SurfaceStations.org project is an attempt to document the site details of a large number of surface weather stations, to understand how problems in their siting (e.g. growth of surrounding buildings) and placement of instruments might create biases in the long-term trends constructed from their data. At face value, this looks like a valuable citizen-science exercise in infrastructural inversion. However, Watts wraps the whole exercise in the rhetoric of conspiracy theory, frequently claiming that climate scientists are dishonest, that they are covering up these problems, and that climate change itself is a myth. This not only ignores the fact that climate scientists themselves routinely examine such weaknesses in the temperature record, but also has the effect of biasing the entire exercise, as Watts' followers are increasingly motivated to report only those problems that would cause a warming bias, and to ignore those that would not. Recent independent studies that have examined the data collected by the SurfaceStations.org project show that the siting problems it documents make little or no difference to the long-term temperature trends.

The recent project launched by the UK Met Office might look to many people like a desperate response to “ClimateGate”, a mea culpa, an attempt to claw back some credibility. But, put into the context of the continual infrastructural inversion performed by climate scientists throughout the history of the field, it is nothing of the sort. It's just one more in a long series of efforts to build better and more complete datasets that allow climate scientists to answer new research questions. This is what climate scientists do all the time. In this case, it is an attempt to move from monthly to daily temperature records, to improve our ability to understand the regional effects of climate change, and especially to address the growing need to understand the effect of climate change on extreme weather events (which are largely invisible in monthly averages).
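
A toy example (with entirely synthetic numbers) shows why daily data matters here: a short heatwave that is devastating on the ground barely nudges the monthly mean, but leaps out of a daily record:

```python
import numpy as np

daily = np.full(30, 20.0)      # an ordinary month: 20 °C every day
daily[10:15] = 35.0            # a five-day heatwave

print(f"Monthly mean: {daily.mean():.1f} °C")      # 22.5 °C -- looks unremarkable
print(f"Days above 30 °C: {(daily > 30).sum()}")   # 5 -- the event is obvious in the daily data
```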

So, infrastructural inversion is a fascinating process, used by at least three different groups:

  • Researchers who study scientific work (e.g. Star, Bowker, Edwards) use it to understand the interplay between the infrastructure and the scientific work that it supports;
  • Climate scientists use it all the time to analyze and improve the weather data collection systems they depend on to understand longer-term climate trends;
  • Climate change denialists use it to sow doubt and confusion about climate science, to further a political agenda of delaying regulation of carbon emissions.

And unfortunately, sorting out constructive uses of infrastructural inversion from its abuses is hard, because in all cases, it looks like legitimate questions are being asked.

Oh, and I can't recommend Edwards' book highly enough. As Myles Allen writes in his review: “A Vast Machine […] should be compulsory reading for anyone who now feels empowered to pontificate on how climate science should be done.”