At the beginning of March, I was invited to give a talk at TEDxUofT. Colleagues tell me the hardest part of giving these talks is deciding what to talk about. I decided to see if I could answer the question of whether we can trust climate models. It was a fascinating and nerve-wracking experience, quite unlike any talk I’ve given before. Of course, I’d love to do another one, as I now know more about what works and what doesn’t.

Here’s the video and a transcript of my talk. [The bits in square brackets are things I intended to say but forgot!]

Computing the Climate: How Can a Computer Model Forecast the Future? TEDxUofT, March 1, 2014.

Talking about the weather forecast is a great way to start a friendly conversation. The weather forecast matters to us. It tells us what to wear in the morning; it tells us what to pack for a trip. We also know that weather forecasts can sometimes be wrong, but we’d be foolish to ignore them when they tell us a major storm is heading our way.

[Unfortunately, talking about climate forecasts is often a great way to end a friendly conversation!] Climate models tell us that by the end of this century, if we carry on burning fossil fuels at the rate we have been doing, and we carry on cutting down forests at the rate we have been doing, the planet will warm by somewhere between 5 and 6 degrees centigrade. That might not seem like much, but, to put it into context, in the entire history of human civilization, the average temperature of the planet has not varied by more than 1 degree. So that forecast tells us something major is coming, and we probably ought to pay attention to it.

But on the other hand, we know that weather forecasts don’t work so well the longer into the future we peer. Tomorrow’s forecast is usually pretty accurate. Three day and five day forecasts are reasonably good. But next week? They always change their minds before next week comes. So how can we peer 100 years into the future and look at what is coming with respect to the climate? Should we trust those forecasts? Should we trust the climate models that provide them to us?

Six years ago, I set out to find out. I’m a professor of computer science. I study how large teams of software developers can put together complex pieces of software. I’ve worked with NASA, studying how NASA builds the flight software for the Space Shuttle and the International Space Station. I’ve worked with large companies like Microsoft and IBM. My work focusses not so much on software errors, but on the reasons why people make those errors, and how programmers then figure out they’ve made an error, and how they know how to fix it.

To start my study, I visited four major climate modelling labs around the world: in the UK, in Paris, in Hamburg, Germany, and in Colorado. Each of these labs typically has somewhere between 50 and 100 scientists contributing code to its climate models. And although I only visited four of these labs, there are another twenty or so around the world, all doing similar things. They run these models on some of the fastest supercomputers in the world, and many of the models have been under continuous development, as the same evolving model, for more than 20 years.

When I started this study, I asked one of my students to attempt to measure how many bugs there are in a typical climate model. We know from our experience with software that there are always bugs. Sooner or later the machine crashes. So how buggy are climate models? More specifically, what we set out to measure is what we call “defect density” – how many errors there are per thousand lines of code. By this measure, it turns out climate models are remarkably high quality. In fact, they’re better than almost any commercial software that’s ever been studied. They’re about the same level of quality as the Space Shuttle flight software. Here are my results (for the actual numbers you’ll have to read the paper):

[Figure: Defect density results]

We know it’s very hard to build a large complex piece of software without making mistakes.  Even the space shuttle’s software had errors in it. So the question is not “is the software perfect for predicting the future?”. The question is “Is it good enough?” Is it fit for purpose?

To answer that question, we’d better understand what the purpose of a climate model is. First of all, I’d better be clear what a climate model is not. A climate model is not a projection of trends we’ve seen in the past extrapolated into the future. If you did that, you’d be wrong, because you wouldn’t have accounted for what actually causes the climate to change, and so the trend might not continue. They are also not decision-support tools. A climate model cannot tell us what to do about climate change. It cannot tell us whether we should be building more solar panels, or wind farms. It can’t tell us whether we should have a carbon tax. It can’t tell us what we ought to put into an international treaty.

What it does do is tell us how the physics of planet earth work, and what the consequences are of changing things, within that physics. I could describe it as “computational fluid dynamics on a rotating sphere”. But computational fluid dynamics is complex.

I went into my son’s fourth grade class recently, and I started to explain what a climate model is, and the first question they asked me was “is it like Minecraft?”. Well, that’s not a bad place to start. If you’re not familiar with Minecraft, it divides the world into blocks, and the blocks are made of stuff. They might be made of wood, or metal, or water, or whatever, and you can build things out of them. There’s no gravity in Minecraft, so you can build floating islands and it’s great fun.

Climate models are a bit like that. To build a climate model, you divide the world into a number of blocks. The difference is that in Minecraft, the blocks are made of stuff. In a climate model, the blocks are really blocks of space, through which stuff can flow. At each timestep, the program calculates how much water, or air, or ice is flowing into, or out of, each block, and in which directions. It calculates changes in temperature, density, humidity, and so on. And whether stuff such as dust, salt, and pollutants are passing through or accumulating in each block. We have to account for the sunlight passing down through the block during the day. Some of what’s in each block might filter some of the incoming sunlight, for example if there are clouds or dust, so some of the sunlight doesn’t get down to the blocks below. There’s also heat escaping upwards through the blocks, and again, some of what is in the block might trap some of that heat — for example clouds and greenhouse gases.
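
To make that concrete, here’s a minimal sketch in Python (purely illustrative, with invented numbers – nothing from a real model) of the kind of bookkeeping that happens at each timestep: a single quantity, say moisture, being blown around a one-dimensional ring of blocks.

```python
# A toy illustration of the bookkeeping in a single row of "blocks": a tracer
# (say, moisture) blown around a one-dimensional ring by a constant wind,
# using a simple upwind scheme. All the numbers are invented; real models do
# this in three dimensions, for many quantities at once, with far more
# sophisticated numerics.

N = 36            # number of blocks in our toy ring
DX = 100_000.0    # block width in metres (invented)
DT = 600.0        # timestep in seconds (invented)
U = 10.0          # wind speed in m/s, blowing in the +x direction

# Courant condition: the wind must not carry anything further than one block
# per timestep, otherwise the scheme blows up. This is one reason why smaller
# blocks also force smaller timesteps.
assert U * DT / DX <= 1.0

# start with all the moisture concentrated in a few blocks
q = [0.0] * N
for i in range(10, 14):
    q[i] = 1.0

def step(q):
    """Advance the tracer by one timestep: each block loses tracer through its
    downwind face and gains whatever flows in through its upwind face
    (periodic boundaries, so the ring wraps around)."""
    flux = [U * qi for qi in q]                     # flux leaving each block
    return [q[i] - (DT / DX) * (flux[i] - flux[i - 1]) for i in range(N)]

for _ in range(1000):
    q = step(q)

print(f"total amount of tracer is conserved: {sum(q):.3f}")
```

Real models do the same accounting in three dimensions, for dozens of quantities at once, with much more careful numerical schemes – but the core idea of tracking what flows into and out of each block at every timestep is the same.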

As you can see from this diagram, the blocks can be pretty large. The upper figure shows blocks of 87km on a side. If you want more detail in the model, you have to make the blocks smaller. Some of the fastest climate models today look more like the lower figure:

[Figure: Model grid resolution – coarser blocks (87km on a side) above, finer blocks below]

Ideally, you want to make the blocks as small as possible, but then you have many more blocks to keep track of, and you get to the point where the computer just can’t run fast enough. For a typical run of a climate model, simulating a century’s worth of climate, you might have to wait a couple of weeks on some of the fastest supercomputers for that run to complete. So the speed of the computer limits how small we can make the blocks.
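
The reason is a brutal scaling law. As a rough rule of thumb (a standard back-of-the-envelope argument, not a figure from any particular model), if you shrink the blocks in all three dimensions you also have to shrink the timestep in proportion, so the cost grows with roughly the fourth power of the resolution:

```python
# A rough rule of thumb for why higher resolution is so expensive: shrinking
# the blocks means more blocks in each direction, and the timestep has to
# shrink roughly in proportion, so that nothing moves more than one block
# per step.

def relative_cost(shrink_factor):
    """Approximate cost multiplier when the block edges shrink by shrink_factor."""
    more_blocks = shrink_factor ** 3   # more blocks in each of x, y and z
    more_steps = shrink_factor         # proportionally smaller timestep
    return more_blocks * more_steps

for factor in (2, 4, 8):
    print(f"blocks {factor}x smaller on a side -> roughly {relative_cost(factor):.0f}x the computation")
```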

Building models this way is remarkably successful. Here’s video of what a climate model can do today. This simulation shows a year’s worth of weather from a climate model. What you’re seeing is clouds and, in orange, that’s where it’s raining. Compare that to a year’s worth of satellite data for the year 2013. If you put them side by side, you can see many of the same patterns. You can see the westerlies, the winds at the top and bottom of the globe, heading from west to east, and nearer the equator, you can see the trade winds flowing in the opposite direction. If you look very closely, you might even see a pulse over South America, and a similar one over Africa in both the model and the satellite data. That’s the daily cycle as the land warms up in the morning and the moisture evaporates from soils and plants, and then later on in the afternoon as it cools, it turns into rain.

Note that the bottom is an actual year, 2013, while the top, the model simulation is not a real year at all – it’s a typical year. So the two don’t correspond exactly. You won’t get storms forming at the same time, because it’s not designed to be an exact simulation; the climate model is designed to get the patterns right. And by and large, it does. [These patterns aren’t coded into this model. They emerge as a consequences of getting the basic physics of the atmosphere right].

So how do you build a climate model like this? The answer is “very slowly”. It takes a lot of time, and a lot of failure. One of the things that surprised me when I visited these labs is that the scientists don’t build these models to try and predict the future. They build these models to try and understand the past. They know their models are only approximations, and they regularly quote the statistician, George Box, who said “All models are wrong, but some are useful”. What he meant is that any model of the world is only an approximation. You can’t get all the complexity of the real world into a model. But even so, even a simple model is a good way to test your theories about the world.

So the way that modellers work is that they spend their time focussing on places where the model isn’t quite right. For example, maybe the model isn’t getting the Indian monsoon right. Perhaps it’s getting the amount of rain right, but it’s falling in the wrong place. They then form a hypothesis. They’ll say, I think I can improve the model, because I think this particular process is responsible, and if I improve that process in a particular way, then that should fix the simulation of the monsoon cycle. And then they run a whole series of experiments, comparing the old version of the model, which is getting it wrong, with the new version, to test whether the hypothesis is correct. And if after a series of experiments, they believe their hypothesis is correct, they have to convince the rest of the modelling team that this really is an improvement to the model.

In other words, to build the models, they are doing science. They are developing hypotheses, they are running experiments, and they are using a peer review process to convince their colleagues that what they have done is correct:

[Figure: The model development process]

Climate modellers also have a few other weapons up their sleeves. Imagine for a moment if Microsoft had 25 competitors around the world, all of whom were attempting to build their own versions of Microsoft Word. Imagine further that every few years, those 25 companies all agreed to run their software on a very complex battery of tests, designed to test all the different conditions under which you might expect a word processor to work. And not only that, but they agree to release all the results of those tests to the public, on the internet, so that anyone who wanted to use any of that software can pore over all the data and find out how well each version did, and decide which version they want to use for their own purposes. Well, that’s what climate modellers do. There is no other software in the world for which there are 25 teams around the world trying to build the same thing, and competing with each other.

Climate modellers also have some other advantages. In some sense, climate modelling is actually easier than weather forecasting. I can show you what I mean by that. Imagine I had a water balloon (actually, you don’t have to imagine – I have one here):

[Photo: about to throw the water balloon]

I’m going to throw it at the fifth row. Now, you might want to know who will get wet. You could measure everything about my throw: Will I throw underarm, or overarm? Which way am I facing when I let go of it? How much swing do I put in? If you could measure all of those aspects of my throw, and you understand the physics of how objects move, you could come up with a fairly accurate prediction of who is going to get wet.

That’s like weather forecasting. We have to measure the current conditions as accurately as possible, and then project forward to see what direction it’s moving in:

[Figure: Weather forecasting – projecting forward from measured current conditions]

If I make any small mistakes in measuring my throw, those mistakes will multiply as the balloon travels further. The further I attempt to throw it, the more room there is for inaccuracy in my estimate. That’s like weather forecasting. Any errors in the initial conditions multiply up rapidly, and the current limit appears to be about a week or so. Beyond that, the errors get so big that we just cannot make accurate forecasts.

In contrast, climate models would be more like releasing a balloon into the wind, and predicting where it will go by knowing about the wind patterns. I’ll make some wind here using a fan:

[Photo: the balloon bobbing in the wind from the fan]

Now that balloon is going to bob about in the wind from the fan. I could go away and come back tomorrow and it will still be doing about the same thing. If the power stays on, I could leave it for a hundred years, and it might still be doing the same thing. I won’t be able to predict exactly where that balloon is going to be at any moment, but I can predict, very reliably, the space in which it will move. I can predict the boundaries of its movement. And if the things that shape those boundaries change, for example by moving the fan, and I know what the factors are that shape those boundaries, I can tell you how the patterns of its movements are going to change – how the boundaries are going to change. So we call that a boundary problem:

[Figure: Climate as a boundary problem]

The initial conditions are almost irrelevant. It doesn’t matter where the balloon started, what matters is what’s shaping its boundary.
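
You can see the same distinction in a toy chaotic system. Here’s a small Python sketch (the classic Lorenz-63 equations – nothing to do with a real climate model, but it shows the principle): two runs started from almost identical initial conditions soon diverge completely, which is why weather forecasts fail beyond a week or so, yet the long-run statistics of the two runs are very similar, because both are confined by the same boundaries.

```python
# Two runs of the Lorenz-63 system from nearly identical starting points.
# The individual paths diverge (an initial value problem, like weather),
# but the long-term statistics agree (a boundary problem, like climate).

def lorenz_deriv(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(s, dt=0.01):
    # fourth-order Runge-Kutta integration of the Lorenz equations
    k1 = lorenz_deriv(s)
    k2 = lorenz_deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = lorenz_deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = lorenz_deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def run(start, steps=50_000):
    s, zs = start, []
    for _ in range(steps):
        s = rk4_step(s)
        zs.append(s[2])
    return s, zs

end_a, zs_a = run((1.0, 1.0, 1.0))
end_b, zs_b = run((1.000001, 1.0, 1.0))   # a tiny change in initial conditions

print("final states differ completely:")
print(" ", end_a)
print(" ", end_b)
print("but the long-run statistics are very similar:")
print("  mean z:", sum(zs_a) / len(zs_a), "vs", sum(zs_b) / len(zs_b))
print("  max z: ", max(zs_a), "vs", max(zs_b))
```

Change the parameters of the system – the equivalent of moving the fan – and those statistics shift in a predictable way, even though the individual trajectory never becomes predictable.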

So can these models predict the future? Are they good enough to predict the future? The answer is “yes and no”. We know the models are better at some things than others. They’re better at simulating changes in temperature than they are at simulating changes in rainfall. We also know that each model tends to be stronger in some areas and weaker in others. If you take the average of a whole set of models, you get a much better simulation of how the planet’s climate works than if you look at any individual model on its own. What happens is that the weaknesses in any one model are compensated for by other models that don’t have those weaknesses.

But the results of the models have to be interpreted very carefully, by someone who knows what the models are good at, and what they are not good at – you can’t just take the output of a model and say “that’s how it’s going to be”.

Also, you don’t actually need a computer model to predict climate change. The first predictions of what would happen if we keep on adding carbon dioxide to the atmosphere were produced over 120 years ago. That’s fifty years before the first digital computer was invented. And those predictions were pretty accurate – what has happened over the twentieth century has followed very closely what was predicted all those years ago. Scientists also predicted, for example, that the arctic would warm faster than the equatorial regions, and that’s what happened. They predicted night time temperatures would rise faster than day time temperatures, and that’s what happened.

So in many ways, the models only add detail to what we already know about the climate. They allow scientists to explore “what if” questions. For example, you could ask of a model, what would happen if we stopped burning all fossil fuels tomorrow. And the answer from the models is that the temperature of the planet will stay at whatever temperature it was when we stopped. For example, if we wait twenty years and then stop, we’re stuck with whatever temperature we’re at by then for tens of thousands of years. You could ask a model what happens if we dig up all known reserves of fossil fuels, and burn them all at once, in one big party. Well, it gets very hot.

More interestingly, you could ask what if we tried blocking some of the incoming sunlight to cool the planet down, to compensate for some of the warming we’re getting from adding greenhouse gases to the atmosphere? There have been a number of very serious proposals to do that. There are some who say we should float giant space mirrors. That might be hard, but a simpler way of doing it is to put dust up in the stratosphere, and that blocks some of the incoming sunlight. It turns out that if you do that, you can very reliably bring the average temperature of the planet back down to whatever level you want, just by adjusting the amount of the dust. Unfortunately, some parts of the planet cool too much, and others not at all. The crops don’t grow so well, and everyone’s weather gets messed up. So it seems like that could be a solution, but when you study the model results in detail, there are too many problems.

Remember that we know fairly well what will happen to the climate if we keep adding CO2, even without using a computer model, and the computer models just add detail to what we already know. If the models are wrong, they could be wrong in either direction. They might under-estimate the warming just as much as they might over-estimate it. If you look at how well the models can simulate the past few decades, especially the last decade, you’ll see some of both. For example, the models have under-estimated how fast the arctic sea ice has melted. The models have underestimated how fast the sea levels have risen over the last decade. On the other hand, they over-estimated the rate of warming at the surface of the planet. But they underestimated the rate of warming in the deep oceans, so some of the warming ends up in a different place from where the models predicted. So they can under-estimate just as much as they can over-estimate. [The less certain we are about the results from the models, the bigger the risk that the warming might be much worse than we think.]

So when you see a graph like this, which comes from the latest IPCC report that just came out last month, it doesn’t tell us what to do about climate change, it just tells us the consequences of what we might choose to do. Remember, humans aren’t represented in the models at all, except in terms of us producing greenhouse gases and adding them to the atmosphere.

[Figure: IPCC AR5 WG1 Fig 12.5 – projected warming under different emissions pathways]

If we keep on increasing our use of fossil fuels — finding more oil, building more pipelines, digging up more coal, we’ll follow the top path. And that takes us to a planet that by the end of this century, is somewhere between 4 and 6 degrees warmer, and it keeps on getting warmer over the next few centuries. On the other hand, the bottom path, in dark blue, shows what would happen if, year after year from now onwards, we use less fossil fuels than we did the previous year, until about mid-century, when we get down to zero emissions, and we invent some way to start removing that carbon dioxide from the atmosphere before the end of the century, to stay below 2 degrees of warming.

The models don’t tell us which of these paths we should follow. They just tell us that if this is what we do, here’s what the climate will do in response. You could say that what the models do is take all the data and all the knowledge we have about the climate system and how it works, and put them into one neat package, and it’s our job to take that knowledge and turn it into wisdom, and to decide which future we would like.

Yesterday I talked about three reinforcing feedback loops in the earth system, each of which has the potential to accelerate a warming trend once it has started. I also suggested there are other similar feedback loops, some of which are known, and others perhaps yet to be discovered. For example, a paper published last month suggested a new feedback loop, to do with ocean acidification. In a nutshell, as the ocean absorbs more CO2, it becomes more acidic, which inhibits the growth of phytoplankton. These plankton are a major source of sulphur compounds that end up as aerosols in the atmosphere, which seed the formation of clouds. Fewer clouds mean lower albedo, which means more warming. Whether this feedback loop is important remains to be seen, but we do know that clouds have an important role to play in climate change.

I didn’t include clouds on my diagrams yet, because clouds deserve a special treatment, in part because they are involved in two major feedback loops that have opposite effects:

[Figure: Two opposing cloud feedback loops. An increase in temperature leads to an increase in moisture in the atmosphere. This leads to two new loops…]

As the earth warms, we get more moisture in the atmosphere (simply because there is more evaporation from the surface, and warmer air can hold more moisture). Water vapour is a powerful greenhouse gas, so the more there is in the atmosphere, the more warming we get (greenhouse gases reduce the outgoing radiation). So this sets up a reinforcing feedback loop: more moisture causes more warming causes more moisture.

However, if there is more moisture in the atmosphere, there’s also likely to be more cloud formation. Clouds raise the albedo of the planet and reflect sunlight back into space before it can reach the surface. Hence, there is also a balancing loop: by blocking more sunlight, extra clouds will help to put the brakes on any warming. Note that I phrased this carefully: this balancing loop can slow a warming trend, but it does not create a cooling trend. Balancing loops tend to stop a change from occurring, but they do not create a change in the opposite direction. For example, if enough clouds form to completely counteract the warming, they also remove the mechanism (i.e. warming!) that causes growth in cloud cover in the first place. If we did end up with so many extra clouds that it cooled the planet, the cooling would then remove the extra clouds, so we’d be back where we started. In fact, this loop is nowhere near that strong anyway. [Note that under some circumstances, balancing loops can lead to oscillations, rather than gently converging on an equilibrium point, and the first wave of a very slow oscillation might be mistaken for a cooling trend. We have to be careful with our assumptions and timescales here!].

So now we have two new loops that set up opposite effects – one tends to accelerate warming, and the other tends to decelerate it. You can experience both these effects directly: cloudy days tend to be cooler than sunny days, because the clouds reflect away some of the sunlight. But cloudy nights tend to be warmer than clear nights because the water vapour traps more of the escaping heat from the surface. In the daytime, both effects are operating, and the cooling effect tends to dominate. During the night, there is no sunlight to block, so only the warming effect works.

If we average out the effects of these loops over many days, months, or years, which of the effects dominate? (i.e. which loop is stronger?) Does the extra moisture mean more warming or less warming? This is clearly an area where building a computer model and experimenting with it might help, as we need to quantify the effects to understand them better. We can build good computer models of how clouds form at the small scale, by simulating the interaction of dust and water vapour. But running such a model for the whole planet is not feasible with today’s computers.

To make things a little more complicated, these two feedback loops interact with other things. For example, another likely feedback loop comes from a change in the vertical temperature profile of the atmosphere. Current models indicate that, at least in the tropics, the upper atmosphere will warm faster than the surface (in technical terms, it will reduce the lapse rate – the rate at which temperature drops as you climb higher). This then increases the outgoing radiation, because it’s from the upper atmosphere that the earth loses its heat to space. This creates another (small) balancing feedback:

[Figure: The lapse rate feedback – if the upper troposphere warms faster than the surface (i.e. a lower lapse rate), this increases outgoing radiation from the planet.]

Note that this lapse rate feedback operates in the same way as the main energy balance loop – the two ‘-‘ links have the same effect as the existing ‘+’ link from temperature to outgoing infra-red radiation. In other words this new loop just strengthens the effect of the existing loop – for convenience we could just fold both paths into the one link.

However, water vapour feedback can interact with this new feedback loop, because the warmer upper atmosphere will hold more water vapour in exactly the place where it’s most effective as a greenhouse gas. Not only that, but clouds themselves can change the vertical temperature profile, depending on their height. I said it was complicated!

The difficulty of simulating all these different interactions of clouds accurately leads to one of the biggest uncertainties in climate science. In 1979, the Charney report calculated that all these cloud and water vapour feedback loops roughly cancel out, but pointed out that there was a large uncertainty bound on this estimate. More than thirty years later, we understand much more about how cloud formation and distribution are altered in a warming world, but our margins of error for calculating cloud effects have barely narrowed, because of the difficulty of simulating them on a global scale. Our best guess is now that the (reinforcing) water vapour feedback loop is slightly stronger than the (balancing) cloud albedo and lapse rate loops. So the net effect of these three loops is an amplifying effect on the warming.
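
For those who like to see the arithmetic, here’s a minimal sketch of how such loops are usually quantified (the numbers below are round, illustrative values, not the actual published estimates): each loop gets a feedback factor f, and the loops combine to amplify or damp the no-feedback warming according to ΔT = ΔT₀ / (1 − Σf).

```python
# How feedback loops combine: a reinforcing loop has a positive feedback
# factor, a balancing loop a negative one. The warming with feedbacks is the
# no-feedback warming divided by (1 - sum of the factors).
# All of the numbers here are round, illustrative values only.

no_feedback_warming = 1.2   # degrees C for doubled CO2 with no feedbacks at all

feedback_factors = {
    "water vapour (reinforcing)": 0.5,
    "cloud albedo (balancing)": -0.2,
    "lapse rate (balancing)": -0.1,
}

f = sum(feedback_factors.values())
print(f"net feedback factor: {f:+.2f}")
print(f"warming with feedbacks: {no_feedback_warming / (1.0 - f):.2f} C")

# The closed form is just the limit of going around the loop again and again:
# an initial warming dT0 causes an extra f*dT0, which causes f*f*dT0, and so on.
total, term = 0.0, no_feedback_warming
for _ in range(100):
    total += term
    term *= f
print(f"same answer by summing successive trips around the loop: {total:.2f} C")
```

As long as the combined factor stays below 1, the amplification converges rather than running away.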

We’re taking the kids to see their favourite band: Muse are playing in Toronto tonight. I’m hoping they play my favourite track:

I find this song fascinating, partly because of the weird mix of progressive rock and dubstep. But more for the lyrics:

All natural and technological processes proceed in such a way that the availability of the remaining energy decreases. In all energy exchanges, if no energy enters or leaves an isolated system, the entropy of that system increases. Energy continuously flows from being concentrated to becoming dispersed, spread out, wasted and useless. New energy cannot be created and high grade energy is destroyed. An economy based on endless growth is unsustainable. The fundamental laws of thermodynamics will place fixed limits on technological innovation and human advancement. In an isolated system, the entropy can only increase. A species set on endless growth is unsustainable.

This summarizes, perhaps a little too succinctly, the core of the critique of our current economy, first articulated clearly in 1972 by the Club of Rome in the Limits to Growth Study. Unfortunately, that study was widely dismissed by economists and policymakers. As Jorgen Randers points out in a 2012 paper, the criticism of the Limits to Growth study was largely based on misunderstandings, and the key lessons are absolutely crucial to understanding the state of the global economy today, and the trends that are likely over the next few decades. In a nutshell, humans exceeded the carrying capacity of the planet sometime in the latter part of the 20th century. We’re now in the overshoot portion, where it’s only possible to feed the world and provide energy for economic growth by consuming irreplaceable resources and using up environmental capital. This cannot be sustained.

In general systems terms, there are three conditions for sustainability (I believe it was Herman Daly who first set them out in this way):

  1. We cannot use renewable resources faster than they can be replenished.
  2. We cannot generate wastes faster than they can be absorbed by the environment.
  3. We cannot use up any non-renewable resource.

We can and do violate all of these conditions all the time. Indeed, modern economic growth is based on systematically violating all three of them, but especially #3, as we rely on cheap fossil fuel energy. But any system that violates these rules cannot be sustained indefinitely, unless it is also able to import resources and export wastes to other (external) systems. The key problem for the 21st century is that we’re now violating all three conditions on a global scale, and there are no longer other systems that we can rely on to provide a cushion – the planet as a whole is an isolated system. There are really only two paths forward: either we figure out how to re-structure the global economy to meet Daly’s three conditions, or we face a global collapse (for an understanding of the latter, see Graham Turner’s 2012 paper).

A species set on endless growth is unsustainable.

Last week, Damon Matthews from Concordia visited, and gave a guest CGCS lecture, “Cumulative Carbon and the Climate Mitigation Challenge”. The key idea he addressed in his talk is the question of “committed warming” – i.e. how much warming are we “owed” because of carbon emissions in the past (irrespective of what we do with emissions in the future). But before I get into the content of Damon’s talk, here’s a little background.

The question of ‘owed’ or ‘committed’ warming arises because we know it takes some time for the planet to warm up in response to an increase in greenhouse gases in the atmosphere. You can calculate a first approximation of how much it will warm up from a simple energy balance model (like the ones I posted about last month). However, to calculate how long it takes to warm up you need to account for the thermal mass of the oceans, which absorb most of the extra energy and hence slow the rate of warming of surface temperatures. For this you need more than a simple energy balance model.

You can do a very simple experiment with a General Circulation Model, by setting CO2 concentrations at double their pre-industrial levels, and then leaving them constant at that level, to see how long the earth takes to reach a new equilibrium temperature. Typically, this takes several decades, although the models differ on exactly how long. Here’s what it looks like if you try this with EdGCM (I ran it with doubled CO2 concentrations starting in 1958):

[Figure: EdGCM results – global temperature response over time to doubled CO2]

Of course, the concentrations would never instantaneously double like that, so a more common model experiment is to increase CO2 levels gradually, say by 1% per year (that’s a little faster than how they have risen in the last few decades) until they reach double the pre-industrial concentrations (which takes approx 70 years), and then leave them constant at that level. This particular experiment is a standard way of estimating the Transient Climate Response – the expected warming at the moment we first reach a doubling of CO2 – and is included in the CMIP5 experiments. In these model experiments, it typically takes a few decades more of warming until a new equilibrium point is reached, and the models indicate that the transient response is expected to be a little over half of the eventual equilibrium warming.
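
To see how the ocean’s thermal inertia produces that behaviour, here’s a minimal two-layer energy balance sketch in Python – nothing like a full GCM, just a mixed layer coupled to a deep ocean, with illustrative parameter values of roughly the size obtained when toy models like this are fitted to GCM output. It runs the standard experiment: ramp CO2 by 1% per year until it doubles, then hold it constant.

```python
import math

# A minimal two-layer energy balance model: an ocean mixed layer coupled to
# a deep ocean. The parameter values are illustrative, but of roughly the
# size obtained when toy models like this are fitted to GCM behaviour.

F2X = 3.9       # radiative forcing from doubled CO2 (W/m^2)
LAM = 1.2       # climate feedback parameter (W/m^2 per degree C)
GAMMA = 1.0     # heat exchange coefficient with the deep ocean (W/m^2 per degree C)
C_MIX = 8.0     # heat capacity of the mixed layer (W*yr/m^2 per degree C)
C_DEEP = 150.0  # heat capacity of the deep ocean (W*yr/m^2 per degree C)

def run(years, dt=0.1):
    """Ramp CO2 by 1%/yr until it doubles (~70 years), then hold it constant."""
    t_surf, t_deep = 0.0, 0.0
    ratio = 1.0                    # CO2 concentration relative to pre-industrial
    t_at_doubling = None
    for _ in range(int(years / dt)):
        if ratio < 2.0:
            ratio *= 1.01 ** dt    # 1% per year compound increase
        elif t_at_doubling is None:
            t_at_doubling = t_surf # transient climate response (TCR)
        forcing = F2X * math.log(ratio) / math.log(2.0)
        # energy balance of the mixed layer and of the deep ocean
        d_surf = (forcing - LAM * t_surf - GAMMA * (t_surf - t_deep)) / C_MIX
        d_deep = GAMMA * (t_surf - t_deep) / C_DEEP
        t_surf += d_surf * dt
        t_deep += d_deep * dt
    return t_at_doubling, t_surf

tcr, warming_after_1000yr = run(1000)
print(f"warming at the moment CO2 doubles (TCR):   {tcr:.2f} C")
print(f"warming after 1000 years at doubled CO2:   {warming_after_1000yr:.2f} C")
print(f"equilibrium warming (F2x / lambda):        {F2X / LAM:.2f} C")
```

With these invented-but-plausible numbers, the warming at the moment of doubling comes out at a little over half the eventual equilibrium warming, which is the pattern described above.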

This leads to a (very rough) heuristic that as the planet warms, we’re always ‘owed’ almost as much warming again as we’ve already seen at any point, irrespective of future emissions, and it will take a few decades for all that ‘owed’ warming to materialize. But, as Damon argued in his talk, there are two problems with this heuristic. First, it confuses the issue when discussing the need for an immediate reduction in carbon emissions, because it suggests that no matter how fast we reduce them, the ‘owed’ warming means such reductions will make little difference to the expected warming in the next two decades. Second, and more importantly, the heuristic is wrong! How so? Read on!

For an initial analysis, we can view the climate problem just in terms of carbon dioxide, as the most important greenhouse gas. Increasing CO2 emissions leads to increasing CO2 concentrations in the atmosphere, which leads to temperature increases, which lead to climate impacts. And of course, there’s a feedback in the sense that our perceptions of the impacts (whether now or in the future) lead to changed climate policies that constrain CO2 emissions.

So, what happens if we were to stop all CO2 emissions instantly? The naive view is that temperatures would continue to rise, because of the ‘climate commitment’ – the ‘owed’ warming that I described above. However, most models show that the temperature stabilizes almost immediately. To understand why, we need to realize there are different ways of defining ‘climate commitment’:

  • Zero emissions commitment – How much warming do we get if we set CO2 emissions from human activities to be zero?
  • Constant composition commitment – How much warming do we get if we hold atmospheric concentrations constant? (in this case, we can still have some future CO2 emissions, as long as they balance the natural processes that remove CO2 from the atmosphere).

The difference between these two definitions is shown here. Note that in the zero emissions case, concentrations drop from an initial peak, and then settle down at a lower level:

[Figure: CO2 concentrations under the zero emissions and constant composition commitments]

[Figure: Warming under the zero emissions and constant composition commitments]

The model experiments most people are familiar with are the constant composition experiments, in which there is continued warming. But in the zero emissions scenarios, there is almost no further warming. Why is this?

The relationship between carbon emissions and temperature change (the “Carbon Climate Response”) is complicated, because it depends on two factors, each of which is complicated by (different types of) inertia in the system:

  • Climate Sensitivity – how much temperature changes in response to different levels of CO2 in the atmosphere. The temperature response is slowed down by the thermal inertia of the oceans, which means it takes several decades for the earth’s surface temperatures to respond fully to a change in CO2 concentrations.
  • Carbon sensitivity – how much concentrations of CO2 in the atmosphere change in response to different levels of carbon emissions. A significant fraction (roughly half) of our CO2 emissions is absorbed by the oceans and the land biosphere, but this also takes time. We can think of this as “carbon cycle inertia” – the delay in uptake of the extra CO2, which also takes several decades. [Note: there is a second kind of carbon system inertia, by which it takes tens of thousands of years for the rest of the CO2 to be removed, via very slow geological processes such as rock weathering.]

[Figure: The carbon climate response]

It turns out that the two forms of inertia roughly balance out. The thermal inertia of the oceans slows the rate of warming, while the carbon cycle inertia accelerates it. Our naive view of the “owed” warming is based on an understanding of only one of these, the thermal inertia of the ocean, because much of the literature talks only about climate sensitivity, and ignores the question of carbon sensitivity.

The fact that these two forms of inertia tend to balance leads to another interesting observation. The models all show an approximately linear response to cumulative emissions. For example, here are the CMIP3 models, used in the IPCC AR4 report (the average of the models, indicated by the arrow, is around 1.6°C of warming per 1,000 gigatonnes of carbon):

[Figure: Temperature change against cumulative carbon emissions (CMIP3 models)]

The same relationship seems to hold for the CMIP5 models, many of which now include a dynamic carbon cycle:

[Figure: Temperature change against cumulative carbon emissions (CMIP5 models)]

This linear relationship isn’t determined by any physical properties of the climate system, and probably won’t hold in much warmer or cooler climates, nor when other feedback processes kick in. So we could say it’s a coincidental property of our current climate. However, it’s rather fortuitous for policy discussions.

Historically, we have emitted around 550 billion tonnes of carbon since the beginning of the industrial era, which gives us an expected temperature response of around 0.9°C. If we want to hold temperature rises to no more than 2°C of warming, total future emissions should not exceed a further 700 billion tonnes of carbon. In effect, this gives us a total worldwide carbon budget for the future. The hard policy question, of course, is then how to allocate this budget among the nations (or people) of the world in an equitable way.
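
The arithmetic behind those numbers is simple enough to spell out (using the rough figures quoted above – an average response of about 1.6°C per 1,000 GtC, and about 550 GtC emitted so far):

```python
# Back-of-the-envelope cumulative carbon budget, using the rough figures
# quoted above. These are round numbers for illustration, not a precise
# calculation.

warming_per_1000_GtC = 1.6   # degrees C of warming per 1000 GtC emitted
emitted_so_far = 550         # GtC emitted since the start of the industrial era
target = 2.0                 # warming threshold in degrees C

warming_so_far = warming_per_1000_GtC * emitted_so_far / 1000
total_budget = target / warming_per_1000_GtC * 1000
remaining = total_budget - emitted_so_far

print(f"expected warming from emissions to date: {warming_so_far:.1f} C")
print(f"total budget for staying under {target} C:  {total_budget:.0f} GtC")
print(f"remaining budget:                        {remaining:.0f} GtC")
```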

[A few years ago, I blogged about a similar analysis, which says that cumulative carbon emissions should not exceed 1 trillion tonnes in total, ever. That calculation gives us a smaller future budget of less than 500 billion tonnes. That result came from analysis using the Hadley model, which has one of the higher slopes on the graphs above. Which number we use for a global target might then depend on which model we believe gives the most accurate projections, and perhaps how we also factor in the uncertainties. If the uncertainty range across models is accurate, then picking the average would give us a 50:50 chance of staying within the temperature threshold of 2°C. We might want better odds than this, and hence a smaller budget.]

In the National Academies report in 2011, the cumulative carbon budgets for each temperature threshold were given as follows (note the size of the uncertainty whiskers on each bar):

[Figure: Cumulative carbon budgets for different temperature thresholds (National Academies, 2011)]

[For a more detailed analysis see: Matthews, H. D., Solomon, S., & Pierrehumbert, R. (2012). Cumulative carbon as a policy framework for achieving climate stabilization. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 370(1974), 4365–4379. doi:10.1098/rsta.2012.0064]

So, this allows us to clear up some popular misconceptions:

The idea that there is some additional warming owed, no matter what emissions pathway we follow is incorrect. Zero future emissions means little to no future warming, so future warming depends entirely on future emissions. And while the idea of zero future emissions isn’t policy-relevant (because zero emissions is impossible, at least in the near future), it does have implications for how we discuss policy choices. In particular, it means the idea that CO2 emissions cuts will not have an effect on temperature change for several decades is also incorrect. Every tonne of CO2 emissions avoided has an immediate effect on reducing the temperature response.

Another source of confusion is the emissions scenarios used in the IPCC report. They don’t diverge significantly for the first few decades, largely because we’re unlikely (and to some extent unable) to make massive emissions reductions in the next 1-2 decades, because society is very slow to respond to the threat of climate change, and even when we do respond, the amount of existing energy infrastructure that has to be rebuilt is huge. In this sense, there is some inevitable future warming, but it comes from future emissions that we cannot or will not avoid. In other words, political, socio-economic and technological inertia are the primary causes of future climate warming, rather than any properties of the physical climate system.

I’ve been collecting examples of different types of climate model that students can use in the classroom to explore different aspects of climate science and climate policy. In the long run, I’d like to use these to make the teaching of climate literacy much more hands-on and discovery-based. My goal is to foster more critical thinking, by having students analyze the kinds of questions people ask about climate, figure out how to put together good answers using a combination of existing data, data analysis tools, simple computational models, and more sophisticated simulations. And of course, learn how to critique the answers based on the uncertainties in the lines of evidence they have used.

Anyway, as a start, here’s a collection of runnable and not-so-runnable models, some of which I’ve used in the classroom:

Simple Energy Balance Models (for exploring the basic physics)

General Circulation Models (for studying earth system interactions)

  • EdGCM – an educational version of the NASA GISS general circulation model (well, an older version of it). EdGCM provides a simplified user interface for setting up model runs, but allows for some fairly sophisticated experiments. You typically need to let the model run overnight for a century-long simulation.
  • Portable University Model of the Atmosphere (PUMA) – a planet simulator designed by folks at the University of Hamburg for use in the classroom, to help train students interested in becoming climate scientists.

Integrated Assessment Models (for policy analysis)

  • C-Learn, a simple policy analysis tool from Climate Interactive. Allows you to specify emissions trajectories for three groups of nations, and explore the impact on global temperature. This is a simplified version of the C-ROADS model, which is used to analyze proposals during international climate treaty negotiations.
  • Java Climate Model (JCM) – a desktop assessment model that offers detailed controls over different emissions scenarios and regional responses.

Systems Dynamics Models (to foster systems thinking)

  • Bathtub Dynamics and Climate Change from John Sterman at MIT. This simulation is intended to get students thinking about the relationship between emissions and concentrations, using the bathtub metaphor. It’s based on Sterman’s work on mental models of climate change.
  • The Climate Challenge: Our Choices, also from Sterman’s team at MIT. This one looks fancier, but gives you less control over the simulation – you can just pick one of three emissions paths: increasing, stabilized or reducing. On the other hand, it’s very effective at demonstrating the point about emissions vs. concentrations.
  • Carbon Cycle Model from Shodor, originally developed using Stella by folks at Cornell.
  • And while we’re on systems dynamics, I ought to mention toolkits for building your own systems dynamics models, such as Stella from ISEE Systems (here’s an example of it used to teach the global carbon cycle).

Other Related Models

  • A Kaya Identity Calculator, from David Archer at U Chicago. The Kaya identity is a way of expressing the interaction between the key drivers of carbon emissions: population growth, economic growth, energy efficiency, and the carbon intensity of our energy supply. Archer’s model allows you to play with these numbers.
  • An Orbital Forcing Calculator, also from David Archer. This allows you to calculate the effect that changes in the earth’s orbit and the wobble of its axis have on the solar energy that the earth receives, in any year in the past or future.

Useful readings on the hierarchy of climate models

This term, I’m running my first year seminar course, “Climate Change: Software Science and Society” again. The outline has changed a little since last year, but the overall goals of the course are the same: to take a small, cross-disciplinary group of first year undergrads through some of the key ideas in climate modeling.

As last year, we’re running a course blog, and the first assignment is to write a blog post for it. Please feel free to comment on the students’ posts, but remember to keep your comments constructive!

Oh I do hate seeing blog posts with titles like “Easterbrook’s Wrong (Again)”. Luckily, it’s not me they’re talking about. It’s some other dude who, as far as I know, is completely unrelated to me. And that’s a damn good thing, as this Don Easterbrook appears to be a serial liar. Apparently he’s an emeritus geology prof from some university in the US. And because he’s ready to stand up and spout scientific-sounding nonsense about “global cooling”, he gets invited to talk to journalists all the time. And then his misinformation gets duly repeated on blog threads all over the internet, despite the efforts of a small group of bloggers trying to clean up the mess:

In a way, this is another instance of the kind of denial of service attack I talked about last year. One retired professor fakes a few graphs, and they spread so widely over the internet that many good honest science bloggers have to stop what they’re doing, research the fraud, and expose it. And still they can’t stop the nonsense from spreading (just google “Don Easterbrook” to see how widely he’s quoted, usually in glowing terms).

The depressing thing is that he’s not the only Easterbrook doing this. I appear to be out-numbered: ClimateProgress, Sept 13, 2008: Gregg Easterbrook still knows nothing about global warming — and less about clean energy.

William Connolley has written a detailed critique of our paper “Engineering the Software for Understanding Climate Change”, which follows on from a very interesting discussion about “Amateurish Supercomputing Codes?” in his previous post. One of the issues raised in that discussion is the reward structure in scientific labs for software engineers versus scientists. The funding in such labs is pretty much all devoted to “doing science” which invariably means publishable climate science research. People who devote time and effort to improving the engineering of the model code might get a pat on the back, but inevitably it’s under-rewarded because it doesn’t lead directly to publishable science. The net result is that all the labs I’ve visited so far (UK Met Office, NCAR, MPI-M) have too few software engineers working on the model code.

Which brings up another point. Even if these labs decided to devote more budget to the software engineering effort (and it’s not clear how easy it would be to do this, without re-educating funding agencies), where will they recruit the necessary talent? They could try bringing in software professionals who don’t yet have the domain expertise in climate science, and see what happens. I can’t see this working out well on a large scale. The more I work with climate scientists, the more I appreciate how much domain expertise it takes to understand the science requirements, and to develop climate code. The potential culture clash is huge: software professionals (especially seasoned ones) tend to be very opinionated about “the right way to build software”, and insensitive to contextual factors that might make their previous experiences inapplicable. I envision lots of the requirements that scientists care about most (e.g. the scientific validity of the models) getting trampled on in the process of “fixing” the engineering processes. Right now the trade-off between getting the science right versus having beautifully engineered models is tipped firmly in favour of the former. Tipping it the other way might be a huge mistake for scientific progress, and very few people seem to understand how to get both right simultaneously.

The only realistic alternative is to invest in training scientists to become good software developers. Greg Wilson is pretty much the only person around who is covering this need, but his software carpentry course is desperately underfunded. We’re going to need a lot more like this to fix things…

Last week, I ran a workshop for high school kids from across Toronto on “What can computer models tell us about climate change?“. I already posted some of the material I used: the history of our knowledge about climate change. Jorge, Jon and Val ran another workshop after mine, entitled “Climate change and the call to action: How you can make a difference“. They have already blogged their reflections: See Jon’s summary of the workshop plan, and reflections on how to do it better next time, and the need for better metaphors. I think both workshops could have done with being longer, for more discussion and reflection (we were scheduled only 75 minutes for each). But I enjoyed both workshops a lot, as I find it very useful for my own thinking to consider how to talk about climate change with kids, in this case mainly from grade 10 (≈15 years old).

The main idea I wanted to get across in my workshop was the role of computer models: what they are, and how we can use them to test out hypotheses about how the climate works. I really wanted to do some live experiments, but of course, this is a little hard when a typical climate simulation run takes weeks of processing time on a supercomputer. There are some tools that high school kids could play with in the classroom, but none of them are particularly easy to use, and of course, they all sacrifice resolution for ability to run on a desktop machine. Here are some that I’ve played with:

  • EdGCM – This is the most powerful of the bunch. It’s a full General Circulation Model (GCM), based on NASA’s models, and does support many different types of experiment. The license isn’t cheap (personally, I think it ought to be free and open source, but I guess they need a rich sponsor for that), but I’ve been playing with the free 30-day license. A full century of simulation tied up my laptop for 24 hours, but I kinda liked that, as it’s a bit like how you have to wait for results on a full-scale model too (it even got hot, and I had to think about how to cool it, again just like a real supercomputer…). I do like the way that the documentation guides you through the process of creating an experiment, and the idea of then ‘publishing’ the results of your experiment to a community website.
  • JCM – This is (as far as I can tell) a box model, that allows you to experiment with outcomes of various emissions scenarios, based on the IPCC projections, which means it’s simple enough to give interactive outputs. It’s free and open source, but a little cumbersome to use – the interface doesn’t offer enough guidance for novice users. It might work well in a workshop, with lots of structured guidance for how to use it, but I’m not convinced such a simplistic model offers much value over just showing some of the IPCC graphs and talking about them.
  • Climate Interactive (and the C-Roads model). C-ROADS is also a box model, but with the emissions of different countries/regions separated out, to allow exploration of the choices in international climate negotiations. I’ve played a little with C-ROADS, and found it frustrating because it ignores all the physics, and after all, my main goal in playing with climate models with kids is to explore how climate processes work, rather than the much narrower task of analyzing policy choices. It also seems to be hard to tell the difference between various different policy choices – even when I try to run it with  extreme choices (cease all emissions next year vs. business as usual), the outputs are all of a similar shape (“it gets steadily warmer”). This may well be the correct output, but the overall message is a little unfortunate: whatever policy path we choose, the results look pretty similar. Showing the results of different policies as a set of graphs showing the warming response doesn’t seem very insightful; it would be better to explore different regional impacts, but for that we’re back to needing a full GCM.
  • CCCSN – the Canadian Climate Change Scenarios Network. This isn’t a model at all, but rather a front end to the IPCC climate simulation dataset. The web tool allows you to get the results from a number of experiments that were run for the IPCC assessments, selecting which model you want, which scenario you want, which output variables you want (temperature, precipitation, etc), and allows you to extract just a particular region, or the full global data. I think this is more useful than C-ROADS, because once you download the data, you can graph it in various ways, and explore how different regions are affected.
  • Some Online models collected by David Archer, which I haven’t played with much, but which include some box models, some 1-dimensional models, and the outputs of NCAR’s GCM (which I think is the one of the datasets included in CCCSN). Not much explanation is provided here though – you have to know what you’re doing…
  • John Sterman’s Bathtub simulation. Again, a box model (actually, a stocks-and-flows dynamics model), but this one is intended more to educate people about the basic systems dynamics principles, rather than to explore policy choices. So I already like it better than C-ROADS, except that I think the user interface could do with a serious make-over, and there’s way too much explanatory text – there must be a way to do this with more hands-on work and less exposition. It also suffers from a problem similar to C-ROADS: it allows you to control emissions pathways, and explore the resulting atmospheric concentrations and hence temperature. But the problem is, we can’t control emissions directly – we have to put in place a set of policies and deploy alternative energy technologies to indirectly affect emissions. So either we’d want to run the model backwards (to ask what emissions pathway we’d have to follow to keep below a specific temperature threshold), or we’d want as inputs the things we can affect – technology deployments, government investment, cap and trade policies, energy efficiency strategies, etc.

None of these support the full range of experiments I’d like to explore in a kids’ workshop, but I think EdGCM is an excellent start, and access to the IPCC datasets via the CCCSN site might be handy. But I don’t like the models that focus just on how different emissions pathways affect global temperature change, because I don’t think these offer any useful learning opportunities about the science and about how scientists work.

I guess headlines like “An error found in one paragraph of a 3000 page IPCC report; climate science unaffected” wouldn’t sell many newspapers. And so instead, the papers spin out the story that a few mistakes undermine the whole IPCC process. As if newspapers never ever make mistakes. Well, of course, scientists are supposed to be much more careful than sloppy journalists, so “shock horror, those clever scientists made a mistake. Now we can’t trust them” plays well to certain audiences.

And yet there are bound to be errors; the key question is whether any of them impact any important results in the field. The error with the Himalayan glaciers in the Working Group II report is interesting because Working Group I got it right. And the erroneous paragraph in WGII quite clearly contradicts itself. Stupid mistake, that should be pretty obvious to anyone reading that paragraph carefully. There’s obviously room for improvement in the editing and review process. But does this tell us anything useful about the overall quality of the review process?

There are errors in just about every book, newspaper, and blog post I’ve ever read. People make mistakes. Editorial processes catch many of them. Some get through. But few of these things have the kind of systematic review that the IPCC reports went through. Indeed, as large, detailed, technical artifacts, with extensive expert review, the IPCC reports are much less like normal books, and much more like large software systems. So, how many errors get through a typical review process for software? Is the IPCC doing better than this?

Even the best software testing and review practices in the world let errors through. Some examples (expressed in number of faults experienced in operation, per thousand lines of code):

  • Worst military systems: 55 faults/KLoC
  • Best military systems: 5 faults/KLoC
  • Agile software development (XP): 1.4 faults/KLoC
  • The Apache web server (open source): 0.5 faults/KLoC
  • NASA Space shuttle:  0.1 faults/KLoC

Because of the extensive review processes, the shuttle flight software is purported to be the most expensive in the world, in terms of dollars per line of code. Yet still about 1 error every ten thousand lines of code gets through the review and testing process. Thankfully none of those errors have ever caused a serious accident. When I worked for NASA on the Shuttle software verification in the 1990s, they were still getting reports of software anomalies with every shuttle flight, and releasing a software update every 18 months (this, for an operational vehicle that had been flying for two decades, with only 500,000 lines of flight code!).

The IPCC reports consist of around 3000 pages, and approaching 100 lines of text per page. Let’s assume I can equate a line of text with a line of code (which seems reasonable, when you look at the information density of each line in the IPCC reports) – that would make them as complex as a 300,000 line software system. If the IPCC review process is as thorough as NASA’s, then we should still expect that around 30 significant errors made it through the review process. We’ve heard of two recently – does this mean we have to endure another 28 stories, spread out over the next few months, as the drone army of denialists toils through trying to find more mistakes? Actually, it’s probably worse than that…
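
Here’s that estimate spelled out, using the same rough assumptions (3000 pages, about 100 lines per page) and the defect densities listed above:

```python
# A rough estimate of how many errors should slip through the IPCC review
# process, if its review quality matched various levels of software review
# quality. Uses the rough figures from the text above.

pages = 3000
lines_per_page = 100
total_lines = pages * lines_per_page          # roughly 300,000 "lines"

defect_densities = {                          # faults per thousand lines
    "NASA Space Shuttle flight software": 0.1,
    "Apache web server": 0.5,
    "Agile (XP) projects": 1.4,
    "Best military systems": 5.0,
}

for name, faults_per_kloc in defect_densities.items():
    expected = total_lines / 1000 * faults_per_kloc
    print(f"{name}: expect roughly {expected:.0f} errors to get through")
```

Even at Space Shuttle levels of review quality, a document that size would let a few dozen errors through; at more typical levels, it would be hundreds.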

The IPCC writing, editing and review processes are carried out entirely by unpaid volunteers. They don’t have automated testing and static analysis tools to help – human reviewers are the only kind of review available. So they’re bound to do much worse than NASA’s flight software. I would expect there to be hundreds of errors in the reports, even with the best possible review processes in the world. Somebody point me to a technical review process anywhere that can do better than this, and I’ll eat my hat. Now, what was the point of all those newspaper stories again? Oh, yes, sensationalism sells.
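
For anyone who wants to check the arithmetic, here it is as a tiny script (the page and line counts are rough estimates, and equating a line of text with a line of code is only a loose analogy, so treat the output as order-of-magnitude at best):

```python
# Back-of-envelope: how many errors should we expect in the IPCC reports,
# if we treat a line of text like a line of code?

PAGES = 3000              # approximate length of the full reports
LINES_PER_PAGE = 100      # rough estimate
kloc = PAGES * LINES_PER_PAGE / 1000.0   # "size" in thousands of lines

# Published defect densities (faults per KLoC) for comparison
defect_density = {
    "Worst military systems": 55,
    "Best military systems": 5,
    "Agile (XP) projects": 1.4,
    "Apache web server": 0.5,
    "NASA Shuttle flight software": 0.1,
}

for process, faults_per_kloc in defect_density.items():
    print(f"{process:30s} -> {faults_per_kloc * kloc:7.0f} expected errors")

# Even at Shuttle-level quality (0.1 faults/KLoC), 300 KLoC gives ~30 errors;
# with human-only review the real number is bound to be much higher.
```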

When I was visiting MPI-M earlier this month, I blogged about the difficulty of documenting climate models. The problem is particularly pertinent to questions of model validity and reproducibility, because the code itself is the result of a series of methodological choices by the climate scientists, choices that become entrenched in the design and, eventually, inscrutable. And when the code gets old, we lose access to these decisions. I suggested we need a kind of literate programming, which sprinkles the code among the relevant human representations (typically bits of physics, formulas, numerical algorithms, published papers), so that the emphasis is on explaining what the code does, rather than preparing it for a compiler to digest.
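
To make that concrete, here’s a toy illustration of the style I have in mind – the formula, the explanation, and a pointer to the relevant literature living right next to the executable line. It’s a made-up example in Python (real models are Fortran, and vastly more complicated), not code from any actual climate model:

```python
# A toy, made-up example of "literate" scientific code: the physics and its
# provenance are the primary content, and the executable part is subordinate.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def outgoing_longwave(surface_temp_K, emissivity=0.61):
    """Outgoing longwave radiation at the top of the atmosphere.

    Physics: a grey-body version of the Stefan-Boltzmann law,
        OLR = epsilon * sigma * T^4
    where epsilon is an *effective* emissivity standing in for the greenhouse
    effect (the 0.61 default here is illustrative, not tuned to anything).

    In a literate-programming setting, this prose, the formula, and a link to
    the paper the parameterization came from would sit right here, next to the
    one line of code that implements it.
    """
    return emissivity * SIGMA * surface_temp_K ** 4
```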

The problem with literate programming (at least in the way it was conceived) is that it requires programmers to give up using the program code as their organising principle, and maybe to give up traditional programming languages altogether. But there’s a much simpler way to achieve the same effect: provide an organising structure for existing programming languages and tools, but mix in non-code objects in an intuitive way. Imagine you had an infinitely large sheet of paper, and could zoom in and out, and scroll in any direction. Your chunks of code are laid out on the paper, in a spatial arrangement that means something to you, such that the layout helps you navigate. Bits of documentation, published papers, design notes, data files, parameterization schemes, etc. can be placed on the sheet, near the code they are relevant to. When you zoom in on a chunk of code, the sheet becomes a code editor; when you zoom in on a set of math formulae, it becomes a LaTeX editor; and when you zoom in on a document it becomes a word processor.
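
Nothing like this exists yet as a tool I can point to for scientific code, but the underlying data model is simple enough to sketch. The names and types below are mine, purely illustrative:

```python
# Illustrative data model for the "infinite sheet" idea; names are made up.
from dataclasses import dataclass

@dataclass
class CanvasItem:
    x: float     # position on the sheet, so spatial memory can do its job
    y: float
    kind: str    # "code" | "latex" | "document" | "dataset" | ...
    path: str    # where the underlying artifact lives

# Zooming in on an item turns the sheet into the appropriate editor
EDITORS = {
    "code": "code editor",
    "latex": "LaTeX editor",
    "document": "word processor",
    "dataset": "data browser",
}

def zoom_in(item: CanvasItem) -> str:
    return EDITORS.get(item.kind, "plain text viewer")

# A parameterization scheme laid out next to the paper it came from
scheme = CanvasItem(10.0, 42.0, "code", "radiation/sw_scheme.f90")
paper = CanvasItem(12.5, 42.0, "document", "docs/radiation_paper.pdf")
print(zoom_in(scheme), "|", zoom_in(paper))
```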

Well, Code Canvas, a tool under development in Rob Deline’s group at Microsoft Research, does most of this already. The code is laid out as though it were one big UML diagram, but as you zoom in you move fluidly into a code editor. The whole thing appeals to me because I’m a spatial thinker. Traditional IDEs drive me crazy, because they separate the navigation views from the code, and force me to jump from one pane to another to navigate. In the process, they hide the inherent structure of a large code base, and constrain me to see only a small chunk at a time. Which means these tools create an artificial separation between higher-level views (e.g. UML diagrams) and the code itself, sidelining the diagrammatic representations. I really like the idea of moving seamlessly back and forth between the big picture views and actual chunks of code.

Code Canvas is still an early prototype, and doesn’t yet have the ability to mix in other forms of documentation (e.g. LaTeX) on the sheet (or at least not in any demo Microsoft are willing to show off), but the potential is there. I’d like to explore how we take an idea like this and customize it for scientific code development, where there is less of a strict separation of code and data than in other forms of programming, and where the link to published papers and draft reports is important. The infinitely zoomable paper could provide an intuitive unifying tool to bring all these different types of object together in one place, to be managed as a set. And the use of spatial memory will help with navigation when the set of things gets big.

I’m also interested in exploring the idea of using this metaphor for activities that don’t involve coding – for example complex decision-support for sustainability, where you need to move between spreadsheets, graphs & charts, models runs, and so on. I would lay out the basic decision task as a graph on the sheet, with sources of evidence connecting into the decision steps where they are needed. The sources of evidence could be text, graphs, spreadsheet models, live datafeeds, etc. And as you zoom in over each type of object, the sheet turns into the appropriate editor. As you zoom out, you get to see how the sources of evidence contribute to the decision-making task. Hmmm. Need a name for this idea. How about DecisionCanvas?

Update: Greg also pointed me to CodeBubbles and Intentional Software

Well, my intention to liveblog from interesting sessions is blown – the network connection in the meeting rooms is hopeless. One day, some conference will figure out how to provide reliable internet…

Yesterday I attended an interesting session in the afternoon on climate services. Much of the discussion was building on work done at the third World Climate Conference (WCC-3) in August, which set out to develop a framework for the provision of climate services. These would play a role akin to local, regional and global weather forecasting services, but focussing on risk management and adaptation planning for the impacts of climate change. Most important is the emphasis on combining observation and monitoring services and research and modeling services (both of which already exist) with a new climate services information system (I assume this would be distributed across multiple agencies around the world) and a system of user interfaces to deliver the information in the forms needed for different audiences. Rasmus at RealClimate discusses some of the scientific challenges.

My concern on reading the outcomes of the WCC was that it’s all focussed on a one-way flow of information, with insufficient attention to understanding who the different users would be, and what they really need. I needn’t have worried – the AGU session demonstrated that there are plenty of people focussing on exactly this issue. I got the impression that there’s a massive international effort quietly putting in place the risk management and planning tools needed for us to deal with the impacts of a rapidly changing climate, but one that is completely ignored by a media still obsessed with the “is it happening?” pseudo-debate. The extent of this planning for expected impacts would make a much more compelling media story, and one that matters, on a local scale, to everyone.

Some highlights from the session:

Mark Svboda from the National Drought Mitigation Centre at the University of Nebraska, talking about drought planning in the US. He pointed out that drought tends to get ignored compared to other kinds of natural disasters (tornados, floods, hurricanes), presumably because it doesn’t happen within a daily news cycle. However, drought dwarfs the damage costs in the US from all other kinds of natural disasters except hurricanes. One problem is that population growth has been highest in the regions most subject to drought, especially the southwest US. The NDMC monitoring program includes the only repository of drought impacts. Their US drought monitor has been very successful, but the next generation of tools needs better sources of data on droughts, so they are working on adding a drought reporter, doing science outreach, working with kids, etc. Even more important is improving the drought planning process, hence a series of workshops on drought management tools.

Tony Busalacchi from the Earth System Science Interdisciplinary Centre at the University of Maryland. Through a series of workshops in the CIRUN project, they’ve identified the need for forecasting tools, especially around risks such as sea level rise. In particular, there is a need for actionable information, which no service currently provides. A climate information system is needed for policymakers, on scales of seasons to decades, tailorable to particular regions, and with the ability to explore “what-if” questions. Building this needs the coupling of models not used together before, and the synthesis of new datasets.

Robert Webb from NOAA, in Boulder, on experimental climate information services to support risk management. The key to risk assessment is understanding that it cuts across multiple timescales. Users of such services do not distinguish between weather and climate – they need to know about extreme weather events, and they need to know how such risks change over time. Climate change matters because of the impacts. Presenting the basic science and predictions of temperature change is irrelevant to most people – it’s the impacts that matter (his key quote: “It’s the impacts, stupid!”). Examples: water – droughts and floods, changes in snowpack, river stream flow, fire outlooks, and planning issues (urban, agriculture, health). He’s been working with the Climate Change and Western Water Group (CCAWWG) to develop a strategy on water management. How to get people to plan and adapt? The key is to get people to think in terms of scenarios rather than deterministic forecasts.

Guy Brasseur from the German Climate Services Center, in Hamburg. Germany’s adaptation strategy was developed by the German federal government, which appears to be way ahead of the US agencies in developing climate services. Guy emphasized the need for seamless prediction – a uniform ensemble system that builds from climate monitoring of the recent past and present, and forward into the future, at different regional scales and timescales. Guy called for an Apollo-sized program to develop the infrastructure for this.

Kristen Averyt from the University of Colorado, talking about her “Climate services machine” (I need to get hold of the image for this – it was very nice). She’s been running workshops for Colorado-specific services, with breakout sessions focussed on the impacts and utility of climate information. She presented some evaluations of the success of these workshops, including a climate literacy test they have developed. For example, at one workshop the attendees scored 63% correct answers at the beginning (and the wrong answers tended to cluster, indicating some important misperceptions). I need to get hold of this – it sounds like an interesting test. Kristen’s main point was that these workshops play an important role in reaching out to people of all ages, including kids, and getting them to understand how climate change will affect them.

Overall, the main message of this session was that while there have been lots of advances in our understanding of climate, these are still not being used for planning and decision-making.

First proper day of the AGU conference, and I managed to get to the (free!) breakfast for Canadian members, which was so well attended that the food ran out early. Do I read this as a great showing for Canadians at AGU, or just that we’re easily tempted with free food?

Anyway, on to the first of three poster sessions we’re involved in this week. This first poster was on TracSnap, the tool that Ainsley and Sarah worked on over the summer:

Our tracSNAP poster for the AGU meeting.

The key idea in this project is that large teams of software developers find it hard to maintain an awareness of one another’s work, and cannot easily identify the appropriate experts for different sections of the software they are building. In our observations of how large climate models are built, we noticed it’s often hard to keep up to date with what changes other people are working on, and how those changes will affect things. TracSNAP builds on previous research that attempts to visualize the social network of a large software team (e.g. who talks to whom), and relate that to couplings between code modules that team members are working on. Information about the intra-team communication patterns (e.g. emails, chat sessions, bug reports, etc) can be extracted automatically from project repositories, as can information about dependencies in the code. TracSNAP extracts data automatically from the project repository to provide answers to questions such as “Who else recently worked on the module I am about to start editing?”, and “Who else should I talk to before starting a task?”. The tool extracts hidden connections in the software by examining modules that were checked into the repository together (even though they don’t necessarily refer to each other), and offers advice on how to approach key experts by identifying intermediaries in the social network. It’s still a very early prototype, but I think it has huge potential. Ainsley is continuing to work on evaluating it on some existing climate models, to check that we can pull out of the repositories the data we think we can.
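
The mining itself isn’t complicated. Here’s a stripped-down sketch of the core idea – inferring hidden coupling from files that get checked in together, and candidate experts from who made those commits. The commit data below is made up; TracSNAP extracts the real thing from the project repository, and does considerably more with it:

```python
# Stripped-down sketch of TracSNAP's core idea, with made-up commit data.
from collections import defaultdict
from itertools import combinations

# Each commit: (author, files changed together in that check-in)
commits = [
    ("ainsley", {"ocean/mixing.f90", "ocean/tracers.f90"}),
    ("sarah",   {"ocean/mixing.f90", "ice/thermo.f90"}),
    ("ainsley", {"ocean/mixing.f90", "ocean/tracers.f90"}),
    ("dave",    {"ice/thermo.f90", "ice/dynamics.f90"}),
]

co_change = defaultdict(int)   # hidden coupling: files checked in together
experts = defaultdict(set)     # who has recently touched each file

for author, files in commits:
    for f in files:
        experts[f].add(author)
    for a, b in combinations(sorted(files), 2):
        co_change[(a, b)] += 1

def who_should_i_talk_to(filename):
    """People who worked on this file, or on files repeatedly co-committed with it."""
    people = set(experts[filename])
    for (a, b), count in co_change.items():
        if filename in (a, b) and count > 1:
            other = b if a == filename else a
            people |= experts[other]
    return people

print(who_should_i_talk_to("ocean/tracers.f90"))   # {'ainsley', 'sarah'}
```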

The poster session we were in, “IN11D. Management and Dissemination of Earth and Space Science Models” seemed a little disappointing as there were only three posters (a fourth poster presenter hadn’t made it to the meeting). But what we lacked in quantity, we made up in quality. Next to my poster was David Bailey’s: “The CCSM4 Sea Ice Component and the Challenges and Rewards of Community Modeling”. I was intrigued by the second part of his title, so we got chatting about this. Supporting a broader community in climate modeling has a cost, and we talked about how university labs just cannot afford this overhead. However, it also comes with a number of benefits, particularly the existence of a group of people from different backgrounds who all take on some ownership of model development, and can come together to develop a consensus on how the model should evolve. With the CCSM, most of this happens in face-to-face meetings, particularly the twice-yearly user meetings. We also talked a little about the challenges of integrating the CICE sea ice model from Los Alamos with CCSM, especially given that CICE is also used in the Hadley model. Making it work in both models required some careful thinking about the interface, and hence more focus on modularity. David also mentioned people are starting to use the term kernelization as a label for the process of taking physics routines and packaging them so that they can be interchanged more easily.
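
I don’t know the details of the actual CICE interface, but the flavour of kernelization is roughly this: hide the physics behind a small, model-agnostic interface, so that each host model only needs a thin adapter to plug it in. A toy sketch (illustrative only, in Python rather than Fortran, and nothing like the real sea ice physics):

```python
# Toy sketch of "kernelization": the physics knows nothing about the host model.
from abc import ABC, abstractmethod

class SeaIceKernel(ABC):
    """Fixed inputs and outputs; no host-model grids, no host-model variable names."""

    @abstractmethod
    def step(self, ice_thickness, air_temp, ocean_temp, dt):
        """Advance the ice state by one timestep; return the new thickness."""

class ToySeaIce(SeaIceKernel):
    def step(self, ice_thickness, air_temp, ocean_temp, dt):
        # Crude freeze/melt rate, just to make the interface concrete
        growth = 0.01 * (-air_temp) - 0.005 * ocean_temp
        return max(0.0, ice_thickness + growth * dt)

# Each host model (CCSM, the Hadley model, ...) supplies only a thin adapter
# that maps its own fields onto the kernel's arguments.
kernel = ToySeaIce()
print(kernel.step(ice_thickness=1.2, air_temp=-10.0, ocean_temp=0.5, dt=1.0))
```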

Dennis Shea’s poster, “Processing Community Model Output: An Approach to Community Accessibility” was also interesting. To tackle the problem of making output from the CCSM more accessible to the broader CCSM community, the decision was taken to standardize on netCDF for the data, and to develop and support a standard data analysis toolset, based on the NCAR Command Language. NCAR runs regular workshops on the use of these data formats and tools, as part of its broader community support efforts (and of course, this illustrates David’s point about universities not being able to afford to provide such support).
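
NCL is the supported toolset, but the payoff of standardizing on a self-describing format like netCDF shows up in any language. Here’s the equivalent idea in Python using the netCDF4 library (the filename is hypothetical):

```python
from netCDF4 import Dataset

# Hypothetical CCSM output file; any CF-compliant netCDF file looks much the same
with Dataset("ccsm_output.nc") as nc:
    for name, var in nc.variables.items():
        # Every variable carries its own dimensions and units metadata;
        # that self-description is what makes a shared analysis toolset possible.
        print(name, var.dimensions, getattr(var, "units", "(no units)"))
```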

The missing poster also looked interesting: Charles Zender from UC Irvine, comparing climate modeling practices with open source software practices. Judging from his abstract, Charles makes many of the same observations we made in our CiSE paper, so I was looking forward to comparing notes with him. Next time, I guess.

Poster sessions at these meetings are both wonderful and frustrating. Wonderful because you can wander down aisles of posters and very quickly sample a large slice of research, and chat to the poster owners in a freeform format (which is usually much better than sitting through a talk). Frustrating because poster owners don’t stay near their posters very long (I certainly didn’t – too much to see and do!), which means you get to note an interesting piece of work, and then never manage to track down the author to chat (and if you’re like me, you also forget to write down contact details for posters you noticed). However, I did manage to make notes on two to follow up on:

  • Joe Galewsky caught my attention with a provocative title: “Integrating atmospheric and surface process models: Why software engineering is like having weasels rip your flesh”
  • I briefly caught Brendan Billingsley of NSIDC as he was taking his poster down. It caught my eye because it was a reflection on software reuse in the Searchlight tool.

I wasn’t going to post anything about the CRU emails story (apart from my attempt at humour), because I think it’s a non-story. I’ve read a few of the emails, and it looks no different to the studies I’ve done of how climate science works. It’s messy. It involves lots of observational data sets, many of which contain errors that have to be corrected. Luckily, we have a process for that – it’s called science. It’s carried out by a very large number of people, any of whom might make mistakes. But it’s self-correcting, because they all review each other’s work, and one of the best ways to get noticed as a scientist is to identify and correct a problem in someone else’s work. There is, of course, a weakness in the way such corrections are usually done to a previously published paper. The corrections appear as letters and other papers in various journals, and don’t connect up well to the original paper. Which means you have to know an area well to understand which papers have stood the test of time and which have been discredited. Outsiders to a particular subfield won’t be able to tell the difference. They’ll also have a tendency to seize on what they think are errors, but actually have already been addressed in the literature.

If you want all the gory details about the CRU emails, read the RealClimate posts (here and here) and the lengthy discussion threads that follow from them. Many of the scientists involved in the CRU emails comment in these threads, and the resulting picture of the complexity of the data and analysis that they have to deal with is very interesting. But of course, none of this will change anyone’s minds. If you’re convinced global warming is a huge conspiracy, you’ll just see scientists trying to circle the wagons and spin the story their way. If you’re convinced that AGW is now accepted as scientific fact, then you’ll see scientists tearing their hair out at the ignorance of their detractors. If you’re not sure about this global warming stuff, you’ll see a lively debate about the details of the science, none of which makes much sense on its own. Don’t venture in without a good map and compass. (Update: I’ve no idea who’s running this site, but it’s a brilliant deconstruction of the allegations about the CRU emails).

But one issue has arisen in many discussions about the CRU emails that touches strongly on the research we’re doing in our group. Many people have looked at the emails and concluded that if the CRU had been fully open with its data and code right from the start, there would be no issue now. This is, of course, a central question in Alicia’s research on open science. While open science is, in principle, a great idea, in practice there are many hurdles, including the fear of being “scooped”, the need to give appropriate credit, the problems of metadata definition and of data provenance, the cost of curation, the fact that software has a very short shelf-life, and so on. For the CRU dataset at the centre of the current kerfuffle, someone would have to go back to all the data sources, and re-negotiate agreements about how the data can be used. Of course, the anti-science crowd just think that’s an excuse.

However, for climate scientists there is another problem, which is the workload involved in being open. Gavin, at RealClimate, raises this issue in response to a comment that just putting the model code online is not sufficient. For example, if someone wanted to reproduce a graph that appears in a published paper, they’d need much more: the script files, the data sets, the parameter settings, the spinup files, and so on. Michael Tobis argues that much of this can be solved with good tools and lots of automated scripts. Which would be fine if we were talking about how to help other scientists replicate the results.
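
I think Michael is right in principle: most of that extra material could be captured automatically at the time a run is made. Here’s a minimal sketch of what such a provenance record might look like – the filenames and parameters are hypothetical, and a real system would capture much more (the model configuration, compiler flags, library versions, and so on):

```python
# Minimal sketch of automated provenance capture for a published figure.
# Filenames and parameters are hypothetical; assumes the run lives in a git repo.
import hashlib
import json
import subprocess
import time

def sha1(path):
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def provenance_record(script, data_files, params):
    """Everything needed to redo the figure: code version, inputs, settings."""
    return {
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "code_revision": subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip(),
        "script": script,
        "parameters": params,
        "input_data": {path: sha1(path) for path in data_files},
    }

record = provenance_record(
    script="plot_figure3.py",
    data_files=["obs/station_temps.nc", "runs/spinup_state.nc"],
    params={"baseline": "1961-1990", "smoothing_years": 5},
)
print(json.dumps(record, indent=2))
```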

Unfortunately, in the special case of climate science, that’s not what we’re talking about. A significant factor in the reluctance of climate scientists to release code and data is to protect themselves from denial-of-service attacks. There is a very well-funded and PR-savvy campaign to discredit climate science. Most scientists just don’t understand how to respond to this. Firing off hundreds of requests to CRU to release data under the freedom of information act, despite each such request being denied for good legal reasons, is the equivalent of frivolous lawsuits. But even worse, once datasets and codes are released, it is very easy for an anti-science campaign to tie the scientists up in knots trying to respond to their attempts to poke holes in the data. If the denialists were engaged in an honest attempt to push the science ahead, this would be fine (although many scientists would still get frustrated – they are human too).

But in reality, the denialists don’t care about the science at all; their aim is a PR campaign to sow doubt in the minds of the general public. In the process, they effect a denial-of-service attack on the scientists – the scientists can’t get on with doing their science because their time is taken up responding to frivolous queries (and criticisms) about specific features of the data. And their failure to respond to each and every such query will be trumpeted as an admission that an alleged error is indeed an error. In such an environment, it is perfectly rational not to release data and code – it’s better to pull up the drawbridge and get on with the drudgery of real science in private. That way the only attacks are complaints about lack of openness. Such complaints are bothersome, but much better than the alternative.

In this case, because the science is vitally important for all of us, it’s actually in the public interest that climate scientists be allowed to withhold their data. Which is really a tragic state of affairs. The forces of anti-science have a lot to answer for.

Update: Robert Grumbine has a superb post this morning on why openness and reproducibility are intrinsically hard in this field.