Excellent news: Jon Pipitone has finished his MSc project on the software quality of climate models, and it makes fascinating reading. I quote his abstract here:

A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated. Thus, in order to trust a climate model one must trust that the software it is built from is built correctly. Our study explores the nature of software quality in the context of climate modelling. We performed an analysis of the reported and statically discoverable defects in several versions of leading global climate models by collecting defect data from bug tracking systems, version control repository comments, and from static analysis of the source code. We found that the climate models all have very low defect densities compared to well-known, similarly sized open-source projects. As well, we present a classification of static code faults and find that many of them appear to be a result of design decisions to allow for flexible configurations of the model. We discuss the implications of our findings for the assessment of climate model software trustworthiness.

The idea for the project came from an initial back-of-the-envelope calculation we did of the Met Office Hadley Centre’s Unified Model, in which we estimated the number of defects per thousand lines of code (a common measure of defect density in software engineering) to be extremely low – of the order of 0.03 defects/KLoC. By comparison, the shuttle flight software, reputedly the most expensive software per line of code ever built, clocked in at 0.1 defects/KLoC; most of the software industry does worse than this.
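
The arithmetic behind a defect-density figure is simple; here’s a minimal sketch in Python (the defect and line counts are hypothetical, chosen only to reproduce the ~0.03 defects/KLoC order of magnitude, and are not the actual Unified Model figures):

```python
def defect_density(defects, lines_of_code):
    """Defects per thousand lines of code (defects/KLoC)."""
    return defects / (lines_of_code / 1000.0)

# Hypothetical illustration: 25 recorded defects in an 830,000-line
# codebase gives roughly 0.03 defects/KLoC, the order of magnitude
# estimated for the Unified Model.
print(round(defect_density(25, 830_000), 3))  # 0.03
```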

This initial result was startling, because the climate scientists who build this software don’t follow any of the software processes commonly prescribed in the software literature. Indeed, when you talk to them, many climate modelers are a little apologetic about this, and have a strong sense they ought to be doing a more rigorous job with their software engineering. However, as we documented in our paper, climate modeling centres such as the UK Met Office do have excellent software processes, which they have developed over many years to suit their needs. I’ve come to the conclusion that their approach has to be very different from mainstream software engineering processes because the context is so very different.

Well, obviously we were skeptical (scientists are always skeptical, especially when results seem to contradict established theory). So Jon set about investigating this more thoroughly for his MSc project. He tackled the question in three ways: (1) measuring defect density by using bug repositories, version history and change logs to quantify bug fixes; (2) assessing the software directly using static analysis tools; and (3) interviewing climate modelers to understand how they approach software development, and bug fixing in particular.

I think there are two key results of Jon’s work:

  1. The initial results on defect density hold up. Although not quite as startlingly low as my back-of-the-envelope calculation, Jon’s assessment of three major GCMs indicates they all fall in the range commonly regarded as good quality software by industry standards.
  2. There are a whole bunch of reasons why result #1 may well be meaningless, because the metrics for measuring software quality don’t really apply well to large scale scientific simulation models.

You’ll have to read Jon’s thesis to get all the details, but it will be well worth it. The conclusion? More research needed. It opens up plenty of questions for a PhD project….

27. April 2010 · 2 comments · Categories: ICSE 2009

It’s nearly a year late, but I finally managed to put together the soundtrack and slides from the short version of the talk on Software Engineering for the Planet that I gave at the International Conference on Software Engineering last year. The full version has been around for a while, but I’m not happy with it because it’s slow and ponderous. To kick off a lunchtime brainstorming session we had at ICSE last year, I did a Pecha Kucha version of it in 6 minutes and 40 seconds (if you listen carefully, you can hear the lunch plates rattling). For anyone who has never done a Pecha Kucha talk, I highly recommend it – putting the slides on an automated timer really keeps you on your toes.

PS If you look carefully, you’ll notice I cheated slightly, rather than 20 slides with 20 seconds each, I packed in more by cutting them down to 10 seconds each for the last half of the talk. It surprises me that this actually seems to work!

After catching the start of yesterday’s Centre for Environment Research Day, I headed around the corner to catch the talk by Ray Pierrehumbert on “Climate Ethics, Climate Justice”. Ray is here all week giving the 2010 Noble lectures, “New Worlds, New Climates”. His theme for the series is the new perspectives we get about Earth’s climate from the discovery of hundreds of new planets orbiting nearby stars, advances in knowledge about solar system planets, and advances in our knowledge of the early evolution of Earth, especially new insights into the snowball earth. I missed the rest of the series, but made it today, and I’m glad I did, because the talk was phenomenal.

Ray began by pointing out that climate ethics might not seem to fit with the theme of the rest of the series, but it does, because future climate change will, in effect, make the earth into a different planet. And the scary thing is we don’t know too much about what that planet will be like. Which then brings us to questions of responsibility, particularly the question of how much we should be spending to avoid this.

Figure 1 from Rockstrom et al, Nature 461, 472-475 (24 Sept 2009). Original caption: The inner green shading represents the proposed safe operating space for nine planetary systems. The red wedges represent an estimate of the current position for each variable. The boundaries in three systems (rate of biodiversity loss, climate change and human interference with the nitrogen cycle), have already been exceeded.

Humans are a form of life, and are altering the climate in a major way. Some people talk about humans now having an impact of “geological proportions” on the planet. But in fact, we’re a force of far greater than geological proportions: we’re releasing around 20 times as much carbon per year as nature does (for example via volcanoes). We may cause a major catastrophe. And we need to consider not just CO2, but many other planetary boundaries – all the biogeochemical boundaries.

But this is nothing new – this is what life does: it alters the planet. The mother of all planet-altering lifeforms is blue-green algae, which radically changed atmospheric chemistry, even affecting the composition of rocks. If the IPCC had been around at the end of the Archean Eon (2500 million years ago) to consider how much photosynthesis should be allowed, it would have faced a much bigger question than we face today. The biosphere (eventually!) recovers from such catastrophes. There are plenty of examples: oxygenation by cyanobacteria, snowball earth, the Permo-Triassic mass extinction (90% of species died out) and the KT dinosaur-killer asteroid (although the latter wasn’t biogeochemically driven). So the earth does just fine in the long run, and such catastrophes often cause interesting things to happen, e.g. opening up new niches for new species to evolve (such as humans!).

But normally these changes take tens of millions of years, and whichever species were at the top of the heap before usually lose out: the new kind of planet favours new kinds of species.

So what is new with the current situation? Most importantly, we have foresight, and we know what we’re doing to the planet. This means we have to decide what kind of climate the planet will have, and we can’t avoid that decision, because even deciding to do nothing about it is a decision. We cannot escape the responsibility. For example, we currently have a climate that humans evolved to exist in. The conservative thing is to decide not to rock the boat – to keep the climate we evolved in. On the other hand, we could decide a different climate would be preferable, and work towards it – e.g. would things be better (on balance) if the world were a little warmer or a little cooler? So we have to decide how much warming is tolerable. And we must consider irreversible decisions – e.g. preserving irreplaceable treasures (such as species that will go extinct). Or we could put the human species at the centre of the issue, and observe that (as far as we know) the human species is unique as the only intelligent life in the universe; the welfare of the human species might be paramount. So the question then becomes: how should we preserve a world worth living in for humanity?

So far, we’re not doing any better than cyanobacteria. We consume resources and reproduce until everything is filled up and used up. Okay, we have a few successes, for example in controlling acid rain and CFCs. But on balance, we don’t do much better than the bacteria.

Consider carbon accounting. You can buy carbon credits, sometimes expressed in terms of tonnes of CO2, sometimes in terms of tonnes of carbon. From a physics point of view, it’s much easier to think in terms of the carbon itself, because it’s the carbon in its various forms that matters – e.g. dissolved in the oceans, making them more acidic, or as CO2 in the atmosphere. We’re digging up this carbon in various forms (coal, oil, gas) and releasing it into the atmosphere. Most of it came from biological sources in the first place, but has been buried over very long (geological) timescales. So, we can do the accounting in terms of billions of tonnes (Gt) of carbon. The pre-industrial atmosphere contained 600Gt of carbon. Burning another 600Gt would be enough to double atmospheric concentrations (except that we have to figure out how much stays in the atmosphere, how much is absorbed by the oceans, and so on). World cumulative emissions show exponential growth over the last century. We are currently at 300Gt of cumulative emissions from fossil fuels. 1000Gt of cumulative emissions is an interesting threshold, because that’s about enough to warm the planet by 2°C (the EU’s stated upper limit). A straight projection of the current exponential trend takes us to 5000GtC by 2100. It’s not clear there is enough coal to get us there, but it is dangerous to assume we’ll run out of resources first. The worst scenario: we get to 5000GtC and wreck the climate just as we run out of fossil fuels, so civilization collapses at a time when we no longer have a tolerable climate to live in.
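
A sketch of that straight projection, assuming the cumulative total keeps growing exponentially (the ~3.1%/yr growth rate is my assumption, chosen so that 300 GtC in 2010 reaches roughly 5000 GtC by 2100; this is curve-fitting for illustration, not a model):

```python
import math

# Cumulative fossil-fuel emissions (GtC), assumed to grow exponentially
# from ~300 GtC in 2010 at roughly 3.1% per year.
def cumulative_gtc(year, base=300.0, base_year=2010, growth=0.0313):
    return base * math.exp(growth * (year - base_year))

print(round(cumulative_gtc(2050)))  # ~1050 GtC: past the 1000 GtC / 2°C threshold
print(round(cumulative_gtc(2100)))  # ~5000 GtC
```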

Of course, such exponential growth can never continue indefinitely. To demonstrate the point, Ray showed the video of The Impossible Hamster Club. The key question is whether we will voluntarily stop this growth in carbon emissions, and if we don’t, at what point will natural limits kick in and stop the growth for us?

There are four timescales for CO2 drawdown:

  • Uptake by the ocean mixed layer – a few decades
  • Uptake by the deep ocean – a few centuries
  • Carbonate dissolution (laying down new sediments on the ocean bed) – a few millennia
  • Silicate weathering (reaction between rocks and CO2 in the atmosphere that creates limestone) – a few hundred millennia.
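
One common way to caricature these overlapping timescales is as a sum of decaying exponentials acting on a pulse of emitted CO2; the weights and timescales below are purely illustrative, not fitted to any carbon cycle model:

```python
import math

# Fraction of a CO2 pulse still airborne after a given number of years,
# caricatured as a sum of decaying exponentials. The (weight, timescale)
# pairs are illustrative stand-ins for the drawdown processes above.
TIMESCALES = [(0.25, 30.0), (0.30, 300.0), (0.25, 3000.0)]
PERMANENT = 0.20  # removed only by silicate weathering (~100s of millennia)

def airborne_fraction(years):
    return PERMANENT + sum(w * math.exp(-years / tau) for w, tau in TIMESCALES)

for t in (0, 100, 1000, 10_000):
    print(t, round(airborne_fraction(t), 2))
```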

Ray then showed the results of some simulations using the Hamburg carbon cycle model. The scenario they used is a ramp up to peak emissions in 2010, followed by a drop to either 4, 2, or 1Gt per year from then on. The graph of atmospheric concentrations out to the year 4000 shows that holding emissions stable at 2Gt/yr still causes concentrations to ramp up to 1000ppm. Even reducing to 1Gt/yr leads to an increase to around 600ppm by the year 4000. The obvious conclusion is that we have to reduce net emissions to approximately zero in order to keep the climate stable over the next few thousand years.

What does a cumulative emissions total of 5000GtC mean for our future? Peak atmospheric concentrations will reach over 2000ppm, and stay there for around 10,000 years, before slowly declining on a longer timescale because of silicate weathering. Global mean temperature rises by around 10°C. Most likely, the Greenland and West Antarctic ice sheets will melt completely (it’s not clear what it would take to melt the East Antarctic). So what we do this century will affect us for tens of thousands of years. Paul Crutzen coined the term Anthropocene to label this new era in which humans started altering the climate. In the distant future, the start of the Anthropocene will look as dramatic as other geological shifts – certainly bigger than the change at the end of the KT extinction.

This makes geoengineering by changing the earth’s albedo an abomination (Ray mentioned as an example the view put forward in that awful book Superfreakonomics). It’s morally reprehensible, because it leads to the Damocles world: the sword hanging over us is that for the next 10,000 years we’re committed to repeating the sulphur seeding every two years, and to continuing no matter what unfortunate consequences, such as drought, happen as side effects.

But we will need longwave geoengineering – some way of removing CO2 from the atmosphere to deal with the last gigatonne or so of emissions, because these will be hard to get rid of no matter how big the push to renewable energy sources. That suggests we do need a big research program on air capture techniques.

So, the core questions for climate ethics are:

  • What is the right amount to spend to reduce emissions?
  • How should costs be divided up (e.g. US, Europe, Asia, etc)?
  • How to figure the costs of inaction?
  • When should it be spent?

There is often a confusion between fairness and expedience (e.g. Cass Sunstein, an Obama advisor, makes this mistake in his work on climate change justice). The argument goes that a carbon tax that falls primarily on the first world is, in effect, a wealth transfer to the developing world. It’s a form of foreign aid, therefore hard to sell politically to Americans, and therefore unfair. But the real issue is not about what’s expedient, the issue is about the right thing to do.

Not all costs can be measured by money, which makes cost-benefit analysis a poor tool for reasoning about climate change. For example, how can we account for loss of life, loss of civil liberties, etc in a cost/benefit analysis? Take for example the effect of capital punishment on crime reduction versus the injustice of executing the innocent. This cannot be a cost/benefit decision, it’s a question of social values. In economic theory the “contingent valuation” of non-market costs and benefits is hopelessly broken. Does it make sense to trade off polar bear extinction against Arctic oil revenue by assigning a monetary value to polar bears? A democratic process must make these value judgements – we cannot push them off to economic analysis in terms of cost-benefits. The problem is that the costs and benefits of planetary scale processes are not additive. Therefore cost/benefit is not a suitable tool for making value decisions.

Similarly with the use of (growth in) GDP, which economists use as a proxy for a nation’s welfare. Bastiat introduced the idea of the broken window fallacy – the idea that damage to people’s property boosts GDP because it increases the need for work to be done to fix it, and hence increases money circulation. This argument is often used by conservatives to pooh-pooh the idea of green jobs – what’s good for jobs doesn’t necessarily make people better off. But right now the entire economy is made out of broken windows: Hummers, McMansions, video screens in fast-food joints… all of it consumption that boosts GDP without actually improving life for anyone. (Perhaps we should try measuring gross national happiness instead, like the Bhutanese.)

And then there’s discounting – how do we compare the future with the present? The usual technique is to exponentially downweight future harms according to how far in the future they are. The rationale is you could equally well put the money in the bank, and collect interest to pay for future harms (i.e. generate a “richer future”, rather than spend the money now on mitigating the problem). But certain things cannot be replaced by money (e.g. human life, species extinction). Therefore they cannot be discounted. And of course, economists make the 9 billion tonne hamster mistake – they assume the economy can keep on growing forever. [Note: Ray has more to say on cost-benefit and discounting in his slides, which he skipped over in the talk through lack of time]
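
The mechanics of exponential discounting are worth seeing, because they show how even enormous future harms are made to nearly vanish (the harm, horizon, and rate below are arbitrary illustrations, not figures from the talk):

```python
# Exponential discounting: a harm occurring t years from now is
# weighted by (1 + r)**-t at annual discount rate r.
def present_value(harm, years, rate):
    return harm / (1 + rate) ** years

# A $1 trillion harm 200 years out shrinks to tens of millions of
# dollars at a 5% discount rate.
print(round(present_value(1e12, 200, 0.05)))
```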

Fairness is a major issue. How do we approach this? For example, retributive justice – punish the bad guys? You broke it, you fix it? Whoever suffers the least from fixing it moves first? Consider everyone to be equal? Well, the Canadian climate policy appears to be: wait to see what Obama does, and do the same, unless we can get away with doing less.

What about China vs. the US, the two biggest emitters of greenhouse gases? The graph of annual CO2 emissions shows that China overtook the US in the last few years (while, for example, France held its emissions constant). But you have to normalize emissions per capita, and then the picture looks very different. And here’s an interesting observation: China has per capita emissions very close to those of France, but doesn’t have the French standard of living. Therefore there is clearly room for China to improve its standard of living without increasing per capita emissions, which means that emissions controls do not necessarily hold back development.

But because it’s cumulative emissions that really matter, we have to look at each nation’s cumulative per capita emissions. The calculation is tricky because we have to account for population growth. It turns out that the US has a bigger population growth problem than China, which, when added to the cumulative emissions, means the US has a much bigger responsibility to act. If we take the target of 1000GtC as the upper limit on cumulative emissions (to stay within the 2°C temperature rise), and allocate it equally to everyone, based on 2006 population figures, we get about 100 tonnes of carbon per capita as a lifetime allowance. The US has an overdraft on this limit (it has already used up more than this), while China still has a carbon balance (it has used up less). In other words, in terms of the thing that matters most, cumulative emissions, the US has used up more than its fair share of a valuable resource (slide 43 from Ray’s talk):

This graph shows the cumulative emissions per (2006) capita for the US and China. If we take 100 tonnes as the lifetime limit for each person (to keep within the global 1000Gt target), then the US has already used more than its fair share, and China has used much less.
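
The naive version of this arithmetic is a one-liner. Note that dividing the cap by the 2006 population alone gives roughly 150 tonnes per person; the ~100 tonne figure from the talk comes from the trickier calculation that also accounts for population growth:

```python
# Naive lifetime carbon allowance: divide the 1000 GtC cumulative cap
# equally among the ~6.6 billion people alive in 2006. (Population
# figure approximate; ignores population growth, hence the overestimate.)
CAP_GTC = 1000.0
POPULATION_2006 = 6.6e9

naive_allowance_t = CAP_GTC * 1e9 / POPULATION_2006  # tonnes C per person
print(round(naive_allowance_t))  # ~150
```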

This analysis makes the climate justice position clear. The Chinese might argue that, just to protect themselves and their own climate, China may need to do more than its fair share. As a negotiating position, arguing that everyone should take action together might be expedient. But the right thing for the US to do is not just to reduce emissions to zero immediately, but also to pay back that overdraft.

Some interesting questions from the audience:

Q: On geoengineering – why rule out attempts to change the albedo of the earth by sulphate particle seeding when we might need an “all of the above” approach? A: Ray’s argument is largely about what happens if it fails. For example, if the Dutch dykes fail, in the worst case the Dutch could move elsewhere. If global geoengineering fails, we don’t have an elsewhere to move to. Also, if you stop, you get hit all at once with the accumulated temperature rise. This makes Levitt’s suggestion of “burn it all and geoengineer to balance” morally reprehensible.

Q: Could you say more about the potential for air capture? A: It’s a very intriguing idea. All the schemes being trialed right now capture carbon in the form of CO2 gas, which would then need to be put into mineral form somehow. A more interesting approach is to capture CO2 directly in mineral form, e.g. limestone. It’s not obviously crazy, and if it works it would help. It’s more like insurance: investing in research on it is likely to provide a backup plan in a way that albedo alteration does not.

Q: What about other ways of altering the albedo? A: Suggestions such as painting roofs and parking lots white will help reduce urban heat, mitigate the effect of heatwaves, and also reduce the use of air conditioners. Which is good, but it’s essentially a regional effect; the overall impact on the global scale is probably negligible. So it’s a good idea precisely because it only has a regional impact – it doesn’t carry the risks of global albedo modification.

Q: About nuclear – will we need it? A: Ray says probably yes. If it comes down to a choice between nuclear vs. coal, the choice has to be nuclear.

Finally, I should mention Ray has a new book coming out: Principles of Planetary Climate, and is a regular contributor to RealClimate.org.

This morning I attended the first part of the Centre for Environment’s Research Day, and I’m glad I did, because I caught the talk by Chris Kennedy from the Sustainable Infrastructure Group in the Dept of Civil Engineering, on “Greenhouse Gases from Global Cities”. He talked about a study he’s just published, on the contribution of ten major cities to GHG emissions. Chris points out that most of the solutions to climate change will have to focus on changing cities. Lots of organisations are putting together greenhouse gas inventories for cities, but everyone is doing it differently, measuring different things. Chris’s study examined how to come up with a consistent approach. For example, the approach taken in Paris is good at capturing lifecycle emissions, London is good at spatial issues, Tokyo is good at analyzing emissions over time. Each perspective is useful, but the differences make comparisons hard. And there’s no obvious right way to do it. For example, how do you account for the timing of emissions release, e.g. for waste disposal? Do you care about current emissions as a snapshot, or future emissions that are committed because of waste generated today?

The IPCC guidelines for measuring emissions take a pure producer perspective. They focus only on emissions that occur within the jurisdiction of each territory. This ignores, for example, consumer emissions when the consumer of a product or service is elsewhere. It also ignores upstream emissions: e.g. electricity generation is generally done outside the city, but used within the city. Then there’s line loss in power transmission to the city; that should also get counted. In Paris, the Bilan Carbone counts embodied emissions in building materials, maintenance of vehicles, refining of fuels, etc., but it ignores emissions by tourists, which is a substantial part of Paris’ economy.

In the study Chris and colleagues did, they studied ten cities, many iconic: Bangkok, Barcelona, Cape Town, Denver, Geneva, London, Los Angeles, New York, Prague and Toronto. Ideally they would like to have studied metropolitan regions rather than cities, because it then becomes simpler to include transport emissions for commuting, which really should be part of the assessment of each city. The study relied partially on existing assessments for some of these cities and analyzed emissions in terms of electricity, heating/industrial fuels (lumped together, unfortunately), ground transport, aviation and marine fuels, industrial processes, and waste (the methodology is described here).

For example, for electricity, Toronto comes second in consumption (MWh per capita), after Denver, at about double that of London and Prague. Mostly this difference is due to the different climate, but also to the amount of commerce and industry within the city. However, the picture for carbon intensity is very different, as there is a big mix of renewables (e.g. hydro) in Toronto’s power supply, and Geneva gets its power almost entirely from hydro. So you get some interesting combinations: Toronto has high consumption but low intensity, whereas Cape Town has low consumption and high intensity. So multiply the two: Denver is off the map at 9 t eCO2 per capita, because it has both high consumption and high intensity, while most others are in the same range, around 2-3 t eCO2 per capita. And Geneva is very low:
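
The combination being described is just consumption multiplied by carbon intensity; the city labels and numbers in this sketch are made up for illustration, not taken from the study:

```python
# Per-capita electricity emissions = consumption x carbon intensity.
cities = {
    # label: (consumption in MWh per capita, intensity in t CO2e per MWh)
    "high-use, clean grid": (8.0, 0.1),
    "low-use, coal grid":   (3.0, 0.9),
}
for name, (mwh, intensity) in cities.items():
    print(name, round(mwh * intensity, 1), "t eCO2 per capita")
```

The point of the multiplication is that neither factor alone tells you much: the clean-grid heavy consumer ends up well below the coal-grid light consumer.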

Climate has to be taken into account somehow, because there is an obvious relationship between energy used for heating and typical temperatures, which can be assessed by counting heating degree days:
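
Heating degree days are straightforward to compute; here’s a sketch using made-up daily mean temperatures and the commonly used 18°C base (the base temperature varies by convention):

```python
# Heating degree days: for each day, count how far the mean temperature
# falls below a base temperature (days above the base contribute zero).
BASE_C = 18.0

def heating_degree_days(daily_means_c):
    return sum(max(0.0, BASE_C - t) for t in daily_means_c)

# One made-up week of daily mean temperatures (deg C):
week = [2.0, -5.0, 10.0, 20.0, 18.0, 6.0, 15.0]
print(heating_degree_days(week))  # 16 + 23 + 8 + 0 + 0 + 12 + 3 = 62.0
```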

Aviation is very interesting. Some assessments exclude it, on the basis that local government has no control over it. But Chris points out that when it comes down to it, local government has very little control over anything, so that argument doesn’t really wash. The UNFCCC says domestic flights should be included, but this has a small-country bias – small countries tend to have very few domestic flights. A better approach is to include international flights as well, so that we count all flights taking off from each city. Chris’s methodology assesses this as jet fuel loaded at each airport. On this measure, London is way out in the lead:

In summary, looking at total emissions, Denver is way out in front. In conversations with them, it was clear they had no idea – they think of themselves as a clean green city up in the mountains. No surprises that the North American cities all fare the worst, driven by a big chunk of emissions from ground transportation. The real surprises though are Bangkok and Cape Town, which compare with New York and Toronto for total emissions:

Chris concluded the talk with some data from Asian cities that were not included in the above study. In particular, Shanghai and Beijing are important in part because of their sheer size. For example, if Shanghai on its own were a country, it would come in at about #25 in the world for total emissions.

One thing I found interesting from the paper that Chris didn’t have time to cover in the talk was the obvious relationship between population density and emissions from ground transportation fuels. Clearly, to reduce carbon emissions, cities need to become much denser (and should all be more like Barcelona):

Busy few days coming up in Toronto, around the celebration of Earth day tomorrow:

  • tonight, the Green Party is hosting an evening with Gwynne Dyer and Elizabeth May entitled “Finding Hope: Confronting Climate Wars”. Gwynne is, of course, the author of the excellent book Climate Wars, also available as a series of podcasts from the CBC;
  • tomorrow (April 22) the Centre for Environment has its Research Day, showcasing some of the research of the centre;
  • all week, the Centre for Global Change Science is hosting Ray Pierrehumbert giving a lecture series “New Worlds, New Climates”. Tomorrow’s (April 22) looks particularly interesting: “Climate Ethics, Climate Justice”;
  • next Tuesday (April 27), the Centre for Ethics is running a public issues forum on “Climate Change and the Ethics of Responsibility”;

I won’t make it to all of these, but will blog those that I do make. Seems like ethics of climate change is a theme, which I think is very timely.

09. April 2010 · 9 comments · Categories: politics

My debate with George Monbiot is still going on in this thread. I’m raising this comment to be a separate blog post (with extra linky goodness), because I think it’s important, independently of any discussion of the CRU emails (and to point out that the other thread is still growing – go see!)

Like many other commentators, George Monbiot suggests that “to retain the moral high ground we have to be sure that we’ve got our own house in order. That means demanding the highest standards of scientific openness, transparency and integrity”.

It’s hard to argue with these abstract ideals. But I’ll try, because I think this assertion is not only unhelpful, but also helps to perpetuate several myths about science.

The argument that scientists should somehow be more virtuous (than regular folks) is a huge fallacy. Openness and transparency are great as virtues to strive for. But they cannot ever become a standard by which we judge individual scientists. For a start, no scientific field has ever achieved the levels of openness that are being demanded here. The data is messy, the meta-data standards are not in place, the resources to curate this data are not in place. Which means the “get our own house in order” argument is straight denialist logic – they would have it that we can’t act on the science until every last bit of data is out in the public domain. In truth, climate science has developed a better culture of data sharing, replication, and results checking than almost any other scientific field. Here’s one datapoint to back this up: in no other field of computational science are there 25+ teams around the world building the same simulation models independently, and systematically comparing their results on thousands of different scenarios in order to understand the quality of those simulations.

We should demand from scientists that they do excellent science. But we should not expect them to also somehow be superhuman. The argument that scientists should never exhibit human weaknesses is not just fallacious, it’s dangerous. It promotes the idea that science depends on perfect people to carry it out, when in fact the opposite is the case. Science is a process that compensates for the human failings of the people who engage in it, by continually questioning evidence, re-testing ideas, replicating results, collecting more data, and so on. Mistakes are made all the time. Individual scientists screw up. If they don’t make mistakes, they’re not doing worthwhile science. It’s vitally important that we get across to the public that this is how science works, and that errors are an important part of the process. It’s the process that matters, not any individual scientist’s work. The results of this process are more trustworthy than any other way of producing knowledge, precisely because the process is robust in the face of error.

In the particular case [of the CRU emails], calling for scientists to take the moral high ground, and to be more virtuous, is roughly the equivalent of suggesting that victims of sexual assault should act more virtuous. And if you think this analogy is over the top, you haven’t understood the nature of the attacks on scientists like Mann, Santer, Briffa, and Jones. Look at Jones now: he’s contemplated suicide, he’s on drugs just to help him get through the day, and more drugs to allow him to sleep at night. These bastards have destroyed a brilliant scientist. And somehow the correct response is that scientists should strive to be more virtuous?! Oh yes, blame the victim.

08. April 2010 · 6 comments · Categories: politics

Picked up my copy of the Guardian Weekly today, to see a front page story entitled “Trillion-Dollar question: future of climate talks”. Halfway through the article we get:

“Politicians and negotiators will find the mood of the talks very different from where they were left off in Copenhagen in December. For a start, the climate science that has underpinned them has suffered damaging setbacks. There was the leaking from the University of East Anglia’s climate research unit of email exchanges between some of the world’s top meteorologists as well as the discovery that a UN assessment report on climate change had vastly exaggerated the rate of melting of Himalayan glaciers.

The former revelation suggested some researchers were involved in massaging the truth, sceptics claimed, while the latter exposed deficiencies in the way the UN’s Intergovernmental Panel on Climate Change – authors of the report – go about their business. The overall effect has been to damage the credibility of the large number of scientists who fear our planet faces climatic disaster” (The Guardian Weekly, 2-8 April 2010)

This is such utter nonsense, I’m left wondering whether the Guardian has been taken over by Fox news. Lots of good science happened in the last few months, but all the media seems to care about is a minor error in one paragraph of a 3000 page document, and emails that show how much climate scientists are being harassed by denialist bullies. This isn’t a damaging setback, it’s a pack of lies. How did this denialist rhetoric come to dominate the media? Has the entire media now gone into denial? Will we get some kind of ransom note explaining what we have to pay to get our science back?

07. April 2010 · 53 comments · Categories: politics

My exposé of why academics’ private emails sometimes seem cranky has gotten a lot of attention. Joe Romm posted it at ClimateProgress, where it generated many comments, many expressing thanks for saying what needed to be said. George Monbiot posted a comment, pointing out that for a journalist, FoI laws are sacred: a hard-won concession that allows them to fight the secrecy that normally surrounds the political establishment. So there’s clearly some mutual incomprehension between the two cultures, academic and journalistic. For journalists, FoI is a vital weapon to root out corruption in a world where few people can be trusted. For scientists, FoI is a blunt instrument, unneeded in a world where honesty and trust are the community norms, and data is freely shared as much as is practically possible.

George expands this theme on his blog, and I appear to have shifted his perspective on the CRU emails, although perhaps not as far as I might have hoped. His thesis is that scientists and journalists have each formed a closed culture, leading each to be suspicious (and worse) of the other. Well, I think this is not strictly accurate. I don’t think either culture is walled in. In fact, I’m beginning to think I overstated the case in my original post: scientists are certainly not “walled in like an anchorite”. Pretty much every scientist I know will happily talk at length to anyone who shows an interest in their work, and will nearly always share data and code with anyone who is engaged in honest scientific work. For our own research into software quality, we have obtained source code, datasets, software bug histories, and extensive access for interviews in every climate modeling centre we have approached.

Unfortunately, scientists tend to be way too focussed (obsessed?) for most people’s taste, so lay people don’t generally want to talk to them in the first place. But scientists will accept into the community anyone who’s willing to work at it (after all, most of us spend a lot of time training students), as long as they show the necessary commitment to the scientific process and the pursuit of truth. Traditional investigative journalism used to share these values too, but this tradition now seems to be another endangered species. When scientists talk to journalists, the experience is usually more about the journalist seeking a sensationalist angle to sell a story than a quest for understanding, with a reliance on false balance rather than a weighing of the evidence.

So there is a bit of a gulf between the two cultures, but it’s not insurmountable, and there are plenty of examples of good science reporting to show that people regularly do bridge this gulf.

No, the real story is not the relationship between science and the media at all. It’s the story of how the media has been completely taken in by a third group, a third culture, consisting of ideologically-driven, pathological liars, who will say almost anything in order to score political points, and will smear anyone they regard as an opponent. Stern calls climate change the greatest ever failure of the free markets. I think that looking back, we may come to regard the last six months as the greatest ever failure of mass media. Or alternatively, the most successful disinformation campaign ever waged.

At the centre of this story are people like Marc Morano and Jim Inhofe. They haven’t a clue what science is; to them it’s just one more political viewpoint to attack. They live in a world of paranoid fantasies, where some secret cabal is supposedly trying to set up a world government to take away their freedoms. Never mind that every credible scientific body on the planet is warning about the wealth of evidence we now have about the risk of dangerous climate change. Never mind that the IPCC puts together one of the most thorough (and balanced!) state-of-the-art surveys ever undertaken in any scientific field. Never mind that the newest research suggests that these assessments are, if anything, underestimating the risk. No, these people don’t like the message, and so set out to attack the messengers with a smear campaign based on hounding individual scientists for years and years until they snap, and then spreading stories in the media about what happens when the scientists tell them to piss off.

Throughout all this, in underfunded labs, and under a barrage of attacks, scientists have done their job admirably. They chase down the uncertainties, and report honestly and accurately what they know. They doggedly compile assessment reports year after year to present the mass of evidence to anyone who cares to listen. It simply beggars belief that journalists could, in 2010, still be writing opinion pieces arguing that the scientists need to do a better job, that they are poor communicators, that we need more openness and more data sharing. That these themes dominate the reporting is a testament to how effective the disinformation campaign has been. The problem is not in the science, or with scientists at all, nor with a culture gap between science and the media. The problem is with this third group, the disinformers, who have completely dominated the framing of the story, and with how thoroughly honest journalists have been taken in by that framing.

How did they do it? Well, one crucial element of their success is their use of FoI laws. By taking the journalists’ most prized weapon, and wielding it against climate scientists, they achieved a whole bunch of successes all at once. They got journalists on their side, because journalists have difficulty believing that FoI laws could be used for anything other than good old-fashioned citizen democracy. They got the public on their side by appearing to be the citizens fighting the establishment. They set up the false impression that scientists have stuff to hide, by ignoring the vast quantities of open data in climate science, and focussing on the few datasets that were tied up with commercial licence agreements. And they effected a denial-of-service attack by flooding a few target scientists with huge numbers of FoI requests. Add to this the regular hate mail and death threats that climate scientists receive, and you have a recipe for personal meltdowns. And the media lapped up the story about personal meltdowns, picked it up and ran with it, and never once asked whose framing they were buying into.

And the result is that, faced with one of the greatest challenges humanity has ever faced, the media got the story completely backwards. Few journalists and few scientists seem to have any conception of how this misinformation campaign works, how nasty these people are, and how dirty they play. They have completely owned the story for the last few months, with their framing of “scientists making mistakes” and “scientists distorting their data”. They’ve successfully portrayed the scientists as being at fault, when it is the scientists who are the victims of one of the nastiest public bullying campaigns ever conducted. History will have to judge how it compares to other such episodes (McCarthyism would make a fascinating comparator). And the stakes are high: at risk is our ability to make sensible policy choices and international agreements based on good scientific evidence, to ensure that our children and grandchildren can flourish as we do.

We’re fucking this up bigtime, and it’s not the scientists who are at fault.