Wednesday morning also saw the poster session “IN31B – Emerging Issues in e-Science: Collaboration, Provenance, and the Ethics of Data”. I was presenting Alicia’s poster on open science and reproducibility:

Identifying Communication Barriers to Scientific Collaboration

The poster summarizes Alicia’s master’s thesis work – a qualitative study of what scientists think about open science and reproducibility, and how they use these terms (Alicia’s thesis will be available real soon now). The most interesting outcome of the study for me was the realization that innocent sounding terms such as “replication” mean very different things to different scientists. For example, when asked how many experiments in their field are replicated, and how many should be replicated, the answers are all over the map. One reason is that the term “experiment” can have vastly different meanings to different people, from a simple laboratory procedure that might take an hour or so, to a journal-paper sized activity spanning many months. Another reason is that it’s not always clear what it means to “replicate” an experiment. To some people it means following the original experimental procedure exactly to try to generate the same results, while to others, replication includes different experiments intended to test the original result in a different way.

Once you’ve waded through the different meanings, there still seems to be a range of opinion on the desirability of frequent replication. In many fields (including my field, software engineering) there are frequent calls for more replication, along with complaints about the barriers (e.g. some journals won’t accept papers reporting replications because they’re not ‘original’ enough). However, on the specific question of how many published studies should be replicated, an answer other than “100%” is quite defensible: some published experiments are dead-ends (research questions that should not be pursued further), and some are just bad experiments (experimental designs that in hindsight were deeply flawed). And then there’s the opportunity cost – instead of replicating an experiment for a very small knowledge gain, it’s often better to design a different experiment to probe new aspects of the same theory, for a much larger knowledge gain. We reflected on some of these issues in our ICSE’2008 paper On the Difficulty of Replicating Human Subjects Studies in Software Engineering.

Anyway, I digress. Alicia’s study also revealed a number of barriers to sharing data, suggesting that some of the stronger calls for open science and reproducibility standards are, at least currently, too impractical. At a minimum, we need better tools for capturing data provenance and scientific workflows. But more importantly, we need to think more about the balance of effort – a scientist who has spent many years developing a dataset needs the appropriate credit for this effort (currently, we only tend to credit the published papers based on the data), and perhaps even some rights to exploit the dataset for their own research first, before sharing. And for large, complex datasets, there’s the balance between ‘user support’ as other people try to use the data and have many questions about it, versus getting on with your own research. I’ve already posted about an extreme case in climate science, where such questions can be used strategically in a kind of denial of service attack. The bottom line is that while in principle, openness and reproducibility are important cornerstones of scientific process, in practice there are all sorts of barriers, most of which are poorly understood.

Alicia’s poster generated a huge amount of interest, and I ended up staying around the poster area for much longer than I expected, having all sorts of interesting conversations. Many people stopped by to ask questions about the results described on the poster, especially the tables (which seemed to catch everyone’s attention). I had a fascinating chat with Paulo Pinheiro da Silva, from UT El Paso, whose Cyber-Share project is probing many of these issues, especially the question of whether knowledge provenance and semantic web techniques can be used to help establish trust in scientific artefacts (e.g. datasets). We spent some time discussing what is good and bad about current metadata projects, and the greater challenge of capturing the tacit knowledge scientists have about their datasets. Also chatted briefly with Peter Fox, of Rensselaer, who has some interesting example use cases for where scientists need to do search based on provenance rather than (or in addition to) content.

This also meant that I didn’t get anywhere near enough time to look at the other posters in the session. All looked interesting, so I’ll list them here to remind me to follow up on them:

  • IN31B-1001. Provenance Artifact Identification and Semantic Tagging in the Atmospheric Composition Processing System (ACPS), by Curt Tilmes of NASA GSFC.
  • IN31B-1002. Provenance-Aware Faceted Search, by Deborah McGuinness et al. of RPI (this was the work Peter Fox was telling me about)
  • IN31B-1003. Advancing Collaborative Climate Studies through Globally Distributed Geospatial Analysis, by Raj Singh of the Open Geospatial Consortium.
  • IN31B-1005. Ethics, Collaboration, and Presentation Methods for Local and Traditional Knowledge for Understanding Arctic Change, by Mark Parsons of the NSIDC.
  • IN31B-1006. Lineage management for on-demand data, by Mary Jo Brodzik, also of the NSIDC.
  • IN31B-1007. Experiences Developing a Collaborative Data Sharing Portal for FLUXNET, by Deb Agarwal of Lawrence Berkeley Labs.
  • IN31B-1008. Ignored Issues in e-Science: Collaboration, Provenance and the Ethics of Data, by Joe Hourclé, of NASA GSFC
  • IN31B-1009. IsoMAP (Isoscape Modeling, Analysis, and Prediction), by Chris Miller, of Purdue

Picking up from my last post on communicating with policymakers, the first session on Wednesday morning was on science literacy (or rather, lack of it) in America. The session was packed full of interesting projects.

Frank Niepold from NOAA kicked off the session, presenting the results from an NSF-led study on climate literacy. They have developed some draft goals for a national strategy, and a very nice booklet. The NOAA Climate Program Office has pointers to a useful set of additional material, and there is a fuller account of the development of this initiative in the AAAS interview with Frank. An interesting point was that literacy in this initiative means actionable stuff – a sufficient understanding of climate change to result in behavioural change. The brochures are interesting and useful, but it looks like they’ve got a long way to go to make it actionable.

Connie Roser-Renouf then presented a brilliant 15 minute summary of the fascinating study Global Warming’s Six Americas: An Audience Segmentation Analysis. The study sampled 2129 people from 40,000 respondents in a nationwide survey. Latent class analysis was used to cluster people according to their beliefs, attitudes, etc. [I like this tidbit: 2% of respondents said they’d never heard of global warming]. I already posted a little about this on Sunday, but I find the study sufficiently fascinating that it’s worth looking at in more depth.

Six Americas study, figure 2

Remember, the size of the circles represents the proportion of each group in the sample, and the six groups were identified by cluster analysis on a range of attitudes. Quite clearly the vast majority of respondents understand that global warming is happening, with the dismissives being a notable outlier.

Six Americas Study, figure 12

Interesting that a majority also understand that it will be harmful to people in the US (i.e. not polar bears, not people in other countries), and pretty soon too, although there is a big difference in understanding the urgency across the different groups.

The next graph was interesting, partly because of the vast differences across the groups, but also, as Connie pointed out, because the disengaged group answered 100% “don’t know” and this never happens in social science research (which leads me to speculate that these clusters aren’t like normal social groupings with very fuzzy boundaries, but are more like religious groups, with explicitly affirmed belief sets):

Six Americas Figure 11

But of course, the scientific literacy question is about how well people understand the causes of climate change. The interesting thing about figure 8 is that many respondents wrote in an additional response (“caused by both human activities and natural changes”) that wasn’t offered in the questionnaire. If it had been, it’s likely that more people would have selected this response, and it’s arguably the most accurate scientifically, if you take a broad view of earth system processes:

Six Americas Figure 8

However, there’s a significant number of people in all groups who believe there is a lot of disagreement among scientists (they should come to the AGU meeting, and see if they can find any scientists who think that human-induced global warming is not happening!):

Six Americas, Figure 9

But note that there is a widespread trust in scientists as a source of information (even the dismissives don’t go so far as to actively distrust scientists):

6 Americas, figure 35

There was strong support for making global warming a priority for the US, but some interesting disagreement about suitable policy responses. For example, very mushy support for cap-and-trade:

Six Americas, figure 22

But very strong support for rebates on purchasing solar panels and fuel-efficient vehicles:

Six Americas, figure 21

As far as climate science literacy is concerned, the question is how to reach the groups other than the alarmed and concerned. These are the people who don’t visit science museums, and who don’t seek out opportunities to learn about the science.

The next talk, by Steven Newton of the National Center for Science Education (NCSE), was equally fascinating. NCSE is one of the leading organizations fighting the creationists in their attempts to displace the teaching of evolution in schools (and host of Project Steve, of which I’m Steve #859). Steven presented an analysis of how creationists reacted to the story of the CRU emails over the last few weeks. He’s been monitoring the discussion boards at the Discovery Institute, a group who support the idea of intelligent design, avoid making overt biblical references, and prefer to portray themselves as a scientific group. Since the story broke, the number of posts on their blog has doubled, and the language has been very strong: “toxic leftist-atheist ideology”, “a cabal”, “reaction of Darwinist and of global warming scientists to even the most mild skepticism is remarkably vicious”, “there will be an accounting”, “massive defunding of organized science”. This is interesting, because it is exactly what they say about evolution!

Steven argues that it’s clearly denialism. These people don’t think that science as a methodology works (in part because it refuses to acknowledge the role of a supernatural agency). They reject the methods of science (e.g. radiometric dating for evolution; computer modeling for global warming). They reject the data (e.g. observable evolution; observable climate data). They have very fixed mindsets (e.g. “no evidence will prove evolution”; “no evidence will prove global warming”). And they believe in a conspiracy (e.g. the “establishment” hiding the truth; stifling dissent).

This analysis leads to questions about how schools treat climate change, given how contentious this has already become for evolution, and a concern that climate science education will become a battleground in the way that evolution did. Only 30 states have standards for teaching global warming in schools, and these are mostly focussed on impacts, not causes. Of those 30 statewide standards, only 7 mention fossil fuels, 8 mention mitigation, 17 mention mechanisms, and none mention sea level rise (and I’ll bet none mention ocean acidification either).

Some of these states have clauses that explicitly link topics such as evolution and global warming. For example, the Louisiana Science Education Act (LSEA) allows teachers to bring into the curriculum materials not generally approved, including “on topics of evolution, global warming, human cloning, …”. In Texas, the law mandates that textbooks must promote a free market approach (which would make discussion of cap-and-trade difficult), and textbooks must “Analyze and evaluate different views on the existence of global warming”. The wording here is crucial – it’s about views on the existence of global warming, not the details of the science.

Next up, Dennis Bartels, the executive director of the Exploratorium in San Francisco, talked about the role of science museums in improving climate literacy. He pointed out that it’s hard to explain climate change to kids and adults – the topic is so complex. One problem is math literacy. Calculus is a capstone course in most high schools, but you need some basic calculus to understand the relationship between emissions and concentrations of greenhouse gases, so this material can’t be taught well earlier in the high school curriculum. Also, kids aren’t ever exposed to thinking about systems. Adults are still invested in the myth that science is about truth, rather than understanding that science is a process. Hence, the public don’t get it, and are especially put off by the typical back and forth you get with the “received wisdom” approach (e.g. “Coffee is good for you”, then “coffee is bad for you”, and eventually “those damn scientists don’t know what they’re doing”).
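As an aside, the stock-and-flow point is easy to show with a few lines of code. This is my own illustration, not something from Dennis’ talk, and the conversion factors are rough, commonly quoted approximations rather than a real carbon-cycle model:

```python
# Toy stock-and-flow model: the atmospheric CO2 concentration is the integral
# of emissions, not the emissions themselves. The constants are rough,
# commonly quoted approximations; this is nothing like a real carbon-cycle model.

GTC_PER_PPM = 2.13        # approx. gigatonnes of carbon per ppm of CO2 (assumed)
AIRBORNE_FRACTION = 0.45  # rough share of emitted carbon that stays in the air (assumed)

def concentration_path(start_ppm, annual_emissions_gtc, years):
    """Integrate a constant emissions rate into a concentration trajectory."""
    ppm = start_ppm
    path = [ppm]
    for _ in range(years):
        ppm += annual_emissions_gtc * AIRBORNE_FRACTION / GTC_PER_PPM
        path.append(ppm)
    return path

# Even if the flow (emissions) is cut in half, the stock (concentration) keeps rising.
print(concentration_path(387, 9.0, 10)[-1])   # roughly business-as-usual
print(concentration_path(387, 4.5, 10)[-1])   # emissions halved: still going up
```

Which is exactly the point Dennis was making: even deep cuts to the flow don’t shrink the stock, and that intuition needs a little calculus.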

Science centres that have tackled climate change have tended to fall into a proselytization mode, rather than teaching science as a process. The Exploratorium is exploring ways of overcoming this. For example, they’ve run workshops to bring the scientists who do polar expeditions together for a week, getting them to show and tell how they do their research, and then repackaging this for science centres across the country. The aim is to tell the story of how the science is done. Most of the best work in science centres isn’t the exhibits, it’s mediated sessions and live presentations. This contributes to getting people to think for themselves (rather than telling them what to think).

Another approach is to get people to think through how they know what they know. For example, ask people how they know that the earth is round. Did you directly observe it? Run and experiment to show it? Learn it in school? Get it from some authority? Then compare answers with others, and come to realise that most of what you know about the world you get from others (not through direct experience/experimentation). The role of trusted sources of scientific expertise is taken for granted for some areas of science, but not for others. You can then challenge people to think about why this is.

In the question session, somebody asked how do you reach out to people who don’t come to museums (e.g. the dismissive and disengaged)? Dennis’ answer pointed to the European cafe scientifique idea – after the soccer game, do an hour of science in a cafe.

Tom Bowman, a graphic designer, talked about how we translate scientific conclusions about risk for the public. His approach is to identify key misconceptions, and then re-adapt key graphics (e.g. from the IPCC) to address them. Then, test these with different audiences (he gave a long list of business conferences, public lectures, and informal science institutions where he has done this).

Key misunderstandings seem to be (especially in light of Connie’s talk) about the immediacy of impacts (both in time and scale), the scale of the mitigation required, whether viable solutions exist, and whether we make effective choices. Hence, Tom suggested starting not by explaining the greenhouse effect, but by focussing on the point at which people make decisions about risk. For example, Limit + Odds = Emissions Trajectory (i.e. figure out what temperature change limit you’re comfortable with, and what odds of meeting it you’d like, and design emissions trajectories accordingly – see the sketch below). Then from there work out options and tradeoffs.
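Just to make the arithmetic of that framing concrete, here is a sketch of the “Limit + Odds → trajectory” calculation. This is my own gloss, not Tom’s: the budget numbers are placeholders, and real ones would come from analyses like the Meinshausen study or the trillionth-tonne work mentioned below.

```python
# Sketch of the "Limit + Odds = Emissions Trajectory" framing.
# The budget lookup uses placeholder values purely to show the structure of the
# calculation, not real results.

ILLUSTRATIVE_BUDGETS_GTCO2 = {
    # (temperature limit in degrees C, odds of staying under it) -> remaining budget (placeholder)
    (2.0, 0.67): 1000,
    (2.0, 0.50): 1400,
}

def linear_phaseout_year(budget_gtco2, current_emissions_gtco2, start_year=2010):
    """If emissions decline linearly from today's rate to zero, the area under
    that line equals the budget, so: end = start + 2 * budget / current_rate."""
    return start_year + 2 * budget_gtco2 / current_emissions_gtco2

budget = ILLUSTRATIVE_BUDGETS_GTCO2[(2.0, 0.67)]                    # pick a limit and odds
print(linear_phaseout_year(budget, current_emissions_gtco2=30.0))  # ~30 GtCO2/yr is a rough assumed rate
```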

Take, for example, the table in the IPCC report on emissions scenarios – to a graphic designer, this is a disaster: if the footnotes are as big as the figure, you’ve failed. So what are the alternatives? Tom has designed a graphic with a thermometer (with the earth as the bulb), on which he plots different scenarios (I can’t find the graphic online, but this is similar). Points on the thermometer can then be used to show impacts at that level of global warming. This graphic has worked so well that everyone he has presented it to wants a copy (er, add me to the list, Tom).

However, sometimes things don’t work out. He also tried to map peak year emissions onto the thermometer. This doesn’t work so well, so instead he’s using the trillionth tonne analysis, and then the “ski slope” figure from the Copenhagen diagnosis:

Emissions pathways to give 67% chance of limiting global warming to 2ºC

Tom’s still working on the graphics for this, and hopes to have a full set by February; he’s looking for ways to test them out.

Finally, talking about making climate science accessible, Diane Fisher presented NASA’s Climate Kids website, which is hosted by NASA’s Webby award-winning Eyes on the Earth site. A key challenge is to get kids thinking about this subject right at the beginning of the educational process. In designing the site, they had some debate on whether to take the “most scientists agree” approach. In the end, they decided to take the “this is the way it is” approach, which certainly reduces the potential to confuse. But there’s a difficult balance between telling kids how things are, and scaring them to death. A key principle for Climate Kids was to make sure to give them positive things to do, even if it’s just simple stuff like reusable water bottles and riding your bike. More interestingly, the site also covers some ideas about career choices, for longer term action. I like this: get them to grow up to be scientists.

There has been a strong thread at the AGU meeting this week on how to do a better job of communicating the science. Picking up on the workshop I attended on Sunday, and another workshop and townhall meeting that I missed, came two conference sessions: the first, on Tuesday, on Providing Climate Policy Makers With a Strong Scientific Base, and the second, on Wednesday, on Education and Communication for Climate Literacy and Energy Awareness.

I’ll get to the speakers at these sessions in a moment. But first, I need to point out the irony. While the crazier sections of the blogosphere are convinced that scientists are involved in some massive conspiracy with policymakers hanging on their every word, the picture from within the scientific community is very different. Speaker after speaker expressed serious concerns that their work is being ignored, and that policymakers just don’t get it. Yet at the same time, they seemed to have very little clue about the nature of this communication problem, nor how to fix it. In fact, a number of speakers came across as charmingly naive, urging the audience to put a bit more effort into explanation and outreach. The notion that these completely guileless scientists could be conspiring to hide the truth is truly ludicrous.

So why so many speakers and sessions focussing on science communication? Why the sense of failure? Well, on the one hand there was the steady stream of news about a lack of progress in Copenhagen, building on a year of little progress in the US Congress. On the other hand is the widespread sense in the AGU community that the IPCC AR4 is out of date, with the new science of the last 4-5 years dramatically removing some of the remaining uncertainties (see Alley’s talk for one example), and, in nearly every case, confirming that the AR4 projections understate the magnitude of the expected warming and its impacts. Perhaps the best summary of this is the Copenhagen Diagnosis, which documents how…

…several important aspects of climate change are already occurring at the high end, or even beyond, the expectations of just a few years ago. […] global ice sheets are melting at an increased rate; Arctic sea ice is thinning and melting much faster than recently projected, and future sea-level rise is now expected to be much higher than previously forecast.

As an aside, the European EGU meeting in April lacked this strong sense of communication failure. Could this be a mark of the peculiarly American (oh, and Canadian) culture of anti-science? Of a lack of science literacy? Or perhaps a sense that the US and Canada are lagging far behind on developing science-based policy?

Anyway, on with the sessions. For the session on “Providing Climate Policy Makers With a Strong Scientific Base”, the first speaker was David Carlson, director of the International Polar Year (IPY) programme office. He argued that scientists are definitely not having the influence on policy that they should, and that the weakness of any likely agreement in Copenhagen will prove this. He described a number of examples of science communication in policymaking, analyzing whether the transmitters (scientists) and/or the receivers (policymakers) did a good job.

The first example was on ocean fertilization. The SOLAS study demonstrated that ocean fertilization will be ineffective. This message was received by the IMO convention on marine pollution, which issued a policy statement noting with concern the potential for negative impacts from ocean fertilization. There was one small problem – the policy was widely interpreted to mean that all ocean fertilization is bad, which then prevented small-scale experiments as part of IPY. So, the transmitter was good, the receiver even better, but this led to unintended consequences.

Another example was a US senate hearing that David appeared at earlier this year, for the Kerry-Boxer bill. The hearing focused on the arctic, and included three senators, several senior scientists, lobbyists, a consultant, and a representative from an NGO. (Note: David wasn’t complimentary about the roles of the non-scientists: the lobbyists were there to promote appropriations, the consultant was there to promote his previous work, and the NGO rep claimed “the dog ate my homework”.) As the chair didn’t hear a clear message from the participants, the hearing was held open, and they were invited to submit a joint report within 48 hours. David circulated a draft summary to the others within 8 hours, but got no response. In the end, it appears each participant submitted their own text. The Kerry-Boxer bill was introduced in the senate a few weeks later, containing no language at all about the impacts on the arctic. His conclusion is that the failure to work together represented a substantial lost opportunity. And this theme (of acting as individuals, and failing to coordinate) cropped up again and again in the talks.

The next speaker was Ed Struzik, science journalist and author. Ed characterized himself as the entertainer for the session, and laid on a beautiful series of photographs to illustrate his point that climate change is already having a dramatic impact on the arctic, but that it’s a very complicated story. For example, even just defining what constitutes the arctic is difficult (north of the arctic circle? The area where the warmest summer month averages 9ºC?). It’s a very varied geography (glaciers, tundra, forest, smoking hills, hot springs, river deltas, lakes, etc), a sacred place “where god began” for native peoples, with lots of fresh clean water, big rivers, and abundant wildlife. And while many of the impacts of climate change are frighteningly negative, not all are: for example, the barren ground grizzlies will do well – warming will mean they don’t need to hibernate as long. More ominously, a warming arctic creates opportunities to exploit new fossil fuel reserves, so there are serious economic interests at stake.

James Mueller from the office of US senator Maria Cantwell gave an insider’s view of how the US senate works (and a little plug for Cantwell’s CLEAR Act). Unlike most speakers in the session, he argued that scientists have done a good job with science communication, and even claimed that behind closed doors, most senators would agree climate change is a big problem; the challenge is reaching agreement on what to do about it. He suggested scientists need a more realistic view of the policy process – it takes years to get major policy changes through (cf. universal healthcare), and he takes it as a good sign that the House vote passed. He also argued that less than twenty years after the first IPCC report, the science is well-developed and accepted by the majority of non-scientists. In contrast, energy economists have much wider disagreements, and are envious of the consensus among climate scientists.

James then described some difficulties: the drive towards specialization in the science community has led to isolation among climate scientists and policy experts. There’s a lack of math and science literacy among staff of the politicians. And, most importantly, the use of experts with “privileged information” runs counter to democracy, in that it’s difficult to maintain accountability in technocratic institutions, but difficult to get away from the need for specialist expertise.

However, I couldn’t help feeling that he missed the point, and was just giving us political spin to excuse inaction. If the vast majority of senators truly understood the science, there would have been no problem getting a strong bill through the senate this year. The problem is that they have a vague sense there is a problem, but no deep understanding of what the science really tells us.

Amanda Staudt from the National Wildlife Federation dissected the way in which IPCC recommendations got translated into the details of the Waxman-Markey bill and the Kerry-Boxer bill (drawing on the analysis of the World Resources Institute):

WRI analysis of the various bills of the 111th Congress

The process looks like this:

(1) Politicians converged on 2°C temperature rise as an upper limit considered “safe”. The IPCC had a huge influence on this, but (as I noted above) recent advances in the science were not included.

(2) Then the policy community needed to figure out what level of greenhouse gas emissions would achieve this. They looked at table 5.1 in the IPCC Synthesis report:

IPCC AR4 Synthesis report table 5.1

…and concentrated on the first row. The fact that they focussed on the first row is a remarkable achievement of the science, but note that these scenarios were never intended to be interpreted as policy options; the table was just a summary of six different scenarios that had been considered in the literature. What was really needed (and wasn’t available when the IPCC report was put together) is a more detailed analysis of even more aggressive emissions scenarios; for example, the study that Meinshausen published earlier this year, which gave a 50:50 chance of staying within 2°C for 450ppm stabilization.

(3) Then they have to determine country-by-country emissions reductions, which is what the “Bali box” summarizes.

(4) Then, what will the US do? Much of this depends on the US Climate Action Partnership (USCAP) lobbying group. The USCAP blueprint recommended an 80% reduction by 2050 (but compromised on a 1.3% reduction by 2020 – i.e. a slower start to the reductions).

The key point is that there was little role for the scientists in the later steps. Congress gets exposed to the science through a number of limited mechanisms: hearings; 2-page summaries; briefings sponsored by interested parties (the CEI runs a lot of these!); lobby visits (there were a record number of these this year, with energy companies vastly outspending green groups); fact sheets; and local news stories in politicians’ own districts.

Amanda’s key point was that scientists need to be more involved throughout the process, and they can do this by talking more to their congresspersons, working with NGOs who are already doing the communications, and getting to work on analyzing the emissions targets, ready for the next stage of refinements to them. (Which is all well and good, but I don’t believe for one minute that volunteerism by individual scientists will make the slightest bit of difference here.)

Chris Elfring of the National Academies’ Board on Atmospheric Sciences and Climate (BASC) spoke next, and gave a summary of the role of the National Academy of Sciences, including a history of the Academies, and the use of the NRC as its operational arm. The most interesting part of the talk was the report America’s Climate Choices, which is being put together now, with four panel reports due in early 2010, and a final report due in summer 2010.

All of the above speakers gave interesting insights into how science gets translated into policy. But none really convinced me that they had constructive suggestions for improving things. In fact, the highlight of the session for me was a talk that didn’t seem to fit at all! Jesse Anttila-Hughes, a PhD student at Columbia, presented a study he’s doing on whether people act on what they believe about climate change. To assess this, he studied how specific media stories affect energy company stock prices.

He looked at three specific event types: announcements of a record temperature in the previous year according to NASA’s GISS data (6 such events in the last 15 years); announcements of sea-ice minima in the Arctic (3 such events); and announcements of ice shelf collapses in the Antarctic (6 such events). He then studied how these events affect the stock prices of S&P500 energy companies, which together have about a $1.2 trillion market capitalization, with 6.5 million trades per day – i.e. pretty valuable and busy stocks. The methodology, a version of CAPM, is a bit complicated, because you have to pick a suitable window (e.g. 10 days) to measure the effect, and allow a few days before the story hits the media, to allow for early leaks. And you have to compare how the company is doing relative to the market in general each day (a sketch of the general idea is below).
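I didn’t capture the details of Jesse’s method, but the general shape of this kind of event study (fit a market model in a quiet period, then cumulate abnormal returns over a window around the announcement) looks roughly like the following sketch, with synthetic data standing in for the real returns:

```python
import numpy as np

# Rough sketch of a market-model event study (not Jesse's actual code or data).
# Idea: estimate each stock's normal relationship to the market, then measure
# how far its returns deviate from that in a window around the announcement.

rng = np.random.default_rng(0)
days = 250
market = rng.normal(0.0005, 0.01, days)             # synthetic market returns
stock = 0.9 * market + rng.normal(0, 0.008, days)   # synthetic energy-stock returns
stock[200:205] -= 0.006                             # fake "record hot year" reaction

event_day = 200
est_window = slice(0, 180)                          # estimation period, well before the event
event_window = slice(event_day - 3, event_day + 7)  # allow a few days for early leaks

# Fit the market model r_stock = alpha + beta * r_market on the estimation window.
beta, alpha = np.polyfit(market[est_window], stock[est_window], 1)

# Abnormal returns = actual minus what the market model predicts.
abnormal = stock[event_window] - (alpha + beta * market[event_window])
car = abnormal.sum()                                # cumulative abnormal return over the window
print(f"CAR over event window: {car:.2%}")
```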

Anyway, the results are fascinating. Each announcement of a record hot year causes energy stocks across the board to drop by 3% (which, given the size of their market capitalization, is a huge loss of value). However, when ice shelves collapse you get the opposite response – a 2.5% rise in energy stocks. And there is no significant reaction to reports about sea ice minima (despite a number of scientists regarding this as a crucial early warning signal).

The interpretation seems pretty straightforward. Record temperatures are the purest signal of climate change, and hence cause investors to pay attention. Clearly, investors are taking the risk seriously, anticipating significant regulation of fossil-fuel energy sources, and have been doing so for a long time (at least back into the mid-90s). On the other hand, collapsing ice shelves are seen as an opportunity, in that they might open up more areas for oil and gas exploration, and it seems that investors expect this to outweigh (or precede) any climate change legislation. And sea ice minima they probably don’t understand, or they end up as a mixed signal, with exploration opportunities and regulation threats balancing out.

More detailed followup studies of other such news stories would be fascinating. And that’s got me really thinking about how to effect behaviour changes…

Given the terrible internet connection and complete lack of power outlets in most meeting rooms, I’ve been reduced to taking paper and pencil notes, so I’m way behind in the blogging. So I’ll console myself with a quick tour of everyone else’s blogs about this AGU meeting:

First, and foremost, the AGU press office is putting together its own blog, coordinated by Mohi Kumar. It does a much better job of capturing the highlights than I do, partly because the staff writers can sample across all the sessions, and partly because they’re good at summarizing (rather than my own overly-detailed blow-by-blow accounts of talks).

There’s lots of talk here about impacts, especially on water. Water, Water Everywhere–Except There, and There, and There is a nice summary of some of the hydrology sessions, with the new results from GRACE being an obvious highlight. And the session on Climate Change in the West covered some pretty serious impacts on California.

But the biggest buzz so far at the meeting seems to be science literacy and how to deal with the rising tide of anti-science. Capitol Hill Needs Earth Scientists reports on a workshop on communicating with Congress, and Talking about Climate: A Monday Night Town Hall Meeting tackled how to talk to the press. Both of these pick up on the idea that the science just isn’t getting through to the people who need to know it, and that we have to fix this urgently. I missed both of these, but managed to attend the presentations this morning on science literacy: Can Scientists Convince the Growing Number of Global Warming Skeptics?, which included a beautifully clear and concise summary of the Six Americas study. I’ll post my own detailed notes from this session soon. The shadow of the recently leaked CRU emails has come up a lot this week, in every case as an example of how little the broader world understands about how science is done. Oh, if only more people could come to the AGU meeting and hang out with climate scientists. Anyway, the events of the last few weeks lent a slight sense of desperation to all these sessions on communicating science.

And here’s something that could do with getting across to policymakers – a new study by Davis and Caldeira on consumption-based accounting – how much of the carbon emissions of the developing world are really just outsourced emissions from the developed world, as we look for cheaper ways to feed our consumption habits.

But there are good examples of how science communication should be done. For example, The Emiliani Lecture and the Tough Task of Paleoceanographers is a great example of explaining the scientific process, in this case an attempt to unravel a mystery about changes in ocean currents around the Indonesian straits, due to varying El Niño cycles. The point, of course, is that scientists are refreshingly open about it when they discover their initial ideas are wrong.

And last, but not least, everyone agrees that Richard Alley’s lecture on CO2 in the Earth’s History was the highlight of the meeting so far, even if it was scheduled to clash with the US Climate Change Research Program report on Impacts.


Yesterday afternoon, I managed to catch the Bjerknes Lecture, which was given by Richard B. Alley: “The biggest Control Knob: Carbon Dioxide in Earth’s Climate History”. The room was absolutely packed – standing room only, I estimated at least 2,000 people in the audience. And it was easy to see why – Richard is a brilliant speaker, and he was addressing a crucial topic – an account of all the lines of evidence we have of the role of CO2 in climate changes throughout prehistory.

[Update: the AGU has posted the video of the talk]

By way of introduction, he pointed out how many brains are in the room, and how much good we’re all doing. He characterizes himself as not being an atmospheric scientist, except perhaps by default, but as he looks more and more at paleo-geology, it becomes clear how important CO2 is. He has found that CO2 makes a great organising principle for his class on the geology of climate change at Penn State, because CO2 keeps cropping up everywhere. So, he’s going to take us through the history to demonstrate this. His central argument is that we have plenty of evidence now (some of it very new) that CO2 dominates all other factors, hence “the biggest control knob” (later in the talk he extended the metaphor by referring to other forcings as fine tuning knobs).

He also pointed out that, from looking at the blogosphere, it’s clear we live in interesting times, with plenty of people WILLING TO SHOUT and distort the science. For example, he’s sure some people will distort his talk title because, sure, there are other things than CO2 that matter in climate change. As an amusing aside, he showed us a chunk of text from an email sent by an alumnus to the administrators at his university, on which he was copied, the gist of which is that Alley’s own research proves CO2 is not the cause of climate change, and hence he is misrepresenting the science and should be dealt with severely for crimes against the citizens of the world. To the amusement of the audience, he pointed out a fundamental error of logic in the first sentence of the email, to illustrate the level of ignorance about CO2 that we’re faced with. Think about this: an audience of 2,000 scientists, all of whom share his frustration at such ignorant rants.

So, the history: 4.6 billion years ago, the sun’s output was lower (approx 70% of today’s levels), often referred to as the faint young sun. But we know there was liquid water on earth back then, and the only thing that could explain that is a stronger greenhouse effect. Nothing else works – orbital differences, for example, weren’t big enough. The best explanation for the process so far is the Rock-Weathering Thermostat. CO2 builds up in the atmosphere over time from volcanic activity. As this CO2 warms the planet through the greenhouse effect, the warmer climate increases the chemical weathering of rock, which in turn removes carbon dioxide through the formation of calcium carbonate, which gets washed into the sea and eventually laid down as sediment. Turn up the temperature, and the sequestration of CO2 in the rocks goes faster. If the earth cools down, this process slows, allowing CO2 to build up again in the atmosphere. This process is probably what has kept the planet in the right range for liquid water and life for most of the last 4 billion years.
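To see why this behaves like a thermostat, here is a deliberately crude toy model (my own, with invented constants and no pretence at real geochemistry): volcanic input is constant, weathering speeds up with temperature, and temperature rises with CO2, so CO2 gets pulled back to the same equilibrium from either direction.

```python
# Toy model of the rock-weathering thermostat. All constants and units are
# arbitrary and purely illustrative; the real process plays out over millions of years.

VOLCANIC_INPUT = 1.0      # CO2 added per step by volcanoes (arbitrary units)
WEATHERING_COEFF = 0.02   # how strongly weathering responds to temperature (invented)
TEMP_PER_CO2 = 0.05       # how strongly temperature responds to CO2 (invented)

def run(co2, steps=500):
    for _ in range(steps):
        temperature = TEMP_PER_CO2 * co2
        weathering = WEATHERING_COEFF * temperature * co2  # warmer => faster CO2 removal
        co2 += VOLCANIC_INPUT - weathering
    return co2

# Start with far too much CO2 or far too little: both drift back towards the
# same equilibrium, which is the essence of a thermostat.
print(run(co2=500.0), run(co2=5.0))
```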

But can we demonstrate it? The rock thermostat takes millions of years to work, because the principal mechanism is geological. One consequence is that the only way to get to a “snowball earth” (times in the Cryogenian period when the earth was covered in ice even down to the tropics) is that some other cause of change has to happen fast – faster than the rock-thermostat effect.

An obvious piece of evidence is in the rock layers. Glacial layers are always covered from above by carbonate rocks, showing that increased carbonation (as the earth warmed) follows periods of icing. This shows part of the mechanism. But to explore the process properly, we need good CO2 paleo-barometers. The gold standard is the ice core record. So far the oldest ice core record goes back 800,000 years, although we only have one record this old. Several records go back 450,000 years, and there are many more shorter records. The younger samples all overlap, giving some confidence that they are correct. We also now know a lot about how to sort out ‘good’ ice core records from poor (contaminated) ones.

But to back up the evidence from the ice cores, there are other techniques with independent assumptions (though none as easy to analyze as ice cores). When they all agree, this gives us more confidence in the reconstructions. One example: the growth of different plant species – higher CO2 gives preference to certain species. Similarly, the different ways in which carbonate shells in the ocean grow, depending on the pH of the ocean (which in turn is driven by atmospheric concentrations of CO2). Also the fossil-leaf stomata record. Stomata are the pores in leaves that allow them to breathe. Plants grow leaves with more pores when there is low CO2, to allow them to breathe better, and fewer when there is more CO2, to minimize moisture loss.

So, we have a whole bunch of different paths, none of which are perfect, but together work pretty well. Now what about those other controllers, beyond the rock-thermostat effect? CO2 is raised by:

  • the amount of CO2 coming out of volcanoes
  • slower weathering of rock
  • less plant activity
  • less fossil burial.

He showed the graph reconstructing what we know of CO2 levels over the last 400 million years. Ice coverage is shown on the chart as blue bars, showing how far down towards the equator the ice reaches, and this correlates with low CO2 levels from all the different sources of evidence. 251 million years ago, pretty much every animal dies – 95% of marine species wiped out, in the end-Permian extinction. Probable cause: rapid widespread growth of marine green sulfur bacteria that use H2S for photosynthesis. The hydrogen sulphide produced as a result kills most other life off. And it coincides with a very warm period. The process was probably kicked off by greater vulcanism (the Siberian Traps) spewing CO2 into the atmosphere. When the ocean is very warm, it’s easy to starve it of oxygen; when it’s cold it’s well oxygenated. This starvation of oxygen killed off most ocean life.

Fast forward to the mid-Cretaceous “saurian sauna”, when there was no ice at sea level at the poles. Again, CO2 is really high. High CO2 explains the warmth (although in this case, the models tend to make it a little too warm at these CO2 levels). Then there was one more blip before the ice ages. (Aside: CO2 is responsible for lots of things, but at least it didn’t kill the dinosaurs; a meteorite did.) The Paleocene-Eocene thermal maximum meant big temperature changes. It was already hot, and the world got even hotter. Most sea-floor life died out, and the ocean became acidic. This time, the models have difficulty simulating this much warming. And it happened very fast, although the recovery process matches our carbon cycle models very well. And it shows up everywhere: e.g. leaf damage in fossil leaves at the PETM.

But for many years there was still a mystery: temperature and CO2 levels are highly correlated throughout the earth’s history, with no other way to explain the climate changes, yet occasionally there were places where temperature changes did not match CO2 changes. Over the last couple of decades, as we have refined our knowledge of the CO2 record, these divergences have gone. The mismatches have mostly disappeared.

Even just two years ago, Alley would have said something was still wrong in the Miocene, but today it looks better. Two years ago, we got new records that improve the match. Two weeks ago, Tripati et al. published a new dataset that agrees even better. So, two years ago the Miocene anomalies looked important; now it’s not so clear – it looks like CO2 and temperature do track each other.

But what do we say to people who say the lag (CO2 rises tend to lag behind the temperature rise) proves current warming isn’t caused by CO2? We know that orbital changes (the Milankovitch cycles) kick off the ice ages – this was predicted 50 years before we had the data (in the 1970s) to back it up. But temperature never goes far without the CO2, and vice versa, although sometimes one lags the other by about 2 centuries. And a big problem with the Milankovitch cycles is that they only explain a small part of the temperature changes. The rest is when CO2 changes kick in. Alley offered the following analogy: credit card interest lags debt. By the denialist logic, because interest lags debt, then I never have to worry about interest and the credit card company can never get me. However, a simple numerical model demonstrates that interest can be a bigger cause of overall debt in the long run (even though it lags!!). So, it’s basic physics: the orbits initially kick off the warming, but the release of CO2 then kicks in and drives it.
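The credit-card analogy is easy to check with a toy calculation (mine, with made-up numbers, not Alley’s actual model): spending kicks off the debt and stops after a year, the interest always lags the debt, and yet the interest ends up responsible for most of what you owe.

```python
# Toy version of the credit-card analogy (made-up numbers).
# "Spending" plays the role of the orbital trigger; "interest" plays the role
# of the CO2 feedback: it lags the debt, but ends up driving most of it.

RATE = 0.02               # monthly interest rate (illustrative)
months = 120
spending_per_month = 100  # spend only for the first year, then stop
debt = 0.0
total_spent = 0.0
total_interest = 0.0

for m in range(months):
    spend = spending_per_month if m < 12 else 0.0
    interest = RATE * debt          # interest responds to the debt, so it lags it
    debt += spend + interest
    total_spent += spend
    total_interest += interest

print(f"debt={debt:.0f}, from spending={total_spent:.0f}, from interest={total_interest:.0f}")
```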

So, CO2 explains almost all the historical temperature change. What’s left? Solar irradiance changes and volcanic changes. When these things change, we do see the change in the temperature record. For solar changes, there clearly aren’t many, and they act like a fine tuning knob, rather than a major control. 40,000 years ago the magnetic field almost stopped (it weakened to about 10% of its current level), letting in huge amounts of cosmic rays, but the climate ignored it. Hence, we know cosmic rays are at best a fine tuning knob. Volcanic activity is important, but essentially random (“if volcanoes could get organised, they could rule the world” – luckily they aren’t organised). Occasionally several volcanoes erupting together make a bigger change, but again that’s a rare event. Space dust hasn’t changed much over time and there isn’t much of it (Alley’s deadpan delivery of this line raised a chuckle from the audience).

So, what about climate sensitivity (i.e. the amount of temperature change for each doubling of CO2)? Sensitivity from models matches the record well (approx 3°C per doubling of CO2). Royer et al. (Nature 446, 29 March 2007) conducted an interesting experiment, calculating equilibrium climate sensitivity from models, and then comparing with the proxy records, to demonstrate that climate sensitivity has been consistent over the last 420 million years. Hence paleoclimate says that the more extreme claims about sensitivity (especially those claiming very low levels) must be wrong.
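For reference, the way a sensitivity number like that gets used (my gloss, not part of the talk) is through the logarithmic relationship between CO2 concentration and equilibrium warming:

```python
import math

# Equilibrium warming for a given CO2 change, assuming a climate sensitivity S
# per doubling of CO2. This ignores transient effects and other forcings --
# it's just the back-of-envelope use of the ~3 degrees-per-doubling figure quoted above.

def equilibrium_warming(c_new, c_old, sensitivity_per_doubling=3.0):
    return sensitivity_per_doubling * math.log2(c_new / c_old)

print(equilibrium_warming(560, 280))   # a doubling: ~3 degrees C by construction
print(equilibrium_warming(387, 280))   # roughly 2009 levels vs pre-industrial
```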

In contrast, if CO2 doesn’t warm, then we have to explain why the physicists are stupid, and then we still have no other explanation for the observations. If there is a problem, it is that occasionally the world seems a little more sensitive to CO2 than the models say. There are lots of possible fine-tuning knobs that might explain these – and lots of current research looking into it. Oh, and this is a global story not a regional one; there are lots of local effects on regional climate.

Note that most of these recent discoveries haven’t percolated to the IPCC yet – much of this emerged in the last few years since the last IPCC report was produced. The current science says that CO2 is the most important driver of climate throughout the earth’s history.

Some questions from the audience followed:

  • Q: If we burn all the available fossil fuel reserves, where do we get to? A: If you burn it all at once, there is some chance of getting above the cretaceous level, but lots of uncertainty, including how much reserves there really are. In a “burn it all” future, it’s likely to get really hot: +6 to 7°C.
  • Q: We know if we stop emitting, it will slow down global warming. But from geology, what do we know about removal? A: Anything that increases the weatherability of rocks. But it seems unlikely that we can make it go fast enough to make a difference, at an economic level. The key question is how much energy we would need to do it, e.g. to dig up shells from the ocean bed and allow them to carbonate. It’s almost certainly easier to keep it out of the air than to take it out of the air (big round of applause from the audience in response to this – clearly the 2,000+ scientists present know this is true, and appreciate it being stated so clearly).
  • Q: What about feedbacks? A: As we put up more CO2, the oceans take up about half of it. As the world gets warmer, the ability of the ocean to buffer it reduces. The biggest concern is probably changes in the arctic soils, and methane on the seafloor. As you go from decades to centuries, the sensitivity to CO2 goes up a little, because of these amplifying feedbacks.

Well that was it – a one hour summary of how we know that CO2 is implicated as the biggest driver in all climate change throughout the earth’s history. What I found fascinating about the talk was the way that Alley brought together multiple lines of evidence, and showed how our knowledge is built up from a variety of sources. Science really is fascinating when presented like this. BTW I should mention that Alley is author of The Two-Mile Time Machine, which I now have to go and read…

Finally, a disclaimer. I’m not an expert in any of this, by any stretch of the imagination. If I misunderstood any of Alley’s talk, please let me know in the comments.

Well, my intention to liveblog from interesting sessions is blown – the network connection in the meeting rooms is hopeless. One day, some conference will figure out how to provide reliable internet…

Yesterday I attended an interesting session in the afternoon on climate services. Much of the discussion was building on work done at the third World Climate Conference (WCC-3) in August, which set out to develop a framework for provision of climate services. These would play a role akin to local, regional and global weather forecasting services, but focussing on risk management and adaptation planning for the impacts of climate change. Most important is the emphasis on combining observation and monitoring services with research and modeling services (both of which already exist) with a new climate services information system (I assume this would be distributed across multiple agencies across the world) and system of user interfaces to deliver the information in forms needed for different audiences. Rasmus at RealClimate discusses some of the scientific challenges.

My concern in reading the outcomes of WCC-3 was that it’s all focussed on a one-way flow of information, with insufficient attention to understanding who the different users would be, and what they really need. I needn’t have worried – the AGU session demonstrated that there are plenty of people focussing on exactly this issue. I got the impression that there’s a massive international effort quietly putting in place the risk management and planning tools needed for us to deal with the impacts of a rapidly changing climate, but which is completely ignored by a media still obsessed with the “is it happening?” pseudo-debate. The extent of this planning for expected impacts would make a much more compelling media story, and one that matters, on a local scale, to everyone.

Some highlights from the session:

Mark Svoboda from the National Drought Mitigation Center at the University of Nebraska, talking about drought planning in the US. He pointed out that drought tends to get ignored compared to other kinds of natural disasters (tornados, floods, hurricanes), presumably because it doesn’t happen within a daily news cycle. However, drought dwarfs the damage costs in the US from all other kinds of natural disasters except hurricanes. One problem is that population growth has been highest in the regions most subject to drought, especially the southwest US. The NDMC monitoring program includes the only repository of drought impacts. Their US Drought Monitor has been very successful, but the next generation of tools needs better sources of data on droughts, so they are working on adding a drought reporter, doing science outreach, working with kids, etc. Even more important is improving the drought planning process, hence a series of workshops on drought management tools.

Tony Busalacchi from the Earth System Science Interdisciplinary Center at the University of Maryland. Through a series of workshops in the CIRUN project, they’ve identified the need for tools for forecasting, especially around risks such as sea level rise. Above all there’s a need for actionable information, but no service currently provides this. A climate information system is needed for policymakers, on scales of seasons to decades, providing information that can be tailored to regions, with the ability to explore “what-if” questions. Building this needs coupling of models not used together before, and the synthesis of new datasets.

Robert Webb from NOAA, in Boulder, on experimental climate information services to support risk management. The key to risk assessment is to understand that it spans multiple timescales. Users of such services do not distinguish between weather and climate – they need to know about extreme weather events, and they need to know how such risks change over time. Climate change matters because of the impacts. Presenting the basic science and predictions of temperature change is irrelevant to most people – it’s the impacts that matter (his key quote: “It’s the impacts, stupid!”). Examples: water – droughts and floods, changes in snowpack, river stream flow, fire outlooks, and planning issues (urban, agriculture, health). He’s been working with the Climate Change and Western Water Group (CCAWWG) to develop a strategy on water management. How to get people to plan and adapt? The key is to get people to think in terms of scenarios rather than deterministic forecasts.

Guy Brasseur from the German Climate Services Center, in Hamburg. The German adaptation strategy was developed by the German federal government, which appears to be way ahead of the US agencies in developing climate services. Guy emphasized the need for seamless prediction – a uniform ensemble system that builds from climate monitoring of the recent past and present, forward into the future, at different regional scales and timescales. Guy called for an Apollo-sized program to develop the infrastructure for this.

Kristen Averyt from the University of Colorado, talking about her “climate services machine” (I need to get hold of the image for this – it was very nice). She’s been running workshops for Colorado-specific services, with breakout sessions focussed on impacts and the utility of climate information. She presented some evaluations of the success of these workshops, including a climate literacy test they have developed. For example, at one workshop the attendees scored 63% correct answers at the beginning (and the wrong answers tended to cluster, indicating some important misperceptions). I need to get hold of this – it sounds like an interesting test. Kristen’s main point was that these workshops play an important role in reaching out to people of all ages, including kids, and getting them to understand how climate change will affect them.

Overall, the main message of this session was that while there have been lots of advances in our understanding of climate, these are still not being used for planning and decision-making.

First proper day of the AGU conference, and I managed to get to the (free!) breakfast for Canadian members, which was so well attended that the food ran out early. Do I read this as a great showing for Canadians at AGU, or just that we’re easily tempted with free food?

Anyway, on to the first of three poster sessions we’re involved in this week. This first poster was on TracSNAP, the tool that Ainsley and Sarah worked on over the summer:

Our tracSNAP poster for the AGU meeting. Click for fullsize.

The key idea in this project is that large teams of software developers find it hard to maintain an awareness of one another’s work, and cannot easily identify the appropriate experts for different sections of the software they are building. In our observations of how large climate models are built, we noticed it’s often hard to keep up to date with what changes other people are working on, and how those changes will affect things. TracSNAP builds on previous research that attempts to visualize the social network of a large software team (e.g. who talks to whom), and relate that to couplings between code modules that team members are working on. Information about the intra-team communication patterns (e.g. emails, chat sessions, bug reports, etc) can be extracted automatically from project repositories, as can information about dependencies in the code. TracSNAP extracts data automatically from the project repository to provide answers to questions such as “Who else recently worked on the module I am about to start editing?”, and “Who else should I talk to before starting a task?”. The tool extracts hidden connections in the software by examining modules that were checked into the repository together (even though they don’t necessarily refer to each other), and offers advice on how to approach key experts by identifying intermediaries in the social network. It’s still a very early prototype, but I think it has huge potential. Ainsley is continuing to work on evaluating it on some existing climate models, to check that we can pull out of the repositories the data we think we can.
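
To give a flavour of the “hidden connections” idea, here’s a minimal sketch of co-change mining – my own illustration, not TracSNAP’s actual code – assuming the commit history has already been extracted as a list of changesets, each with an author and the set of files touched (all names below are made up):

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical commit history: (author, files touched in the same changeset).
commits = [
    ("ainsley", {"ocean/mixing.f90", "ocean/tracers.f90"}),
    ("sarah",   {"ice/thermo.f90", "coupler/flux.f90"}),
    ("ainsley", {"ocean/mixing.f90", "coupler/flux.f90"}),
    ("sarah",   {"ocean/mixing.f90", "ocean/tracers.f90"}),
]

co_change = Counter()            # (file_a, file_b) -> times checked in together
experts = defaultdict(Counter)   # file -> who has touched it, and how often

for author, files in commits:
    for f in files:
        experts[f][author] += 1
    for a, b in combinations(sorted(files), 2):
        co_change[(a, b)] += 1   # a hidden coupling, even if a never imports b

def who_to_ask(filename):
    """Answer 'who else recently worked on the module I'm about to edit?'"""
    return [author for author, _ in experts[filename].most_common()]

print("Most coupled file pairs:", co_change.most_common(3))
print("Before editing ocean/mixing.f90, talk to:", who_to_ask("ocean/mixing.f90"))
```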

The poster session we were in, “IN11D. Management and Dissemination of Earth and Space Science Models” seemed a little disappointing as there were only three posters (a fourth poster presenter hadn’t made it to the meeting). But what we lacked in quantity, we made up in quality. Next to my poster was David Bailey‘s: “The CCSM4 Sea Ice Component and the Challenges and Rewards of Community Modeling”. I was intrigued by the second part of his title, so we got chatting about this. Supporting a broader community in climate modeling has a cost, and we talked about how university labs just cannot afford this overhead. However, it also comes with a number of benefits, particularly the existence of a group of people from different backgrounds who all take on some ownership of model development, and can come together to develop a consensus on how the model should evolve. With the CCSM, most of this happens in face to face meetings, particularly the twice-yearly user meetings. We also talked a little about the challenges of integrating the CICE sea ice model from Los Alamos with CCSM, especially given that CICE is also used in the Hadley model. Making it work in both models required some careful thinking about the interface, and hence more focus on modularity. David also mentioned people are starting to use the term kernelization as a label for the process of taking physics routines and packaging them so that they can be interchanged more easily.

Dennis Shea‘s poster, “Processing Community Model Output: An Approach to Community Accessibility” was also interesting. To tackle the problem of making output from the CCSM more accessible to the broader CCSM community, the decision was taken to standardize on netCDF for the data, and to develop and support a standard data analysis toolset, based on the NCAR Command Language. NCAR runs regular workshops on the use of these data formats and tools, as part of its broader community support efforts (and of course, this illustrates David’s point about universities not being able to afford to provide such support efforts).
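
As an aside, the attraction of standardizing on a self-describing format like netCDF is that the metadata travels with the data. Their toolset is based on NCL, but the same idea looks something like this in Python (my substitution, using the netCDF4 library, with a toy file and invented variable names and values):

```python
# Write a tiny self-describing netCDF file (a stand-in for real model output),
# then read it back knowing nothing but the file name. Everything here is
# invented for illustration.
import numpy as np
from netCDF4 import Dataset

with Dataset("toy_output.nc", "w") as nc:
    nc.createDimension("time", 12)
    tas = nc.createVariable("tas", "f4", ("time",))
    tas.units = "K"
    tas.long_name = "surface air temperature (toy data)"
    tas[:] = 288.0 + np.random.randn(12)

# A collaborator can now open the file and discover its contents:
with Dataset("toy_output.nc") as nc:
    for name, var in nc.variables.items():
        print(name, var.dimensions, getattr(var, "units", "?"))
```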

The missing poster also looked interesting: Charles Zender from UC Irvine, comparing climate modeling practices with open source software practices. Judging from his abstract, Charles makes many of the same observations we made in our CiSE paper, so I was looking forward to comparing notes with him. Next time, I guess.

Poster sessions at these meetings are both wonderful and frustrating. Wonderful because you can wander down aisles of posters and very quickly sample a large slice of research, and chat to the poster owners in a freeform format (which is usually much better than sitting through a talk). Frustrating because poster owners don’t stay near their posters very long (I certainly didn’t – too much to see and do!), which means you get to note an interesting piece of work, and then never manage to track down the author to chat (and if you’re like me, you also forget to write down contact details for posters you noticed). However, I did manage to make notes on two to follow up on:

  • Joe Galewsky caught my attention with a provocative title: “Integrating atmospheric and surface process models: Why software engineering is like having weasels rip your flesh”
  • I briefly caught Brendan Billingsley of NSIDC as he was taking his poster down. It caught my eye because it was a reflection on software reuse in the Searchlight tool.

This week I’ll be blogging from the American Geophysical Union (AGU) Fall Meeting, one of the biggest scientific meetings of the year for the climate science community (although the European EGU meeting, held in the spring, rivals it for size). About 16,000 geoscientists are expected to attend. The scope of the meeting is huge, taking in anything from space and planetary science, volcanology, seismology, all the way through to science education and geoscience informatics. Climate change crops up a lot, most notably in the sessions on atmospheric sciences and global environmental change, but also in sessions on the cryosphere, ocean sciences, biogeosciences, paleoclimatology, and hydrology. There’ll be plenty of other people blogging the meeting, a twitter feed, and a series of press conferences. The meeting clashes with COP15 in Copenhagen, but scientists see Copenhagen as a purely political event, and would much rather be at a scientific meeting like the AGU. To try to bridge the two, the AGU has organised a 24/7 climate scientist hotline aimed at journalists and other participants in Copenhagen, but more on this initiative a little later…

Today, the meeting kicked off with some pre-conference workshops, most notably, a workshop on “Re-starting the climate conversation“, aimed at exploring the sociological factors in media coverage and public understanding of climate science. I picked up lots of interesting thoughts around communication of climate science from the three speakers, but the discussion sessions were rather disappointing. Seems like everyone recognises a problem in the huge gulf between what climate scientists themselves say and do, versus how climate science is portrayed in the public discourse. But nobody (at least among the scientific community) has any constructive ideas for how to fix this problem.

The first speaker was Max Boykoff, from University of Colorado-Boulder, talking about “Mass Media and the Cultural Politics of Climate Change”. Boykoff summarized the trends in media coverage of climate change, particularly the growth of the web as a source of news. A recent Pew study shows that in 2008, the internet overtook newspapers as people’s preferred source of news, although television still dominates both. And even though climate change was one of the two controversies dominating the blogosphere in the last two weeks, it still only accounts for about 1.5% of news coverage in 2009.

Boykoff’s research focusses more on the content, and reveals a tendency in the media to conflate the many issues related to climate change into just one question: whether increasing CO2 warms the planet. In the scientific community, there is a strong convergence of agreement on this question, in contrast to say, the diversity of opinion on whether the Kyoto protocol was a success. Yet the media coverage focusses almost exclusively on the former, and diverges wildly from scientific opinion. He showed a recent clip from NBC news, which frames the whole question in terms of a debate over whether it’s happening or not, with a meteorologist, geologist and sociologist discussing this question in a panel format. Boykoff’s studies showed 53% of news stories (through 2002) diverge from the scientific consensus, and, even more dramatically, 70% of TV segments (through 2004) diverge from this consensus. In a more recent study, he showed that the ‘quality’ newspapers (in the US and UK) show no divergence, while the tabloid newspapers significantly diverge in all sources. Worse still, much of this tabloid coverage is not just on the fence, but is explicitly denialist, with a common framing to present the issue as a feud between strong personalities (e.g. Gore vs. Palin).

Some contextual factors help explain these trends, including a shrinking technical capacity (specialist training) in newspaper/TV; the tendency for extreme weather events to drive coverage, which leads to the obvious framing of “is it or isn’t it caused by climate change?”; and cultural values such as trust in scientists. Carvalho and Burgess present an interesting case study on the history of how the media and popular culture have shaped one another over issues such as climate change. The key point is that the idea of an information deficit is a myth – instead the media coverage is better explained as a complex cycle of cultural pressures. One of the biggest challenges for the media is how to cover a long, unfolding story within short news cycles. Which leads to an ebb and flow such as the following: in May 2008 the Observer ran a story entitled “Surging fatal shark attacks blamed on global warming“, although the content of the article is much more nuanced: some experts attribute it to global warming, others to increased human activities in the water. But then a subsequent story in the Guardian in Feb 2009 was “Sharks go hungry as tourists stay home“. Implication: the global warming problem is going away!

The second speaker was Matthew C. Nisbet, from the American University, Washington, perhaps best known for his blog, Framing Science. Nisbet began with a depressing summary of the downward trend in American concern over climate change, and the Pew study from Jan 2009 that showed global warming ranked last among 20 issues people regarded as top priorities for congress. The traditional assumption is that if the public is opposed, or does not accept the reality, the problem is one of ignorance, and hence scientific literacy is the antidote (and hence calls for more focus on formal science education and popular science outlets – “if we only had more Carl Sagans”). Along with this also goes the assumption that the science compels action in policy debates. However, Nisbet contends that when you oversimplify a policy debate as a matter of science, you create the incentive to distort the science, which is exactly what has happened over climate change.

This traditional view of a lack of science literacy has a number of problems. It doesn’t take into account the reality of audiences and how people make up their minds – essentially there is nothing unique about the public debate over climate change compared to other issues: people rely on information shortcuts – people tend to be “cognitive misers“. And it ignores the effects of media fragmentation: in 1985, most people (in the US) got their news from four main sources, the four main TV network news services. By contrast, in 2009, there are a huge array of sources of information, and because of our inability to take advantage of such a range, people have a tendency to rely on those that match their ideological preferences. On the internet, this problem of choice is greatly magnified.

Not surprisingly, Nisbet focussed mainly on the question of framing. People look for frames of reference that help them make sense of an issue, with little cognitive effort. Frames organise the central ideas on an issue, and make it relevant to the audience, so that it can be communicated by shorthand, such as catchphrases (“climategate”), cartoons, images, etc. Nisbet has a generalized typology of ways in which science issues get framed, and points out that you cannot avoid these frames in public discourse. E.g. Bill O’Reilly on Fox starts every show with “talking points”, which are the framing references for the likeminded audience; Many scientists who blog write with a clear liberal frame of reference, reflecting a tendency among scientists to identify themselves as more liberal than conservative.

The infamous Luntz memo contains some interesting framings: “the scientific debate remains open”, the “economic burden of environmental regulation”, and “the international fairness issue” – if countries like China and India aren’t playing along, the US shouldn’t make sacrifices. In response, many scientists and commentators (e.g. Gore) have tended to frame around the potential for catastrophe (e.g. “the climate crisis”; “Be worried“). This is all essentially a threat appeal. The problem is that if you give an audience a threat, but no information on how to counter it, they either become fatalist or ignore it, and it also opens the door to the counter-framing of calling people “alarmists”. This also plays into a wider narrative about the “liberal media” trying to take control.

Another framing is around the issue of public accountability and scientific evidence. For example, Mooney’s book “The Republican War on Science” itself became a framing device for liberals, which led to Obama’s “must restore science to its rightful place”. However, this framing can reinforce the signal that science is for democrats, not for republicans. Finally, “climategate” itself is a framing device that flips the public accountability frame to one of accountability of scientists themselves – questioning their motivations. This has also been successfully coupled to the media bias frame, allowing the claim that the liberal media is not covering the alternative view.

So how do we overcome this framing? Nisbet concluded with examples of framings that reach out to different kinds of audiences. For example: EO Wilson’s book “The creation“ frames it as a religious/moral duty, specifically as a letter to a southern baptist. This framing helps to engage with evangelical audiences. Friedman frames it as a matter of economic growth – we need a price on carbon to stimulate innovation in a second American industrial revolution. Gore’s “We” campaign has been rebranded as “Repower America“. And the US Congress no longer refers to a “cap and trade bill”, but the American Clean Energy and Security Act (ACES).

Nisbet thinks that a powerful new frame is to talk about climate change as a matter of public health. The strategy is to shift the perception away from issues of remote regions (e.g. ice caps and polar bears) to focus instead on the impact in urban areas, especially on minorities, the elderly and children. Frame it in terms of allergies, heat stress, etc. The lead author of the recent Lancet study, Anthony Costello, said that public health issues are under-reported, and need attention because they affect billions of people.

The recent Maibach and Leiserowitz study identifies six distinct audience groups, and challenges common assumptions about where public perceptions lie, especially the idea that the public is not concerned about climate change:

[Figure: the six audience segments identified by the Maibach and Leiserowitz study]

As an experiment, Nisbet and colleagues set up a tent on the National Mall in Washington, interviewed people who said they were from outside of DC, and categorized them in one of the six audiences. They then used this to identify a sample of each group to be invited to a focus group session, where they could test out the public health framing for climate change. Sentence by sentence analysis of their responses to a short essay on health issues and climate change proved very interesting, as there are specific sections, especially towards the end about policy options, where even the dismissives had positive responses. For example:

  • all six groups agreed that “good health is a great blessing”.
  • all six groups agreed to suggestions around making cities cleaner, easier to get around, etc.
  • 4 of the 6 groups found it helpful to learn about the health threats of climate change (which corresponds to a big majority of the American audience)
  • All 6 groups reacted negatively to suggestions that we should make changes in diet and food choices, such as eating more vegetables and fruit and cutting down on meat. Hence, this is not a good topic to lead on!

In the question-and-answer session after Nisbet’s talk, there was an interesting debate about the lack of discussion in the media about how the science was obtained, concentrating on the results as if they came from a black box. Nisbet pointed out that this might not be helpful, and cited the fact that the climategate emails surprised many people by showing how human scientists are (with egos and political goals) and how much uncertainty they express about specific data analysis questions, which has made the emails a very successful framing device for those promoting the message that the science is weak. However, several audience members contended that this just means we need to do a better job of getting people to think like scientists, and bring the broader society more into the scientific process. Others pointed out how hard it is to get journalists to write about the scientific method, so the idea of partnering with others (e.g. religious leaders, health professionals) makes sense if it helps to identify and motivate particular audiences. This still leaves open the question of how to communicate uncertainty. For example, in the public health framing, people will still want to know whether it affects, say, asthma or not. And as we’re still uncertain on this, it leads to the same problem as the shark attack story. So we still have to face the problem of how people understand (or not!) the scientific process.

The final speaker was Gwendolyn Blue, from the University of Calgary, Canada, talking about “Public Engagement with Climate Change”. Blue contends that major issues of science and technology require a participatory democracy that does not really exist yet (although it is starting to appear). A portion of the lay public is distrustful of scientific institutions, so we need more effective ways of engaging more diverse groups in the conversation.

Blue defines public engagement as “a diverse set of activities whereby non-experts become involved in agenda setting, decision-making, policy forming and knowledge production processes regarding science”. The aim is to overcome the traditional “one-way” transmission model of science communication, which tends to position lay audiences as passive, and hence obsesses over whether they are getting good or bad information. But public understanding of science is much more sophisticated than many people give credit for, particularly once you move beyond questions of basic facts to the ethical and social implications of scientific findings.

There are clearly a number of degrees of citizen participation, for example Arnstein’s ‘ladder’. Blue is particularly interested in the upper rungs – i.e. not just ‘informing’ (which movements such as cafe scientifique, and public lectures, try to do) but engagements that aim to empower and transform (e.g. citizen science, activism, protests, boycotts, buycotts). But she doesn’t think citizen participation in science is a virtue in its own right, as it can be difficult, frustrating, and can fail: success is highly context dependent.

Examples include: 350.org, which uses social networking media to bring people together. Lots of people creating images of the number 350 and uploading their efforts (but how many participants actually understand what the number 350 represents?); tcktcktck which collected 10 million signatures and delivered them to the COP15 negotiators. And the global day of action yesterday, which was one of the biggest demonstrations of public activism. However, this type of activism tends to be bound up with social identity politics.

Deliberative events aim to overcome this by bringing together people who don’t necessarily share the same background and assumptions. Blue described her experiences with one particular initiative, the World Wide Views on Global Warming events on September 26, 2009. The idea grew out of the Danish model of consensus politics. Randomly selected participants in each country were offered a free trip to a workshop (in Canada, it took place in Calgary), with some effort to select approximately 100 people representing the demographic makeup of each country. The aim was to discuss the policy context for COP15. There were (deliberately) no “experts” in the room, to remove the inequality of experts vs. lay audience. Instead, a background document was circulated in advance, along with a short video. Clear ground rules were set for good dialogue, with a trained facilitator for each table. Cultural activities were used too: e.g. the Canadian event included music and dance from across Canada, Inuit throat singers, and an opening prayer by a Blackfoot Elder.

The result was a fascinating attempt to build an engaged public conversation around climate change and the decision making process we face. A number of interesting themes emerged. For example, despite a very diverse set of participants, lots of common ground emerged, which surprised many participants, especially around the scale and urgency of the problem, and the overall goals of society. A lot of social learning took place – many participants knew very little about climate science at the outset. However, Blue did note that success for these events requires scientific literacy as well as civic literacy in the participants, along with reflexivity, humility and a willingness to learn. But it is part of a broader cultural shift towards understanding the potential and limitations of participatory democracy.

The results were reported in a policy report, but also, more interestingly, on a website that allows you to compare results from different countries. Much of the workshops were about public deliberation, which can be very unruly. But at the end this was distilled down to an opinion poll with simple multiple choice questions to communicate the results.

The discussion towards the end of the workshop focussed on how much researchers should be involved in outreach activities. It is not obvious who should be doing this work. There is no motivation for scientists to do it, and lots of negatives – we get into trouble, and our institutions don’t like it; it doesn’t do anything for your career in most cases. And several of the speakers at the workshop described strategies in which there doesn’t seem to be a role for climate scientists themselves. Instead, the work seems to point the need for a new kind of “outreach professional” who is both scientific expert and trained in outreach activities.

Which brings me back to that experiment I mentioned at the top of the post, on the AGU providing a 24 hour hotline for COP15 participants to talk directly with small groups of climate scientists. It turns out the idea has been a bit of a failure, and has been scaled back due to a lack of demand. Perhaps this has something to do with the narrow terms that were set for what kinds of questions people could ask of the scientists. The basic science is not what matters in Copenhagen. Which means the distance between Copenhagen and San Francisco this week is even greater than I thought it would be.

[Update: Michael Tobis has a much more critical account of this workshop]

As a follow-on from yesterday’s post on making climate software open source, I’d like to pick up on the oft-repeated slogan “Many eyeballs make all bugs shallow”. This is sometimes referred to as Linus’ Law (after Linus Torvalds, creator of Linux), although this phrase is actually attributed to Eric Raymond (Torvalds would prefer “Linus’s Law” to be something completely different). Judging from the number of times this slogan is repeated in the blogosphere, there must be lots of very credulous people out there. (Where are the real skeptics when you need them?)

Robert Glass tears this one apart as a myth in his book “Facts and Fallacies about Software Engineering“, on the basis of three points: it’s self-evidently not true (the depth of a bug has nothing to do with how many people are looking for it); there’s plenty of empirical evidence that the utility of adding additional reviewers to a review team tails off very quickly after around 3-4 reviewers; and finally there is no empirical evidence that open source software is less buggy than its alternatives.

More interestingly, companies like Coverity, who specialize in static analysis tools, love to run their tools over open source software and boast about the number of bugs they find (it shows off what their tools can do). For example, their 2009 study found 38,453 bugs in 60 million lines of source code (a bug density of about 0.64 defect/KLOC). Quite clearly, there are many types of bugs that you need automated tools to find, no matter how many eyeballs have looked at the code.

Part of the problem is that the “many eyeballs” part isn’t actually true anyway. In a 2005 study of the sourceforge community, Xu et al. found that participation in projects follows the power law well known in social network theory: a few open source projects have a very large number of participants, and a very large number have very few participants. Similarly, a very small number of open source developers participate in lots of projects; the majority participate in just one or two:

SourceForge Project and Developer Community Scale Free Degree Distributions (Figure 7d from Xu et al 2005)

For example, the data shown in these graphs include all developers and active users for about 160,000 sourceforge projects. Of these projects, 25% had only a single person involved (as either developer or user!), and a further 10% had only 2-3 people involved. Clearly, a significant number of open source projects never manage to build a community of any size.
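
To see why that kind of distribution produces so many tiny projects, here’s a toy sketch. The exponent and project count are made up rather than fitted to the Xu et al data, so the exact percentages won’t match theirs; only the shape of the result matters:

```python
import random

# Toy illustration of a heavy-tailed ("scale-free") distribution of project
# participation. The exponent is invented, not fitted to the Xu et al data.
random.seed(1)
NUM_PROJECTS = 160_000

sizes = [max(1, int(random.paretovariate(1.5))) for _ in range(NUM_PROJECTS)]

single = sum(1 for s in sizes if s == 1)
sizes.sort(reverse=True)
top_share = sum(sizes[:NUM_PROJECTS // 100]) / sum(sizes)

print(f"{single / NUM_PROJECTS:.0%} of projects end up with just one participant")
print(f"the top 1% of projects account for {top_share:.0%} of all participation")
print(f"largest project has {sizes[0]} participants; the median is {sizes[NUM_PROJECTS // 2]}")
```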

This is relevant to the climate science community because many of the tens of thousands of scientists actively pursuing research relevant to our understanding of climate change build software. If all of them release their software as open source, there’s no reason to expect a different distribution from the graphs above. So most of this software will never attract any participants outside the handful of scientists who wrote it, because there simply aren’t enough eyeballs or interest available. The kind of software described in the famous “Harry” files at the CRU is exactly of this nature – if it hadn’t been picked out in the stolen CRU emails, nobody other than “Harry” would ever take the time to look at it. And even if lots of people’s attention was drawn to this particular software (as it has been), there are still thousands of other scraps of similar software out there which would also remain single person projects like those on sourceforge. In contrast, a very small number of projects will attract hundreds of developers/users.

The thing is, this is exactly how the climate science community operates already. A small number of projects (like the big GCMs, listed here) already have a large number of developers and users – for example, CCSM and Hadley’s UM have hundreds of active developers, and a very mature review process. Meanwhile a very large number of custom data analysis tools are built by a single person for his/her own use. Declaring all of these projects to be open source will not magically bring “many eyeballs” to bear on them. And indeed, as Cameron Neylon argues, those that do will immediately have to protect themselves from a large number of clueless newbies by doing exactly what many successful open source projects do: the inner clique closes ranks and refuses to deal with outsiders, ignores questions on the mailing lists, etc. Isn’t that supposed to be the problem we were trying to solve?

The argument that making climate software open source will somehow magically make it higher quality is therefore specious. The big climate models already have many eyeballs, and the small data handling tools will never attract large numbers of eyeballs. So, if any of the people screaming about openness are truly interested in improving software quality, they’ll argue for something that is actually likely to make a difference.

Well, this is what it comes down to. Code reviews on national TV. Who would have thought it? And, by the standards of a Newsnight code review, the code in question doesn’t look so good. Well, it’s not surprising it doesn’t. It’s the work of one, untrained programmer, working in an academic environment, trying to reconstruct someone else’s data analysis. And given the way in which the CRU files were stolen, we can be pretty sure this is not a random sample of code from the CRU; it’s handpicked to be one of the worst examples.

Watch the clip from about 2:00. They compare the code with some NASA code, although we’re not told what exactly. Well, duh. If you compare the experimental code written by one scientist on his own, which has clearly not been through any code review, with code produced by NASA’s engineering processes, of course it looks messy. For any programmers reading this: how many of you can honestly say that you’d come out looking good if I trawled through your files, picked the worst piece of code lying around in there, and reviewed it on national TV? And the “software engineer” on the program says it’s “below the standards you would expect in any commercial software”. Well, I’ve seen a lot of commercial software. It’s a mix of good, bad, and ugly. If you’re deliberate with your sampling technique, you can find a lot worse out there.

Does any of this matter? Well, a number of things bug me about how this is being presented in the media and blogosphere:

  • The first, obviously, is the ridiculous conclusion that many people seem to be making that poor code quality in one, deliberately selected program file somehow invalidates all of climate science. As cdavid points out towards the end of this discussion, if you’re going to do that, then you pretty much have to throw out most results in every field of science over the past few decades for the same reason. Bad code is endemic in science.
  • The slightly more nuanced, but equally specious, conclusion that bugs in this code mean that research results at the CRU must be wrong. Eric Raymond picks out an example he calls blatant data-cooking, but is quite clearly fishing for results, because he ignores the fact that the correction he picks on is never used in the code, except in parts that are commented out. He’s quote mining for effect, and given Raymond’s political views, it’s not surprising. Just for fun, someone quote mined Raymond’s own code, and was horrified at what he found. Clearly we have to avoid all open source code immediately because of this…? The problem, of course, is that none of these quote miners have gone to the trouble to establish what this particular code is, why it was written, and what it was used for.
  • The widely repeated assertion that this just proves that scientific software must be made open source, so that a broader community of people can review it and improve it.

It’s this last point that bothers me most, because at first sight, it seems very reasonable. But actually, it’s a red herring. To understand why, we need to pick apart two different arguments:

  1. An argument that when a paper is published, all of the code and data on which it is based should be released so that other scientists (who have the appropriate background) can re-run it and validate the results. In fields with complex, messy datasets, this is exceedingly hard, but might be achievable with good tools. The complete toolset needed to do this does not exist today, so just calling for making the code open source is pointless. Much climate code is already open source, but that doesn’t mean anyone in another lab can repeat a run and check the results. The problems of reproducibility have very little to do with whether the code is open – the key problem is to capture the entire scientific workflow and all data provenance (I’ve sketched the bare bones of what I mean by that after this list). This is very much an active line of research, and we have a long way to go. In the absence of this, we rely on other scientists testing the results with other methods, rather than repeating the same tests – which is the way it’s done in most branches of science.
  2. An argument that there is a big community of open source programmers out there who could help. This is based on a fundamental misconception about why open source software development works. It matters how the community is organised, and how contributions to the code are controlled by a small group of experts. It matters that it works as a meritocracy, where programmers need to prove their ability before they are accepted into the inner developer group. And most of all, it matters that the developers are the domain experts. For example, the developers who built the Linux kernel are world-class experts on operating systems and computer architecture. Quite often they don’t realize just how high their level of expertise is, because they hang out with others who also have the same level of expertise. Likewise, it takes years of training to understand the dynamics of atmospheric physics in order to be able to contribute to the development of a climate simulation model. There is not a big pool of people with the appropriate expertise to contribute to open source climate model development, and nor is there ever likely to be, unless we expand our PhD programs in climatology dramatically (I’m sure the nay-sayers would like that!).
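
To give a rough idea of what I mean by capturing provenance, here’s a minimal sketch (entirely my own, with made-up file names and parameters) of recording just enough about an analysis run to make it checkable later. Real workflow and provenance tools need to capture far more than this, which is exactly why they don’t fully exist yet:

```python
# Minimal provenance record for a single analysis run: hash the inputs, record
# the parameters and environment, and save the record alongside the output.
# File names and parameters below are invented for illustration.
import hashlib, json, platform, subprocess, sys
from datetime import datetime, timezone

def sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def git_commit():
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True, stderr=subprocess.DEVNULL
        ).strip()
    except Exception:
        return "unknown"

def record_provenance(inputs, params, output_path):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": {p: sha256(p) for p in inputs},     # exactly which input files were used
        "parameters": params,                         # the knobs chosen for this run
        "code_version": git_commit(),                 # which version of the analysis code
        "python": sys.version,
        "platform": platform.platform(),
    }
    with open(output_path + ".provenance.json", "w") as f:
        json.dump(record, f, indent=2)

# Hypothetical usage:
# record_provenance(["station_temps.csv"], {"baseline": "1961-1990"}, "gridded_anomalies.nc")
```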

We do know that most of the heavy duty climate models are built at large government research centres, rather than at universities. Dave Randall explains why this is: the operational overhead of developing, testing and maintaining a Global Climate Model is far too high for university-based researchers. The universities use (parts of) the models, and do further data analysis on both observational data and outputs from the big models. Much of this is the work of individual PhD students or postdocs. Which means that the argument that all code written at all stages of climate research must meet some gold standard of code quality is about as sensible as saying no programmer should ever be allowed to throw together a script to test out whether some idea works. Of course bad code will get written in a hurry. What matters is that as a particular line of research matures, the coding practices associated with it should mature too. And we have plenty of evidence that this is true of climate science: the software practices used at the Hadley Centre for their climate models are better than most commercial software practices. Furthermore, they manage to produce code that appears to be less buggy than just about any other code anywhere (although we’re still trying to validate this result, and understand what it means).

None of this excuses bad code written by scientists. But the sensible response to this problem is to figure out how to train scientists to be better programmers, rather than argue that some community of programmers other than scientists can take on the job instead. The idea of open source climate software is great, but it won’t magically make the code better.

Justyna sent me a pointer to another group of people exploring an interesting challenge for computing and software technology: The Crisis Mappers Net. I think I can characterize this as another form of collective intelligence, harnessed to mobile networks and visual analytics, to provide rapid response to humanitarian emergencies. And of course, after listening to George Monbiot in the debate last night, I’m convinced that over the coming decades, the crises to be tackled will increasingly be climate related (forest fires, floods, droughts, extreme weather events, etc).

This evening I’m attending a debate on climate change in Toronto, at the Munk Centre. George Monbiot (UK journalist) and Elizabeth May (leader of the Canadian Green party) are debating Bjorn Lomborg (Danish economist) and Nigel Lawson (ex UK finance minister) on the resolution “Be it resolved climate change is mankind’s defining crisis and demands a commensurate response“. I’m planning to liveblog all evening. Feel free to use the comments thread to play along at home. Update: I’ve added a few more links, tidied up the writing, and put my meta-comments in square brackets. Oh, and the result of the vote is now at the end.

Monbiot has long been a critic of the debate format for discussing climate change, because it allows denialists (who only have to sow doubt) to engage in the gish gallop, which forces anyone who cares about the truth to engage in a hopeless game of whack-a-mole. There was an interesting story over the summer on how Ian Plimer (Australia’s most famous denialist) challenged Monbiot to a debate. Monbiot insisted on written answers to some questions about Plimer’s book as a precondition to the debate, to ensure the debate would be grounded. Plimer managed to ignore the request, and then claim Monbiot had chickened out. Anyway, Monbiot has now decided to come to Canada and break his no fly rule, because he now sees Canada as the biggest stumbling block to international progress on climate change.

Lomborg, of course, got famous as the author of the Skeptical Environmentalist. His position is that climate change is real, but much less of a problem than many other pressing issues, particularly third world development. He therefore opposes any substantial action on climate change.

May is leader of the Canadian Green Party, which regularly polls 5-6% of the popular vote in federal elections, but has never had an MP elected in Canada’s first past the post system. She is also co-author of Global Warming for Dummies.

Lawson is a washed up UK Tory politician. He was once chancellor of the exchequer (=finance minister) under Margaret Thatcher (and energy minister prior to that), where he was responsible for the “Lawson boom” in the late 1980’s, which, being completely unsustainable, led to an economic crash in the UK. Lawson resigned in disgrace, and Thatcher was later forced out of office by her backbenchers. [personal note: I was in debt for many years as a result of this, due to money we lost on our apartment, bought at the peak of the boom. I’m still sore. Can you tell?]

I think they’re about to start. To make this easier, and to attempt to diagnose any attempt at the gish gallop, I’ll use the numbers from Skeptical Science whenever I hear a long debunked denialist talking point. By the way, there’s a live feed if you’re interested.

First up. Peter Munk is introducing the event. He’s pointing out that the four debaters are the “rock stars” of climate change, and they have travelled from all over the world to the “little town” of Toronto. [Dunno about that. There are no scientists among them. Surely the science matters here?]

Oh cool, I just discovered Dave Roberts is liveblogging too.

At the beginning, 61% of the audience of 1100 people support the proposition, but 79% of the audience said they could potentially change their mind over the course of the debate. Seven minutes each for opening statements.

First speaker is Nigel Lawson. He agrees it’s an important issue, and is seldom properly debated. He claims it’s a religion and that people like Gore will not debate, and will not tolerate dissent. He’s separated the issue from environmentalism, and framed it as a policy question. He claims that most climate scientists don’t even support the proposition. He cites a survey in which just 8% of scientists said that global warming was the most important issue facing humanity [is this an attempt to invoke SS3? Maybe not – see comments]. [Oh SS8!]. And he’s called for an enquiry into the CRU affair. Okay, now he’s picking apart the IPCC report. Now he’s trying to claim that economically, global warming doesn’t matter, even at the upper end of the IPCC’s temperature anomaly forecast. And now he’s onto the Lomborg argument that fastest possible global economic growth is needed to lift the third world out of poverty, which must be based on the cheapest form of energy [by which he presumably means the dirtiest]. And he’s also arguing that mankind will always adapt to changing climate.

Okay, he’s run out of time. Summary: he thinks the proposition is scientifically unfounded and morally wrong.

Next up: Elizabeth May. The clock ran over on Lawson’s time, and the moderator credited the extra time to May, so she kicked off with a good joke about an innovative use of cap-and-trade. She is grieved that in the year 2009 we’re still asking whether we should act, and whether this is the defining threat. She says we should have been talking tonight about how to reach the targets that have been set for us by the scientific community, not whether we should do it [good point, except that the proposition is about “mankind’s defining crisis”, not whether we should tackle climate change]. She’s covering some of the history, including the 1988 Toronto conference on climate change, and its conclusion that the threat of climate change was second only to global nuclear war. And now a dig at Lawson, who served under prime minister Margaret Thatcher, who fully understood 19 years ago that the science was clear. We know we have changed the chemistry of the atmosphere. This year there is 30% more CO2 in the atmosphere than at any time in the last few million years [this is great – she’s summarizing the scientific evidence!]. She’s pointing out the CRU emails are irrelevant: there was no dishonesty, just decent scientists being harassed. And anyway, their work is only one small strand of the science. Quick summary of the indicators: melting glaciers, melting polar ice, sea level rise 80% faster than the last IPCC projection. Since the Kyoto protocol, political will has evaporated. Now we’ve run down the clock, and there’s very little time to act. [big applause! The audience likes her].

Next up: Bjorn Lomborg. Human nature is a funny thing – we’re not able to take anything seriously unless it’s hyped up to be the worst thing ever. He claims the “defining crisis” framing is so completely over the top that it only provokes extremist positions in response, so he thinks polarization is not helpful. He’s listing the numbers of people living without food, shelter, water, and medicines to cure diseases, and arguing that global warming can’t compare to these pressing issues of today. Oh, he’s an eloquent speaker. He thinks we lack the political will because we’re barking up the wrong tree. [False dichotomy about to appear…] We cannot focus on third world poverty if we focus on climate change. He thinks the cure for climate change is much more expensive than the problem, and that’s why the political will is missing. He thinks solar panels are not going to matter today, because they won’t make a difference until they are cheap enough that everyone will just put them up anyway. Hence we need lots more research and development instead, not urgent policy changes. So we need to stop talking about “cuts, cuts, cuts” in emissions, and find a different approach. So he says global warming is definitely a problem, but it’s not the most important thing, and while we will have to solve it this century, we should be smarter about how we fix it. And he summarizes by saying there are many other challenges that are just as important.

Monbiot: Hidden in the statement is a question: how lucky do you feel? Lawson and Lomborg obviously feel lucky, because they don’t think we should prepare for the worst case, and what they are advocating does not even address the most optimistic scenario of the IPCC. He’s making fun of Lawson, who he says has single handedly rumbled those scientists and caught them at it: whereas Lawson says warming has stopped, the scientists instead show that 8 out of the last 10 years are the warmest in recorded history. Now he’s showing a blank sheet of paper, saying it’s the sum total of Lawson’s research on the science! And he’s demolishing the economic arguments of Lawson and Lomborg, countering with the extensive research of the Stern report, which found that the cost of fixing the problem amounted to 1% of GDP, while the costs of inaction amounted to 5% to 20% of GDP [pity he didn’t use my favourite quote from Stern: climate change is the greatest and widest-ranging market failure ever]. So who do you believe: Stern’s extensive research or Lawson’s belief in luck? And he’s attacking the argument that we can adapt, especially in the poorer parts of the world – he’s pointing out that these people suffer the most from the effects (he’s familiar with the impact of climate change droughts in the horn of Africa, where he worked for a while). The best adaptation technology there is the AK47: when the drought hits, the killing begins. [Bloody hell, he’s good at this stuff]. Now he’s pointing out Lomborg’s false dichotomy – money for fixing climate change doesn’t have to come out of foreign aid budgets. To the question of whether we should invest in fixing climate change or in development, the answer is ‘yes’ [laughter]. So this is not a time for cheap political shots [much laughter, as he acknowledges that’s what he’s been doing], then says “but we can do that because we’re on the side of the angels!”. [Hmmm. Not sure about that line – it’s a bit arrogant].

Okay, now the rebuttals [although I think Monbiot’s opening statement was already a great rebuttal].

Lawson first: he disagrees with everything the other side said. He says Monbiot is incapable of distinguishing a level from a trend. If a population grew and then stopped growing you could still say the population is at the highest level it has ever been. [and so the inference is that we’re currently at the peak of a warming trend. That’s a really stupid thing for an economist to argue.] He’s trying to claim that most scientists admit there has been no warming and (oh, surely not) he’s using the CRU emails to back this up – they are embarrassed they can’t explain the lack of warming. [pity Lawson doesn’t know that the quote is about something quite different!]. He’s trying to discredit Stern now by citing other economists who disagree. He’s claiming Stern was asked to write a report to support existing government policies (and isn’t even peer-reviewed).

May up next: In terms of years on record, she’s pointing out that year-on-year comparisons are not relevant, and scientists know this. We’re dealing with very large systems. Oh, and she’s pointed out the problem with ocean acidification, which she says Lomborg and Lawson ignore. She’s looked at the CRU emails, and she’s read them all. She’s pointing out that, like Watergate, what was stolen was irrelevant, what matters here is who stole them and why. She’s citing the differences between the Hadley data and NASA data, and pointing out that the scientists are speculating about the differences, and that they are due to lower coverage of the arctic in the Hadley data set. She’s quoting from specific emails with references, and she’s checked with the IPCC scientists (including U of T’s own Dick Peltier) and they have no doubt about the trend decade-on-decade. [Nice to see local researchers getting a mention for the local U of T audience – it’s a great stump speech tactic]

Monbiot’s up again, and the moderator has asked him to address the issues about the Stern report. Monbiot points out that having accused the scientists of fraud, Lawson is now implying that the UK government was trying to commit suicide, if it’s true (as Lawson asserts) they had demanded the results Stern offered. Far from confirming the government’s position, it put the wind up the government. And he’s trying to drive a knife between L and L, by pointing out they have different positions on whether there has been warming this century.

Lomborg: Claims that the Stern report is an extremist view among economists. And that the UK government had approached two other economists before Stern but didn’t get the answer they wanted. And he’s trying to claim that because Stern’s work was a review of the science rather than original research, that makes it less credible [huh?? That’s got to be a candidate for stupidest claim of the evening]. So now he’s saying that on the one hand Monbiot would have it that thousands of scientists support the IPCC reports, but ignores the fact that thousands of climate economists disagree with Stern’s numbers.

Now the moderator is back to Lawson, and asking about the insurance issue: doesn’t it make sense to insure ourselves against the worst case scenario? Lawson says this is not really like insurance, because it’s not compensation we’d be after [okay, good point]. He says it’s like proposing to spend more money on fireproofing the house than the house is worth [wait, what?? He thinks the world is worth less than the cost of mitigating climate change??]. Clearly Lawson thinks the cost-benefit trade-off isn’t worth it.

May: a lot of people in the developing world are extremely concerned about the effects of climate change. Oh, great dig at Bjorn: she’s pointing out that Bjorn only argues against climate change action, but never argues in favour of development spending in the third world, and therefore is a huge hypocrite [yes, she called him that to his face]. And the climate crisis is making AIDS worse in Africa every day. Lomborg is trying to butt in, but the moderator is calling for civility – he’s given them a time-out!!! May isn’t accepting it! [Well, that was exciting!].

Monbiot: we spend very little on foreign aid, and he would like to see us spend more. And he points out that climate change makes things worse, because drought causes men to leave the land and move to the cities, where they meet more prostitutes, and then bring AIDS back to their families (this is according to Oxfam). Just to maintain global energy supplies (from fossil fuels) between now and 2030, we need to spend US$25 trillion, and the transfer to the oil-rich nations in the process will be $30 trillion. So it isn’t a case of whether or not we spend money on fighting climate change; it’s a question of what investments we will make in which forms of energy in the future. And he’s pointed out that peak oil might mean we simply cannot carry on depending on fossil fuels anyway.

Lawson again. The moderator asked him to comment on whether there are beneficial effects of investing in alternative energy – he briefly admits there might be, but then ignores the question. He’s trying to debunk peak oil by pointing out that when he was energy secretary in the ’80s, experts told him we only had 40 years of oil supplies left, and they still say that today, and they always say that. And he thinks global agreement will never happen anyway, because China will never agree to move to more expensive energy sources, and is busy buying up oil supplies from all the surrounding countries.

The moderator has cut off discussion of peak oil, and wants to talk about what the tipping point is for CO2 concentrations in the atmosphere: 450ppm? 500ppm? Lomborg first. He accepts we’re going to see a continued rise in concentrations. He’s pointing out the difference between intensity cuts and real cuts [sure, but where is this going? Oh, I see…] He claims that because China realised they would get the 40% intensity cuts anyway just through efficiency gains, they could claim to have set aggressive targets and will meet them by doing nothing different, but everyone then applauds them for making progress on emissions [I’m still not sure of the point – I don’t remember anyone heaping praise on China for emissions progress, given how many new coal-fired power stations they are building]. He’s now saying that if Monbiot says we should also spend on development then he’s moving over to their side, because climate change is no longer the defining crisis, it’s just one of many. He’s pointing out that fighting climate change is a poor way to fight AIDS, compared to, say, handing out condoms.

May: points out that Lomborg puts forward strawman arguments and false choices. The people arguing for action on climate change *are* the people calling for more efficient technology, new alternative energy sources, etc., whereas Lomborg presents this as a false dichotomy. She’s pointing out that many of the actions actually have negative cost, especially changes that come from efficiency gains. In Canada, we waste more energy than we use. She says the problem with debating Lomborg is that he quotes some economists and then ignores what else they say (and she’s waving Lomborg’s book around and quoting directly out of it).

Monbiot again, and the moderator is asking about potential benefits from rising temperatures, e.g. for Canada. Monbiot says that in the IPCC report, beyond 3°C of warming, we have a “net decrease in global food production”. Behind these innocent-sounding words in the IPCC report is a frightening problem. 800 million people already go hungry. If there’s a net decrease in food production, it’s saying we are moving to a structural famine situation, which makes all the other issues look like sideshows in the circus of human suffering [Nice point!]. So, let’s not make false choices: we need to deal with all these issues, but if we don’t tackle climate change, it makes all other development issues far worse. In Africa, 2°C of warming is catastrophic, and these are people who are not responsible in any way for climate change. The cost for them isn’t in dollars, it’s in human lives, and you can’t put a price on that. You can’t put that in your cost-benefit analysis. Human life must come first.

Lomborg: we all care about other species on the planet and about human life. But he’s arguing that we can save species more effectively by making more countries rich so they don’t have to cut down their forests [I’m hopping up and down at the stupidity of this! Why does he think the Amazon rainforest is disappearing?!], rather than by fighting climate change. Okay, now he’s arguing that cutting emissions is futile because it will make very little difference to the warming that we experience, so it’s better not to do it, and go for fast economic growth instead. He’s claiming that economic development is much more effective than emissions reduction [again with these false dichotomies!!]. He’s claiming that each dollar spent on climate change mitigation would save more lives if spent directly on development.

Okay, now the moderator is going to the audience for questions. Or apparently not: Lawson wants to say something. The great killer is poverty: while economic aid helps a little bit, what really helps is economic development. He’s arguing that forcing people to rely on more expensive energy slows down development. Now he’s arguing with Monbiot’s point about a net reduction in food production after a 3°C rise. He’s saying that food production will rise up to 3°C, and after that will still be higher than today, but will not rise further [This is utter bollocks. He’s misunderstood the summary for policymakers, and failed to look at the graphs on page 286 of AR4 WG2]. He says the IPCC also says, on the topic of health, that the only health outcome the IPCC regards as virtually certain is the reduction in deaths from cold exposure [Oh, stupid, stupid, stupid. He’s claiming that the certainty factors are more important than the number of different types of impact. How does he think he can get away with this crap?].

Monbiot again, and the moderator is asking what’s the best way to lift people out of poverty. Monbiot points out that in Africa it’s much cheaper to build solar panels than to build an energy infrastructure based on bringing in oil. You can help people to escape from poverty without having to mine fossil fuels, and thereby threaten the very lives we’re trying to protect. And now he’s citing the actual table in the IPCC report to prove Lawson wrong. He’s pointing out that to an economist, everything is flexible: if you want more food, you just change the price signals. But if the rains stop, you can’t get more food just by changing the price signals, because nature doesn’t pay any attention to the economy. E.g. a recent Hadley study showed that 2.1 million extra people will be subjected to water stress at a 2°C rise, and these people can’t be magicked away by fiddling with a spreadsheet. Climate change isn’t about the kinds of choices that L&L are suggesting.

And the moderator is inviting May to add any last comments before the wrap-up. She says the problem with this discussion is that we haven’t established the context for why action is so urgent. The climate crisis is putting in place some fundamental new processes (in earth systems), and the question is when we can stabilize carbon concentrations so that the temperature rise stops, giving us a chance to adapt (and she thinks adaptation is just as important). Only one of the issues we face on the planet today moves in an accelerating fashion, unleashing positive feedback effects – e.g. methane released from the melting permafrost, the impact on pine forests of increasing insect activity, which releases more carbon as the trees decay, and the decreased albedo when the polar ice melts. Good point: she points to the work of Stephen Lewis, who has done far more than Lomborg to address poverty, and he agrees that climate change is an urgent issue.

Now, final wrap up, 4 minutes each, opposite order to the opening remarks:

Monbiot: He’s concerned about climate change because of his experience in Kenya. In 1992, when he was there, they were suffering their worst drought to date. They had run out of basic resources, and the only thing they could do was raid neighbouring tribes for resources. Monbiot was supposed to visit a cattle camp, but collapsed with malaria and was taken off to hospital, and it was the luckiest thing in his life, because when he finally made it to visit the place a few weeks later, the cattle camp he was supposed to have visited had been totally destroyed – all that was left of the 93 people who lived there were their skulls – shot in the night by raiders who were desperate because of the drought, which was almost certainly due to climate change. This is what it’s really about – not spreadsheets and figures, but life and death. This is what switched Monbiot on to climate change. All our work fighting for social justice and fighting poverty will have been in vain if we don’t stop climate change. All the development agencies – Oxfam, etc. – who are on the front line of this are telling us that climate change is mankind’s defining crisis.

Lomborg: Nobody doubts that everyone here has their heart in the right place. However, he’s arguing that it’s not clear the people Monbiot describes are suffering because of global warming. Rather than reducing drought by some small percentage by the end of the century, we should make sure they get development now. He’s arguing against Monbiot’s water stress numbers. He’s claiming that “George and Elizabeth” have moved over to his side (they’re violently shaking their heads!). He’s claiming that when Elizabeth supports investment in clean energy, she’s come over to his side! His core argument is that the best we can do is postpone global warming by six hours at the end of the century [This is truly a bizarre claim. Where does he get this from?]. So how do we want to be remembered by our kids: for spending trillions of dollars on something that was not effective, or for working on economic development now?

May: We’ve seen lots of theatre this evening, but the issues are serious. She says Lomborg plays with numbers and figures in a way she finds deplorable. The scientists have solid science that compelled people like Brian Mulroney and Margaret Thatcher to call for action. And somehow we’ve lost that momentum. She’s pointing out the flaw in Lomborg’s argument about water – if the average amount of water is the same, that’s no good if it’s an average over periods of drought and deluge. She’s raised ocean acidification again: how will we feed the world’s people if we kill off life in the ocean? She’s talking about the GRACE project (Dick Peltier gets a mention again) monitoring the West Antarctic ice sheet, and how it is melting now. If it melts, we get a nine metre sea level rise, and no economist can calculate the cost of that. And she’s giving a nice extended analogy about how if the theatre really is on fire, you don’t listen to people trying to reassure everyone and tell them to stay in their seats.

Lawson: Why aren’t scientists pleased there hasn’t been warming over the last few years? [SS4 again. How does he think he’ll get away with this?] They’re upset about it rather than being pleased! [CRU misinterpretation again]. Again, on the water issue – if you get cycles of drought and deluge, you capture the water, and solve the real problem rather than the climate change problem [Oh, this is just stupid. You patch the effects rather than tackling the cause??]. He’s now saying that May and Monbiot have the best rhetoric, but there’s a gap between politicians’ rhetoric and the reality, and in all his career he’s never seen such a gap between politicians’ rhetoric and what they are doing (as on the topic of climate change). And he claims it is because the cost is so great there’s no way they can go along with what the rhetoric says [Oh, surely this is an own goal? The gap is so big exactly because this is a ‘defining’ crisis!]. He doesn’t believe in rhetoric, he believes in reason [LOL], working out what it is sensible to do.

Moderator: it’s one thing to give a set speech, and quite another to come onto a stage and confront one another’s views in this type of forum. He’s calling for a vote from the audience. Pre-debate, 61% supported the motion. They will collect the results on the way out and announce them shortly after 9pm tonight. And now he’s invited the audience to move to the reception. Okay, I guess that’s it for now.

Update: The results show that some people were swayed against the proposition: still a majority in favour, but now down to 56% after the debate, with 1050 votes cast.
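As a back-of-envelope aside (my own arithmetic, not anything from the organizers), if we assume roughly the same ~1050 people voted both before and after – which isn’t stated – the five-point swing works out to around 50 audience members changing their minds:

```python
# Rough estimate of how many audience members switched sides, assuming the same
# ~1050 people voted in both the pre- and post-debate polls (an assumption on my
# part; the pre-debate turnout wasn't published).
votes_cast = 1050
support_before = 0.61   # pre-debate support for the motion
support_after = 0.56    # post-debate support for the motion

swing = round((support_before - support_after) * votes_cast)
print(f"Roughly {swing} voters moved away from the motion "
      f"({support_before:.0%} -> {support_after:.0%} of {votes_cast} votes).")
```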

Okay, time for some quick reflections. Liveblogging debates is much harder than liveblogging scientific talks – no powerpoints, and they go much, much faster. I’m typing so fast I can’t reflect, but at least it means I’m focussing on what they’re saying rather than drifting off on tangential thoughts…

I think the framing of the debate was all wrong in hindsight. The proposition that it’s “mankind’s defining crisis” allows Lomborg to say that there’s all this other development stuff that’s important too (although May managed to call him a hypocrite on that, rightly so), and then get Monbiot and May to say that of course they support development spending in the poorer parts of the world as well, which then lets Lomborg come back with the rejoinder that the proposition must be wrong because even Monbiot and May agree we have many different major problems to solve. Of course, this is all a rhetorical trick, which would allow him to claim he won the debate – he even tried twice to claim M & M had moved over to his side. Meanwhile in the real world all these rhetorical tricks make no difference because the science hasn’t changed, and the climate change problem still has the capacity to spiral out of control and, as Monbiot points out, swamp all our other problems. And there’s Lawson at the end claiming he doesn’t believe in rhetoric, he believes in reason, all the while misquoting and misrepresenting the science. I actually think Lawson was an embarrassment, while Lomborg was pretty effective – I can see why lots of people who don’t know the science that well are taken in by his arguments.

Ultimately I’m disappointed there was so little science. May did a great job summarizing a lot of the science issues, but everyone else just ignored them. I doubt this debate changed anyone’s mind. And the conclusion: a majority of the audience agreed with the proposition that climate change is mankind’s defining crisis – i.e. not just that it’s important and we need action, but that the whole issue really is a massive game-changer.