The American Geophysical Union’s Joint Assembly is in Toronto this week. It’s a little slim on climate science content compared to the EGU meeting, but I’m taking in a few sessions as it’s local and convenient. Yesterday I managed to visit some of the climate science posters. I also caught the last talk of the session on connecting space and planetary science, and learned that the solar cycles have a significant temperature impact on the upper atmosphere but no obvious effect on the lower atmosphere, though more research is needed to understand the impact on climate simulations. (Heather Andres’ poster has some more detail on this.)

This morning, I attended the session on Regional Scale Climate Change. I’m learning that understanding the relationship between temperature change and increased tropical storm activity is complicated, because tropical storms seem to react to complex patterns of temperature change, rather than just the temperature itself. I’m also learning that you can use statistical downscaling from the climate models to get finer-grained regional simulations of the changes in rainfall, e.g. over the US, leading to predictions of increased precipitation over much of the US in the winters and decreased precipitation in the summers. However, you have to be careful, because the models don’t capture seasonal variability well in some parts of the continent. A particular challenge for regional climate predictions is that some places (e.g. Caribbean islands) are just too small to show up in the grids used in General Circulation Models (GCMs), which means we need more work on Regional Models to get the necessary resolution.
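As an aside, here’s a minimal sketch of what the simplest kind of statistical downscaling looks like: fit a transfer function between a coarse GCM grid cell and local observations over a historical period, then apply it to the model’s future output. The arrays below are synthetic placeholders (not anything from the talks), and real downscaling methods are considerably more sophisticated.

```python
# A minimal sketch of regression-based statistical downscaling.
# All data here are hypothetical stand-ins, not output from the models discussed.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 30 years of monthly precipitation (mm)
gcm_hist = rng.gamma(2.0, 40.0, size=360)                 # coarse GCM grid cell
station_hist = 0.8 * gcm_hist + rng.normal(0, 15, 360)    # co-located station record

# Fit a simple linear transfer function on the historical period
slope, intercept = np.polyfit(gcm_hist, station_hist, deg=1)

# Apply it to a future GCM projection to get a "downscaled" local estimate
gcm_future = rng.gamma(2.0, 45.0, size=360)
station_future_est = slope * gcm_future + intercept

print(f"transfer function: local ~ {slope:.2f} * GCM + {intercept:.1f}")
print(f"projected mean local precip: {station_future_est.mean():.1f} mm/month")
```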

The final talk is Noah Diffenbaugh’s, on an ensemble approach to regional climate forecasts. He’s using the IPCC’s A1B scenario (but notes that in the last few years, emissions have exceeded those for this scenario). The model is nested – a high-resolution regional model (25km) is nested within a GCM (CCSM3, at T85 resolution), but the information flows only in one direction, from the GCM to the RCM. As far as I can tell, the reason it’s one-way is that the GCM run is pre-computed: it is taken by averaging 5 existing runs of the CCSM3 model from the IPCC AR4 dataset, which are used to generate 6-hourly 3D atmosphere fields to drive the regional model. The runs show that by 2030-2039, we should expect 6-8 heat stress events per decade across the whole of the south-west US (where a heat stress event is the kind of thing that should only hit once per decade). Interestingly, the warming is greater in the south-eastern US, but because the south-western states are already closer to the threshold temperature for heat stress events, they get more heatwaves. Noah also showed some interesting validation images, to demonstrate that the regional model reproduced 20th Century temperatures over the US much better than the GCM does.
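This obviously isn’t Noah’s method, but out of curiosity, here’s a rough sketch of how a count like that might be computed: pick a threshold that is reached only once in a baseline decade, then count exceedances in a projected decade. Every number below is invented purely for illustration.

```python
# A toy illustration of counting "once-per-decade" heat stress exceedances.
# Temperatures and the warming offset are made up, not from the talk.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily max temperatures (deg C) for two decades (3650 days each)
baseline = rng.normal(30, 5, 3650)   # e.g. a 1990s baseline decade
future = rng.normal(32, 5, 3650)     # e.g. 2030-2039, roughly 2 deg C warmer

# Threshold: the hottest day of the baseline decade (a once-per-decade event)
threshold = baseline.max()

events_per_decade = int((future > threshold).sum())
print(f"threshold: {threshold:.1f} C, exceedances in the future decade: {events_per_decade}")
```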

Noah also talked a little about the role of the 2°C threshold used in climate negotiations, particularly at the Copenhagen meeting. The politicians don’t like that the climate scientists are expressing uncertainty about the 2°C threshold. But there has to be uncertainty, because the models show that even below 2 degrees, there are some serious regional impacts, in this case on the US. His take-home message is that we need to seriously question greenhouse gas mitigation targets. One of the questioners pointed out that there is also some confusion over what baseline the 2°C is supposed to be measured against (e.g. whether it is above pre-industrial temperatures).

After lunch, I attended the session on Breakthrough Ideas and Technologies for a Planet at Risk II. The first talk was by Lewis Gilbert on monitoring and managing a planet at risk. First, he noted that really, the planet itself isn’t at risk – destroying it is still outside our capacity. Life will survive. Humans will survive (at least for a while). But it’s the quality of that survival that is in question. He offered some definitions of sustainability (he has quibbles with them all). First, Brundtland’s: future generations should be able to meet their own needs. Then natural capital: future generations should have a standard of living better than or equal to our own. And Gilbert’s own: the existence of a set of possible futures that are acceptable in some satisficing sense. But all of these definitions are based on human values and human life, so the concept of sustainability has human concerns deeply embedded in it. The rest of his talk was a little vague – he described a state space, E, with multiple dimensions (e.g. physical, such as CO2 concentrations; sociological, such as infant mortality in Somalia; biological, such as amphibian counts in the Sierra Nevada), in which we can talk about quality of human life as some function of these vectors. The question then becomes what are the acceptable and unacceptable regions of E. But I’m not sure how this helps any.
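For what it’s worth, here’s a toy rendering of the state-space idea, just to make the structure concrete. The indicators and the bounds on the “acceptable” region are entirely invented; they’re not Gilbert’s numbers.

```python
# A toy version of Gilbert's state space E: a vector of indicators plus a
# satisficing test for whether the state sits in an "acceptable" region.
# Indicator values and bounds are invented for illustration only.
state = {
    "co2_ppm": 387.0,                  # physical dimension
    "infant_mortality_somalia": 0.10,  # sociological dimension (fraction of births)
    "amphibian_count_sierra": 1200,    # biological dimension
}

acceptable = {
    "co2_ppm": lambda v: v <= 350,
    "infant_mortality_somalia": lambda v: v <= 0.03,
    "amphibian_count_sierra": lambda v: v >= 2000,
}

violations = [name for name, ok in acceptable.items() if not ok(state[name])]
print("inside acceptable region" if not violations else f"outside acceptable region: {violations}")
```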

Alan Robock talked about geoengineering. He’s conducted studies of the effect of seeding sulphur particles into the atmosphere, using NASA’s climate model – in particular, injecting them over the Arctic, where there is the most temperature change and the least impact on humans. His studies show that the seeding does have a significant impact on temperature, but as soon as you stop the seeding, the warming quickly catches up to where it would have been. So basically, once you start, you can’t stop. You also get other effects: e.g. a weakening of the tropical monsoons and a reduction in precipitation. Here’s an alternative: could it be done by seeding only in the Arctic summer (when the temperature rise matters), and not in the winter? E.g. seed in April, May and June, or just in April, rather than year-round. He’s exploring options like these with the model. Interesting aside: Rolling Stone Magazine, Nov 3, 2006, “Dr Evil’s Plan to Stop Global Warming”. There was a meeting convened by NASA, at which Alan started to compile a long list of risks associated with geoengineering (and he has a newer paper updating the list currently in submission).

George Shaw talked about biogeologic carbon sequestration. First, he demolished the idea that peak oil / peak coal etc. will save us, by calculating the amount of carbon that can easily be extracted from known fossil fuel reserves. Carbon capture ideas include iron fertilization of the oceans, which stimulates plankton growth, which in turn extracts carbon from the atmosphere. Cyanobacteria also extract carbon – e.g. attach an algae farm to every power station smokestack. However, to make any difference, the algae farm for one power plant might have to be 40-50 square km. He then described a specific case study: taking the Salton Basin area in southern California and filling it up with an algae farm. This would remove a chunk of agricultural land, but would probably make money under the current carbon trading schemes.

Roel Snieder gave a talk “Facing the Facts and Living Our Values”. He showed an interesting graph on energy efficiency, which shows that 60% of the energy we use is lost. He also presented a version of the graph showing cost of intervention against emissions reduction, pointing out that sequestration is the most expensive choice of all. Another nice point about understanding the facts: how much CO2 is produced by burning all the coal in one railroad car? The answer is about 3 times the weight of the coal, but most people would say only a few ounces, because gases are very light. He also has a neat public lecture, and encouraged the audience to get out and give similar lectures to the public.
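A quick back-of-envelope check of that factor of 3: burning carbon combines it with oxygen from the air, so each kilogram of carbon becomes 44/12 ≈ 3.7 kg of CO2. Coal isn’t pure carbon, so the factor for coal comes out a bit lower. The carbon fractions below are a typical range I’m assuming, not figures from the talk.

```python
# Stoichiometry check: C (12 g/mol) + O2 (32 g/mol) -> CO2 (44 g/mol),
# so 1 kg of carbon yields 44/12 kg of CO2. Scale by coal's carbon content.
for carbon_fraction in (0.7, 0.8, 0.9):   # assumed typical carbon content of coal
    co2_per_kg_coal = carbon_fraction * 44.0 / 12.0
    print(f"{carbon_fraction:.0%} carbon -> {co2_per_kg_coal:.1f} kg CO2 per kg coal")
```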

Eric Barron: Beyond Climate Science. It’s a mistake for the climate science community to say that “the science is settled” and that we just need to move on to mitigation strategies. There are still five things we need:

  1. A true climate service – an authoritative, credible, user-centric source of information on climate (models and data), e.g. advice on resettlement of threatened towns, advice on forestry management, etc.
  2. Deliberately expand the family of forecasting elements. Some natural expansion of forecasting is occurring, but the geoscience community needs to push this forward deliberately.
  3. Invest in stage 2 science – social sciences and the human dimension of climate change (the physical science budget dwarfs the social sciences budget).
  4. Deliberately tackle the issue of scale and the demand for an integrated approach.
  5. Evolve from independent research groups to environmental “intelligence” centres. Cohesive regional observation and modeling framework. And must connect vigorously with users and decision-makers.

Key point: we’re not ready. He characterizes the research community as a cottage industry of climate modellers. Interesting analogy: health sciences, which is almost entirely a “point-of-service” community that reacts to people coming in the door, with no coherent forecasting service. Finally, some examples of forecasting the spread of West Nile disease, Lyme disease, etc.

ICSE proper finished on Friday, but a few brave souls stayed around for more workshops on Saturday. There were two workshops in adjacent rooms that had a big topic overlap: SE Foundations for End-user Programming (SEE-UP) and Software Engineering for Computational Science and Engineering (SECSE, pronounced “sexy”). I attended the latter, but chatted to some people attending the former during the breaks – it seems we could have merged the two workshops to interesting effect. At SECSE, the first talk was by Greg Wilson, talking about the results of his survey of computational scientists. Some interesting comments about the qualitative data he showed, including the strong confidence exhibited in most of the responses (most respondents believe they are more effective at using computers than their colleagues are). This probably indicates a self-selection bias, but it would be interesting to probe the extent of this. Also, many of them take a “toolbox” perspective – they treat the computer as a set of tools, and associate effectiveness with how well people understand the different tools, and how much they take the time to understand them. Oh, and many of them mention that using a Mac makes them more effective. Tee hee.

Next up: Judith Segal, talking about organisational and process issues – particularly the iterative, incremental approach scientists take to building software, with only cursory requirements analysis and only cursory testing. The model works because the programmers are the users – they build software for themselves, and because the software is developed (initially) only to solve a specific problem, they can ignore maintainability and usability. Of course, the software often does escape from the lab and gets used by others, which creates a large risk that incorrect, poorly designed software will lead to incorrect results. For the scientific communities Judith has been working with, there’s a cultural issue too – the scientists don’t value software skills, because they’re focussed on scientific skills and understanding. Also, openness is a problem because they are busy competing for publications and funding. But this is clearly not true of all scientific disciplines, as the climate scientists I’m familiar with are very different: for them computational skills are right at the core of their discipline, and they are much more collaborative than competitive.

Roscoe Bartlett, from Sandia Labs, presented “Barely Sufficient Software Engineering: 10 Practices to Improve Your CSE Software”. It’s a good list: agile (incremental) development, code management, mailing lists, checklists, making the source code the primary source of documentation. Most important was the idea of “barely sufficient”: mindless application of formal software engineering processes to computational science doesn’t make any sense.

Carlton Crabtree described a study design to investigate the role of agile and plan-driven development processes among scientific software development projects. They are particularly interested in exploring the applicability of the Boehm and Turner model as an analytical tool. They’re also planning to use grounded theory to explore the scientists’ own perspectives, although I don’t quite get how they will reconcile the constructivist stance of grounded theory (it’s intended as a way of exploring the participants’ own perspectives) with the use of a pre-existing theoretical framework such as the Boehm and Turner model.

Jeff Overbey, on refactoring Fortran. First, he started with a few thoughts on the history of Fortran (the language that everyone keeps thinking will die out, but never does; some reference to zombies in here…). Jeff pointed out that languages only ever accumulate features (because removing features breaks backwards compatibility), so they just get more complex and harder to use with each update to the language standard. So he’s looking at whether you can remove old language features using refactoring tools. This is especially useful for the older language features that encourage bad software engineering practices. Jeff then demoed his tool. It’s neat, but it is currently only available as an Eclipse plugin. If there were an Emacs version, I could get lots of climate scientists to use this. [Note: in the discussion, Greg recommended the book Working Effectively with Legacy Code.]

Next up: Roscoe again, this time on integration strategies. The software integration issues he describes are very familiar to me, and he outlined an “almost” continuous integration process, which makes a lot of sense. However, some of the things he describes as challenges don’t seem to be problems in the environment I’m familiar with (the climate scientists at the Hadley Centre). I need to follow up on this.

Last talk before the break: Wen Yu, talking about the use of program families for scientific computation, including a specific application for finite element method computations.

After an infusion of coffee, Ritu Arora talked about the application of generative programming for scientific applications. She used checkpointing as a proof-of-concept, and created a domain-specific language for describing checkpointing needs. Checkpointing is interesting because it tends to be a cross-cutting concern; generating code for this and automatically weaving it into the application is likely to be a significant benefit. Initial results are good: the automatically generated code had similar performance profiles to hand-written checkpointing code.
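This isn’t Ritu’s DSL, but here’s a minimal sketch of why checkpointing is a cross-cutting concern: the same save/restore logic has to be threaded through otherwise unrelated computations, which is exactly why generating it and weaving it in automatically is attractive. The decorator, function and file names below are invented for illustration.

```python
# A hand-rolled checkpointing sketch (hypothetical names and file layout).
import os
import pickle
import functools

def checkpointed(path):
    """Resume from a saved result if one exists; otherwise compute and save it."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if os.path.exists(path):
                with open(path, "rb") as f:
                    return pickle.load(f)     # restart from the checkpoint
            result = fn(*args, **kwargs)
            with open(path, "wb") as f:
                pickle.dump(result, f)        # save a checkpoint for next time
            return result
        return wrapper
    return decorator

@checkpointed("step1.pkl")
def long_running_step(n):
    # stand-in for an expensive computation
    return sum(i * i for i in range(n))

print(long_running_step(10_000_000))
```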

Next: Daniel Hook on testing for code trustworthiness. He started with some nice definitions and diagrams that distinguish some of the key terminology, e.g. faults (mistakes in the code) versus errors (outcomes that affect the results). Here’s a great story: he walked into a glass storefront window the other day, thinking it was a door. The fault was mistaking a window for a door, and the error was about three feet. Two key problems: the oracle problem (we often have only approximate or limited oracles for what answers we should get) and the tolerance problem (there’s no objective way to say that the results are close enough to the expected results for us to call them correct). Standard SE techniques often don’t apply. For example, using mutation testing to check the quality of a test set doesn’t work on scientific code because of the tolerance problem – the mutant might be closer to the expected result than the unmutated code is. So he’s exploring a variant, and it’s looking promising. The project is called matmute.
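Here’s a small sketch of the tolerance problem in practice: the pass/fail verdict depends entirely on a tolerance the scientist has to choose and justify. The reference and computed values are invented.

```python
# The same comparison passes or fails depending on the chosen tolerance.
import numpy as np

reference = np.array([101.3, 98.7, 102.1])   # hypothetical expected values
computed  = np.array([101.1, 98.9, 102.4])   # hypothetical simulation output

for rtol in (1e-4, 1e-3, 1e-2):
    ok = np.allclose(computed, reference, rtol=rtol)
    print(f"rtol={rtol:g}: {'pass' if ok else 'fail'}")
```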

David Woollard, from JPL, talking about inserting architectural constraints into legacy (scientific) code. David has been doing some interesting work on assessing the applicability of workflow tools to computational science.

Parmit Chilana from U Washington. She’s working mainly with bioinformatics researchers, comparing the work practices of practitioners with researchers. The biologists understand the scientific relevance, but not the technical implementation; the computer scientists understand the tools and algorithms, but not the biological relevance. She’s clearly demonstrated the need for domain expertise during the design process, and explored several different ways to bring both domain expertise and usability expertise together (especially when the two types of expert are hard to get because they are in great demand).

After lunch, the last talk before we break out for discussion: Val Maxville, on preparing scientists for scalable software development. Val gave a great overview of the challenges for software development at iVEC. AuScope looks interesting – an integration of geosciences data across Australia. For each of the different projects, Val assessed how much they have taken up practices from the SWEBOK – how much they have applied them, and how much they value them. And she finished with some thoughts on the challenges for software engineering education for this community, including balancing generic and niche content, and balancing ‘on demand’ training against a more planned skills development process.

And because this is a real workshop, we spent the rest of the afternoon in breakout groups having fascinating discussions. This was the best part of the workshop, but of course required me to put away the blogging tools and get involved (so I don’t have any notes…!). I’ll have to keep everyone in suspense.

I’m going to SciBarCamp this Saturday. The theme is open science, although we’re free to interpret that as broadly as possible. So here’s my pitch for a session:

Climate change is the biggest challenge ever faced by humanity. In the last two years, it has become clear that climate change is accelerating, outpacing the IPCC’s 2007 assessment. The paleontological record shows that the planet is “twitchy”, with a number of tipping points at which feedback effects kick in, taking the planet to a dramatically different climate, which would have disastrous impacts on the human population. Some climate scientists think we’ve already hit some of these tipping points. However, the best available data suggests that if we can stop the growth of carbon emissions within the next five years, and then aggressively reduce them to zero over the next few decades, we stand a good chance of averting the worst effects of runaway warming.

It’s now clear that we can’t tackle this through volunteerism. Asking people to change their lightbulbs and turn off unnecessary appliances is nothing but a distraction: it conceals the real scale of the problem. We need a systematic rethinking of how energy is produced and used throughout society. We need urgent government action on emissions regulation and energy pricing. We need a massive investment in R&D on zero-emissions technology (but through an open science initiative, rather than a closed, centralized Manhattan Project style effort). We need a massive R&D effort into how to adapt to those climate changes that we cannot now avoid: on a warmer planet, we will need to completely rethink food production, water management, disease control, population migration, urban planning, etc. And we will need to understand the potential impacts of the large-scale geo-engineering projects that might buy us more time. We need an “all of the above” solution.

Put simply, we’ll need all the brainpower that the planet has to offer to figure out how to meet this challenge. We’ll need scientists and engineers from every discipline to come to the table, and figure out where their particular skills and experience can be most useful. We’ll need to break out of our disciplinary straitjackets, and engage in new interdisciplinary and problem-oriented research programs, to help us understand this new world, and how we might survive in it.

Governments are beginning to recognize the scale of the problem, and are starting to devote research funding to address it. It’s too little, and too late, but it’s a start. This funding is likely to grow substantially over the next few years, depending on how quickly politicians grasp the scale and urgency of the problem. But, as scientists, we shouldn’t wait for governments to get it. We need to get together now, to help explain the science to policymakers and to the public, and to start the new research programmes that will fill the gaps in our current knowledge.

So, here’s what I would like to discuss:

  • How do we get started?
  • How can we secure funding and institutional support for this?
  • How can professional scientists redirect their research efforts to this (and how does this affect the career scientist)?
  • How can scientists from different disciplines identify where their expertise might be needed and identify opportunities to get involved?
  • How can we foster the necessary inter-disciplinary links and open data sharing?
  • What barriers exist, and how can they be overcome?

Had an interesting conversation this afternoon with Brad Bass. Brad is a prof in the Centre for Environment at U of T, and was one of the pioneers of the use of models to explore adaptations to climate change. His agent-based simulations explore how systems react to environmental change, e.g. population balance among animals and insects, the growth of vector-borne diseases, and even entire cities. One of his models is Cobweb, an open-source platform for agent-based simulations.

He’s also involved in the Canadian Climate Change Scenarios Network, which takes outputs from the major climate simulation models around the world, and extracts information on the regional effects on Canada, particularly relevant for scientists who want to know about variability and extremes on a regional scale.

We also talked a lot about educating kids, and kicked around some ideas for how you could give kids simplified simulation models to play with (along the lines that Jon was exploring as a possible project), to get them doing hands-on experimentation with the effects of climate change. We might get one of our summer students to explore this idea, and Brad has promised to come talk to them in May once they start with us.

Oh, and Brad is also an expert on green roofs, and will be demonstrating them to grade 5 kids at the Kids World of Energy Festival.

As my son (grade 4) has started a module at school on climate and global change, I thought I’d look into books on climate change for kids. Here’s what I have for them at the moment:

Weird Weather by Kate Evans. This is the kids’ favourite at the moment, probably because of its comic-book format. The narrative format works well – it involves the interplay between three characters: a businessman (playing the role of a denier), a scientist (who shows us the evidence) and an idealistic teenager, who gets increasingly frustrated that the businessman won’t listen.

The Down-to-Earth Guide to Global Warming, by Laurie David and Cambria Gordon. Visually very appealing, with lots of interesting factoids for the kids, and particular attention to the kinds of questions kids like to ask (e.g. to do with methane from cow farts).

How We Know What We Know About Our Changing Climate by Lynne Cherry and Gary Braasch. A beautiful book (fabulous photos!), mainly focusing on sources of evidence (ice cores, tree rings, etc.), and how they were discovered. It really encourages the kids to do hands-on data collection. Oh, and there’s a teacher’s guide as well, which I haven’t looked at yet.

Global Warming for Dummies by Elizabeth May and Zoe Caron. Just what we’d expect from a “Dummies Guide…” book. I bought it because I was on my way to a bookstore on April 1, when I heard an interview on the CBC with Elizabeth May (leader of the Canadian Green Party) talking about how they were planning to reduce the carbon footprint of their next election campaign, by hitchhiking all over Canada. My first reaction was incredulity, but then I remembered the date, and giggled uncontrollably all the way into the bookstore. So I just had to buy the book.

Whom do you believe: The Cato Institute, or the Hadley Centre? Both cannot be right. Yet both claim to be backed by real scientists.

First, to get this out of the way, the latest ad from Cato has been thoroughly debunked by RealClimate, including a critical look at whether the papers that Cato cites offer any support for Cato’s position (hint: they don’t), and a quick tour through related literature. So I won’t waste my time repeating their analysis.

The Cato folks attempted to answer back, but largely by attacking red herrings. However, one point from this article jumped out at me:

“The fact that a scientist does not undertake original research on subject x does not have any bearing on whether that scientist can intelligently assess the scientific evidence forwarded in a debate on subject x”.

The thrust of this argument is an attempt to bury the idea of expertise, so that the opinions of the Cato Institute’s miscellaneous collection of people with PhDs can somehow be equated with those of actual experts. Now, of course it is true that a (good) scientist in another field ought to be able to understand the basics of climate science, and know how to judge the quality of the research, the methods used, and the strength of the evidence, at least at some level. But unfortunately, real expertise requires a great deal of time and effort to acquire, no matter how smart you are.

If you want to publish in a field, you have to submit yourself to the peer-review process. The process is not perfect (incorrect results often do get published, and, on occasion, fabricated results too). But one thing it does do very well is to check whether authors are keeping up to date with the literature. That means that anyone who regularly publishes in good quality journals has to keep up to date with all the latest evidence. They cannot cherry pick.

Those who don’t publish in a particular field (either because they work in an unrelated field, or because they’re not active scientists at all) don’t have this obligation. Which means that when they form opinions on a field other than their own, those opinions are likely to be based on a very patchy reading of the field, and mixed up with a lot of personal preconceptions. They can cherry-pick. Unfortunately, the more respected the scientist, the worse the problem. The most venerated (e.g. prize winners) enter a world in which so many people stroke their egos that they lose touch with the boundaries of their ignorance. I know this first hand, because some members of my own department have fallen into this trap: they allow their brilliance in one field to fool them into thinking they know a lot about other fields.

Hence, given two scientists who disagree with one another, it’s a useful rule of thumb to trust the one who is publishing regularly on the topic. More importantly, if there are thousands of scientists publishing regularly in a particular field and not one of them supports a particular statement about that field, you can be damn sure it’s wrong. Which is why the IPCC reviews of the literature are right, and Cato’s adverts are bullshit.

Disclaimer: I don’t publish in the climate science literature either (it’s not my field). I’ve spent enough time hanging out with climate scientists to have a good feel for the science, but I’ll also get it wrong occasionally. If in doubt, check with a real expert.

In honour of Ada Lovelace day, I decided to write a post today about Prof Julia Slingo, the new chief scientist at the UK Met Office. News of Julia’s appointment came out in the summer last year during my visit to the Met Office, coincidentally on the same day that I met her, at a workshop on the HiGEM project (where, incidentally, I saw some very cool simulations of ocean temperatures). Julia’s role at the meeting was to represent the sponsor (NERC – the UK equivalent of Canada’s NSERC), but what impressed me about her talk was both her detailed knowledge of the project, and the way she nurtured it – she’ll make a great chief scientist.

Julia’s research has focussed on tropical variability, particularly improving our understanding of the monsoons, but she’s also played a key role in earth system modeling, and especially in the exploration of high resolution models. But best of all, she’s just published a very readable account of the challenges in developing the next generation of climate models. Highly recommended for a good introduction to the state of the art in climate modeling.

First a couple of local ones, in May:

Then, this one looks interesting: The World Climate Conference, in Geneva at the end of August. It looks like most of the program will be invited, but they will be accepting abstracts for a poster session. Given that the theme is to do with how climate information is generated and used, it sounds very appropriate.

Followed almost immediately by EnviroInfo2009, in Berlin, in September. I guess the field I want to name “Climate Informatics” would be a subfield of environmental informatics. Paper deadline is April 6.

Finally, there’s the biggy in Copenhagen in December, where, hopefully, the successor to the Kyoto agreement will be negotiated.

Over the last two years, evidence has accumulated that the IPCC reports released just two years ago underestimate the pace of climate change. Nature provides this summary. See also this article in Science Daily; there are plenty more like it.

Emissions from fossil fuels growing faster than in any of the scenarios included in the IPCC reports (news article; original paper here). And recent studies indicate the effects are irreversible, at least for the next 1000 years.

Arctic sea ice, which is probably the most obvious “canary in the coal mine”, is melting faster than the models predicted, and will likely never recover (story from IPY here).

Greenland and Antarctic ice sheets melting 100 years ahead of schedule (news report; original papers here and here). Meanwhile new studies show the effect on the coastlines will be worse than previously thought, especially in North America and around the Indian Ocean (press release here; original paper here).

Sea level rise following the worst-case scenario given in the IPCC reports (news report; original papers here and here).

Oceans soaking up less CO2, and hence losing their role as a carbon sink (news report; original paper here).

And finally some emerging evidence of massive methane releases as the permafrost melts (news report; no peer-reviewed paper yet).

I originally wrote this as a response to a post on RealClimate on hypothesis testing.

I think one of the major challenges with public understanding of climate change is that most people have no idea what climate scientists actually do. In the study I did last summer of the software development practices at the Hadley Centre, my original goal was to look just at the “software engineering” of climate simulation models – i.e. how the code is developed and tested. But the more time I spend with climate scientists, the more I’m fascinated by the kind of science they do, and the role of computational models within it.

The most striking observation I have is that climate scientists have a deep understanding of the fact that climate models are only approximations of earth system processes, and that most of their effort is devoted to improving our understanding of these processes (“All models are wrong, but some are useful” – George Box). They also intuitively understand the core ideas from general systems theory – that you can get good models of system-level processes even when many of the sub-systems are poorly understood, as long as you’re smart about choices of which approximations to use. The computational models have an interesting status in this endeavour: they seem to be used primarily for hypothesis testing, rather than for forecasting. A large part of the time, climate scientists are “tinkering” with the models, probing their weaknesses, measuring uncertainty, identifying which components contribute to errors, looking for ways to improve them, etc. But the public generally only sees the bit where the models are used to make long term IPCC-style predictions.

I never saw a scientist doing a single run of a model and comparing it against observations. The simplest use of models is to construct a “controlled experiment” by making a small change to the model (e.g. a potential improvement to how it implements some piece of the physics), comparing this against a control run (typically the previous run without the latest change), and comparing both runs against the observational data. In other words, there is a 3-way comparison: old model vs. new model vs. observational data, where it is explicitly acknowledged that there may be errors in any of the three. I also see more and more effort put into “ensembles” of various kinds: model intercomparison projects, perturbed physics ensembles, varied initial conditions, and so on. In this respect, the science seems to have changed (matured) a lot in the last few years, but that’s hard for me to verify.
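As a sketch of what that 3-way comparison looks like in its simplest form, here’s a toy version that scores both the control run and the modified run against observations. The arrays are placeholders standing in for real model output and observational data.

```python
# Toy 3-way comparison: old model vs. new model vs. observations.
# All three series are synthetic; in reality all three carry errors.
import numpy as np

rng = np.random.default_rng(42)

obs = rng.normal(288.0, 1.0, 120)              # e.g. 10 years of monthly mean temps (K)
old_model = obs + rng.normal(0.5, 0.8, 120)    # control run (previous model version)
new_model = obs + rng.normal(0.2, 0.8, 120)    # run with the candidate physics change

def rmse(model, reference):
    return float(np.sqrt(np.mean((model - reference) ** 2)))

print(f"old model vs obs: RMSE = {rmse(old_model, obs):.2f} K")
print(f"new model vs obs: RMSE = {rmse(new_model, obs):.2f} K")
# A lower RMSE for the new run is evidence (not proof) that the change is an improvement.
```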

It’s a pretty sophisticated science. I would suggest that the general public might be much better served by good explanations of how this science works, rather than with explanations of the physics and mathematics of climate systems.

I was recently asked (by a skeptic) whether I believed in global warming. It struck me that the very question is wrong-headed. Global warming isn’t a matter for belief. It’s not a religion. The real question is whether you understand the available evidence, and whether that evidence supports the theory. When we start talking about what we believe, we’re not doing science any more – we’re into ideology and pseudo-science.

Here’s the difference. Scientists proceed by analyzing all the available data, weighing it up, investigating its validity, and evaluating which theory best explains the evidence. It is a community endeavour, with checks and balances such as the peer review process. It is imperfect (because even scientists can make mistakes) but it is also self-correcting (although sometimes it takes a long time to discover mistakes).

Ideology starts with a belief, and then selects just the evidence that reinforces that belief. So if a blog post (or newspaper column) uses a few isolated data points to construct an entire argument about climate change, the chances are it’s ideology rather than science. Ideologues cherry-pick bits of evidence to reinforce an argument, rather than weighing up all the evidence. George Will’s recent column in the Washington Post is a classic example. When you look at all the data, his arguments just don’t stand up.

The deniers don’t do science. There is not one peer-reviewed publication in the field of climate science that casts any doubt whatsoever on the theory of anthropogenic global warming. If the deniers were doing good science, they would be able to publish it. They don’t. They send it to the media. They are most definitely not scientists.

The key distinction between science and ideology is how you engage with the data.

  1. Because their salaries depend on them not understanding. Applies to anyone working for the big oil companies, and apparently to a handful of “scientists” funded by them.
  2. Because they cannot distinguish between pseudo-science and science. Seems to apply to some journalists, unfortunately.
  3. Because the dynamics of complex systems are inherently hard to understand. Shown to be a major factor by the experiments Sterman did on MIT students (see the sketch after this list).
  4. Because all of the proposed solutions are incompatible with their ideology. Applies to most rightwing political parties, unfortunately.
  5. Because scientists are poor communicators. Or, more precisely, few scientists can explain their work well to non-scientists.
  6. Because they believe their god(s) would never let it happen. And there’s also a lunatic subgroup who welcome it as part of god’s plan (see rapture).
  7. Because most of the key ideas are counter-intuitive. After all, a couple of degrees warmer is too small to feel.
  8. Because the truth is just too scary. There seem to be plenty of people who accept that it’s happening, but don’t want to know any more because the whole thing is just too huge to think about.
  9. Because they’ve learned that anyone who claims the end of the world is coming must be a crackpot. Although these days, I suspect this one is just a rhetorical device used by people in groups (1) and (4), rather than a genuine reason.
  10. Because most of the people they talk to, and most of the stuff they read in the media also suffers from some of the above. Selective attention allows people to ignore anything that challenges their worldview.
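To illustrate reason 3: Sterman’s experiments suggest people confuse flows (emissions) with stocks (atmospheric CO2). Even if emissions stop growing, the stock keeps rising as long as emissions exceed removals. Here’s a toy sketch; the numbers are rough orders of magnitude I’m assuming for illustration, not projections.

```python
# Stock-and-flow toy: constant emissions still drive the atmospheric stock upward.
emissions = 9.0   # GtC per year, held constant (assumed, roughly current scale)
removal = 5.0     # GtC per year absorbed by oceans and land (assumed fixed)
stock = 800.0     # GtC currently in the atmosphere (rough figure)

for year in range(0, 50, 10):
    print(f"year {year:2d}: atmospheric carbon ~ {stock:.0f} GtC")
    stock += 10 * (emissions - removal)   # accumulate the imbalance over the next decade
```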

But I fear the most insidious is because people think that changing their lightbulbs and sorting their recyclables counts as “doing your bit”. This idea allows you to stop thinking about it, and hence to ignore just how serious a problem it really is.

Next month, I’ll be attending the European Geosciences Union’s General Assembly, in Austria. It will be my first trip to a major geosciences conference, and I’m looking forward to rubbing shoulders with thousands of geoscientists.

My colleague, Tim, will be presenting a poster in the Climate Prediction: Models, Diagnostics, and Uncertainty Analysis session on the Thursday, and I’ll be presenting a talk on the last day in the session on Earth System Modeling: Strategies and Software. My talk is entitled Are Earth System model software engineering practices fit for purpose? A case study.

While I’m there, I’ll also be taking in the Ensembles workshop that Tim is organising, and attending some parts of the Seamless Assessment session, to catch up with more colleagues from the Hadley Centre. Sometime soon I’ll write a blog post on what ensembles and seamless assessment are all about (for now, it will just have to sound mysterious…)

The rest of the time, I plan to talk to as many climate modellers as I can from other centres, as part of my quest for studies to compare with the one we did at the Hadley Centre.