Whom do you believe: the Cato Institute, or the Hadley Centre? They cannot both be right. Yet both claim to be backed by real scientists.

First, to get this out of the way: the latest ad from Cato has been thoroughly debunked by RealClimate, including a critical look at whether the papers Cato cites offer any support for its position (hint: they don’t), and a quick tour through the related literature. So I won’t waste my time repeating their analysis.

The Cato folks attempted to answer back, but their response consists largely of red herrings. However, one point in their article jumped out at me:

“The fact that a scientist does not undertake original research on subject x does not have any bearing on whether that scientist can intelligently assess the scientific evidence forwarded in a debate on subject x”.

The thrust of this argument is an attempt to bury the idea of expertise, so that the opinions of the Cato Institute’s miscellaneous collection of people with PhDs can somehow be equated with those of actual experts. Now, of course it is true that a (good) scientist in another field ought to be able to understand the basics of climate science, and know how to judge the quality of the research, the methods used, and the strength of the evidence, at least at some level. But unfortunately, real expertise requires a great deal of time and effort to acquire, no matter how smart you are.

If you want to publish in a field, you have to submit yourself to the peer-review process. The process is not perfect (incorrect results often do get published, and, on occasion, fabricated results too). But one thing it does do very well is to check whether authors are keeping up to date with the literature. That means that anyone who regularly publishes in good quality journals has to keep up to date with all the latest evidence. They cannot cherry pick.

Those who don’t publish in a particular field (either because they work in an unrelated field, or because they’re not active scientists at all) don’t have this obligation. Which means that when they form opinions on a field other than their own, those opinions are likely to be based on a very patchy reading of the field, mixed up with a lot of personal preconceptions. They can cherry pick. Unfortunately, the more respected the scientist, the worse the problem. The most venerated (e.g. prize winners) enter a world in which so many people stroke their egos that they lose touch with the boundaries of their own ignorance. I know this first hand, because some members of my own department have fallen into this trap: they allow their brilliance in one field to fool them into thinking they know a lot about other fields.

Hence, given two scientists who disagree with one another, it’s a useful rule of thumb to trust the one who is publishing regularly on the topic. More importantly, if there are thousands of scientists publishing regularly in a particular field and not one of them supports a particular statement about that field, you can be damn sure it’s wrong. Which is why the IPCC reviews of the literature are right, and Cato’s adverts are bullshit.

Disclaimer: I don’t publish in the climate science literature either (it’s not my field). I’ve spent enough time hanging out with climate scientists to have a good feel for the science, but I’ll also get it wrong occasionally. If in doubt, check with a real expert.

Well, this is a little off topic, but we (Janice, Dana, Peggy and I) have been invited to run this year’s International Advanced School of Empirical Software Engineering, in Florida in October. We’ve planned the day around the content of our book chapter on Selecting Empirical Research Methods for Software Engineering Research, which appeared in the book Guide to Advanced Empirical Software Engineering. It’s going to be a lot of fun!

I originally wrote this as a response to a post on RealClimate on hypothesis testing.

I think one of the major challenges with public understanding of climate change is that most people have no idea what climate scientists actually do. In the study I did last summer of the software development practices at the Hadley Centre, my original goal was to look just at the “software engineering” of climate simulation models (i.e. how the code is developed and tested). But the more time I spend with climate scientists, the more I’m fascinated by the kind of science they do, and the role of computational models within it.

My most striking observation is that climate scientists have a deep understanding of the fact that climate models are only approximations of earth system processes, and that most of their effort is devoted to improving our understanding of these processes (“All models are wrong, but some are useful” – George Box). They also intuitively understand the core ideas from general systems theory: that you can get good models of system-level processes even when many of the sub-systems are poorly understood, as long as you’re smart about choices of which approximations to use. The computational models have an interesting status in this endeavour: they seem to be used primarily for hypothesis testing, rather than for forecasting. A large part of the time, climate scientists are “tinkering” with the models, probing their weaknesses, measuring uncertainty, identifying which components contribute to errors, looking for ways to improve them, etc. But the public generally only sees the bit where the models are used to make long-term IPCC-style predictions.

I never saw a scientist do a single run of a model and simply compare it against observations. The simplest use of models is to construct a “controlled experiment” by making a small change to the model (e.g. a potential improvement to how it implements some piece of the physics), comparing this against a control run (typically the previous run without the latest change), and comparing both runs against the observational data. In other words, there is a 3-way comparison: old model vs. new model vs. observational data, where it is explicitly acknowledged that there may be errors in any of the three. I also see more and more effort put into “ensembles” of various kinds: model intercomparison projects, perturbed physics ensembles, varied initial conditions, and so on. In this respect, the science seems to have changed (matured) a lot in the last few years, but that’s hard for me to verify.
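To make the 3-way comparison concrete, here is a minimal sketch of the idea in Python. The file names, the choice of RMSE as a skill score, and the plain unweighted averaging are all my own illustrative assumptions; real model evaluation uses much richer diagnostics, area-weighted statistics, and careful treatment of observational uncertainty.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square difference between two gridded fields."""
    return np.sqrt(np.mean((a - b) ** 2))

# Hypothetical gridded fields (e.g. annual-mean surface temperature),
# all regridded onto the same grid. The file names are made up.
obs     = np.load("observations.npy")    # observational dataset
control = np.load("control_run.npy")     # model before the change
new     = np.load("perturbed_run.npy")   # model with the candidate change

# The 3-way comparison: each run against the observations, and the
# two runs against each other. Errors are possible in all three.
print(f"control vs obs:     RMSE = {rmse(control, obs):.3f}")
print(f"new run vs obs:     RMSE = {rmse(new, obs):.3f}")
print(f"new run vs control: RMSE = {rmse(new, control):.3f}")
```

The structure matters more than the numbers: a candidate change is judged by whether it brings the model closer to the observations than the control does, while remembering that the observations themselves carry errors too.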

It’s a pretty sophisticated science. I would suggest that the general public might be much better served by good explanations of how this science works than by explanations of the physics and mathematics of climate systems.