This week I’m visiting NCAR, in Colorado.

As it’s my first visit, I’m still blown away by the beauty of the place – both the location and the building itself (which was designed by I. M. Pei, the architect better known for the Louvre pyramid). I’m hoping to get some time today to visit the hiking trails around the facility.

Anyway, I gave my talk yesterday, and have had many interesting chats with the scientists and the software engineering team working on the Community Climate System Model (CCSM).

My plan is to do a detailed comparison of the software development practices at NCAR with what I saw in my Hadley study. There seem to be more similarities than differences, but three differences that have struck me so far are:

  • the much greater use of multi-site development (which I expected – it is a community model, after all);
  • the fact that each component of the coupled model (ocean, atmosphere, land, sea ice, …) has a distinct stand-alone identity and its own release cycle, which creates some interesting challenges in negotiating the (sometimes) conflicting needs of the stand-alone models versus those of the coupled CCSM;
  • a much longer release cycle – years between official releases, compared to the Hadley Centre’s four-month release cycle.

We speculated that the length of the release cycle might largely come down to how each model is used. At the Hadley Centre, the climate and weather forecasting models are unified in a single code base. The weather forecasters need regular model improvements to meet annual targets for forecast improvement, and they also need to be sure they are using a stable, robust model version (never a pre-release experimental one). Hence a short release cycle makes sense. At NCAR, the main driver for official releases is the IPCC assessment process, which operates on a five-year cycle. Hence carefully maintained official releases are only needed every few years. Meanwhile, scientists who want a more up-to-date model can work with unreleased experimental versions at their own risk, if they choose. Creating and supporting an official release carries a large software engineering overhead, and the resources just aren’t available to do it very often, in part because funding agencies much prefer to fund the science rather than the software infrastructure needed to support it. The lack of resources for software support seems to be a consistent problem across all the modeling centres I’ve visited so far.

03 March 2010

Someone recently challenged me to debate the existence of climate change. Debates are extremely useful for discussing matters that require value judgements, but they are pointless for establishing what is true of the physical world – for that you need the scientific process. In a complex field like climate change, the best approach is a systematic assessment of the scientific literature.

Debates are won or lost on the rhetorical skills of the debaters. If we were to debate the science of climate change, the setup would be stacked against the scientists. Scientists are obliged to stick to the evidence, deal honestly with the uncertainties, and attempt to show how the many different lines of evidence give us confidence in our understanding of the climate system. Scientists eschew rhetoric. Those who want to attack the science need only throw around enough talking points to sow doubt in the minds of the audience, and they have at their disposal rhetorical tricks like the Gish gallop. The entire exercise is pointless.

Now, if someone wants to debate, say, the ethics of leaving subsequent generations to clean up after our polluting ways, I’m all for it. That’s a matter of value judgement. But if anyone wants to debate the existence or seriousness of anthropogenic climate change, I’d give the same response as I would if they wanted to debate the existence or strength of gravity.

Update: Joe Romm explains it in much more depth.