In our climate brainstorming session last week, we invited two postdocs (Chris and Lawrence) from the atmospheric physics group to come and talk to us about their experiences of using climate models. Most of the discussion focussed on which models they use and why, and what problems they encounter. They’ve been using the GFDL models, specifically AM2 and AM3 (atmosphere-only models), for most of their research, largely because of legacy: they’re working with Paul Kushner, who is from GFDL, and the group now has many years’ experience working with these models. However, they’re now faced with having to switch to NCAR’s Community Climate System Model (CCSM). Why? Because the university has acquired a new IBM supercomputer, and the GFDL models won’t run on it (without a large effort to port them). The resulting dilemma reveals a lot about the current state of climate model engineering:

  • If they stick with the GFDL models, they can’t make use of the new supercomputer, and hence miss out on a great opportunity to accelerate their research (they could do many more model runs).
  • If they switch to CCSM, they lose a large investment in understanding and working with the GFDL models. This includes both their knowledge of the model (some of their research involves making changes to the code to explore how perturbations affect the runs), and the investment in tools and scripts for dealing with model outputs, diagnostics, etc.

Of course, the obvious solution would be to port the GFDL models to the new IBM hardware. But this turns out to be hard, because the models were never designed for portability. Right now, the GFDL models won’t even compile under the IBM compiler, because compilers differ in how picky they are about syntax and style checking. Climate models tend to accumulate coding idiosyncrasies that are never fixed because the usual compiler never complains about them: see, for example, Jon’s analysis of static checking NASA’s modelE. And even if they fix all of these and get the compiler to accept the code, they’re still faced with extensive testing to make sure the models’ runtime behaviour is correct on the new hardware.
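To make that concrete, here’s a minimal hypothetical sketch of the kind of idiosyncrasy involved (illustrative only, not taken from the GFDL code). Both usages below are common non-standard extensions: REAL*8 declarations, and comparing logicals with .eq. rather than the standard .eqv. Some compilers accept both without a murmur; stricter ones refuse to compile the logical comparison at all:

    ! Hypothetical illustration of non-standard Fortran, not from any real model.
    ! A permissive compiler accepts both extensions below; a stricter one
    ! rejects the logical .eq. comparison as a syntax error.
    program idiosyncrasy
      implicit none
      real*8 :: temp        ! REAL*8 is a widespread extension, not standard Fortran
      logical :: done
      done = .false.
      temp = 273.15d0
      if (done .eq. .false.) then   ! standard Fortran requires .eqv. for logicals
        temp = temp + 1.0d0
      end if
      print *, temp
    end program idiosyncrasy

A permissive compiler shrugs and carries on; a stricter one stops dead at the first construct it doesn’t like. Multiply that by hundreds of thousands of lines of model code and the scale of the porting job becomes clear.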

There’s also a big difference in the support available. GFDL doesn’t have the resources to support external users (particularly ambitious attempts to port the code). In contrast, NCAR has extensive support for the CCSM, because they have made community building an explicit goal. Hence, CCSM is much more like an open source project. Which sounds great, but it also comes at a cost. NCAR has to devote significant resources to supporting the community. And making the model open and flexible (for use by a broader community) hampers their ability to get the latest science into the model quickly. Which leads me to hypothesize that it is the diversity of your user base that most restricts the ongoing rate of evolution of a software system. For a climate modeling center like GFDL, if you don’t have to worry about developing for multiple platforms and diverse users, you can get new ideas into the model much more quickly.

Which brings me to a similar discussion over the choice of weather prediction models in the UK. Bryan recently posted an article about the choice between WRF (NCAR’s mesoscale weather model) and the UM (the UK Met Office’s model). Alan posted a lengthy response which echoes much of what I said above (but in much more detail): basically, WRF is well supported and flexible enough for a diverse community, while the UM has many advantages (particularly speed) but is essentially unsupported outside the Met Office. He concludes that someone should re-write the UM to run on other hardware (specifically, massively parallel machines), and presumably set up the kind of community support that NCAR provides. But funding for this seems unlikely.
