For the Computing in Atmospheric Sciences workshop next month, I’ll be giving a talk entitled “On the relationship between earth system models and the labs that build them”. Here’s the abstract:

In this talk I will discuss a number of observations from a comparative study of four major climate modeling centres:
- the UK Met Office Hadley Centre (UKMO) in Exeter, UK
- the National Center for Atmospheric Research (NCAR) in Boulder, Colorado
- the Max Planck Institute for Meteorology (MPI-M) in Hamburg, Germany
- the Institut Pierre Simon Laplace (IPSL) in Paris, France
The study focussed on the organizational structures and working practices at each centre with respect to earth system model development, and how these affect the history and current qualities of their models. While the centres share a number of similarities, including a growing role for software specialists and greater use of open source tools for managing code and the testing process, there are marked differences in how the different centres are funded, in their organizational structure and in how they allocate resources. These differences are reflected in the program code in a number of ways, including the nature of the coupling between model components, the portability of the code, and (potentially) the quality of the program code.

While all these modelling centres continually seek to refine their software development practices and the software quality of their models, they all struggle to manage the growth (in terms of size and complexity) of the models. Our study suggests that improvements to the software engineering practices at the centres have to take account of differing organizational constraints at each centre. Hence, there is unlikely to be a single set of best practices that works everywhere. Indeed, improvements in modelling practices usually come from local, grass-roots initiatives, in which new tools and techniques are adapted to suit the context at a particular centre. We suggest therefore that there is a need for a stronger shared culture of describing current model development practices and sharing lessons learnt, to facilitate local adoption and adaptation.


  1. “Our study suggests that improvements to the software engineering practices at the centres have to take account of differing organizational constraints at each centre. Hence, there is unlikely to be a single set of best practices that works everywhere.”

    This has always been true. I used to occasionally teach a session in the Bell Labs Software Project Management course in the 1980s. A strong message was that methodologies and tools needed to be tuned for the organization, personnel, project nature, etc.

    A bit earlier, IBM was on a Chief Programmer Team kick, which showed up in Fred Brooks’ great Mythical Man-Month. We (Programmer’s WorkBench folks) had our doubts, preferring automated tools and flexibility. I.e., we all knew great programmers were really great, and if you happened to have a Harlan Mills around to lead a CPT, then go ahead. But even at Bell Labs, we couldn’t think of many people like this … and it turned out, apparently hardly anyone else could. [Just being a great programmer wasn’t enough.]

    BUT, if there was one wish I had (and maybe for this), it would be that more failures or partial successes got reported … as near-misses are often quite instructive.
    They are of course hard to get written and published.

  2. John Mashey :

    This has always been true. …

    Yes, but many software people have forgotten it. I see way too many comments along the lines of “at my company we do X, so if the climate scientists don’t, then their software must be rubbish”.

    The problem is also endemic in software engineering books and software research, where contextual factors (including human variability) are usually ignored altogether. This is especially so among those people who adopt a process-oriented view of software development.

  3. Forgotten often ~ never knew.

    Back at Bell Labs in the 1970s, many people understood this, but some did not.

    (In the 1970s, BTL software efforts expanded rapidly, which meant that some mid/upper managers had much more experience in HW than SW. Sometimes they fell prey to methodology fads.)
    I had a great line of management the first few years:
    Vic Vyssotsky (Exec Director)
    Maury Irvine (Director, very good)
    Rudd Canaday (Dept. Head, 3rd guy on the UNIX file system patent)
    Evan Ivie and Ted Dolotta (both very good supervisors; Programmer’s Workbench)

    Of course, a big part of our effort was to raise the level of work, at one level from assembler to C, and at the next level from C to shell programming. It took me a few years to really make that happen.

    As happens, managers changed.
    One person in the new chain was an old telephone guy, who knew every bit in software mattered, so thought we should be writing everything in assembler. Sigh.

    That and a few other cases were the inspiration for the Small Is Beautiful talk.

    When I pulled that out of the time capsule in 2002, after I gave it, an old Bell Labs guy jumped up and said “I heard the original talk, and we haven’t improved one bit.”
    (Well, computers are faster, and people do work at much higher levels these days.)
