In the last post, I talked about the opening session at the workshop on “A National Strategy for Advancing Climate Modeling”, which focussed on the big picture questions. In the second session, we focussed on the hardware, software and human resources challenges.

To kick off, Jeremy Kepner from MIT called in via telecon to talk about software issues, from his perspective working on Matlab tools to support computational modeling. He made the point that it’s getting hard to make scientific code work on new architectures, because it’s increasingly hard to find anyone who wants to do the programming. There’s a growing gap between the software stacks used in current web and mobile apps, gaming, and so on, and those used in scientific software. Programmers are used to having modern development environments and tools, for example for developing games for Facebook, and regard scientific software development tools as archaic. This means it’s hard to recruit talent from the software world.

Jeremy quipped that software is an evil thing – the trick is to get people to write as little of it as possible (he pointed out that programmers make mistakes at the rate of one per 1,000 lines of code). Hence, we need higher levels of abstraction, with code generated automatically from higher-level descriptions – which raises the question of whether it’s time to abandon Fortran. He also pointed out that programmers believe they spend most of their time coding, but in fact coding is a relatively small part of what they do. At least half of their time goes into testing, which means that effort spent speeding up the testing process gives you the most bang for the buck.
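
As an aside, here’s a toy sketch (in Python) of what “generate code from a higher-level description” might look like. The spec format, the one-term advection equation, and the generated syntax are all invented for illustration – real systems are of course far more sophisticated:

```python
# Toy sketch: generate a Fortran-style loop from a higher-level description.
# The spec format, names, and generated syntax are invented for illustration.

spec = {"variable": "T", "wind": "u", "direction": "x"}   # dT/dt = -u * dT/dx

def generate_advection_kernel(spec):
    """Emit a simple centred-difference advection loop from the spec."""
    v, u, d = spec["variable"], spec["wind"], spec["direction"]
    return (
        f"do i = 2, n-1\n"
        f"   d{v}dt(i) = -{u}(i) * ({v}(i+1) - {v}(i-1)) / (2.0 * d{d})\n"
        f"end do"
    )

print(generate_advection_kernel(spec))
```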

Ricky Rood, Jack Fellows, and Chet Koblinsky then ran a panel on human resources issues. Ricky pointed out that if we are to identify shortages in human resources, we have to be clear about whether we mean people for modeling, for climate science, for impacts studies, for using climate information, and so on. The argument can be made that in terms of absolute numbers there are enough people in the field; the real problems are an inappropriate mix and balance of expertise, too few people at the interfaces between different communities of expertise, a lack of computational people (and not enough emphasis on training our own), and the management of fragmented resources.

Chet pointed out that there’s been a substantial rise in the number of job postings using the term “climate modelling” over the last decade. But there’s still a widespread perception that there aren’t enough jobs (i.e. more grad students are being trained than we have positions for). There are some countervailing voices – for example, Pielke argues that universities will always churn out more than enough scientists to support their mission, and a recent BAMS article explored the question “are we training too many atmospheric scientists?“. The shortage isn’t in the number of people being trained, but in the skills mix.

We covered a lot of ground in the discussions. I’ll cover just some of the highlights here.

Several people observed that climate model software development has diverged from mainstream computing. Twenty years ago, academia was the centre of the computing world. Now most computing happens in the commercial world, and computational scientists have much less leverage than we used to. This means that some of the tools we rely on might no longer be sustainable – for example, Fortran compilers (and auto-generators?) have ever fewer users who care about them, and so there is less support for transitioning them to new architectures. Climate modeling is a 10+ year endeavour, and we need a long-term basis to maintain continuity.

Much of the discussion focussed on anticipated disruptive transitions in hardware architectures. In the past, modellers relied on faster and faster processors to deliver new computing capacity, but this is coming to an end. Advances in clock speed have tailed off, and now it’s massive parallelization that delivers the additional computing power. Unfortunately, this means the brute-force approach of scaling up current GCM numerical methods on a uniform grid is a dead end.
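
To see why brute force runs out of road, here’s a back-of-envelope sketch (illustrative numbers only; it ignores vertical resolution, I/O and communication costs): halving the horizontal grid spacing quadruples the number of columns, and the CFL condition roughly halves the allowable timestep, so cost grows with the cube of the refinement factor.

```python
# Back-of-envelope cost of refining a uniform grid (illustrative only).
# refinement**2 more columns, and the CFL condition shrinks the timestep
# by the same factor, giving roughly refinement**3 overall.

def relative_cost(refinement):
    return refinement ** 3

baseline_km = 100
for dx_km in [100, 50, 25, 10]:
    r = baseline_km / dx_km
    print(f"{dx_km:>3} km grid: ~{relative_cost(r):6.0f}x the cost of the {baseline_km} km run")
```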

As Bryan Lawrence pointed out, there’s a paradigm change here: computers no longer compute, they produce data. We’re entering an era where CPU time is essentially free, and it’s data wrangling that forms the bottleneck. Massive parallelization of climate models is hard because of the volume of data that must be passed around the system. We can anticipate 1-100 exabyte scale datasets (i.e. this is the size not of the archive, but of the data from a single run of an ensemble). It’s unlikely that any institution will have the ability to evolve their existing codes into this reality.
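
To get a feel for how quickly the numbers reach that scale, here’s a rough calculation – every number below is a made-up illustration, not a real experiment design:

```python
# Rough, illustrative estimate of ensemble output volume (made-up numbers).

nx, ny, nlev   = 3600, 1800, 100   # ~10 km global grid, 100 vertical levels
nvars          = 50                # 3-D fields written to disk
steps_per_year = 365 * 8           # 3-hourly output
years          = 100
members        = 50                # ensemble size
bytes_per_val  = 4                 # single precision

total = nx * ny * nlev * nvars * steps_per_year * years * members * bytes_per_val
print(f"~{total / 1e18:.1f} exabytes of output")   # roughly 1.9 EB
```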

The massive parallelization and data volumes also bring another problem. In the past, climate modellers have regarded bit-level reproducibility of climate runs as crucial, partly because reproducing a run exactly is considered good scientific practice, and partly because it allows many kinds of model test to be automated. The problem is that, at the scales we’re talking about, exact bit reproducibility is getting hard to maintain. When we scale up to millions of processors and terabyte data sets, bit-level failures are frequent enough that exact reproducibility can no longer be guaranteed – if a single bit is corrupted during a model run, it may not matter for the climatology of the run, but it does make exact reproduction impossible. Add to this the fact that future CPUs are likely to be less deterministic, and, as Tim Palmer argued at the AGU meeting, we’ll be forced to change our codes fundamentally – so maybe we should take the opportunity to make the models probabilistic.
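
Reproducibility is fragile even without hardware faults: floating-point addition isn’t associative, so merely changing the domain decomposition changes the order of a global sum, and hence its last few bits. A minimal sketch (in Python, with made-up data) of the effect:

```python
# Minimal sketch: re-ordering a global sum (as a different domain
# decomposition or reduction order would) typically changes the last bits,
# because floating-point addition is not associative. Data are made up.
import random

random.seed(1)
values = [random.uniform(-1.0, 1.0) for _ in range(1_000_000)]

serial_sum = sum(values)

nprocs = 1000                      # pretend decomposition across 1000 tasks
chunk = len(values) // nprocs
partials = [sum(values[i * chunk:(i + 1) * chunk]) for i in range(nprocs)]
decomposed_sum = sum(partials)

print(serial_sum, decomposed_sum)
print("bit-identical?", serial_sum == decomposed_sum)   # usually False
```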

One recommendation that came out of our discussions is to consider a two track approach for the software. Now that most modeling centres have finished their runs for the current IPCC assessment (AR5), we should plan to evolve current codes towards the next IPCC assessment (AR6), while starting now on developing entirely new software for AR7. The new codes will address I/O issues, new solvers, etc.

One of the questions the committee posed to the workshop was the potential for hardware-software co-design. The general consensus was that it’s not possible in the current funding climate. But even if the funding were available, it’s not clear this is desirable, as the software has a much longer useful life than any hardware. Designing for specific hardware instantiations tends to bring major liabilities, and (as my own studies have indicated) there seems to be an inverse correlation between the availability of dedicated computing resources and the robustness/portability of the software. Things change in climate models all the time, and we need the flexibility to change algorithms, refactor software, and so on. This means FPGAs might be a better solution. Dark silicon might push us in this direction anyway.

Software sharing came up as an important topic, although we didn’t talk about it as much as I would have liked. There seems to be a tendency among modelers to assume that making the code available is sufficient. But as Cecelia Deluca pointed out from the ESMF experience, community feedback and participation are important. Adoption mandates are not constructive – you want people to adopt software because it works better. One of the big problems here is the understandability of shared code. The learning curve is getting steeper, and code sharing between labs is really only possible with a lot of personal interaction. We did speculate that auto-generation of code might help here, because it forces the development of a higher-level language to describe what’s in a climate model.
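
As a purely hypothetical illustration of what such a higher-level description might buy us: even a simple machine-readable listing of components and the fields they exchange could be checked automatically, and would double as shared documentation between labs. The component and field names below are invented.

```python
# Hypothetical sketch: a declarative description of a (toy) coupled model,
# plus a trivial consistency check. Component and field names are invented.

model = {
    "atmosphere": {"exports": ["precip", "wind_stress"],
                   "imports": ["sst", "sea_ice_fraction"]},
    "ocean":      {"exports": ["sst"],
                   "imports": ["precip", "wind_stress"]},
}

def check_coupling(model):
    """Flag any imported field that no component exports."""
    exported = {f for comp in model.values() for f in comp["exports"]}
    for name, comp in model.items():
        for field in comp["imports"]:
            if field not in exported:
                print(f"{name} imports '{field}', but nothing exports it")

check_coupling(model)   # flags the missing sea-ice component
```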

For the human resources question, there was a widespread worry that we don’t have the skills and capacity to deal with anticipated disruptive changes in computational resources. There is a shortage of high quality applicants for model development positions, and many disincentives for people to pursue such a career: the long publication cycle, academic snobbery, and the demands of the IPCC all make model development an unattractive career for grad students and early career scientists. We need a different reward system, so that contributions to the model are rewarded.

However, it’s also clear that we don’t have enough solid data on this – just lots of anecdotal evidence. We don’t know enough about talent development and capacity to say precisely where the problems are. We identified three distinct roles, which someone amusingly labelled: diagnosticians (who use models and model output in their science), perturbers (who explore new types of runs by making small changes to models) and developers (who do the bulk of model development). Universities produce most of the first, a few of the second, and very few of the third. Furthermore, model developers could be subdivided into people who develop new parameterizations and numerical analysts, although I would add a third category: developers of infrastructure code.

As well as worrying about training a new generation of modellers, we also worried about whether the other groups (diagnosticians and perturbers) would have the necessary skillsets. Students are energized by climate change as a societal problem, even if they’re not enticed by a career in earth sciences. Can we capitalize on this, through more interaction with work at the policy/science interface? We also need to make climate modelling more attractive to students, and to connect them more closely with the modeling groups. This could be done through certificate programs for undergrads to bring them into modelling groups, and by bringing grad students into modelling centres in their later grad years. To boost computational skills, we should offer training in earth system science to students in computer science, and expand training for earth system scientists in computational skills.

Finally, let me end with a few of the suggestions that received a very negative response from many workshop attendees:

  • Should the US be offering only one centre’s model to the IPCC for each CMIP round? Currently every major modeling centre participates, and many of the centres complain that the CMIP exercise dominates their resources. However, participating brings many benefits, including visibility, detailed comparison with other models, and pressure to improve model quality and model documentation.
  • Should we ditch Fortran and move to a higher-level language? This one didn’t really get much discussion. My own view is that it’s simply not possible – the community has too much capital tied up in Fortran, and it’s the only language everyone knows.
  • Can we incentivize mass participation in climate modeling, along the lines of “develop apps for the iPhone”? This is an intriguing notion, but one that I don’t think will get much traction, because of the depth of knowledge needed to do anything useful at all in current earth system modeling. Oh, and we’d probably need a different answer to the previous question, too.

5 Comments

  1. Thanks for this. I know next to nothing of parallel computing, but isn’t it clear that some grid points have much more to calculate than most? How is the computing time allotted? I mean the grid points sitting on some physical boundary (land/ocean, water/ice, halocline, thermocline, possibly even meteorological troughs/highs where the wind changes direction). In any case, a full description of the calculations that may be done on a grid point is way longer than the amount of calculation that has to be done in simpler physical situations. I don’t know if this sort of optimization is already being done, but I could imagine it could speed things up somewhat. It’s much easier to describe, for example, high pressure over dry land than a low pressure over partly frozen water, I’d imagine.

  2. Tim van Beek

    Should we ditch Fortran and move to a higher-level language?

    Many companies still have mainframes with important software written in COBOL or even more obscure languages, and invest a lot of money to make a step-by-step transition to more modern frameworks, like JEE, because they cannot replace the people with the necessary know-how when they retire. The main key to success is that they can operate an inhomogeneous system consisting of mainframes, SAP, JEE, .NET or whatever, communicating via different kinds of middleware (MQS or other messaging software, or direct access to databases from different systems).

    I think it is necessary to come up with a solution where young people entering the field can create their modules with modern programming tools and integrate those into the existing infrastructure; you’ll never be able to do a big-bang transition. Even a big-bang rewrite of one of the necessary libraries like LAPACK won’t be possible, due to the lack of manpower and funding. A company like Microsoft could do it, but it won’t happen within the current academic system.

    The transition itself is necessary, unless you’d like to end up with the only viable climate models written in Fortran and no developers who can do anything with them. And as time goes by, the situation will only become worse. When climate modellers notice that they cannot deploy their current models on the latest supercomputer because no one wrote a compiler, it will be far too late 🙂

  3. I wish I was allowed to participate in the group breakouts. The plenary discussion was interesting – you did a great job summarizing the large number of points mentioned.

    I’m glad to see that Bryan and I are on the same wavelength – GCMs have basically become I/O bound, but not in the strict CS meaning of the word. A simulation can take a few weeks to run, but months to years to properly digest the output, and that’s at today’s terabyte scale. Really, we don’t need to make the models run faster, we need to make the analysis and understanding problem much more tractable.

    You noted that model development isn’t considered sexy (bashing around Fortran isn’t as fun as Angry Birds, true) but dealing with model output in an efficient and knowledge-producing manner is considered even less so. We’ll see how things go with the distribution of AR5 results, but IMHO, major changes to the models’ infrastructure will need to be made for further MIPs.

  4. Someone, maybe it was you, made the suggestion that you could rewrite a modern climate model from Fortran to, say, C in a couple of years with a good team. That would be a straight by-hand translation without any major refactoring. I actually tried that with a summer student on just the radiation code, and it worked! I think it should be seriously considered.

  5. Pingback: Workshop on Advancing Climate Modeling (1) | Serendipity

  6. ‘There is a shortage of high quality applicants for model development positions, and many disincentives for people to pursue such a career:’

    Don’t forget to add the lack of interest from talented programmers/scientists in working with such ancient technologies as are prevalent in climate modelling.

    Sandia transitioned its main massively parallel engineering codes from Fortran to C++ over 20 years ago, and they haven’t looked back. The level of software in that lab makes places like NCAR look pathetic.
    Such advanced, modern codes and frameworks have been at work at NCAR too; however, the scientists involved have all left, sick of living on the disrespected fringe when they can make insane $$$ doing this stuff in industry.
