I picked up Stephen Schneider’s “Science as a Contact Sport” to read on travel this week. I’m not that far into it yet (it’s been a busy trip), but was struck by a comment in chapter 1 about how he got involved in climate modeling. In the late 1960’s, he was working on his PhD thesis in plasma physics, and (in his words) “knew how to calculate magneto-hydro-dynamic shocks at 20,000 times the speed of sound”, with “one-and-a-half dimensional models of ionized gases” (Okay, I admit it, I have no idea what that means, but it sounds impressive)…

…Anyway, along comes Joe Smagorinsky from Princeton, to give a talk on the challenges of modeling the atmosphere as a three-dimensional fluid flow problem on a rotating sphere, and Schneider is immediately fascinated by both the mathematical challenges and the potential of this as important and useful research. He goes on to talk about the early modeling work and the missteps made in the early 1970’s in figuring out whether the global cooling from aerosols would be stronger than the global warming from greenhouse gases, and how they got the relative magnitudes wrong by running the model without including the stratosphere. And how global warming denialists today like to repeat the line about “first you predicted global cooling, then you predicted global warming…” without understanding that this is exactly how science proceeds: by trying stuff, making mistakes, and learning from them. Or as Ms. Frizzle would say, “Take chances! Make mistakes! Get messy!” (No, Schneider doesn’t mention the Magic School Bus in the book. He’s too old for that).

Anyway, I didn’t get much further reading the chapter, because my brain decided to have fun with the evocative phrase “modeling the atmosphere as a three-dimensional fluid flow problem on a rotating sphere”, which is perhaps the most succinct description I’ve heard yet of what a climate model is. And what would happen if Ms. Frizzle got hold of this model and encouraged her kids to “get messy” with it. What would they do?

Let’s assume the kids can run the model, and play around with its settings. Let’s assume they also have some wonderfully evocative ways of viewing the outputs, such as these incredible animations from NCAR of precipitation from a model (my favourite is “August”), and of where greenhouse gases go after we emit them (okay, the latter was real data, rather than a model, but you get the idea).

What experiments might the kids try with the model? How about:

  1. Stop the rotation of the earth. What happens to the storms? Why? (we’ll continue to ask “why?” for each one…)
  2. Remove the land-masses. What happens to the gulf stream?
  3. Remove the ice at the poles. What happens to polar temperatures? Why? (we’ll switch to a different visualization for this one)
  4. Remove all CO2 from the atmosphere. How much colder is the earth? Why? What happens if you leave it running?
  5. Erupt a whole bunch of volcanoes all at once. What happens? Why? How long does the effect last? Does it depend on how many volcanoes you use?
  6. Remove all human activity (i.e. GHG emissions drop to zero instantly). How long does it take for the greenhouse gases to return to the levels they were at before the industrial revolution? Why?
  7. Change the tilt of the earth’s axis a bit. What happens to seasonal variability? Why? Can you induce an ice age? If so, why?
  8. Move the earth a little closer to the sun. What happens to temperatures? How long do they take to stabilize? Why that long?
  9. Burn all the remaining (estimated) reserves of fossil fuels all at once. What happens to temperatures? Sea levels? Polar ice?
  10. Set up the earth as it was in the last ice age. How much colder are global temperatures? How much colder are the poles? Why the difference? How much colder is it where you live?
  11. Melt all the ice at the poles (by whatever means you can). What happens to the coastlines near where you live? Over the rest of your continent? Which country loses the most land area?
  12. Keep CO2 levels constant at the level they were at in 1900, and run a century-long simulation. What happens to temperatures? Now try keeping aerosols constant at 1900 levels instead. What happens? How do these two results compare to what actually happened?

Now compare your answers with what the rest of the class got. And discuss what we’ve learned. [And finally, for the advanced students – look at the model software code, and point to the bits that are responsible for each outcome… Okay, I’m just kidding about that bit. We’d need literate code for that].

Okay, this seems like a worthwhile project. We’d need to wrap a desktop-runnable model in a simple user interface with the appropriate switches and dials. But is there any model out there that would come anywhere close to being usable in a classroom situation for this kind of exercise?
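To make the “switches and dials” idea a little more concrete, here’s a rough sketch in Python of what such a classroom wrapper might expose. It’s purely hypothetical: none of these knobs corresponds to any real model’s configuration interface, and a real wrapper would have to translate each setting into the model’s own namelists and boundary conditions.

```python
# Purely hypothetical sketch: no real climate model exposes exactly these
# knobs, but a classroom wrapper might present something like this.
from dataclasses import dataclass


@dataclass
class ExperimentSettings:
    """The 'switches and dials' a student could play with."""
    rotation_enabled: bool = True      # experiment 1: stop the Earth spinning
    include_land_masses: bool = True   # experiment 2: aqua-planet mode
    polar_ice: bool = True             # experiments 3 and 11
    co2_ppm: float = 280.0             # experiments 4 and 12 (pre-industrial default)
    axial_tilt_degrees: float = 23.5   # experiment 7
    solar_constant_scale: float = 1.0  # experiment 8: >1.0 moves us closer to the sun
    simultaneous_volcanoes: int = 0    # experiment 5
    simulation_years: int = 100


def run_experiment(settings: ExperimentSettings) -> None:
    """Stub: a real wrapper would translate these settings into the model's
    namelists and boundary conditions, launch the run, and collect the output
    for visualization. Here we just echo the chosen configuration."""
    print(f"Launching a {settings.simulation_years}-year run with:")
    print(settings)


if __name__ == "__main__":
    # Experiment 1: what happens to the storms if the Earth stops rotating?
    run_experiment(ExperimentSettings(rotation_enabled=False))
```

Of course, the hard part is hidden inside the stub: turning something like “remove the land masses” into a self-consistent model configuration is a research exercise in itself.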

(feel free to suggest more experiments in the comments…)

This week I’m visiting the Max Planck Institute for Meteorology (MPI-M) in Hamburg. I gave my talk yesterday on the Hadley study, and it led to some fascinating discussions about the software practices used for model building. One of the topics that came up in the discussion afterwards was how this kind of software development compares with agile software practices, and in particular the reliance on face-to-face communication rather than documentation. As in many software projects, climate modelers struggle to keep good, up-to-date documentation, but generally feel they should be doing better. The problem, of course, is that traditional forms of documentation (e.g. large, stand-alone descriptions of design and implementation details) are expensive to maintain and of questionable value: the typical experience is that you wade through the documentation and discover that, despite all the detail, it never quite answers your question. Such documents are often produced in a huge burst of enthusiasm for the first release of the software, but then never touched again through subsequent releases. And as the code in climate models evolves steadily over decades, the chances of any stand-alone documentation keeping up are remote.

An obvious response is that the code itself should be self-documenting. I’ve looked at a lot of climate model code, and readability is somewhat variable (to put it politely). This could be partially addressed with more attention to coding standards, although it’s not clear how familiar you would already have to be with the model to be able to read the code, even with very good coding standards. Initiatives like Clear Climate Code aim to address this problem by re-implementing climate tools as open source projects in Python, with a strong focus on making the code as understandable as possible. Michael Tobis and I have speculated recently about how we’d scale this kind of initiative up to the development of coupled GCMs.
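To give a flavour of the difference this can make, compare two invented versions of the same little routine; the second is the kind of thing a coding standard or a Clear Climate Code-style rewrite would aim for (the formula is Bolton’s standard approximation for saturation vapour pressure, but the example itself is not taken from any actual model):

```python
import math


# Invented example, in the terse style that decades-old Fortran habits
# tend to produce:
def qs(t, p):
    e = 611.2 * math.exp(17.67 * (t - 273.15) / (t - 29.65))
    return 0.622 * e / (p - 0.378 * e)


# The same calculation, written to be read:
def saturation_specific_humidity(temperature_k: float, pressure_pa: float) -> float:
    """Saturation specific humidity (kg/kg) at a given temperature (K) and
    pressure (Pa), using Bolton's (1980) approximation for the saturation
    vapour pressure over water."""
    saturation_vapour_pressure = 611.2 * math.exp(
        17.67 * (temperature_k - 273.15) / (temperature_k - 29.65)
    )
    return (0.622 * saturation_vapour_pressure
            / (pressure_pa - 0.378 * saturation_vapour_pressure))
```

The physics is identical; the second version is just far easier to check against the papers it came from.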

But readable code won’t fill the need for a higher-level explanation of the physical equations used in the model and their numerical approximations, along with the rationale for algorithm choices. These are often written up as (short) white papers when the numerical routines are first developed, and because these core routines rarely change, this form of documentation tends to remain useful. The problem is that the white papers tend to have no official status (or, at best, appear as technical reports), and are not linked in any usable way to distributions of the source code. The idea of literate programming was meant to solve this problem, but it never took off, probably because it demands that programmers tear themselves away from using programming languages as their main form of expression, and start thinking about how to express themselves to other human beings. Given that most programmers define themselves in terms of the programming languages they are fluent in, the tyranny of the source code is unlikely to disappear anytime soon. In this respect, climate modelers have a very different culture from most software development teams: they are scientists first, already accustomed to explaining their work in prose to other scientists, so perhaps this is an area where the ideas of literate programming could take root.
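For a purely illustrative taste of what that might look like, imagine the rationale from the white paper living right beside the routine it explains, rather than in a separate report. The example below uses a standard leapfrog time step with a Robert-Asselin filter, but the particular scheme is beside the point:

```python
"""Illustrative sketch only: documentation and code travelling together.

Time stepping: leapfrog with a Robert-Asselin filter
----------------------------------------------------
Many atmosphere models advance the state x with the leapfrog scheme,

    x(n+1) = x_filtered(n-1) + 2*dt*F(x(n)),

because it is cheap and second-order accurate. Leapfrog also admits a
spurious computational mode that flips sign every time step, so the
Robert-Asselin filter,

    x_filtered(n) = x(n) + alpha*(x_filtered(n-1) - 2*x(n) + x(n+1)),

is applied to damp that mode, at the cost of slightly damping the physical
solution; this is why alpha is kept small (typically 0.1 or less).
"""
from typing import Callable, Tuple


def leapfrog_step(
    x_prev_filtered: float,
    x_curr: float,
    tendency: Callable[[float], float],
    dt: float,
    alpha: float = 0.05,
) -> Tuple[float, float]:
    """Advance one step; returns (filtered current state, new state)."""
    x_next = x_prev_filtered + 2.0 * dt * tendency(x_curr)
    x_curr_filtered = x_curr + alpha * (x_prev_filtered - 2.0 * x_curr + x_next)
    return x_curr_filtered, x_next
```

The point is not this particular scheme, but that the reasons for the numerical choices travel with the code, so they stand some chance of being updated when the code changes.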

The lack of access to these white papers could also be addressed by publishing them as journal papers (thus instantly making them citable objects). However, scientific journals tend not to publish descriptions of the design of climate models unless they are accompanied by new scientific results from the models. There are occasional exceptions (e.g. the special issue of the Journal of Climate devoted to the MPI-M models). But things are changing, with the recent appearance of two new journals:

  • Geoscientific Model Development, an open-access journal that accepts technical descriptions of the development and evaluation of models;
  • Earth Science Informatics, a Springer journal with a broader remit than GMD, but which does cover descriptions of the development of computational tools for climate science.

The problem is related to another dilemma in climate modeling groups: acknowledgement of the contributions of those who devote themselves to model development rather than to doing “publishable science”. Most of the code development is done by scientists whose performance is assessed by their publication record. Some modeling centres have created job positions such as “programmers” or “systems staff”, although most people hired into these roles have a very strong geosciences background. A growing recognition of the importance of their contributions represents a major culture change in the climate modeling community over the last decade.