So, here’s an interesting thought that came up at the Michael Jackson festschrift yesterday. Michael commented in his talk that understanding is not a state, it’s a process. David Notkin then asked how we can know how well we’re doing in that process. I suggested that one of the ways you know is by discovering where your understanding is incorrect, which can happen if your model surprises you. I noted that this is a basic mode of operation for earth system modelers. They put their current best understanding of the various earth systems (atmosphere, ocean, carbon cycle, atmospheric chemistry, soil hydrology, etc.) into a coupled simulation model and run it. Whenever the model surprises them, they know they’re probing the limits of their understanding. For example, the current generation of models at the Hadley Centre don’t get the Indian Monsoon in the right place at the right time. So they know there’s something in that part of the model they don’t yet understand sufficiently.
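
As an aside, “coupled” here just means that separate component models exchange fields with one another every time step. Here’s a deliberately tiny sketch of the idea in Python; the two toy components, their constants, and the exchange term are all my own inventions for illustration – nothing here comes from a real earth system model:

```python
# A minimal sketch of what "coupled" means: component models advanced in lockstep,
# exchanging fields through a coupler each time step. Real earth system models
# couple atmosphere, ocean, land, ice, chemistry, etc., and exchange many fields;
# the two toy components and all constants below are invented for illustration.

atmos_temp = 288.0    # toy global-mean atmospheric temperature (K)
ocean_temp = 290.0    # toy ocean surface temperature (K)

def atmosphere_step(temp, heat_from_ocean):
    """Toy atmosphere: relax toward a baseline, plus whatever the ocean hands over."""
    return temp + 0.1 * (288.0 - temp) + heat_from_ocean

def ocean_step(temp, heat_to_atmosphere):
    """Toy ocean: huge heat capacity, so it barely notices what it gives away."""
    return temp - 0.01 * heat_to_atmosphere

for step in range(5):
    # The "coupler": compute the exchange, then advance each component.
    heat_flux = 0.05 * (ocean_temp - atmos_temp)
    atmos_temp = atmosphere_step(atmos_temp, heat_flux)
    ocean_temp = ocean_step(ocean_temp, heat_flux)
    print(f"step {step}: atmosphere {atmos_temp:.2f} K, ocean {ocean_temp:.3f} K")
```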

Contrast this with the way we use (and teach) modeling in software engineering. For example, students construct UML models as part of a course in requirements analysis. They hand in their models, and we grade them. But at no point in the process do the models ever surprise their authors. UML models don’t appear to have the capacity for surprise. Which is unfortunate, given what the students did in previous courses. In their programming courses, they were constantly surprised. Their programs didn’t compile. Then they didn’t run. Then they kept crashing. Then they gave the wrong outputs. At every point, the surprise is a learning opportunity, because it means there was something wrong with their understanding, which they have to fix. This contrast explains a lot about the relative value students get from programming courses versus software modeling courses.

Now of course, we do have some software engineering modeling frameworks that have the capacity for surprise. They allow you to create a model and play with it, and sometimes get unexpected results. For example, Alloy. And I guess model checkers have that capacity too. A necessary condition is that you can express some property that your model ought to have, and then automatically check that it does have it. But that’s not sufficient, because if the properties you express aren’t particularly interesting, or are trivially satisfied, you still won’t be surprised. For example, UML syntax checkers fall into this category – when your model fails a syntax check, that’s not surprising, it’s just annoying. Also, you don’t necessarily have to formally state the properties – but you do have to at least have clear expectations. When the model doesn’t meet those expectations, you get the surprise. So surprise isn’t just about executability, it’s really about falsifiability.
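
To make the falsifiability point concrete, here’s a minimal sketch of the “state a property, then check it automatically” loop. It’s plain Python rather than Alloy, and the toy water-tank model, its controller, and the capacity property are all invented for illustration – the point is simply that a checkable expectation is what gives a model the capacity to surprise its author:

```python
from itertools import product

# Toy model: a water tank with an uncontrolled inflow and a drain run by a (naive)
# controller. The property we *expect* to hold: the tank never exceeds its capacity.
# Exhaustively checking short input sequences can falsify that expectation -- and
# that "surprise" is what tells us our understanding is wrong.

CAPACITY = 10

def step(level, inflow, drain_open):
    """One time step: add the inflow, then drain 3 units if the drain is open."""
    level += inflow
    if drain_open:
        level -= 3
    return max(level, 0)

def controller(level):
    """Naive rule: only open the drain once the tank is already nearly full."""
    return level >= 9

def find_counterexample(max_inflow=4, horizon=6):
    """Search all inflow sequences up to `horizon` steps for a property violation."""
    for inflows in product(range(max_inflow + 1), repeat=horizon):
        level, trace = 0, []
        for inflow in inflows:
            level = step(level, inflow, controller(level))
            trace.append(level)
            if level > CAPACITY:          # expected property violated: surprise!
                return inflows, trace
    return None

print("Counterexample:", find_counterexample())
```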

Chris Jones, from the UK Met Office Hadley Centre, presented a paper at EGU 2009 yesterday on The Trillionth Tonne. The analysis shows that the key driver of temperature change is the total cumulative amount of carbon emissions. To keep below the 2°C global average temperature rise generally regarded as the threshold for preventing dangerous warming, we need to keep total cumulative emissions below a trillion tonnes. And the world is already halfway there.

Which is why the latest news about Canada’s carbon emissions is so embarrassing. Canada is now top among the G8 nations for emissions growth. Let’s look at the numbers: 747 megatonnes in 2007, up from 592 megatonnes in 1990. Using the figures in the Environment Canada report, I calculated that Canada has emitted over 12 gigatonnes since 1990. That’s 12 billion tonnes. So, in 17 years we burnt through more than 1.2% of the entire world’s total budget of carbon emissions. A total budget that has to last from the dawn of industrialization to the point at which the whole world becomes carbon-neutral. Oh, and Canada has 0.5% of the world’s population.

Disclaimer: I have to check whether the Hadley Centre’s target is 1 trillion tonnes of CO2-equivalent, or 1 trillion tonnes of Carbon (they are different!). The EnvCanada report numbers refer to the former.

Update: I checked with Chris, and as I feared, I got the wrong units – it’s a trillion tonnes of carbon. The conversion factor is about 3.66, so that gives us about 3.66 trillion tonnes of carbon dioxide to play with. [Note: Emissions targets are usually phrased in terms of “Carbon dioxide equivalent”, which is a bit hard to calculate as different greenhouse gases have both different molecular weights and different warming factors].

So my revised figures are that Canada burnt through only about 0.33% of the world’s total budget in the last 17 years. Which looks a little better, until you consider:

  • By population, that’s 2/3 of Canada’s entire share.
  • Using the cumulative totals from 1900-2002, plus the figures for the more recent years from the Environment Canada report (and assuming 2008 was similar to 2007), we’ve emitted 27 gigatonnes of CO2 since 1900. Which is about 0.73% of the world’s budget, or about 147% of our fair share per head.
  • By population, our fair share of the world’s budget is about 18 gigatonnes CO2 (=5 gigatonnes Carbon). We’d burnt through that by 1997. Everything since then is someone else’s share. (The arithmetic behind these figures is sketched in the code below.)
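
For the sake of transparency, here’s the back-of-envelope arithmetic behind those figures. The emissions totals, the trillion-tonne budget, the 3.66 conversion factor, and the 0.5% population share all come from the text above; the small discrepancies with the quoted percentages are just rounding:

```python
# Back-of-envelope check of the figures above (all inputs taken from the post).

C_TO_CO2 = 3.66                                  # tonnes CO2 per tonne of carbon (approx.)
WORLD_BUDGET_C = 1.0e12                          # 1 trillion tonnes of carbon
WORLD_BUDGET_CO2 = WORLD_BUDGET_C * C_TO_CO2     # ~3.66 trillion tonnes of CO2

CANADA_POP_SHARE = 0.005                         # Canada: ~0.5% of world population
FAIR_SHARE_CO2 = WORLD_BUDGET_CO2 * CANADA_POP_SHARE   # ~18.3 Gt CO2 (~5 Gt C)

# Rough cross-check of "over 12 gigatonnes since 1990", assuming emissions grew
# roughly linearly from 592 Mt (1990) to 747 Mt (2007):
approx_since_1990 = (592e6 + 747e6) / 2 * 18     # ~12.05e9 tonnes of CO2

canada_since_1990 = 12e9                         # tonnes CO2 (12 Gt), 1990-2007
canada_since_1900 = 27e9                         # tonnes CO2 (27 Gt), 1900-2008

print(f"Share of world budget since 1990:  {canada_since_1990 / WORLD_BUDGET_CO2:.2%}")  # ~0.33%
print(f"Fraction of fair share since 1990: {canada_since_1990 / FAIR_SHARE_CO2:.0%}")    # ~66%, i.e. about 2/3
print(f"Share of world budget since 1900:  {canada_since_1900 / WORLD_BUDGET_CO2:.2%}")  # ~0.74%
print(f"Fraction of fair share since 1900: {canada_since_1900 / FAIR_SHARE_CO2:.0%}")    # ~148%
```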

When my children grow up, the world they live in is likely to be very different from ours. There’s a small chance that humanity will rapidly come to its senses, start a massive program of emissions reductions, and avoid the worst climate change scenarios. The Hadley Centre gives us about a 50/50 chance if carbon emissions peak by 2015, and then fall steadily at a rate of 3% per year (they are currently rising by nearly 3% per year). If we manage to pull this off, and also win the 50/50 bet, our children and grandchildren will ask us how the hell we managed it.
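
To see why the timing of the peak matters so much for the trillion-tonne budget, here’s a rough back-of-envelope sketch. The 3%-per-year decline rate and the “already halfway there” figure come from the posts above; the roughly 10 GtC/yr peak emission rate is my own round-number assumption for illustration only:

```python
# Rough illustration of why "peak soon, then decline 3%/yr" can stay within a
# trillion-tonne (1000 GtC) cumulative carbon budget. The ~10 GtC/yr peak rate
# is an assumed round number, not a figure from the post.

peak_rate = 10.0       # GtC per year at the peak (assumption)
decline = 0.03         # emissions fall 3% per year after the peak
emitted_so_far = 500.0 # GtC: "already halfway" to the trillion-tonne budget

# Summing peak_rate * (1 - decline)**t over all future years is a geometric series:
future_emissions = peak_rate / decline           # ~333 GtC

total = emitted_so_far + future_emissions
print(f"Future emissions under a 3%/yr decline: ~{future_emissions:.0f} GtC")
print(f"Cumulative total: ~{total:.0f} GtC of the 1000 GtC budget")
# Delay the peak, or decline more slowly, and the cumulative total overshoots the budget.
```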

If we can’t stop emissions growth in the next five years, things look much more grim. Perhaps the simplest way to explain it is the picture painted by the New Scientist: How to survive the coming century: a world that is 4°C warmer, 90% of the human population wiped out, the rest relocated to dense cities in Canada, Scandinavia and Siberia. Uninhabitable deserts across the subtropics. Virtually no life in the oceans. And that’s the good part. The New Scientist article glosses over the climate wars that are almost certain if large parts of the world become uninhabitable. If they survive, our children will demand to know what the hell we were doing: we knew it was coming, we knew how bad it would be, and still we did almost nothing to prevent it.

What did you do in the war? When my kids ask me these questions in decades to come, I need to be ready with an answer. I’d like to say that I did everything I could possibly do. I’d like to say that what I did was effective. And I’d like to be able to say that I made a difference.

Having talked with some of our graduate students about how to get a more inter-disciplinary education while they are in grad school, I’ve been collecting links to collaborative grad programs at U of T:

The Dynamics of Global Change Doctoral Program, housed in the Munk Centre. The core course, DGC1000H is very interesting – it starts with Malcolm Gladwell’s Tipping Point book, and then tours through money, religion, pandemics, climate change, the internet and ICTs, and development. What a wonderful journey.

The Centre for the Environment runs a Collaborative Graduate Program (MSc and PhD) in which students take some environmental science courses in addition to satisfying the degree requirements of their home department. The core course for this program is ENV1001, Environmental Decision Making, and it also includes an internship to get hands-on experience with environmental problem solving.

The Knowledge Media Design Institute (KMDI) also has a collaborative doctoral program, perfect for those interested in the design and evaluation of new knowledge media, with a strong focus on knowledge creation, social change, and community.

Finally, the Centre for Global Change Science has a set of graduate student awards, to help fund grad students interested in global change science. Oh, and they have a fascinating seminar series, mainly focussed on climate science (all done for this year, but get on their mailing list for next year’s seminars).

Are there any more I missed?

Had an interesting conversation this afternoon with Brad Bass. Brad is a prof in the Centre for Environment at U of T, and was one of the pioneers of the use of models to explore adaptations to climate change. His agent-based simulations explore how systems react to environmental change, e.g. population balance among animals and insects, the growth of vector-borne diseases, and even entire cities. One of his models is Cobweb, an open-source platform for agent-based simulations.
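
For anyone who hasn’t met agent-based modeling before, here’s a minimal sketch of the general idea. This is an illustrative toy in Python, not Cobweb, and none of the numbers mean anything real: individual agents follow simple local rules, and the population-level behaviour (booms, crashes) emerges from running them all forward together:

```python
import random

# Toy agent-based model: "bugs" sharing a patch of food. Each agent follows simple
# local rules (eat, pay a metabolic cost, maybe reproduce, die if starved); the
# boom-and-crash population dynamics emerge from the interactions. Purely
# illustrative -- not Cobweb, and not calibrated to anything real.

random.seed(1)

food = 200.0            # shared food stock
agents = [5.0] * 20     # one energy level per agent

for step in range(50):
    food = min(food + 20.0, 400.0)           # food regrows each step, up to a cap
    next_gen = []
    for energy in agents:
        eaten = min(2.0, food)               # eat what can be found
        food -= eaten
        energy += eaten - 1.5                # metabolic cost per step
        if energy <= 0:                      # starvation
            continue
        if energy > 8.0 and random.random() < 0.3:
            energy /= 2                      # reproduce by splitting energy
            next_gen.append(energy)
        next_gen.append(energy)
    agents = next_gen
    if step % 10 == 0:
        print(f"step {step:2d}: population {len(agents):3d}, food {food:6.1f}")
```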

He’s also involved in the Canadian Climate Change Scenarios Network, which takes outputs from the major climate simulation models around the world, and extracts information on the regional effects on Canada, particularly relevant for scientists who want to know about variability and extremes on a regional scale.

We also talked a lot about educating kids, and kicked around some ideas for how you could give kids simplified simulation models to play with (along the lines Jon was exploring as a possible project), to get them doing hands-on experimentation with the effects of climate change. We might get one of our summer students to explore this idea, and Brad has promised to come talk to them in May once they start with us.

Oh, and Brad is also an expert on green roofs, and will be demonstrating them to grade 5 kids at the Kids World of Energy Festival.

This is depressing.

We were at the bank this morning, setting up some investment plans for retirement and for the kids to go through University (In Canada-speak: RRSPs and RESPs). We started to pick out a portfolio of mutual funds into which we would be putting the investments, and our financial advisor was showing us one of the mutual funds he would recommend when I noticed the fund included a substantial investment in the Canadian Oil Sands. “No way” says I. So we went to his next pick. Same thing. And the next. And the next….

The oil sands have been described as the most destructive project on earth. They are the major reason that Canada will renege on its Kyoto treaty obligations. They will devastate a huge area of Alberta, and threaten clean water supplies and the wildlife of large parts of North America.

So, I was struck by the irony of funding the kids through University by investing in a project that will so thoroughly screw up the world in which they will have to live when they grow up.

But then I thought about it some more. Pretty much the entire middle class in Canada must have money invested in this project, if it shows up in most of the mutual funds commonly recommended for retirement and education savings plans. Most of them probably have no idea (after all, who actually looks closely at the contents of their mutual funds?) and of those that do know, most of them will prefer the high rate of return on this project because they have no real understanding of the extent of the climate crisis.

Those funds are being used to maximize the profit from the oil sands, by paying for lobbyists to fight environmental regulations, to fight caps on greenhouse gas emissions, and to fight against alternative energy initiatives (which would eat into the market for oil from the oil sands).

How on earth can we make any progress on fighting climate change when we all have a financial stake in not doing so?

We’re fucked.

As my son (grade 4) has started a module at school on climate and global change, I thought I’d look into books on climate change for kids. Here’s what I have for them at the moment:

Weird Weather by Kate Evans. This is the kids’ favourite at the moment, probably because of its comic-book format. The narrative works well – it involves the interplay between three characters: a businessman (playing the role of a denier), a scientist (who shows us the evidence), and an idealistic teenager who gets increasingly frustrated that the businessman won’t listen.

The Down-to-Earth Guide to Global Warming, by Laurie David and Cambria Gordon. Visually very appealing, lots of interesting factoids for the kids, and particular attention to the kinds of questions kids like to ask (e.g. to do with methane from cow farts).

How We Know What We Know About Our Changing Climate by Lynne Cherry and Gary Braasch. Beautiful book (fabulous photos!), mainly focusing on sources of evidence (ice cores, tree rings, etc.), and how they were discovered. Really encourages the kids to do hands-on data collection. Oh, and there’s a teacher’s guide as well, which I haven’t looked at yet.

Global Warming for Dummies by Elizabeth May and Zoe Caron. Just what we’d expect from a “Dummies Guide…” book. I bought it because I was on my way to a bookstore on April 1, when I heard an interview on the CBC with Elizabeth May (leader of the Canadian Green Party) talking about how they were planning to reduce the carbon footprint of their next election campaign, by hitchhiking all over Canada. My first reaction was incredulity, but then I remembered the date, and giggled uncontrollably all the way into the bookstore. So I just had to buy the book.

Whom do you believe: The Cato Institute, or the Hadley Centre? They can’t both be right. Yet both claim to be backed by real scientists.

First, to get this out of the way, the latest ad from Cato has been thoroughly debunked by RealClimate, including a critical look at whether the papers that Cato cites offer any support for Cato’s position (hint: they don’t), and a quick tour through related literature. So I won’t waste my time repeating their analysis.

The Cato folks attempted to answer back, but their response consists largely of red herrings. However, one point from this article jumped out at me:

“The fact that a scientist does not undertake original research on subject x does not have any bearing on whether that scientist can intelligently assess the scientific evidence forwarded in a debate on subject x”.

The thrust of this argument is an attempt to bury the idea of expertise, so that the opinions of the Cato Institute’s miscellaneous collection of people with PhDs can somehow be equated with those of actual experts. Now, of course it is true that a (good) scientist in another field ought to be able to understand the basics of climate science, and know how to judge the quality of the research, the methods used, and the strength of the evidence, at least at some level. But unfortunately, real expertise requires a great deal of time and effort to acquire, no matter how smart you are.

If you want to publish in a field, you have to submit yourself to the peer-review process. The process is not perfect (incorrect results often do get published, and, on occasion, fabricated results too). But one thing it does do very well is to check whether authors are keeping up to date with the literature. That means that anyone who regularly publishes in good quality journals has to keep up to date with all the latest evidence. They cannot cherry pick.

Those who don’t publish in a particular field (either because they work in an unrelated field, or because they’re not active scientists at all) don’t have this obligation. Which means that when they form opinions on a field other than their own, those opinions are likely to be based on a very patchy reading of the field, and mixed up with a lot of personal preconceptions. They can cherry pick. Unfortunately, the more respected the scientist, the worse the problem. The most venerated (e.g. prize winners) enter a world in which so many people stroke their egos, they lose touch with the boundaries of their ignorance. I know this first hand, because some members of my own department have fallen into this trap: they allow their brilliance in one field to fool them into thinking they know a lot about other fields.

Hence, given two scientists who disagree with one another, it’s a useful rule of thumb to trust the one who is publishing regularly on the topic. More importantly, if there are thousands of scientists publishing regularly in a particular field and not one of them supports a particular statement about that field, you can be damn sure it’s wrong. Which is why the IPCC reviews of the literature are right, and Cato’s adverts are bullshit.

Disclaimer: I don’t publish in the climate science literature either (it’s not my field). I’ve spent enough time hanging out with climate scientists to have a good feel for the science, but I’ll also get it wrong occasionally. If in doubt, check with a real expert.

I just spent the last two hours chewing the fat with Mark Klein at MIT and Mark Tovey at Carleton, talking about all sorts of ideas, but loosely focussed on how distributed collaborative modeling efforts can help address global change issues (e.g. climate, peak oil, sustainability).

MK has a project, Climate Interactive [update: Mark tells me I got the wrong project – it should be The Climate Collaboratorium; Climate Interactive is from a different group at MIT], which is exploring how climate simulation tools can be hooked up to discussions around decision making, which is one of the ideas we kicked around in our brainstorming sessions here.

MT has been exploring how you take ideas from distributed cognition and scale them up to much larger teams of people. He has put together a wonderful one-pager that summarized many interesting ideas on how mass collaboration can be applied in this space.

This conversation is going to keep me going for days on stuff to explore and blog about:

And lots of interesting ideas for new projects…

I originally wrote this as a response to a post on RealClimate on hypothesis testing.

I think one of the major challenges with public understanding of climate change is that most people have no idea of what climate scientists actually do. In the study I did last summer of the software development practices at the Hadley Centre, my original goal was to look just at the “software engineering” of climate simulation models – i.e. how the code is developed and tested. But the more time I spend with climate scientists, the more I’m fascinated by the kind of science they do, and the role of computational models within it.

The most striking observation I have is that climate scientists have a deep understanding of the fact that climate models are only approximations of earth system processes, and that most of their effort is devoted to improving our understanding of these processes (“All models are wrong, but some are useful” – George Box). They also intuitively understand the core ideas from general systems theory – that you can get good models of system-level processes even when many of the sub-systems are poorly understood, as long as you’re smart about choices of which approximations to use. The computational models have an interesting status in this endeavour: they seem to be used primarily for hypothesis testing, rather than for forecasting. A large part of the time, climate scientists are “tinkering” with the models, probing their weaknesses, measuring uncertainty, identifying which components contribute to errors, looking for ways to improve them, etc. But the public generally only sees the bit where the models are used to make long term IPCC-style predictions.

I never saw a scientist doing a single run of a model and comparing it against observations. The simplest use of models is to construct a “controlled experiment” by making a small change to the model (e.g. a potential improvement to how it implements some piece of the physics), comparing this against a control run (typically the previous run without the latest change), and comparing both runs against the observational data. In other words, there is a 3-way comparison: old model vs. new model vs. observational data, where it is explicitly acknowledged that there may be errors in any of the three. I also see more and more effort put into “ensembles” of various kinds: model intercomparison projects, perturbed physics ensembles, varied initial conditions, and so on. In this respect, the science seems to have changed (matured) a lot in the last few years, but that’s hard for me to verify.
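
Here’s a minimal sketch of that three-way comparison, just to make the shape of the experiment concrete. The data and the skill metric (a plain RMSE) are invented placeholders – real model evaluation uses many variables and far richer diagnostics – but the structure is the same: score both the control run and the modified run against the same observations:

```python
import math

# Sketch of the "controlled experiment": control run vs. new run vs. observations,
# with the explicit understanding that any of the three may be in error.
# All numbers below are made-up placeholders.

observations = [14.1, 14.3, 14.0, 14.6, 14.8]   # e.g. observed annual means
control_run  = [13.8, 14.0, 13.9, 14.2, 14.3]   # model before the candidate change
new_run      = [14.0, 14.2, 14.1, 14.5, 14.6]   # model with the candidate change

def rmse(model, obs):
    """Root-mean-square error of a model run against observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

skill_control = rmse(control_run, observations)
skill_new = rmse(new_run, observations)

print(f"control vs obs: RMSE = {skill_control:.3f}")
print(f"new run vs obs: RMSE = {skill_new:.3f}")
if skill_new < skill_control:
    print("The candidate change improves the match to observations (on this metric).")
else:
    print("No improvement -- or the metric, the change, or the observations need a closer look.")
```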

It’s a pretty sophisticated science. I would suggest that the general public might be much better served by good explanations of how this science works, rather than with explanations of the physics and mathematics of climate systems.

Some of the reasons why so many people still don’t get it:

  1. Because their salaries depend on them not understanding. Applies to anyone working for the big oil companies, and apparently to a handful of “scientists” funded by them.
  2. Because they cannot distinguish between pseudo-science and science. Seems to apply to some journalists, unfortunately.
  3. Because the dynamics of complex systems are inherently hard to understand. Shown to be a major factor by the experiments Sterman did on MIT students (there’s a small sketch of this effect just after the list).
  4. Because all of the proposed solutions are incompatible with their ideology. Applies to most rightwing political parties, unfortunately.
  5. Because scientists are poor communicators. Or, more precisely, few scientists can explain their work well to non-scientists.
  6. Because they believe their god(s) would never let it happen. And there’s also a lunatic subgroup who welcome it as part of god’s plan (see rapture).
  7. Because most of the key ideas are counter-intuitive. After all, a couple of degrees warmer is too small to feel.
  8. Because the truth is just too scary. There seem to be plenty of people who accept that it’s happening, but don’t want to know any more because the whole thing is just too huge to think about.
  9. Because they’ve learned that anyone who claims the end of the world is coming must be a crackpot. Although these days, I suspect this one is just a rhetorical device used by people in groups (1) and (4), rather than a genuine reason.
  10. Because most of the people they talk to, and most of the stuff they read in the media, also suffer from some of the above. Selective attention allows people to ignore anything that challenges their worldview.
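
Sterman’s experiments (reason 3) were about stock-and-flow dynamics, and the counter-intuitive point in reason 7 is easy to see with a toy calculation. This is the bathtub analogy, not a climate model, and every number in it is a made-up round figure: it just shows that holding emissions constant does not hold the atmospheric stock constant:

```python
# Toy stock-and-flow sketch of why "stabilizing emissions" is not the same as
# "stabilizing concentrations". All numbers are made-up round figures -- this is
# the bathtub analogy, not a climate model.

stock = 800.0          # current atmospheric carbon stock (arbitrary units)
inflow = 10.0          # annual emissions, held constant ("we stopped the growth!")
outflow_rate = 0.005   # fraction of the stock absorbed by sinks each year

for year in range(0, 101, 20):
    print(f"year {year:3d}: atmospheric stock = {stock:7.1f}")
    for _ in range(20):
        stock += inflow - outflow_rate * stock   # stock keeps rising while inflow > outflow
```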

But I fear the most insidious reason is the idea that changing your lightbulbs and sorting your recyclables counts as “doing your bit”. This idea allows you to stop thinking about it, and hence to ignore just how serious a problem it really is.

Greg reminded me the other day about Jeannette Wing’s writings about “computational thinking”. Is this what I have in mind when I talk about the contribution software engineers can make in tackling the climate crisis? Well, yes and no. I think that this way of thinking about problems is very important, and corresponds with my intuition that learning how to program changes how you think.

But ultimately, I found Jeannette’s description of computational thinking to be very disappointing, because she concentrates too much on algorithmics and machine metaphors. This reminds me of the model of the mind as a computer, used by cognitive scientists – it’s an interesting perspective that opens up new research directions, but is ultimately limiting because it leads to the problem of disembodied cognition: treating the mind as independent of its context. I think software engineering (or at least systems analysis) adds something else, more akin to systems thinking. It’s the ability to analyse the interconnectedness of multiple systems. The ability to reason about multiple stakeholders and their interdependencies (where most of the actors are not computational devices!). And the rich set of abstractions we use to think about the structure, behaviour and function of very complex systems-of-systems. What I have in mind lies somewhere in the union of computational thinking and systems thinking.

How about “computational systems-of-systems thinking”?