Okay, some of the tools produced by the Essence project are pretty cool:

http://globalsensemaking.wik.is/ESSENCE/Tools

I especially like Debategraph, but mainly for its aesthetic properties when you pull the graph around, and the ability to zoom in to nodes by clicking, rather than how it’s actually being used (mapping debates about whether climate change is happening? That’s so last century). Could we use a tool like this for playing with i* models?

I originally wrote this as a response to a post on RealClimate on hypothesis testing.

I think one of the major challenges with public understanding of climate change is that most people have no idea of what climate scientists actually do. In the study I did last summer of the software development practices at the Hadley Centre, my original goal was to look just at the “software engineering” of climate simulation models – i.e. how the code is developed and tested. But the more time I spend with climate scientists, the more I’m fascinated by the kind of science they do, and the role of computational models within it.

The most striking observation I have is that climate scientists have a deep understanding of the fact that climate models are only approximations of earth system processes, and that most of their effort is devoted to improving our understanding of these processes (“All models are wrong, but some are useful” – George Box). They also intuitively understand the core ideas from general systems theory – that you can get good models of system-level processes even when many of the sub-systems are poorly understood, as long as you’re smart about choices of which approximations to use. The computational models have an interesting status in this endeavour: they seem to be used primarily for hypothesis testing, rather than for forecasting. A large part of the time, climate scientists are “tinkering” with the models, probing their weaknesses, measuring uncertainty, identifying which components contribute to errors, looking for ways to improve them, etc. But the public generally only sees the bit where the models are used to make long term IPCC-style predictions.

I never saw a scientist doing a single run of a model and comparing it against observations. The simplest use of models is to construct a “controlled experiment” by making a small change to the model (e.g. a potential improvement to how it implements some piece of the physics), comparing this against a control run (typically the previous run without the latest change), and comparing both runs against the observational data. In other words, there is a 3-way comparison: old model vs. new model vs. observational data, where it is explicitly acknowledged that there may be errors in any of the three. I also see more and more effort put into “ensembles” of various kinds: model intercomparison projects, perturbed physics ensembles, varied initial conditions, and so on. In this respect, the science seems to have changed (matured) a lot in the last few years, but that’s hard for me to verify.
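That three-way comparison lends itself to a simple sketch. Here is a minimal Python illustration (the data and numbers are entirely made up for the example): score both the control run and the experimental run against observations with a single metric, RMSE, and compare.

```python
import math

def rmse(model, obs):
    """Root-mean-square error of a model time series against observations."""
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

# Hypothetical annual-mean temperature anomalies (degrees C), illustration only.
obs        = [0.10, 0.15, 0.22, 0.18, 0.30]
control    = [0.05, 0.20, 0.15, 0.25, 0.20]  # previous model version (control run)
experiment = [0.08, 0.17, 0.20, 0.21, 0.27]  # same model with the candidate change

err_control = rmse(control, obs)
err_experiment = rmse(experiment, obs)

print(f"control RMSE:    {err_control:.3f}")
print(f"experiment RMSE: {err_experiment:.3f}")

# A lower error for the experiment suggests the change is an improvement --
# but the comparison is genuinely three-way: errors may lie in the old model,
# the new model, or the observations themselves.
if err_experiment < err_control:
    print("candidate change improves skill on this metric")
```

Of course, a real model evaluation uses many diagnostics over full spatial fields, not one scalar metric over a toy time series, but the basic logic of the controlled experiment is the same.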

It’s a pretty sophisticated science. I would suggest that the general public might be much better served by good explanations of how this science works, rather than with explanations of the physics and mathematics of climate systems.

I was recently asked (by a skeptic) whether I believed in global warming. It struck me that the very question is wrong-headed. Global warming isn’t a matter for belief. It’s not a religion. The real question is whether you understand the available evidence, and whether that evidence supports the theory. When we start talking about what we believe, we’re not doing science any more – we’re into ideology and pseudo-science.

Here’s the difference. Scientists proceed by analyzing all the available data, weighing it up, investigating its validity, and evaluating which theory best explains the evidence. It is a community endeavour, with checks and balances such as the peer review process. It is imperfect (because even scientists can make mistakes) but it is also self-correcting (although sometimes it takes a long time to discover mistakes).

Ideology starts with a belief, and then selects just that evidence that reinforces the belief. So if a blog post (or newspaper column) provides a few isolated data points to construct an entire argument about climate change, the chances are it’s ideology rather than science. Ideologists cherry-pick bits of evidence to reinforce an argument, rather than weighing up all the evidence. George Will’s recent column in the Washington Post is a classic example. When you look at all the data, his arguments just don’t stand up.

The deniers don’t do science. There is not one peer-reviewed publication in the field of climate science that sheds any doubt whatsoever on the theory of anthropogenic global warming. If the deniers were doing good science, they would be able to publish it. They don’t. They send it to the media. They are most definitely not scientists.

The key distinction between science and ideology is how you engage with the data.

  1. Because their salaries depend on them not understanding. Applies to anyone working for the big oil companies, and apparently to a handful of “scientists” funded by them.
  2. Because they cannot distinguish between pseudo-science and science. Seems to apply to some journalists, unfortunately.
  3. Because the dynamics of complex systems are inherently hard to understand. Shown to be a major factor by the experiments Sterman did on MIT students.
  4. Because all of the proposed solutions are incompatible with their ideology. Applies to most rightwing political parties, unfortunately.
  5. Because scientists are poor communicators. Or, more precisely, few scientists can explain their work well to non-scientists.
  6. Because they believe their god(s) would never let it happen. And there’s also a lunatic subgroup who welcome it as part of god’s plan (see rapture).
  7. Because most of the key ideas are counter-intuitive. After all, a couple of degrees warmer is too small to feel.
  8. Because the truth is just too scary. There seem to be plenty of people who accept that it’s happening, but don’t want to know any more because the whole thing is just too huge to think about.
  9. Because they’ve learned that anyone who claims the end of the world is coming must be a crackpot. Although these days, I suspect this one is just a rhetorical device used by people in groups (1) and (4), rather than a genuine reason.
  10. Because most of the people they talk to, and most of the stuff they read in the media also suffers from some of the above. Selective attention allows people to ignore anything that challenges their worldview.

But I fear the most insidious is because people think that changing their lightbulbs and sorting their recyclables counts as “doing your bit”. This idea allows you to stop thinking about it, and hence ignore just how serious a problem it really is.

We had a discussion today with the grad students taking my class on empirical research methods, on the role of blogging by researchers. Some students thought that it was a bad idea to post their research ideas on their blogs, because other people might steal them. This is, of course, a perennial fear amongst grad students – that someone else will do the same research and publish it first. I argued strongly that it doesn’t happen, for two reasons:

  1. the idea is only a tiny part of the research – it’s what you do with the idea that really matters. Bill Buxton has a whole talk on this, the summary of which is: the worst thing in the world is a precious idea; the worst person to have on your team is someone who thinks his idea is precious; good ideas are cheap, they are not precious; the key is not to come up with ideas but to cultivate the adoption of ideas.
  2. even if someone else works on the same idea, they will approach it in different way, and both projects will be a contribution to knowledge (and therefore be worthy of publication).

After the class, Simon sent me a pointer to Michael Nielsen’s blog post on the importance of scientists sharing their ideas via blogs. It’s great reading.

Note: I’m particularly chuffed about the relevance of Nielsen’s post to climate science, as the Navier-Stokes equations he mentions in his example lie at the heart of climate simulation models.
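For reference (my gloss, not part of Nielsen’s post): in their incompressible form, the Navier-Stokes equations that govern fluid flow are

```latex
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0,
```

where **u** is the velocity field, *p* the pressure, ρ the density, ν the kinematic viscosity, and **f** the body forces – in atmosphere and ocean models, chiefly gravity and the Coriolis force. The dynamical core of a climate model solves discretized variants of these equations on a rotating sphere.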

In my first post, I said that the climate crisis will make the current financial turmoil look like a walk in the park. Several people thought that the line was too strong for a blurb advertising a session at a conference such as ICSE. They’re probably right, but only because it might serve to distract from the discussions around research ideas that I want to have.

But I stand by the observation that the whole unsustainability of industrialized capitalist economies is a much bigger problem than the credit crunch, and will lead to a much bigger crash when it all falls apart. As usual, Joe Romm can put it much better than I can: Is the global economy a Ponzi scheme, are we all Bernie Madoffs, and what comes next?

Next month, I’ll be attending the European Geosciences Union’s General Assembly, in Austria. It will be my first trip to a major geosciences conference, and I’m looking forward to rubbing shoulders with thousands of geoscientists.

My colleague, Tim, will be presenting a poster in the Climate Prediction: Models, Diagnostics, and Uncertainty Analysis session on the Thursday, and I’ll be presenting a talk on the last day in the session on Earth System Modeling: Strategies and Software. My talk is entitled Are Earth System model software engineering practices fit for purpose? A case study.

While I’m there, I’ll also be taking in the Ensembles workshop that Tim is organising, and attending some parts of the Seamless Assessment session, to catch up with more colleagues from the Hadley Centre. Sometime soon I’ll write a blog post on what ensembles and seamless assessment are all about (for now, it will just have to sound mysterious…).

The rest of the time, I plan to talk to as many climate modellers as I can from other centres, as part of my quest for comparison studies for the one we did at the Hadley Centre.

Many years ago, Dave Parnas wrote a fascinating essay on software aging, in which he compares old software with old people, pointing out that software gets frail, less able to do things than when it was young, and gets more prone to disease and obesity (actually, I can’t remember whether he mentioned obesity, but you get the idea – software bloat). At some point we’re better off retiring the old system rather than trying to keep updating it.
Well, this quote by Thomas Friedman that showed up on Gristmill over the weekend made me think more about how our entire economic system is in the same boat. We’ve got to the point where we can’t patch it any longer without just making it worse. Is it time for industrialization 2.0? Or maybe it should be globalization 2.0?
The question is, do any of our political leaders understand this? No sign of any enlightenment in Canada’s House of Commons, I’m afraid.

09. March 2009 · Categories: blogging

Okay, so now I’ve got the difficult second post out of the way, I’m exploring what else WordPress can do. First up: themes. Excuse me while I try out some themes to see what suits me.

Greg reminded me the other day about Jeannette Wing’s writings about “computational thinking”. Is this what I have in mind when I talk about the contribution software engineers can make in tackling the climate crisis? Well, yes and no. I think that this way of thinking about problems is very important, and corresponds with my intuition that learning how to program changes how you think.

But ultimately, I found Jeannette’s description of computational thinking to be very disappointing, because she concentrates too much on algorithmics and machine metaphors. This reminds me of the model of the mind as a computer, used by cognitive scientists – it’s an interesting perspective that opens up new research directions, but is ultimately limiting because it leads to the problem of disembodied cognition: treating the mind as independent from its context. I think software engineering (or at least systems analysis) adds something else, more akin to systems thinking: the ability to analyse the interconnectedness of multiple systems; the ability to reason about multiple stakeholders and their interdependencies (where most of the actors are not computational devices!); and the rich set of abstractions we use to think about the structure, behaviour and function of very complex systems-of-systems. What I have in mind lies somewhere in the union of computational thinking and systems thinking.

How about “computational systems-of-systems thinking”?

I’ve been pondering starting a blog for way too long. Time for action. To explain what I think I’ll be blogging about, I put together the following blurb, for a conference session at the International Conference on Software Engineering. I’ll probably end up revising it for the conference, but it will do for a kickoff to the blog:

This year, the ICSE organisers have worked hard to make the conference “greener” – to reduce our impact on the environment. Partly this is in response to the growing worldwide awareness that we need to take more care of the natural environment. But partly it is driven by a deeper and more urgent concern. During this century, we will have to face up to a crisis that will make the current economic turmoil look like a walk in the park. Climate change is accelerating, outpacing the most pessimistic predictions of climate scientists. Its effects will touch everything, including the flooding of low-lying lands and coastal cities, the disruption of fresh water supplies for most of the world, the loss of agricultural lands, more frequent and severe extreme weather events, mass extinctions, and the destruction of entire ecosystems. And there are no easy solutions. We need concerted systematic change in how we live, to stabilize the concentration of greenhouse gases that drive climate change. Not to give up the conveniences of modern life, but to re-engineer them so that we no longer depend on fossil fuels to power our lives. The challenge is massive and urgent – a planetary emergency. The type of emergency that requires all hands on deck. Scientists, engineers, policymakers, professionals, no matter what their discipline, need to ask how their skills and experience can contribute.

We, as software engineering researchers and software practitioners have many important roles to play. Software is part of the problem, as every new killer application drives up our demand for more energy. But it is also a major part of the solution. Our information systems help provide the data we need to support intelligent decision making, from individuals trying to reduce their energy consumption, to policymakers trying to design effective governmental policies. Our control systems allow us to make smarter use of the available power, and provide the adaptability and reliability to power our technological infrastructure in the face of a more diverse set of renewable energy sources. Less obviously, the software engineering community has many other contributions to make. We have developed practices and tools to analyze, build and evolve some of the most complex socio-technical systems ever created, and to coordinate the efforts of large teams of engineers. We have developed abstractions that help us to understand complex systems, to describe their structure and behaviour, and to understand the effects of change on those systems. These tools and practices are likely to be useful in our struggle to address the climate crisis, often in strange and surprising ways. For example, can we apply the principles of information hiding and modularity to our attempts to develop coordinated solutions to climate change? What is the appropriate architectural pattern for an integrated set of climate policies? How can we model the problem requirements so that the stakeholders can understand them? How do we debug strategies for emissions reduction when they don’t work out as intended?

This conference session is intended to kick start a discussion about the contributions that software engineering can make to tackling the climate crisis. Our aim is to build a community of concerned professionals, and find new ways to apply our skills and experience to the problem. We will attempt to map out a set of ideas for action, and identify potential roadblocks. We will start to build a broad research agenda, to capture the potential contributions of software engineering research. The session will begin with a short summary of the latest lessons from climate science, and a concrete set of examples of existing software engineering research efforts applied to climate change. We will include an open discussion, and structured brainstorming sessions to map out an agenda for action. We invite everyone to come to the session, and take up this challenge.

Okay, so how does that sound as a call to arms?