A while back I posted the introduction to a research proposal in climate change informatics. I also posted a list of potential research areas, and a set of criteria by which we might judge climate informatics tools. But I didn’t say what kinds of things we might want climate informatics tools to do. Here’s my first attempt, based on a slide I used at the end of my talk on usable climate science:

What do we want the tools to support?

What I was trying to lay out on this slide was a wide range of possible activities for which we could build software tools, combining good visualizations, collaborative support, and compelling user interface design. If we are to improve the quality of the public discourse on climate change, and support the kind of collective decision making that leads to effective action, we need better tools for all four of these areas:

  • Improve the public understanding of the basic science. Much of this is laid out in the IPCC reports, but to most people these are “dead tree science” – lots of thick books that very few people will read. So, how about some dynamic, elegant and cool tools to convey:
    • The difference between emissions and concentrations.
    • The various sources of emissions and how we know about them from detection/attribution studies.
    • The impacts of global warming on your part of the world – health, food and water, extreme weather events, etc.
    • The various mitigation strategies we have available, and what we know about the cost and effectiveness of each.
  • Achieve a better understanding of how the science works, to allow people to evaluate the nature of the evidence about climate change:
    • How science works, as a process of discovery, including how scientists develop theories, and how they correct mistakes.
    • What climate models are and how they are used to improve our understanding of climate processes.
    • How the peer-review process works, and why it is important, both as a filter for poor research and as a way of assessing the credentials of scientists.
    • What it means to be an expert in a particular field, why expertise matters, and why expertise in one area of science doesn’t necessarily mean expertise in another.
  • Tools to support critical thinking, to allow people to analyze the situation for themselves:
    • The importance of linking claims to sources of evidence, and the use of multiple sources of evidence to test a claim.
    • How to assess the credibility of a particular claim, and the credibility of its source (desperately needed for appropriate filtering of ‘found’ information on the internet).
    • Systems Thinking – because reductionist approaches won’t help. People need to be able to recognize and understand whole systems and the dynamics of systems-of-systems.
    • Understanding risk – because the inability to assess risk factors is a major barrier to effective action.
    • Identifying the operation of vested interests. Because much of the public discourse isn’t about science or politics. It’s about people with vested interests attempting to protect those interests, often at the expense of the rest of society.
  • And finally, none of the above makes any difference if we don’t also provide tools to support effective action:
    • How to prioritize between short-term and long-term goals.
    • How to identify which kinds of personal action are important and effective.
    • How to improve the quality of policy-making, so that policy choices are linked to the scientific evidence.
    • How to support consensus building and democratic action for collective decision making, at the level of communities, cities, and nations, and globally.
    • Tools to monitor effectiveness of policies and practices once they are implemented.

A reader writes to me from New Zealand, arguing that climate science isn’t a science at all because there is no possibility of conducting experiments. This misconception appears to be common, even among some distinguished scientists, who presumably have never taken the time to read many published papers in climatology. The misconception arises because people assume that climate science is all about predicting future climate change, and that, because such predictions extend decades or centuries into the future and we only have one planet to work with, we can’t check whether they are correct until it’s too late to be useful.

In fact, predictions of future climate are really only a by-product of climate science. The science itself concentrates on improving our understanding of the processes that shape climate, by analyzing observations of past and present climate, and testing how well we understand them. For example, detection/attribution studies focus on the detection of changes in climate that are outside the bounds of natural variability (using statistical techniques), and on determining how much of the change can be attributed to each of a number of possible forcings (e.g. changes in greenhouse gases, land use, aerosols, solar variation, etc.). Like any science, the attribution is done by creating hypotheses about possible effects of each forcing, and then testing those hypotheses. Such hypotheses can be tested by looking for contradictory evidence (e.g. other episodes in the past where the forcing was present or absent, to test how well the hypothesis explains these too). They can also be tested by encoding each hypothesis in a climate model, and checking how well it simulates the observed data.
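To make the “detection” half of this concrete, here is a minimal sketch of the underlying statistical idea, using entirely synthetic data: compare an observed trend against the distribution of trends that internal (natural) variability alone could produce. Real detection/attribution studies use far more sophisticated methods and real control-run ensembles; every number and function name below is an assumption made up for illustration.

```python
# Toy "detection" test: is an observed warming trend outside the range of trends
# produced by natural variability alone? All data here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(42)

def linear_trend(series):
    """Least-squares slope of a time series (units per timestep)."""
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]

# Synthetic "control run" segments: unforced variability, no external forcing.
n_segments, years = 1000, 50
control_trends = np.array([
    linear_trend(rng.normal(0.0, 0.15, years))  # stand-in for internal variability
    for _ in range(n_segments)
])

# Synthetic "observations": the same variability plus an imposed 0.02 degC/yr warming.
observed = rng.normal(0.0, 0.15, years) + 0.02 * np.arange(years)
obs_trend = linear_trend(observed)

# Detection: how often does variability alone produce a trend at least this large?
p_value = np.mean(np.abs(control_trends) >= abs(obs_trend))
print(f"observed trend: {obs_trend:.4f} degC/yr, p-value vs natural variability: {p_value:.3f}")
```

The attribution step then asks which combination of forcings best explains a detected change, which is exactly where the hypothesis testing and model experiments described above come in.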

I’m not a climate modeler, but I have conducted anthropological studies of how climate modelers work. Climate models are developed slowly and carefully over many years, as scientific instruments. One of the most striking aspects of climate model development is that it is an experimental science in the strongest sense. What do I mean?

Well, a climate model is a detailed theory of some subset of the earth’s physical processes. Like all theories, it is a simplification that focuses on those processes that are salient to a particular set of scientific questions, and approximates or ignores those processes that are less salient. Climate modelers use their models as experimental instruments. They compare a model run with the observational record for some relevant historical period. They then come up with a hypothesis to explain any divergences between the run and the observational record, and make a small improvement to the model that the hypothesis predicts will reduce the divergence. They then run an experiment in which the old version of the model acts as a control, and the new version is the experimental case. By comparing the two runs with the observational record, they determine whether the predicted improvement was achieved (and whether the change messed anything else up in the process). After a series of such experiments, the modelers will eventually either accept the change as an improvement to be permanently incorporated into the model code, or discard it because the experiments failed (i.e. they failed to give the expected improvement). By doing this day after day, year after year, the models get steadily more sophisticated, and steadily better at simulating real climatic processes.
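The control-versus-experiment decision at the heart of this workflow can be sketched in a few lines. The sketch below is my own illustration, not the procedure of any particular modeling center: it uses synthetic data and a single RMSE skill score as a stand-in for the many diagnostics modelers actually examine, and the function names are hypothetical.

```python
# Minimal sketch of the control-vs-experiment comparison described above.
# RMSE against observations is a stand-in for a real suite of model diagnostics.
import numpy as np

def rmse(simulated, observed):
    """Root-mean-square error between a model run and the observational record."""
    return float(np.sqrt(np.mean((np.asarray(simulated) - np.asarray(observed)) ** 2)))

def evaluate_change(control_run, experimental_run, observations, tolerance=0.0):
    """Accept a proposed model change only if it reduces divergence from observations."""
    control_error = rmse(control_run, observations)
    experiment_error = rmse(experimental_run, observations)
    improved = experiment_error < control_error - tolerance
    return improved, control_error, experiment_error

# Synthetic example: the "experimental" version tracks the observations slightly better.
obs = np.sin(np.linspace(0, 10, 200))
control = obs + np.random.default_rng(0).normal(0, 0.3, 200)
experiment = obs + np.random.default_rng(1).normal(0, 0.2, 200)

accepted, err_old, err_new = evaluate_change(control, experiment, obs)
print(f"control RMSE={err_old:.3f}, experiment RMSE={err_new:.3f}, accept change: {accepted}")
```

In practice the comparison spans many variables, regions, and time periods, and a change that improves one diagnostic while degrading others prompts further hypotheses rather than a simple accept/reject.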

This experimental approach has another interesting effect: the software appears to be tested far more thoroughly than most commercial software. Whether this actually delivers higher-quality code is an interesting question; either way, the approach goes well beyond typical industry practice for software regression testing.