Nearly everything we ever do depends on vast social and technical infrastructures, which, when they work, are largely invisible. Science is no exception – modern science is only possible because we have built the infrastructure to support it: classification systems, international standards, peer review, funding agencies, and, most importantly, systems for the collection and curation of vast quantities of data about the world. Star and Ruhleder point out that the infrastructure supporting scientific work is embedded inside other social and technical systems, and becomes invisible once we come to rely on it. Indeed, the process of learning how to make use of a particular infrastructure is, to a large extent, what defines membership in a particular community of practice. They also observe that our infrastructures are closely intertwined with our conventions and standards. As a simple example, they point to the QWERTY keyboard, which, despite its limitations, shapes much of our interaction with computers (even the design of office furniture!), such that learning to use the keyboard is a crucial part of learning to use a computer. And once you can type, you cease to be aware of the keyboard itself, except when it breaks down. This invisibility-in-use is similar to Heidegger’s notion of tools that are ready-to-hand; the key difference is that tools are local to the user, while infrastructures have vast spatial and/or temporal extent.

A crucial point is that what counts as infrastructure depends on the nature of the work that it supports. What is invisible infrastructure for one community might not be for another. The internet is a good example – most users just accept that it exists and make use of it, without asking how it works. To computer scientists, however, a detailed understanding of its inner workings is vital. A refusal to treat the internet as invisible infrastructure is a condition of entry into certain geek cultures.

In their book Sorting Things Out, Bowker and Star introduced the term infrastructural inversion for the process of focusing explicitly on the infrastructure itself, in order to expose and study its inner workings. It’s a rather cumbersome phrase for a very interesting move, kind of like a switch of figure and ground. In their case, infrastructural inversion is a research strategy that allows them to explore how things like classification systems and standards are embedded in so much of scientific practice, and to understand how these things evolve along with the science itself.

Paul Edwards applies infrastructural inversion to climate science in his book A Vast Machine, where he examines the history of attempts by meteorologists to create a system for collecting global weather data, and for sharing that data with the international weather forecasting community. He points out that climate scientists also come to rely on that same infrastructure, but that it doesn’t serve their needs so well, and hence there is a difference between weather data and climate data. As an example, meteorologists tolerate changes in the nature and location of a particular surface temperature station over time, because they are only interested in forecasting over the short term (days or weeks). But to a climate scientist trying to study long-term trends in climate, such changes (known as inhomogeneities) are crucial. In this case, the infrastructure breaks down, as it fails to serve the needs of this particular community of scientists.
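To make the inhomogeneity problem concrete, here is a toy sketch (not any real homogenization algorithm – the station, the numbers, and the breakpoint test are all invented for illustration): a synthetic station record with a small true warming trend picks up a spurious step when the station is "moved", which barely matters for day-to-day forecasting but badly biases a naive long-term trend fit until the step is detected and removed.

```python
import numpy as np

# Toy illustration of an inhomogeneity (invented numbers, not a real
# homogenization algorithm): 50 years of monthly anomalies with a small
# true warming trend, plus an artificial +0.8 C step halfway through,
# as if the station had been moved to a warmer site.
rng = np.random.default_rng(42)
n = 600
t = np.arange(n)
true_trend = 0.01 / 12                      # 0.01 C/year, in C/month
series = true_trend * t + rng.normal(0.0, 0.5, n)
series[n // 2:] += 0.8                      # the inhomogeneity

# For short-term forecasting the step is invisible noise, but a naive
# long-term trend fit over the whole record is badly biased by it:
biased_trend = np.polyfit(t, series, 1)[0] * 12   # C/year

# Crude homogenization: find the breakpoint that maximizes the jump in
# means, estimate the step from windows either side, and remove it.
scores = [abs(series[:i].mean() - series[i:].mean()) for i in range(24, n - 24)]
bp = 24 + int(np.argmax(scores))
step_est = series[bp:bp + 60].mean() - series[bp - 60:bp].mean()
adjusted = series.copy()
adjusted[bp:] -= step_est
corrected_trend = np.polyfit(t, adjusted, 1)[0] * 12  # much closer to the true 0.01 C/year
```

Real homogenization methods (e.g. pairwise comparison against neighbouring stations) are far more sophisticated, but the figure-ground move is the same: the analyst has to stop treating the station record as given data and start asking how it was produced.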

Hence, as Edwards points out, climate scientists also perform infrastructural inversion regularly themselves, as they dive into the details of the data collection system, trying to find and correct inhomogeneities. In the process, almost any aspect of how this vast infrastructure works might become important, revealing clues about which parts of the data can be used and which parts must be reconsidered. One of the key messages in Paul’s book is that the usual distinction between data and models is now almost completely irrelevant in meteorology and climate science. The data collection depends on a vast array of models to turn raw instrumental readings into useful data, while the models themselves can be thought of as sophisticated data reconstructions. Even GCMs, which now have the ability to do data assimilation and re-analysis, can be thought of as large amounts of data made executable through a set of equations that define spatial and temporal relationships within that data.

As an example, Edwards describes the analysis performed by Christy and Spencer at UAH on the MSU satellite data, from which they extracted measurements of the temperature of the upper atmosphere. In various congressional hearings, Spencer and Christy frequently touted their work, which showed a slight cooling trend in the upper atmosphere, as superior to other work that showed a warming trend, because they were able to “actually measure the temperature of the free atmosphere” whereas other work was merely “estimation” from models (Edwards, p414). However, this completely neglects the fact that the MSU instruments don’t measure the temperature of the lower troposphere directly at all; they measure radiance at the top of the atmosphere. Temperature readings for the lower troposphere are constructed from these readings via a complex set of models that take into account the chemical composition of the atmosphere, the trajectory of the satellite, and the position of the sun, among other factors. More importantly, a series of corrections to these models over several years gradually removed the apparent cooling trend, finally revealing a warming trend, as predicted by the theory (see Karl et al for a more complete account). The key point is that the data needed for meteorology and climate science is so vast and so complex that it’s no longer possible to disentangle models from data. The data depends on models to make it useful, and the models are sophisticated tools for turning one kind of data into another.
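The radiance-to-temperature step can be illustrated with a toy retrieval (the weighting functions and all numbers here are invented; the real MSU processing is vastly more complex): the satellite records radiances that are weighted averages of temperature across many atmospheric layers, and a “measured” layer temperature only emerges by inverting an assumed forward model – so when the model’s assumptions are corrected, the “measurement” changes even though the raw radiances do not.

```python
import numpy as np

# Toy satellite retrieval. All numbers are invented for illustration;
# the real MSU processing involves far more physics (atmospheric
# composition, orbital drift, solar position, calibration, ...).

pressures = np.array([1000., 850., 700., 500., 300., 100.])  # hPa
true_T = np.array([288., 280., 272., 255., 230., 210.])      # layer temps (K)

# Each channel's radiance is (roughly) a weighted average of layer
# temperatures; the weights are the channel's "weighting function".
W = np.array([
    [0.35, 0.30, 0.20, 0.10, 0.04, 0.01],   # lower-troposphere channel
    [0.05, 0.10, 0.20, 0.35, 0.20, 0.10],   # mid-troposphere channel
])

rng = np.random.default_rng(0)
radiance = W @ true_T + rng.normal(0.0, 0.1, size=2)  # what's actually recorded

# Retrieval: assume the profile is linear in log-pressure,
# T(p) = T0 + g * log(p/1000), and solve the 2x2 system for (T0, g).
# T0 is then the "measured" near-surface temperature.
basis = np.column_stack([np.ones(6), np.log(pressures / 1000.0)])

def retrieve(weights, rad):
    T0, g = np.linalg.solve(weights @ basis, rad)
    return T0

T0_v1 = retrieve(W, radiance)

# Now suppose a reprocessing finds the channel-0 weighting function was
# slightly wrong (e.g. an orbital-decay correction). Same raw radiances,
# different "measurement":
W_corrected = W.copy()
W_corrected[0] = [0.30, 0.30, 0.22, 0.12, 0.05, 0.01]
T0_v2 = retrieve(W_corrected, radiance)
```

The successive corrections to the UAH trend followed exactly this pattern: fixes to the forward model changed the retrieved temperatures, and hence the trend, without any change to the underlying radiance record.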

The vast infrastructure for collecting and sharing data has become largely invisible to many working meteorologists, but must be continually inverted by climate scientists in order to use it for analysis of longer-term trends. The project to develop a new global surface temperature record that I described yesterday is one example of such inversion – it will involve a painstaking process of search and rescue on original data records dating back more than a century, because of the need for a more complete, higher-resolution temperature record than is currently available.

So far, I’ve only described constructive uses of infrastructural inversion, performed in the pursuit of science, to improve our understanding of how things work, and to allow us to re-adapt an infrastructure for new purposes. But there’s another use of infrastructural inversion, applied as a rhetorical technique to undermine scientific research. It has been applied increasingly in recent years in an attempt to slow down progress on enacting climate change mitigation policies, by sowing doubt and confusion about the validity of our knowledge of climate change. The technique is to dig down into the vast infrastructure that supports climate science, identify weaknesses in this infrastructure, and tout them as reasons to mistrust scientists’ current understanding of the climate system. And it’s an easy game to play, for two reasons: (1) all infrastructures are constructed through a series of compromises (e.g. standards are never followed exactly), and communities of practice develop workarounds that naturally correct for infrastructural weaknesses; and (2) as described above, the data collection for weather forecasting frequently does fail to serve the needs of climate scientists. Climate scientists are painfully aware of these infrastructural weaknesses and have to deal with them every day, while those playing this rhetorical game ignore this, and pretend instead that there’s a vast conspiracy to lie about the science.

The problem is that, at first sight, many of these attempts at infrastructural inversion look like honest citizen-science attempts to increase transparency and improve the quality of the science (e.g. see Edwards, p421-427). For example, Anthony Watts’ SurfaceStations.org project is an attempt to document the site details of a large number of surface weather stations, to understand how problems in their siting (e.g. growth of surrounding buildings) and placement of instruments might create biases in the long-term trends constructed from their data. At face value, this looks like a valuable citizen-science exercise in infrastructural inversion. However, Watts wraps the whole exercise in the rhetoric of conspiracy theory, frequently claiming that climate scientists are dishonest, that they are covering up these problems, and that climate change itself is a myth. This not only ignores the fact that climate scientists themselves routinely examine such weaknesses in the temperature record, but also has the effect of biasing the entire exercise, as Watts’ followers are increasingly motivated to report only those problems that would cause a warming bias, and ignore those that do not. Recent independent studies that have examined the data collected by the SurfaceStations.org project demonstrate that the corrections demanded by Watts make no significant difference to the long-term trends.

The recent project launched by the UK Met Office might look to many people like a desperate response to “Climategate”, a mea culpa, an attempt to claw back some credibility. But, put into the context of the continual infrastructural inversion performed by climate scientists throughout the history of the field, it is nothing of the sort. It’s just one more in a long series of efforts to build better and more complete datasets that allow climate scientists to answer new research questions. This is what climate scientists do all the time. In this case, it is an attempt to move from monthly to daily temperature records, to improve our ability to understand the regional effects of climate change, and especially to address the growing need to understand the effect of climate change on extreme weather events (which are largely invisible in monthly averages).

So, infrastructural inversion is a fascinating process, used by at least three different groups:

  • Researchers who study scientific work (e.g. Star, Bowker, Edwards) use it to understand the interplay between the infrastructure and the scientific work that it supports;
  • Climate scientists use it all the time to analyze and improve the weather data collection systems that they need to understand longer term climate trends;
  • Climate change denialists use it to sow doubt and confusion about climate science, to further a political agenda of delaying regulation of carbon emissions.

And unfortunately, sorting out constructive uses of infrastructural inversion from its abuses is hard, because in all cases, it looks like legitimate questions are being asked.

Oh, and I can’t recommend Edwards’ book highly enough. As Myles Allen writes in his review: “A Vast Machine [...] should be compulsory reading for anyone who now feels empowered to pontificate on how climate science should be done.”


4 Comments

  1. Nice article, Steve. And another addition to my reading TODO.

  2. As a climate scientist, I find the book by Paul Edwards very helpful in explaining our infrastructure. Thank you very much, Steve, for clarifying the concept of “infrastructural inversion”, which I did not understand well when I read the book.

    I sense a little problem in both your and his expressions about models. I understand what you mean when you say that every observation involves some kind of model. In such contexts, the concept of “model” covers statistical models and radiative transfer models. On the other hand, in the contexts where you and Edwards discuss models, the focus is usually on physical-science-based prognostic models. I once wanted to say that we need to distinguish two different meanings of “model”. But now I think it is more likely to be a continuum. Still, I think we should explicitly mention what part of the continuum we focus on.

    (Another instance of this issue is how to answer the question of whether CRU has developed a climate model.)

  3. Got back from a trip to Cambridge to find my copy waiting for me (along with the newer edition of Weart, which I thought I should read given that I recommend it to everybody). So I hope to get it read this week.

  4. Hi Steve, my infrastructural inversion vis-à-vis the human-keyboard interface got the better of me.
    A very minor spelling error in an otherwise excellent article.

    Quoting from your article:
    “This not only ignores the “face” that climate scientists themselves routinely examine such weaknesses in the temperature record, but also has the effect of biasing the entire exercise, as Watts’ followers are increasingly motivated to report only those problems that would cause a warming bias, and ignore those that do not.”
    “Face” should probably be “fact”.

    Just so the Watts’ followers in their inversion efforts don’t come and haunt you about it.

    [Thanks - fixed! (I'm just waiting for them to quote the first seven words of my final paragraph out of context...) - Steve]