Here’s an interesting study by Lawrence Hamilton, published in Climatic Change in December: “Education, politics and opinions about climate change: evidence for interaction effects”. He finds that whereas in the past, education level strongly correlated with concern for environmental issues, this correlation has now disappeared because of a new interaction effect with political beliefs. (This is consistent with other recent research; see, for example, the work of Kahan and Braman.)

From his surveys of people in New Hampshire and rural Michigan, Hamilton identified strong Democrats and strong Republicans, and discovered that, with respect to climate change, there is a significant interaction effect between education and party support, and another between how well people believe they understand climate change and party support:

Hamilton 2010, Fig 3: Predicted probability that Upper-Peninsula Michigan residents believe that global warming will pose a serious threat vs. respondent education, for “strong Democrats” and “strong Republicans”

Hamilton 2010, Fig 2: Predicted probability that New Hampshire residents believe that global warming will pose a serious threat vs. self-assessed understanding of the issue, for “strong Democrats” and “strong Republicans”

[I can’t help noticing there are no data points for “strong Republicans” who say they don’t understand global warming at all!]
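Statistically, the kind of interaction effect Hamilton describes can be sketched as a toy logistic regression in which the effect of education on the predicted probability depends on party identification. The coefficients below are invented purely for illustration; they are not Hamilton’s estimates:

```python
import math

def p_serious_threat(educ, strong_rep,
                     b0=-1.0, b_educ=0.6, b_party=-0.5, b_inter=-1.2):
    """Toy logistic model with an education x party interaction term.
    educ: education level (e.g. 0..4); strong_rep: 1 for strong
    Republicans, 0 for strong Democrats. All coefficients are made up."""
    logit = (b0 + b_educ * educ + b_party * strong_rep
             + b_inter * educ * strong_rep)
    return 1.0 / (1.0 + math.exp(-logit))

# With these (invented) coefficients, belief rises with education among
# Democrats but falls among Republicans, so the partisan gap is widest
# among the most educated -- the pattern in Hamilton's figures.
for educ in range(5):
    print(educ,
          round(p_serious_threat(educ, 0), 2),   # strong Democrats
          round(p_serious_threat(educ, 1), 2))   # strong Republicans
```

Without the interaction term (b_inter = 0), the two curves would simply be shifted copies of each other; it is the interaction that lets the slope itself flip sign across party lines.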

Here’s what Hamilton has to say about the findings:

“The inconsistency marks a social shift away from patterns seen in older research. It reflects the efficacy of media campaigns that provide scientific-sounding arguments against taking climate change seriously, which disproportionately reach educated but ideologically receptive audiences […]. Among many educated, conservative citizens, it appears that such arguments have overshadowed the scientific consensus presented by the IPCC reports and other core science sources.”

Hamilton attributes a significant part of this shift to the internet and cable TV as sources of information, which increasingly allow people to tune into only those sources they find ideologically compatible, and to the ability of these sources to propagate politically inspired but scientific-sounding arguments:

“The effective dissemination of contrarian arguments means that many people who have no contact with climate scientists or the primary research literature can nevertheless learn that a scientist says temperatures have risen on Mars (politically spun as evidence that global warming has solar or cosmic origins), or another scientist says it is cooling in East Antarctica (spun as evidence that our planet is not warming after all). They might consider themselves well informed about climate science even while not understanding its basic ideas.”

And he concludes that getting more scientists involved in blogs and rapid-response initiatives is crucial:

“If non-specialists want to find out what scientists really know about temperature trends of Mars and East Antarctica, or other arguments aired in today’s news or last night’s party, they are best served by a relatively small number of active-response Web sites written by climate scientists, such as Realclimate.org. Unlike journal articles, science meetings or reports, Web sites and blogs have the capability to react quickly (albeit less rigorously), reach broader audiences, and seriously confront arguments that have no scientific merit. Moreover, their online science posts can be passed on from reader to reader, which is difficult to do with journal articles or technical reports.”

[Hat tip to Sol for sending me this]

Update: It appears to also matter whether it’s called “Climate Change” or “Global Warming”, at least to Republicans, who are more likely to believe in the former than the latter.

Nearly everything we ever do depends on vast social and technical infrastructures, which, when they work, are largely invisible. Science is no exception – modern science is only possible because we have built the infrastructure to support it: classification systems, international standards, peer review, funding agencies, and, most importantly, systems for the collection and curation of vast quantities of data about the world. Star and Ruhleder point out that the infrastructure that supports scientific work is embedded inside other social and technical systems, and becomes invisible when we come to rely on it. Indeed, the process of learning how to make use of a particular infrastructure is, to a large extent, what defines membership in a particular community of practice. They also observe that our infrastructures are closely intertwined with our conventions and standards. As a simple example, they point to the QWERTY keyboard, which, despite its limitations, shapes much of our interaction with computers (even the design of office furniture!), such that learning to use the keyboard is a crucial part of learning to use a computer. And once you can type, you cease to be aware of the keyboard itself, except when it breaks down. This invisibility-in-use is similar to Heidegger’s notion of tools that are ready-to-hand; the key difference is that tools are local to the user, while infrastructures have vast spatial and/or temporal extent.

A crucial point is that what counts as infrastructure depends on the nature of the work that it supports. What is invisible infrastructure for one community might not be for another. The internet is a good example – most users just accept that it exists and make use of it, without asking how it works. To computer scientists, however, a detailed understanding of its inner workings is vital. A refusal to treat the internet as invisible infrastructure is a condition of entry into certain geek cultures.

In their book Sorting Things Out, Bowker and Star introduced the term infrastructural inversion, for a process of focusing explicitly on the infrastructure itself, in order to expose and study its inner workings. It’s a rather cumbersome phrase for a very interesting process, a kind of switch of figure and ground. In their case, infrastructural inversion is a research strategy that allows them to explore how things like classification systems and standards are embedded in so much of scientific practice, and to understand how these things evolve with the science itself.

Paul Edwards applies infrastructural inversion to climate science in his book A Vast Machine, where he examines the history of attempts by meteorologists to create a system for collecting global weather data, and for sharing that data with the international weather forecasting community. He points out that climate scientists also come to rely on that same infrastructure, but that it doesn’t serve their needs so well, and hence there is a difference between weather data and climate data. As an example, meteorologists tolerate changes in the nature and location of a particular surface temperature station over time, because they are only interested in forecasting over the short term (days or weeks). But to a climate scientist trying to study long-term trends in climate, such changes (known as inhomogeneities) are crucial. In this case, the infrastructure breaks down, as it fails to serve the needs of this particular community of scientists.
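To make the idea of an inhomogeneity concrete, here’s a minimal sketch (a toy, not any particular published homogenization algorithm) of how a step change in a station record can be located by comparing the station against a reference series built from neighbouring stations:

```python
def find_step_change(station, reference):
    """Find the most likely step change (inhomogeneity) in a station
    temperature series by scanning its difference from a homogeneous
    reference series. Returns (index, size) of the largest shift in the
    mean difference. Real homogenization methods add statistical
    significance tests and handle multiple breakpoints."""
    diff = [s - r for s, r in zip(station, reference)]
    best_k, best_shift = None, 0.0
    for k in range(1, len(diff)):
        before = sum(diff[:k]) / k               # mean difference before the break
        after = sum(diff[k:]) / (len(diff) - k)  # mean difference after it
        if abs(after - before) > abs(best_shift):
            best_k, best_shift = k, after - before
    return best_k, best_shift

# A station relocated at year 5, introducing a spurious +1.5 degree jump:
reference = [10.0, 10.2, 9.9, 10.1, 10.0, 10.1, 9.9, 10.0, 10.2, 10.1]
station = [r + (1.5 if i >= 5 else 0.0) for i, r in enumerate(reference)]
print(find_step_change(station, reference))  # index 5, shift close to 1.5
```

The key trick is working on the *difference* series: genuine climate variation affects station and neighbours alike and cancels out, so what remains is mostly the artefact introduced by the relocation.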

Hence, as Edwards points out, climate scientists also perform infrastructural inversion regularly themselves, as they dive into the details of the data collection system, trying to find and correct inhomogeneities. In the process, almost any aspect of how this vast infrastructure works might become important, revealing clues about which parts of the data can be used and which parts must be reconsidered. One of the key messages of Paul’s book is that the usual distinction between data and models is now almost completely irrelevant in meteorology and climate science. The data collection depends on a vast array of models to turn raw instrumental readings into useful data, while the models themselves can be thought of as sophisticated data reconstructions. Even GCMs, which now have the ability to do data assimilation and re-analysis, can be thought of as large amounts of data made executable through a set of equations that define spatial and temporal relationships within that data.

As an example, Edwards describes the analysis performed by Christy and Spencer at UAH on the MSU satellite data, from which they extracted measurements of the temperature of the upper atmosphere. In various congressional hearings, Spencer and Christy frequently touted their work, which showed a slight cooling trend in the upper atmosphere, as superior to other work that showed a warming trend, because they were able to “actually measure the temperature of the free atmosphere” whereas other work was merely “estimation” from models (Edwards, p414). However, this completely neglects the fact that the MSU instruments don’t measure temperature in the lower troposphere directly at all; they measure radiance at the top of the atmosphere. Temperature readings for the lower troposphere are constructed from these readings via a complex set of models that take into account the chemical composition of the atmosphere, the trajectory of the satellite, and the position of the sun, among other factors. More importantly, a series of corrections to these models over several years gradually removed the apparent cooling trend, finally revealing a warming trend, as predicted by the theory (see Karl et al for a more complete account). The key point is that the data needed for meteorology and climate science is so vast and so complex that it’s no longer possible to disentangle models from data. The data depends on models to make it useful, and the models are sophisticated tools for turning one kind of data into another.

The vast infrastructure for collecting and sharing data has become largely invisible to many working meteorologists, but must be continually inverted by climate scientists in order to use it for analysis of longer-term trends. The project to develop a new global surface temperature record that I described yesterday is one example of such inversion – it will involve a painstaking process of search and rescue on original data records dating back more than a century, because of the need for a more complete, higher-resolution temperature record than is currently available.

So far, I’ve only described constructive uses of infrastructural inversion, performed in the pursuit of science, to improve our understanding of how things work, and to allow us to re-adapt an infrastructure for new purposes. But there’s another use of infrastructural inversion, applied as a rhetorical technique to undermine scientific research. It has been applied increasingly in recent years in an attempt to slow down progress on enacting climate change mitigation policies, by sowing doubt and confusion about the validity of our knowledge about climate change. The technique is to dig down into the vast infrastructure that supports climate science, identify weaknesses in this infrastructure, and tout them as reasons to mistrust scientists’ current understanding of the climate system. And it’s an easy game to play, for two reasons: (1) all infrastructures are constructed through a series of compromises (e.g. standards are never followed exactly), and communities of practice develop workarounds that naturally correct for infrastructural weaknesses and (2) as described above, the data collection for weather forecasting frequently does fail to serve the needs of climate scientists. The climate scientists are painfully aware of these infrastructural weaknesses and have to deal with them every day, while those playing this rhetorical game ignore this, and pretend instead that there’s a vast conspiracy to lie about the science.

The problem is that, at first sight, many of these attempts at infrastructural inversion look like honest citizen-science attempts to increase transparency and improve the quality of the science (e.g. see Edwards, p421-427). For example, Anthony Watts’ SurfaceStations.org project is an attempt to document the site details of a large number of surface weather stations, to understand how problems in their siting (e.g. growth of surrounding buildings) and placement of instruments might create biases in the long-term trends constructed from their data. At face value, this looks like a valuable citizen-science exercise in infrastructural inversion. However, Watts wraps the whole exercise in the rhetoric of conspiracy theory, frequently claiming that climate scientists are dishonest, that they are covering up these problems, and that climate change itself is a myth. This not only ignores the fact that climate scientists themselves routinely examine such weaknesses in the temperature record, but also has the effect of biasing the entire exercise, as Watts’ followers are increasingly motivated to report only those problems that would cause a warming bias, and to ignore those that would not. Recent independent studies that have examined the data collected by the SurfaceStations.org project demonstrate that the siting problems Watts documents have no significant effect on the long-term trends.

The recent project launched by the UK Met Office might look to many people like a desperate response to “ClimateGate”, a mea culpa, an attempt to claw back some credibility. But, put into the context of the continual infrastructural inversion performed by climate scientists throughout the history of the field, it is nothing of the sort. It’s just one more in a long series of efforts to build better and more complete datasets that allow climate scientists to answer new research questions. This is what climate scientists do all the time. In this case, it is an attempt to move from monthly to daily temperature records, to improve our ability to understand the regional effects of climate change, and especially to address the growing need to understand the effect of climate change on extreme weather events (which are largely invisible in monthly averages).
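That last point is easy to demonstrate with a toy calculation: a short but severe heat wave barely moves the monthly mean, so it is invisible in a monthly record but obvious in a daily one (the numbers below are invented):

```python
# 30 days at a steady 20 degrees, except a 3-day heat wave at 35 degrees
daily = [20.0] * 30
daily[10:13] = [35.0] * 3

monthly_mean = sum(daily) / len(daily)
print(monthly_mean)                  # 21.5 -- only 1.5 degrees above normal
print(max(daily))                    # 35.0 -- the extreme is obvious in daily data
print(sum(t >= 30 for t in daily))   # 3 days above a 30-degree threshold
```

A monthly record keeps only the first of those three numbers; the daily record is what lets you count threshold exceedances, which is exactly what studies of extremes need.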

So, infrastructural inversion is a fascinating process, used by at least three different groups:

  • Researchers who study scientific work (e.g. Star, Bowker, Edwards) use it to understand the interplay between the infrastructure and the scientific work that it supports;
  • Climate scientists use it all the time to analyze and improve the weather data collection systems that they need to understand longer term climate trends;
  • Climate change denialists use it to sow doubt and confusion about climate science, to further a political agenda of delaying regulation of carbon emissions.

And unfortunately, sorting out constructive uses of infrastructural inversion from its abuses is hard, because in all cases, it looks like legitimate questions are being asked.

Oh, and I can’t recommend Edwards’ book highly enough. As Myles Allen writes in his review: “A Vast Machine […] should be compulsory reading for anyone who now feels empowered to pontificate on how climate science should be done.”

Susan Leigh Star passed away in her sleep this week, coincidentally on Ada Lovelace Day. As I didn’t get a chance to do a Lovelace post, I’m writing this one belatedly, as a tribute to Leigh.

Leigh Star (sometimes also known as L*) had a huge influence on my work back in the early 90’s. I met her when she was in the UK, at a time when there was a growing community of folks at Sussex, Surrey, and Xerox EuroPARC interested in CSCW. We organised a series of workshops on CSCW in London, at the behest of the UK funding councils. Leigh spoke at the workshop that I chaired, and she subsequently contributed a chapter entitled “Cooperation Without Consensus in Scientific Problem Solving” to our book, CSCW: Cooperation or Conflict?. Looks like the book is out of print, and I really want to read Leigh’s chapter again, so I hope I haven’t lost my copy – the only chapter I still have electronically is our introduction.

Anyway, Leigh pioneered a new kind of sociology of scientific work practices, looking at the mechanisms by which coordination and sharing occurs across disciplinary boundaries. Perhaps one of her most famous observations is the concept of boundary objects, which I described in detail last year in response to seeing coordination issues arise between geophysicists trying to consolidate their databases. The story of the geologists realizing they didn’t share a common definition of the term “bedrock” would have amused and fascinated her.

It was Leigh’s work on this that first switched me on to the value of sociological studies as a way of understanding the working practices of scientists, and she taught me a lot about how to use ethnographic techniques to study how people use and develop technical infrastructures. I’ve remained fascinated by her ideas ever since. For those wanting to know more about her work, I could suggest this interview with her from 2008, or better yet, buy her book on how classification schemes work, or perhaps read this shorter paper on the Ethnography of Infrastructure. She had just moved to the i-school at U Pittsburgh last year, so I assumed she still had many years of active research ahead of her. I’m deeply saddened that I didn’t get another chance to meet with her.

Leigh – we’ll miss you!

For sociologists, a strong call to action in the report of an NSF-sponsored workshop on Sociological Perspectives on Global Climate Change. Like the APA report I wrote about earlier, it covers the key research challenges for the field, and addresses the barriers that might prevent researchers from participating in such research. Among the recommendations are better data collection on organisational and community behaviour relevant to climate change, and better inter-disciplinary links:

“…social scientists are seldom consulted except as an afterthought in natural science and engineering research projects […and…] social scientists tend not to seek out collaborations with natural scientists and engineers and often are uninformed about major research programs on climate change. The result is that the research of each community does not tend to be informed by the insights and resources available from the others. This is true not only between the social sciences and the natural sciences, but among the social sciences themselves. For instance, sociological research projects seldom incorporate spatial processes, behavioral analyses, or economic models.”

For a short summary, read the article “The Wisdom of Crowds” in this week’s Nature Reports, and indeed, the editorial that goes with it.