A high school student in Ottawa, Jin, writes to ask me for help with a theme on the question of whether global warming is caused by human activities. Here’s my answer:

The simple answer is ‘yes’, global warming is caused by human activities. In fact, we’ve known this for over 100 years. Scientists in the 19th century realized that some gases in the atmosphere help to keep the planet warm by stopping the earth from losing heat to outer space, just like a blanket keeps you warm by trapping heat near your body. The most important of these gases is carbon dioxide (CO2). If there were no CO2 in the atmosphere, the entire earth would be a frozen ball of ice. Luckily, that CO2 keeps the planet at temperatures suitable for human life. But as we dig up coal, oil, and natural gas, and burn them for energy, we increase the amount of CO2 in the atmosphere, and hence we increase the temperature of the planet. Now, while scientists have known this since the 19th century, it’s only in the last 30 years that they have been able to calculate precisely how fast the earth would warm up, and which parts of the planet would be affected the most.

Here are three really good explanations, which might help you for your theme:

  1. NASA’s Climate Kids website:
    It’s probably written for kids younger than you, but has really simple explanations, in case anything isn’t clear.
  2. Climate Change in a Nutshell – a set of short videos that I really like:
  3. The IPCC’s frequently asked questions list. The IPCC is the Intergovernmental Panel on Climate Change, whose job is to summarize what scientists know, so that politicians can make good decisions. Their reports can be a bit technical, but have a lot more detail than most other material:

Also, you might find this interesting. It’s a list of successful predictions by climate scientists. One of the best ways we know that science is right about something is that we are able to use our theories to predict what will happen in the future. When those predictions turn out to be correct, it gives us a lot more confidence that the theories are right: http://www.easterbrook.ca/steve/?p=3031

By the way, if you use google to search for information about global warming or climate change, you’ll find lots of confusing information, and different opinions. You might wonder why that is, if scientists are so sure about the causes of climate change. There’s a simple reason. Climate change is a really big problem, one that’s very hard to deal with. Most of our energy supply comes from fossil fuels, in one way or another. To prevent dangerous levels of warming, we have to stop using them. How we do that is hard for many people to think about. We really don’t want to stop using them, because the cheap energy from fossil fuels powers our cars, heats our homes, gives us cheap flights, powers our factories, and so on.

For many people it’s easier to choose not to believe in global warming than it is to think about how we would give up fossil fuels. Unfortunately, our climate doesn’t care what we believe – it’s changing anyway, and the warming is accelerating. Luckily, humans are very intelligent, and good at inventing things. If we can understand the problem, then we should be able to solve it. But it will require people to think clearly about it, and not to fool themselves by wishing the problem away.

A few weeks back, Randall Munroe (of XKCD fame) attempted to explain the parts of a Saturn V rocket (“Up Goer Five”) using only the most common one thousand words of English. I like the idea, but found many of his phrasings awkward, and some were far harder to understand than if he’d used the usual word.

Now there’s a web-based editor that lets everyone try their hand at this, and a tumblr of scientists trying to explain their work this way. Some of them are brilliant, but many are almost unreadable. It turns out this is much harder than it looks.

Here’s mine. I cheated once, by introducing one new word that’s not on the list, although it’s not really cheating because the whole point of science education is to equip people with the right words and concepts to talk about important stuff:

If the world gets hotter or colder, we call that ‘climate’ change. I study how people use computers to understand such change, and to help them decide what we should do about it. The computers they use are very big and fast, but they are hard to work with. My job is to help them check that the computers are working right, and that the answers they get from the computers make sense. I also study what other things people want to know about how the world will change as it gets hotter, and how we can make the answers to their questions easier to understand.

[Update] And here’s a few others that I think are brilliant:

Emily S. Cassidy, Environmental Scientist at University of Minnesota:

In 50 years the world will need to grow two times as much food as we grow today. Meeting these growing needs for food will be hard because we need to make sure meeting these needs doesn’t lead to cutting down more trees or hurting living things. In the past when we wanted more food we cut down a lot of trees, so we could use the land. So how are we going to grow more food without cutting down more trees? One answer to this problem is looking at how we use the food we grow today. People eat food, but food is also used to make animals and run cars. In fact, animals eat over one-third of the food we grow. In some places, animals eat over two-thirds of the food grown! If the world used all of the food we grow for people, instead of animals and cars, we could have 70% more food and that would be enough food for a lot of people!

Anthony Finkelstein, at University College London, explaining requirements analysis:

I am interested in computers and how we can get them to do what we want. Sometimes they do not do what we expect because we got something wrong. I would like to know this before we use the computer to do something important and before we spend too much time and money. Sometimes they do something wrong because we did not ask the people who will be using them what they wanted the computer to do. This is not as easy as it sounds! Often these people do not agree with each other and do not understand what it is possible for the computer to do. When we know what they want the computer to do we must write it down in a way that people building the computer can also understand it.

This week, I start teaching a new grad course on computational models of climate change, aimed at computer science grad students with no prior background in climate science or meteorology. Here’s my brief blurb:

Detailed projections of future climate change are created using sophisticated computational models that simulate the physical dynamics of the atmosphere and oceans and their interaction with chemical and biological processes around the globe. These models have evolved over the last 60 years, along with scientists’ understanding of the climate system. This course provides an introduction to the computational techniques used in constructing global climate models, the engineering challenges in coupling and testing models of disparate earth system processes, and the scaling challenges involved in exploiting peta-scale computing architectures. The course will also provide a historical perspective on climate modelling, from the early ENIAC weather simulations created by von Neumann and Charney, through to today’s Earth System Models, and the role that these models play in the scientific assessments of the UN’s Intergovernmental Panel on Climate Change (IPCC). The course will also address the philosophical issues raised by the role of computational modelling in the discovery of scientific knowledge, the measurement of uncertainty, and a variety of techniques for model validation. Additional topics, based on interest, may include the use of multi-model ensembles for probabilistic forecasting, data assimilation techniques, and the use of models for re-analysis.

I’ve come up with a draft outline for the course, and some possible readings for each topic. Comments are very welcome:

  1. History of climate and weather modelling. Early climate science. Quick tour of range of current models. Overview of what we knew about climate change before computational modeling was possible.
  2. Calculating the weather. Bjerknes’ equations. ENIAC runs. What does a modern dynamical core do? [Includes basic introduction to thermodynamics of atmosphere and ocean]
  3. Chaos and complexity science. Key ideas: forcings, feedbacks, dynamic equilibrium, tipping points, regime shifts, systems thinking. Planetary boundaries. Potential for runaway feedbacks. Resilience & sustainability. (Way too many readings this week. Have to think about how to address this – maybe this is two weeks’ worth of material?)
    • Liepert, B. G. (2010). The physical concept of climate forcing. Wiley Interdisciplinary Reviews: Climate Change, 1(6), 786-802.
    • Manson, S. M. (2001). Simplifying complexity: a review of complexity theory. Geoforum, 32(3), 405-414.
    • Rind, D. (1999). Complexity and Climate. Science, 284(5411), 105-107.
    • Randall, D. A. (2011). The Evolution of Complexity In General Circulation Models. In L. Donner, W. Schubert, & R. Somerville (Eds.), The Development of Atmospheric General Circulation Models: Complexity, Synthesis, and Computation. Cambridge University Press.
    • Meadows, D. H. (2008). Chapter One: The Basics. Thinking In Systems: A Primer (pp. 11-34). Chelsea Green Publishing.
    • Randers, J. (2012). The Real Message of Limits to Growth: A Plea for Forward-Looking Global Policy, 2, 102-105.
    • Rockström, J., Steffen, W., Noone, K., Persson, Å., Chapin, F. S., Lambin, E., Lenton, T. M., et al. (2009). Planetary boundaries: exploring the safe operating space for humanity. Ecology and Society, 14(2), 32.
    • Lenton, T. M., Held, H., Kriegler, E., Hall, J. W., Lucht, W., Rahmstorf, S., & Schellnhuber, H. J. (2008). Tipping elements in the Earth’s climate system. Proceedings of the National Academy of Sciences of the United States of America, 105(6), 1786-93.
  4. Typology of climate models. Basic energy balance models. Adding a layered atmosphere. 3-D models. Coupling in other earth systems. Exploring dynamics of the socio-economic system. Other types of model: EMICs; IAMs.
  5. Earth System Modeling. Using models to study interactions in the earth system. Overview of key systems (carbon cycle, hydrology, ice dynamics, biogeochemistry).
  6. Overcoming computational limits. Choice of grid resolution; grid geometry, online versus offline; regional models; ensembles of simpler models; perturbed ensembles. The challenge of very long simulations (e.g. for studying paleoclimate).
  7. Epistemic status of climate models. E.g. what does a future forecast actually mean? How are model runs interpreted? Relationship between model and theory. Reproducibility and open science.
    • Shackley, S. (2001). Epistemic Lifestyles in Climate Change Modeling. In P. N. Edwards (Ed.), Changing the Atmosphere: Expert Knowledge and Environmental Government (pp. 107-133). MIT Press.
    • Sterman, J. D., Jr, E. R., & Oreskes, N. (1994). The Meaning of Models. Science, 264(5157), 329-331.
    • Randall, D. A., & Wielicki, B. A. (1997). Measurement, Models, and Hypotheses in the Atmospheric Sciences. Bulletin of the American Meteorological Society, 78(3), 399-406.
    • Smith, L. A. (2002). What might we learn from climate forecasts? Proceedings of the National Academy of Sciences of the United States of America, 99 Suppl 1, 2487-92.
  8. Assessing model skill – comparing models against observations, forecast validation, hindcasting. Validation of the entire modelling system. Problems of uncertainty in the data. Re-analysis, data assimilation. Model intercomparison projects.
  9. Uncertainty. Three different types: initial state uncertainty, scenario uncertainty and structural uncertainty. How well are we doing? Assessing structural uncertainty in the models. How different are the models anyway?
  10. Current Research Challenges. Eg: Non-standard grids – e.g. non-rectangular, adaptive, etc; Probabilistic modelling – both fine grain (e.g. ECMWF work) and use of ensembles; Petascale datasets; Reusable couplers and software frameworks. (need some more readings on different research challenges for this topic)
  11. The future. Projecting future climates. Role of modelling in the IPCC assessments. What policymakers want versus what they get. Demands for actionable science and regional, decadal forecasting. The idea of climate services.
  12. Knowledge and wisdom. What the models tell us. Climate ethics. The politics of doubt. The understanding gap. Disconnect between our understanding of climate and our policy choices.
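The “basic energy balance models” mentioned in topic 4 are simple enough to sketch in a few lines. The following is a hypothetical zero-dimensional illustration (solar constant, planetary albedo, Stefan-Boltzmann law), not code from the course materials:

```python
# Zero-dimensional energy balance model: absorbed solar energy
# balances outgoing blackbody radiation.

SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere
ALBEDO = 0.3              # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

def effective_temperature(solar=SOLAR_CONSTANT, albedo=ALBEDO):
    """Equilibrium emission temperature T, from S(1-a)/4 = sigma*T^4."""
    absorbed = solar * (1 - albedo) / 4   # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

t_e = effective_temperature()
print(f"Effective temperature: {t_e:.0f} K ({t_e - 273.15:.0f} C)")
# The observed mean surface temperature is about 288 K; the ~33 K
# difference is the greenhouse effect, which this bare model omits --
# hence "adding a layered atmosphere" as the next step in topic 4.
```

This gives roughly 255 K, which is why the simplest model in the typology is only a starting point.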

Last week I was at the 2012 AGU Fall Meeting. I plan to blog about many of the talks, but let me start with the Tyndall lecture given by Ray Pierrehumbert, on “Successful Predictions”. You can see the whole talk on youtube, so here I’ll try and give a shorter summary.

Ray’s talk spanned 120 years of research on climate change. The key message is that science is a long, slow process of discovery, in which theories (and their predictions) tend to emerge long before they can be tested. We often learn just as much from the predictions that turned out to be wrong as we do from those that were right. But successful predictions eventually form the body of knowledge that we can be sure about, not just because they were successful, but because they build up into a coherent explanation of multiple lines of evidence.

Here are the successful predictions:

1896: Svante Arrhenius correctly predicts that increases in fossil fuel emissions would cause the earth to warm. At that time, much of the theory of how atmospheric heat transfer works was missing, but nevertheless, he got a lot of the process right. He was right that surface temperature is determined by the balance between incoming solar energy and outgoing infrared radiation, and that the balance that matters is the radiation budget at the top of the atmosphere. He knew that the absorption of infrared radiation was due to CO2 and water vapour, and he also knew that CO2 is a forcing while water vapour is a feedback. He understood the logarithmic relationship between CO2 concentrations in the atmosphere and surface temperature. However, he got a few things wrong too. His attempt to quantify the enhanced greenhouse effect was incorrect, because he worked with a 1-layer model of the atmosphere, which cannot capture the competition between water vapour and CO2, and doesn’t account for the role of convection in determining air temperatures. His calculations were incorrect because he had the wrong absorption characteristics of greenhouse gases. And he thought the problem would be centuries away, because he didn’t imagine an exponential growth in use of fossil fuels.
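The logarithmic relationship Arrhenius understood is easy to sketch: each doubling of CO2 concentration adds the same increment of warming. The sensitivity value below is an illustrative modern-era central estimate, not Arrhenius’s own (incorrect) figure:

```python
import math

def warming(co2_ppm, baseline_ppm=280.0, sensitivity=3.0):
    """Equilibrium warming (C) from the logarithmic CO2-temperature
    relationship: each doubling of CO2 adds `sensitivity` degrees.
    Both parameter values here are illustrative round numbers."""
    return sensitivity * math.log2(co2_ppm / baseline_ppm)

for ppm in (280, 400, 560):
    print(f"{ppm} ppm -> +{warming(ppm):.1f} C")
```

Note that going from 280 to 560 ppm yields exactly one sensitivity’s worth of warming, while the next doubling (560 to 1120 ppm) would add the same again.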

Arrhenius, as we now know, was way ahead of his time. Nobody really considered his work again for nearly 50 years, a period we might think of as the dark ages of climate science. The story perfectly illustrates Paul Hoffman’s tongue-in-cheek depiction of how scientific discoveries work: someone formulates the theory, other scientists then reject it, ignore it for years, eventually rediscover it, and finally accept it. These “dark ages” weren’t really dark, of course – much good work was done in this period. For example:

  • 1900: Frank Very worked out the radiation balance, and hence the temperature, of the moon. His results were confirmed by Pettit and Nicholson in 1930.
  • 1902-14: Arthur Schuster and Karl Schwarzschild used a 2-layer radiative-convective model to explain the structure of the sun.
  • 1907: Robert Emden realized that a similar radiative-convective model could be applied to planets, and Gerard Kuiper and others applied this to astronomical observations of planetary atmospheres.

This work established the standard radiative-convective model of atmospheric heat transfer. This treats the atmosphere as two layers; in the lower layer, convection is the main heat transport, while in the upper layer, it is radiation. A planet’s outgoing radiation comes from this upper layer. However, up until the early 1930s, there was no discussion in the literature of the role of carbon dioxide, despite occasional discussion of climate cycles. In 1928, George Simpson published a memoir on atmospheric radiation, which assumed water vapour was the only greenhouse gas, even though, as Richardson pointed out in a comment, there was evidence that even dry air absorbed infrared radiation.

1938: Guy Callendar is the first to link observed rises in CO2 concentrations with observed rises in surface temperatures. But Callendar failed to revive interest in Arrhenius’s work, and made a number of mistakes in things that Arrhenius had gotten right. Callendar’s calculations focused on the radiation balance at the surface, whereas Arrhenius had (correctly) focused on the balance at the top of the atmosphere. Also, he neglected convective processes, which astrophysicists had already resolved using the radiative-convective model. In the end, Callendar’s work was ignored for another two decades.

1956: Gilbert Plass correctly predicts a depletion of outgoing radiation in the 15 micron band, due to CO2 absorption. This depletion was eventually confirmed by satellite measurements. Plass was one of the first since Callendar to revisit Arrhenius’s work; however, his calculations of climate sensitivity to CO2 were also wrong, because, like Callendar, he focused on the surface radiation budget, rather than the top of the atmosphere.

1961-2: Carl Sagan correctly predicts very thick greenhouse gases in the atmosphere of Venus, as the only way to explain the very high observed temperatures. His calculations showed that greenhouse gases must absorb around 99.5% of the outgoing surface radiation. The composition of Venus’s atmosphere was confirmed by NASA’s Venus probes in 1967-70.

1959: Bert Bolin and Erik Eriksson correctly predict the exponential increase in CO2 concentrations in the atmosphere as a result of rising fossil fuel use. At that time they did not have good data for atmospheric concentrations prior to 1958, hence their hindcast back to 1900 was wrong, but despite this, their projection for changes forward to 2000 was remarkably good.

1967: Suki Manabe and Dick Wetherald correctly predict that warming in the lower atmosphere would be accompanied by stratospheric cooling. They had built the first completely correct radiative-convective implementation of the standard model applied to Earth, and used it to calculate a +2C equilibrium warming for doubling CO2, including the water vapour feedback, assuming constant relative humidity. The stratospheric cooling was confirmed in 2011 by Gillett et al.

1975: Suki Manabe and Dick Wetherald correctly predict that the surface warming would be much greater in the polar regions, and that there would be some upper troposphere amplification in the tropics. This was the first coupled general circulation model (GCM), with an idealized geography. This model computed changes in humidity, rather than assuming it, as had been the case in earlier models. It showed polar amplification, and some vertical amplification in the tropics. The polar amplification was measured, and confirmed by Serreze et al in 2009. However, the height gradient in the tropics hasn’t yet been confirmed (nor has it yet been falsified – see Thorne 2008 for an analysis).

1989: Ron Stouffer et al. correctly predict that the land surface will warm more than the ocean surface, and that the southern ocean warming would be temporarily suppressed due to the slower ocean heat uptake. These predictions are correct, although these models failed to predict the strong warming we’ve seen over the Antarctic Peninsula.

Of course, scientists often get it wrong:

1900: Knut Angström incorrectly predicts that increasing levels of CO2 would have no effect on climate, because he thought the effect was already saturated. His laboratory experiments weren’t accurate enough to detect the actual absorption properties, and even if they were, the vertical structure of the atmosphere would still allow the greenhouse effect to grow as CO2 is added.

1971: Rasool and Schneider incorrectly predict that atmospheric cooling due to aerosols would outweigh the warming from CO2. However, their model had some important weaknesses, and was shown to be wrong by 1975. Rasool and Schneider fixed their model and moved on. Good scientists acknowledge their mistakes.

1993: Richard Lindzen incorrectly predicts that warming will dry the troposphere, according to his theory that a negative water vapour feedback keeps climate sensitivity to CO2 really low. Lindzen’s work attempted to resolve a long-standing conundrum in climate science. In 1981, the CLIMAP project reconstructed temperatures at the Last Glacial Maximum, and showed very little tropical cooling. This was inconsistent with the general circulation models (GCMs), which predicted substantial cooling in the tropics (e.g. see Broccoli & Manabe 1987). So everyone thought the models must be wrong. Lindzen attempted to explain the CLIMAP results via a negative water vapour feedback. But then the CLIMAP results started to unravel, and newer proxies demonstrated that it was the CLIMAP data that was wrong, rather than the models. It eventually turned out the models were getting it right, and it was the CLIMAP data and Lindzen’s theories that were wrong. Unfortunately, bad scientists don’t acknowledge their mistakes; Lindzen keeps inventing ever more arcane theories to avoid admitting he was wrong.

1995: John Christy and Roy Spencer incorrectly calculate that the lower troposphere is cooling, rather than warming. Again, this turned out to be wrong, once errors in satellite data were corrected.

In science, it’s okay to be wrong, because exploring why something is wrong usually advances the science. But sometimes, theories are published that are so bad, they are not even wrong:

2007: Courtillot et al. predicted a connection between cosmic rays and climate change. But they couldn’t even get the sign of the effect consistent across the paper. You can’t falsify a theory that’s incoherent! Scientists label this kind of thing as “Not even wrong”.

Finally, there are, of course, some things that scientists didn’t predict. The most important of these is probably the multi-decadal fluctuations in the warming signal. If you calculate the radiative effect of all greenhouse gases, and the delay due to ocean heating, you still can’t reproduce the flat period in the temperature trend that was observed in 1950-1970. While this wasn’t predicted, we ought to be able to explain it after the fact. Currently, there are two competing explanations. The first is that ocean heat uptake itself has decadal fluctuations, although models don’t show this. If climate sensitivity is at the low end of the likely range (say 2°C per doubling of CO2), it’s possible we’re seeing a decadal fluctuation around a warming signal. The other explanation is that aerosols took some of the warming away from GHGs. This explanation requires a higher value for climate sensitivity (say around 3°C), but with a significant fraction of the warming counteracted by an aerosol cooling effect. If this explanation is correct, it’s a much more frightening world, because it implies much greater warming as CO2 levels continue to increase. The truth is probably somewhere between these two. (See Armour & Roe, 2011 for a discussion.)
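A back-of-the-envelope sketch shows why these two explanations are so hard to tell apart from the temperature record alone. All numbers below are round, illustrative values of my own choosing, not from any particular study:

```python
# A low climate sensitivity with little aerosol cooling implies nearly
# the same warming to date as a high sensitivity with significant
# aerosol cooling -- but the two worlds diverge as CO2 keeps rising.

F2X = 3.7          # forcing from doubling CO2, W/m^2 (round number)
GHG_FORCING = 2.8  # illustrative greenhouse gas forcing to date, W/m^2

def equilibrium_warming(sensitivity, aerosol_forcing=0.0):
    """Warming implied by the net forcing, scaled by sensitivity."""
    return sensitivity * (GHG_FORCING + aerosol_forcing) / F2X

low = equilibrium_warming(2.0)                           # no aerosol offset
high = equilibrium_warming(3.0, aerosol_forcing=-0.93)   # aerosol cooling
print(f"today: {low:.2f} C vs {high:.2f} C")   # nearly indistinguishable

# Double the GHG forcing and the high-sensitivity world pulls far ahead:
print(f"later: {2.0 * 2 * GHG_FORCING / F2X:.2f} C vs "
      f"{3.0 * (2 * GHG_FORCING - 0.93) / F2X:.2f} C")
```

The aerosol offset masks the difference now, but not in the future, which is exactly why the second explanation describes a more frightening world.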

To conclude, climate scientists have made many predictions about the effect of increasing greenhouse gases that have proven to be correct. They have earned a right to be listened to, but is anyone actually listening? If we fail to act upon the science, will future archaeologists wade through AGU abstracts and try to figure out what went wrong? There are signs of hope – in his re-election acceptance speech, President Obama revived his pledge to take action, saying “We want our children to live in an America that …isn’t threatened by the destructive power of a warming planet.”

14 November 2012 · Categories: cities

Well here’s an interesting example of how much power a newspaper editor has to change the political discourse. And how powerless actual expertise and evidence is when stacked up against emotive newspaper headlines.

This week, Toronto is removing the bike lanes on Jarvis Street. The removal will cost around $275,000. These bike lanes were only installed three years ago, after an extensive consultation exercise and environmental assessment that cost $950,000, and a construction cost of $86,000. According to analysis by city staff, the bike lanes are working well, with minimal impact on motor traffic travel times, and a significant reduction in accidents. Why would a city council that claims it’s desperately short of funding, and a mayor who vowed to slash unnecessary spending, suddenly decide to spend this much money removing a successful exercise in urban redesign, against the advice of city staff, against the recommendations of their environmental assessment, and against the wishes of local residents?

The answer is that the bike lanes on Jarvis have become a symbol of an ideological battle.

Up until 2009, Jarvis street had five lanes for motor traffic, with the middle lane working as a ‘tidal’ lane – north in the morning, to accommodate cars entering the city from the Gardiner expressway, and south in the evening when they were leaving. The design never worked very well, was confusing to motorists, and dangerous to cyclists and pedestrians. There was widespread agreement that the fifth lane had to be removed, as part of a much larger initiative to rejuvenate the downtown neighbourhoods along Jarvis Street. The main issue in the public consultation was the question of whether the new design should go for wider sidewalks or bike lanes. After an extensive consultation the city settled on bike lanes, and the vote sailed through council by a large majority.

A few days before the vote, the Toronto Sun, a rightwing and rather trashy tabloid newspaper, printed a story under the front page headline “Toronto’s War on the Car”, picking up on a framing for discussions of urban transport that seems to have started with a rather silly rant two years previously in the National Post. The original piece in the National Post was a masterpiece of junk journalism: a story about a local resident who refuses to take the subway and thinks his commute by car takes too long. Add a clever soundbite headline, avoid any attempt to address the issues seriously, and you’ve manufactured a shock horror story to sell more papers.

The timing of the article in the Toronto Sun was unfortunate – a handful of rightwing councillors picked up the soundbite and made it a key talking point in the debate on the Jarvis bike lanes in May 2009. The rhetoric about this supposed ‘war’ quickly replaced any rational discussion of how we accommodate multiple modes of transport, and how we solve urban congestion, and the debate descended into a nasty slanging match about cyclists, with our current mayor (then a councillor) even going so far as to say “bikers are a pain in the ass”.

The National Post upped the rhetoric in its news report the next day:

What started out five years ago as a local plan to beautify Jarvis Street yesterday became the front line in Toronto’s war on the car, with Mayor David Miller leading the charge…

The article never explains what’s wrong with building more bike lanes, but that really doesn’t matter when you have such a great soundbite at your disposal. The idea of a war on the car seems to be a peculiar ‘made in Toronto’ phenomenon, designed to get suburban drivers all fired up and ready to vote for firebrand rightwing politicians, who would then defend their rights to drive wherever and whenever they want. This rhetoric shuts down any sensible discussion about urban planning, transit, and sustainability.

Having seen how well the message played to suburban voters, our current mayor picked up the phrase as a major part of his election campaign, making a pledge to “end Toronto’s war on the car” a key part of his election platform. Nobody was ever clear what that meant, but to the voters in the suburbs, frustrated by their long commutes, it sounded good. Ford evidently believed that it meant cancelling every above-ground transit project currently underway, no matter how much such projects might help to reduce congestion. After his successful election, he declared “We will not build any more rail tracks down the middle of our streets.” Never mind that cities all over the world are turning to surface light rail to reduce congestion and pollution and to improve mobility and liveability. For Ford and his suburban voters, anything that threatens the supremacy of the car as their transport of choice must be stopped.

For a while, the argument transmuted into a debate over subways versus surface-level light rail. Subways, of course, have the benefit that they’re hidden away, so people who dislike mass transit never have to see them, and they don’t take precious street-level space away from cars. Unfortunately, subways are dramatically more expensive to build, and are only cost-effective in very dense downtown environments, where they can be justified by a high ridership. Street-level light rail can move many more people at a small fraction of the price, and has the added benefit of integrating transit more tightly with existing streetscapes, making shops and restaurants much more accessible. Luckily for Toronto, sense prevailed, and Mayor Ford’s attempts to cancel Toronto’s plan to build an extensive network of light rail failed earlier this year.

Unfortunately, the price of that embarrassing defeat for Mayor Ford was that something else had to be sacrificed. Politicians need to be able to argue that they delivered on their promises. Having failed to kill Transit City, what else could Ford do but look for an easier win? And so the bike lanes on Jarvis had to go. Their removal will make no noticeable difference to drivers using Jarvis for their commute, and will make the street dramatically less safe for bikes. But Ford gets his symbolic victory. Removing a couple of urban bike lanes is now all that’s left of his promise to end the war on cars.

As Eric de Place points out:

“There’s something almost laughably overheated about the ‘war on cars’ rhetoric. It’s almost as if the purveyors of the phrase have either lost their cool entirely, or else they’re trying desperately to avoid a level-headed discussion of transportation policy.”

Removing downtown bike lanes certainly smacks of a vindictiveness born of desperation.

For a talk earlier this year, I put together a timeline of the history of climate modelling. I just updated it for my course, and now it’s up on Prezi, as a presentation you can watch and play with. Click the play button to follow the story, or just drag and zoom within the viewing pane to explore your own path.

Consider this a first draft though – if there are key milestones I’ve missed out (or misrepresented!) let me know!


We spent some time in my climate change class this week talking about Hurricane Sandy – it’s a fascinating case study of how climate change alters things in complex ways. Some useful links I collected:

In class we looked in detail at the factors that meteorologists look at as a hurricane approaches to forecast likely damage:

  • When will it make landfall? If it coincides with a high tide, that’s far worse than if it comes ashore during low tide.
  • Where exactly will it come ashore? Infrastructure to the north of the storm takes far more damage than infrastructure to the south, because the winds drive the storm surge in an anti-clockwise direction. For Sandy, New York was north of the landfall.
  • What about astronomical conditions? There was a full moon on Monday, which means extra high tides because of the alignment of the moon, earth and sun. That adds inches to the storm surge.

All these factors, combined with rising sea levels, affected the amount of damage from Sandy. I already wrote about the non-linearity of hurricane damage back in December. After Hurricane Sandy, I started thinking about another kind of non-linearity, this time in the impacts of sea level rise. We know that as the ocean warms it expands, and as glaciers around the world melt, the water ends up in the ocean. And sea level rise is usually expressed in measures like: “From 1993 to 2009, the mean rate of SLR amounts to 3.3 ± 0.4 mm/year”. Such measures conjure up images of the sea slowly creeping up the beach, giving us plenty of time to move out of the way. But that’s not how it happens.

We’re used to the idea that an earthquake is a sudden release of pressure that builds up slowly over a long period of time. Maybe that’s a good metaphor for sea level rise too – it is non-linear in the same way. What really matters about sea level rise isn’t its effect on average low and high tides. What matters is its effect on the height of storm surges. For example, the extra foot added to sea level in New York over the last century was enough to make the difference between the storm surge from Hurricane Sandy staying below the sea walls or washing into the subway tunnels. If sea level keeps rising year after year, what you should expect is, sooner or later, a tipping point where a storm you could previously survive suddenly becomes disastrous. Of course, it doesn’t help that Sandy was supersized by warmer oceans, fed by the extra moisture in a warmer atmosphere, and pushed in directions it wouldn’t normally go by unusual weather conditions over Greenland. But still, it was the exact height of the storm surge that made all the difference, when you look at the bulk of the damage.
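To see the shape of this tipping point, here’s a toy calculation. All the numbers are hypothetical, chosen only to illustrate how a slow, linear rise in sea level produces a sudden jump from ‘survivable’ to ‘disastrous’ – they are not actual sea wall or surge heights:

```python
# Toy model: damage is near zero until the surge tops the sea wall, then
# jumps sharply. Linear sea level rise thus produces a sudden tipping point.
# All heights are hypothetical, for illustration only.

SEA_WALL_M = 4.0        # hypothetical sea wall height above a 1900 datum
SURGE_M = 3.8           # hypothetical surge height for a Sandy-like storm
SLR_M_PER_YEAR = 0.003  # ~3 mm/year, roughly the rate quoted above

for decade in range(16):
    year = 1900 + decade * 10
    water = SURGE_M + SLR_M_PER_YEAR * decade * 10  # surge on top of SLR
    print(year, round(water, 2), "FLOODED" if water > SEA_WALL_M else "held")
```

The same storm that the wall holds back for decades suddenly floods the city once the accumulated rise crosses the threshold.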


The second speaker at our Workshop on City Science was Andrew Wisdom from Arup, talking about Cities as Systems of Systems. Andrew began with the observation that cities are increasingly under pressure, as the urban population continues to grow, and cities struggle to provide adequate infrastructure for their populations to thrive. But a central part of his message is that the way we think about things tends to create the way they are, and this is especially so with how we think about our cities.

As an exercise, he first presented a continuum of worldviews, from Technocentric at one end, to Ecocentric at the other end:

  • In the Techno-centric view, humans are dissociated from the earth. Nature has no inherent value, and we can solve everything with ingenuity and technology. This worldview tends to view the earth as an inert machine to be exploited.
  • In the Eco-centric view, the earth is alive and central to the web of life. Humans are an intrinsic part of nature, but human activity is already exceeding the limits of what the planet can support, to the point that environmental problems are potentially catastrophic. Hence, we need to get rid of materialism, eliminate growth, and work to restore balance.
  • Somewhere in the middle is a Sustain-centric view, which accepts that the earth provides an essential life support system, and that nature has some intrinsic value. This view accepts that limits are being reached, that environmental problems tend to take decades to solve, and that more growth is not automatically good. Humans can replace some but not all natural processes, and we have to focus more on quality of life as a measure of success.

As an exercise, Andrew asked the audience to imagine this continuum spread along one wall of the room, and asked us each to go and stand where we felt we fit on the spectrum. Many of the workshop participants positioned themselves somewhere between the eco-centric and sustain-centric views, with a small cluster at the extreme eco-centric end, and another cluster just to the techno-centric side of sustain-centric. Nobody stood at the extreme techno-centric end of the room!

Then, he asked us to move to where we think the city of Toronto sits, and then where we think Canada sits, and finally where we feel the world sits. For the first two of these, everyone shifted a long way towards the technocentric end of the spectrum (and some discussion ensued to the effect that both our mayor and our prime minister are a long way off the chart altogether – they are both well known for strong anti-environmentalist views). For the whole world, people didn’t move much from the “Canada” perspective. An immediate insight was that we (workshop attendees) are far more towards the ecocentric end of the spectrum than either our current city or federal governments, and perhaps the world in general. So if our governments (and by extension the voters who elect them) are out of step with our own worldviews, what are the implications? Should we, as researchers, be aiming to shift people’s perspectives?

One problem arising from worldview is how people interpret messages about environmental problems. For example, people with a technocentric perspective tend to view discussions of sustainability as being about sacrifice – ‘wearing a hair shirt’, consuming less, etc. – which then leads to waning interest in these topics. Indeed, analysis of Google Trends on terms like global warming and climate change shows spikes in 2007 around the release of Al Gore’s movie and the IPCC assessment, but declining interest since then.

Jeb Brugmann, the previous speaker, talked about the idea of a Consumptive city versus a Generative city, which is a change in perspective that alters how we view cities, changes what we choose to measure, and hence affects the way our cities evolve.

Changes in the indices we pay attention to can have a dramatic impact. For example, a study in Melbourne created the VAMPIRE index (Vulnerability Assessment for Mortgage, Petroleum and Inflation Risks and Expenses), which shows the relative degree of socio-economic stress in suburbs of Brisbane, Sydney, Melbourne, Adelaide and Perth. The pattern that emerges is that in the western suburbs of Melbourne there are few jobs, and many people paying off mortgages have to commute an hour and a half to the east of the city for work.

Our view of a city tends to create structures that compartmentalize different systems into silos, and then we attempt to optimize within these silos. For example, zoning laws create chunks of land with particular prescribed purposes, and then we end up trying to optimize within each zone. When zoning laws create the kind of problem indicated by the Melbourne VAMPIRE index, there’s little the city can do about it if it continues to think in terms of zoning. The structure of these silos has become fossilized into the organizational structure of government. Take transport, for example. We tend to look at existing roads, and ask how to widen them to handle growth in traffic; we rarely attempt to solve traffic issues by asking bigger questions about why people choose to drive. Hence, we miss the opportunity to solve traffic problems by changing the relationship between where people live and where they work. Re-designing a city to provide more employment opportunities in neighbourhoods that are suffering socio-economic stress is far more likely to help than improving the transport corridors between those neighbourhoods and other parts of the city.

Healthcare is another example. The outcome metrics typically used for hospitals include average length of stay, 30-day unplanned readmission rate, cost of readmission, etc. Again, these metrics create a narrow view of the system – a silo – that we then try to optimize within. However, if you compare European and American healthcare systems, there are major structural differences. The US system is based on formula funding, in which ‘clients’ are classified in terms of type of illness, standard interventions for that illness, and associated costs. Funding is then allocated to service providers based on this classification scheme. In Europe, service providers are funded directly, and are able to decide at the local level how best to allocate that funding to serve the needs of the population they care for. The European model is a much more flexible system that treats patients’ real needs, rather than trying to fit each patient into a pre-defined category. In the US, the medical catalogue of disorders becomes an accounting scheme for allocating funds, and the result is that US medical care costs are rising faster than those of any other country. If you plot life expectancy against health spending, the US is falling far behind:

The problem is that the US health system views illness as a problem to be solved. If you think in terms of wellbeing rather than illness, you broaden the set of approaches you can use. For example, there are significant health benefits to pet ownership, to providing green space within cities, and so on, but these are not fundable within the US system. There are obvious connections between body mass index and the availability of healthy foods, the walkability of neighbourhoods, and so on, but these don’t fit into a healthcare paradigm that allocates resources according to disease diagnosis.

Andrew then illustrated the power of re-thinking cities as systems-of-systems through several Arup case studies:

  • Dongtan eco-city. This city was designed from the ground up to be food positive and energy positive (i.e. intended to generate more food and more clean energy than it uses). The design makes walking and biking preferable to driving a car. A key design tool was the use of an integrated model that captures the interactions of different systems within the city. [Dongtan is, incidentally, a classic example of how the media alternately overhypes and then trashtalks major sustainability initiatives, when the real story is so much more interesting].
  • Low2No, Helsinki, a more modest project that aims to work within the existing city to create carbon negative buildings and energy efficient neighbourhoods step by step.
  • Werribee, a suburb of Melbourne, which is mainly an agricultural town, particularly known for its broccoli farming. But with fluctuating prices, farmers have had difficulty selling their broccoli. In an innovative solution that turns this problem into an opportunity, Arup developed a new vision that uses local renewable energy, water and waste re-processing to build a self-sufficient hothouse food production and research facility that provides employment and education along with food and energy.

In conclusion, we have to understand how our views of these systems constrain us to particular pathways, and we have to understand the connections between multiple systems if we want to understand the important issues. In many cases, we don’t do well at recognizing good outcomes, because our worldviews lead us to the wrong measures of success, and then we use these measures to create silos, attempting to optimize within them, rather than seeing the big picture. Understanding the systems, and understanding how these systems shape our thinking is crucial. However, the real challenges then lie in using this understanding to frame effective policy and create effective action.

After Andrew’s talk, we moved into a hands-on workshop activity, using a set of cards developed by Arup called Drivers of Change. The cards are fascinating – there are 189 cards in the deck, each of which summarizes a key issue (e.g. urban migration, homelessness, clean water, climate change, etc.), and on the back, distills some key facts and figures. Our exercise was to find connections between the cards – each person had to pick one card that interested him or her, and then team up with two other people to identify how their three cards are related. It was a fascinating and thought-provoking exercise that really got us thinking about systems-of-systems. I’m now a big fan of the cards and plan to use them in the classroom. (I bought a deck at Indigo for $45, although I note that, bizarrely, Amazon has them selling for over $1000!).

We held a 2-day workshop at U of T last week entitled “Finding Connections – Towards a Holistic View of City Systems”. The workshop brought together a multi-disciplinary group of people from academia, industry, government, and the non-profit sector, all of whom share a common interest in understanding how cities work as systems-of-systems, and how to make our cities more sustainable and more liveable. A key theme throughout the workshop was how to make sure the kind of research we do in universities actually ends up being useful to decision-makers – i.e. can we strengthen evidence-based policymaking (and avoid, as one of the participants phrased it, “policy-based evidence-making”).

I plan to blog some of the highlights of the workshop, starting with the first keynote speaker.

The workshop kicked off with an inspiring talk by Jeb Brugmann, entitled “The Productive City”. Jeb is an expert in urban sustainability and climate change mitigation, and has a book out called “Welcome to the Urban Revolution: How Cities are Changing the World”. (I should admit the book’s been sitting on the ‘to read’ pile on my desk for a while – now I have to read it!).

Jeb’s central message was that we need to look at cities and sustainability in a radically different way. Instead of thinking of sustainability as being about saving energy, living more frugally, and making sacrifices, we should be looking at how we re-invent cities as places that produce resources rather than consume them. And he offered a number of case studies that demonstrate how this is already possible.

Jeb started his talk with the question: How will 9 billion people thrive on Earth? He then took us back to a UN meeting in 1990, the World Congress of Local Governments for a Sustainable Future. This meeting was the first time that city governments around the world came together to grapple with the question of sustainable development. To emphasize how new this was, Jeb recollected lengthy discussions at the meeting on basic questions such as how to translate the term ‘sustainable development’ into French, German, etc.

The meeting had two main outcomes:

  • Initial work on Agenda 21, getting communities engaged in collaborative sustainable decision making. [Note: Agenda 21 was subsequently adopted by 178 countries at the Rio Summit in 1992. More interestingly, if you google for Agenda 21 these days, you’re likely to find a whole bunch of nutball right-wing conspiracy theories about it being an agenda to destroy American freedom.]
  • A network of city governments dedicated to developing action on climate change [This network became ICLEI – Local Governments for Sustainability]. Jeb noted how the ambitions of the cities participating in ICLEI have grown over the years. Initially, many of these cities set targets of around a 20% reduction in greenhouse gas emissions. Over the years since, these targets have grown. For example, Chicago now has a target of 80% reduction. This is significant because these targets have been through city councils, and have been discussed and agreed on by those councils.

An important idea arising out of these agreements is the concept of the ecological footprint – sometimes expressed as the number of earths that would be needed if everyone consumed resources the way you do. The problem is that you get into definitional twists over how to measure this, and that gets in the way of actually using it as a productive planning tool.

Here’s another way of thinking about the problem. Cities currently have hugely under-optimized development patterns. For example, some cities have seven times more outspill growth (suburban sprawl) than infill growth. But there are emergent pressures on industry to optimize use of urban space and geography. Hence, we should start to examine under-used urban assets. If we can identify space within the city that doesn’t generate value, we can reinvent it. For example, the laneways of Melbourne, which in the 1970s and 80s were derelict, have now been regenerated into a rich network of local stores and businesses, and ended up as a major tourist attraction.

We also tend to dramatically underestimate the market viability of energy-efficient, sustainable buildings. For example, in Hannover, a successful project built an entire division of eco-homes to Passivhaus standards at a rental price similar to that of the old 1960s apartment buildings.

The standard view of cities, built into the notion of ecological footprint, is that cities are extraction engines – the city acts as a machine that extracts resources from the surrounding environment, processes these resources to generate value, and produces waste products that must be disposed of. Most work on sustainable cities frames the task as an attempt to reduce the impact of this process, by designing eco-efficient cities. For example, the use of secondary production (e.g. recycling) and designed dematerialization (reduction of waste in the entire product lifecycle) to reduce the inflow of resources and the outflow of wastes.

Jeb argues a more audacious goal is needed: We should transform our cities into net productive systems. Instead of focussing on reducing the impact of cities, we should use urban ecology and secondary production so that the city becomes a net positive resource generator. This is far more ambitious than existing projects that aim to create individual districts that are net zero (e.g. that produce as much energy as they consume, through local solar and wind generation). The next goal should be productive cities: cities that produce more resources than they consume; cities that process more waste than they produce.

Jeb then went on to crunch the numbers for a number of different types of resource (energy, food, metals, nitrogen), to demonstrate how a productive city might fill the gap between rising demand and declining supply:

Energy demand. Current European consumption is around 74 GJ/capita. Imagine that by 2050 we have 9 billion people on the planet, all living as Europeans do now – we’ll need 463 EJ to supply them all. Plot this growth in demand over time, and you have a wedge analysis. Using IEA projections of growth in renewable energy supply to provide the wedges, there’s still a significant shortfall. We’ll need to close the gap via urban renewable energy generation, using community designs of the type piloted in Hannover. Cities have to become net producers of energy.

Here’s the analysis (click each chart for full size):

Food. We can do a similar wedge analysis for food. Current food production globally provides around 2,800 kcal/capita. But as the population grows, this level of production yields steadily less food per person. Projected increases in crop yields and cropping intensity, conversion of additional arable land, and reduction of waste would still leave a significant gap if we wish to provide a comfortable 3,100 kcal/capita. While urban agriculture is unlikely to displace rural farm production, it can play a crucial role in closing the gap between production and need as the population grows. For example, Havana has a diversified urban agriculture that supplies close to 75% of its vegetables from within the urban environment. Vancouver has been very strategic about building its urban agricultural production, with one in seven jobs in Vancouver in food production.
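The skeleton of a wedge analysis like the ones above can be sketched in a few lines of code. The numbers below are hypothetical round figures chosen purely for illustration – they are not the IEA projections or the figures from Jeb’s talk:

```python
# Sketch of a wedge analysis: total demand grows with population, projected
# supply wedges cover only part of it, and the remaining gap is what urban
# (or other new) production must close. All numbers are hypothetical.

population_2050 = 9e9       # people
per_capita_demand_gj = 60   # hypothetical per-capita demand, GJ/year

total_demand_ej = population_2050 * per_capita_demand_gj / 1e9  # 1 EJ = 1e9 GJ

# Hypothetical supply wedges, in EJ (stand-ins for projected sources):
wedges = {"conventional": 300, "renewables growth": 160, "efficiency gains": 30}

gap_ej = total_demand_ej - sum(wedges.values())
print(f"demand: {total_demand_ej:.0f} EJ, supply wedges: {sum(wedges.values())} EJ")
print(f"gap to be closed by urban production: {gap_ej:.0f} EJ")
```

The same structure works for food (kcal/capita) or any other resource: demand minus the sum of the supply wedges is the gap that a productive city would have to fill.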

Other examples include landfill mining to produce iron and other metals, and urban production of nitrogen fertilizer from municipal biosolids.

In summary, we’ve always underestimated just how much we can transform cities. While we remain stuck in a mindset that cities are extraction engines, we will miss opportunities for more radical re-imaginings of the role of global cities. So a key research challenge is to develop a new post-“ecological footprint” analysis. There are serious issues of scaling and performance measurement to solve, and at every scale there are technical, policy, and social challenges. But as cities house ever more of the growing population, we need this kind of bold thinking.

My first year seminar course, PMU199 Climate Change: Software, Science and Society is up and running again this term. The course looks at the role of computational models in both the science and the societal decision-making around climate change. The students taking the course come from many different departments across arts and science, and we get to explore key concepts in a small group setting, while developing our communication skills.

As an initial exercise, this year’s cohort of students have written their first posts for the course blog (assignment: write a blog post on any aspect of climate change that interests you). Feel free to comment on their posts, but please keep it constructive – the students get a chance to revise their posts before we grade them (and if you’re curious, here’s the rubric).

Incidentally, for the course this year, I’ve adopted Andrew Dessler’s new book, Introduction to Modern Climate Change as the course text. The book was just published earlier this year, and I must say, it’s by far the best introductory book on climate science that I’ve seen. My students tell me they really like the book (despite the price), as it explains concepts simply and clearly, and they especially like the fact that it covers policy and society issues as well as the science. I really like the discussion in chapter 1 on who to believe, in which the author explains that readers ought to be skeptical of anyone writing on this topic (including himself), and then lays out some suggestions for how to decide who to believe. Oh, and I love the fact that there’s an entire chapter later in the book devoted to the idea of exponential growth.

I’ve been using the term Climate Informatics informally for a few years to capture the kind of research I do, at the intersection of computer science and climate science. So I was delighted to be asked to give a talk at the second annual workshop on Climate Informatics at NCAR, in Boulder this week. The workshop has been fascinating – an interesting mix of folks doing various kinds of analysis on (often huge) climate datasets, mixing up techniques from Machine Learning and Data Mining with the more traditional statistical techniques used by field researchers, and the physics-based simulations used in climate modeling.

I was curious to see how this growing community defines itself – i.e. what does the term “climate informatics” really mean? Several of the speakers offered definitions, largely drawing on the idea of the Fourth Paradigm, a term coined by Jim Gray, who explained it as follows. Originally, science was purely empirical. In the last few centuries, theoretical science came along, using models and generalizations, and in the latter half of the twentieth century, computational simulations. Now, with the advent of big data, we can see a fourth scientific research paradigm emerging, sometimes called eScience, focussed on extracting new insights from vast collections of data. By this view, climate informatics could be defined as data-driven inquiry, and hence offers a complement to existing approaches to climate science.

However, there’s still some confusion, in part because the term is new, and crosses disciplinary boundaries. For example, some people expected that Climate Informatics would encompass the problems of managing and storing big data (e.g. the 3 petabytes generated by the CMIP5 project, or the exabytes of observational data that is now taxing the resources of climate data archivists). However, that’s not what this community does. So, I came up with my own attempt to define the term:

I like this definition for three reasons. First, by bringing Information Science into the mix, we can draw a distinction between climate informatics and other parts of computer science that are relevant to climate science (e.g. the work of building the technical infrastructure for exascale computing, designing massively parallel machines, data management, etc). Secondly, information science brings with it a concern for the broader societal and philosophical questions of the nature of information and why people need it, a concern that’s often missing from computer science. Oh, and I also like this definition because I work at the intersection of the three fields myself, even though I don’t really do data-driven inquiry (although I did, many years ago, write an undergraduate thesis on machine learning). Hence, it creates a slightly broader definition than just associating the term with the ‘fourth paradigm’.

Having defined the field this way, it immediately suggests that climate informatics should also concern itself with the big picture of how we get beyond mere information, and start to produce knowledge and (hopefully) wisdom:

This diagram is adapted from a classic paper by Russ Ackoff “From Data to Wisdom”, Journal of Applied Systems Analysis, Volume 16, 1989 p 3-9. Ackoff originally had Understanding as one of the circles, but subsequent authors have pointed out that it makes more sense as one of two dimensions you move along as you make sense of the data, the other being ‘context’ or ‘connectedness’.

The machine learning community offers a number of tools primarily directed at moving from Data towards Knowledge, by finding patterns in complex datasets. The output of a machine learner is a model, but it’s a very different kind of model from the computational models used in climate science: it’s a mathematical model that describes the discovered relationships in the data. In contrast, the physics-based computational models that climate scientists build are more geared towards moving in the opposite direction, from knowledge (in the form of physical theories of climatic processes) towards data, as a way of exploring how well current theory explains the data. Of course, you can also run a climate model to project the future (and, presumably, help society choose a wise path into the future), but only once you’ve demonstrated it really does explain the data we already have about the past. Clearly the two approaches are complementary, and ought to be used together to build a strong bridge between data and wisdom.
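As a tiny illustration of the ‘data towards knowledge’ direction, here’s a least-squares fit discovering a relationship in synthetic data (the numbers are made up for illustration). The fitted coefficients are the ‘model’ – a purely mathematical description of the data, with no physics in it:

```python
# Minimal illustration of the data -> model direction: a least-squares fit
# discovers a mathematical relationship hidden in (synthetic) noisy data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)                      # e.g. years of observations
y = 2.5 * x + 1.0 + rng.normal(0, 0.5, x.size)  # hidden linear trend + noise

slope, intercept = np.polyfit(x, y, 1)          # the "learned" model: y ≈ a*x + b
print(f"fitted model: y = {slope:.2f}x + {intercept:.2f}")
```

A physics-based model works the other way round: you start from the governing equations and generate the data, then ask how well it matches observations.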

Note: You can see all the other invited talks (including mine), at the workshop archive. You can also explore the visuals I used (with no audio) at Prezi (hint: use full screen for best viewing).

It’s AGU abstract submission day, and I’ve just submitted one to a fascinating track organised by John Cook, entitled “Social Media and Blogging as a Communication Tool for Scientists”. The session looks like it will be interesting, as there are submissions from several prominent climate bloggers. I decided to submit an abstract on moderation policies for climate blogs:

Don’t Feed the Trolls: An analysis of strategies for moderating discussions on climate blogs
A perennial problem in any online discussion is the tendency for discussions to get swamped with non-constructive (and sometimes abusive) comments. Many bloggers use some form of moderation policy to filter these out, to improve the signal to noise ratio in the discussion, and to encourage constructive participation. Unfortunately, moderation policies have disadvantages too: they are time-consuming to implement, introduce a delay in posting contributions, and can lead to accusations of censorship and anger from people whose comments are removed.

In climate blogging, the problem is particularly acute because of the politicization of the discourse. The nature of comments on climate blogs varies widely. For example, on a blog focussed on the physical science of climate, comments on posts might include personal abuse, accusations of misconduct and conspiracy, repetition of political talking points, dogged pursuit of obscure technical points (whether related or not to the original post), naive questions, concern trolling (negative reactions posing as naive questions), polemics, talk of impending doom and catastrophe, as well as some honest and constructive questions about the scientific topic being discussed. How does one decide which of these comments to allow? And if some comments are to be removed, what should be done with them?

In this presentation, I will survey a number of different moderation strategies used on climate blogs (along with a few notable examples from other kinds of blogs), and identify the advantages and disadvantages of each. The nature of the moderation strategy has an impact on the size and kind of audience a blog attracts. Hence, the choice of moderation strategy should depend on the overall goals for the blog, the nature of the intended audience, and the resources (particularly time) available to implement the strategy.

05. June 2012 · 3 comments · Categories: philosophy

There’s been plenty of reaction across the net this week in response to a daft NYT article that begins “Men invented the internet”. At BoingBoing, Xeni is rightly outraged at the way the article is written, and in response comes up with plenty of examples of the contributions women have made to the development of computing. Throughout the resulting thread, many commentators chip in with more examples. Occasionally a (male) commentator shows up and tries to narrow down the definition of “invented the internet” to show that, yes, it was a man who invented some crucial piece of technology that makes it work. These comments are very revealing of a particular (male!) mindset towards technology and invention.

The central problem in the discussion seems to have been missed entirely. The real problem word in that opening sentence isn’t the word “men”, it’s the word “invented”. The internet is an incredibly complex socio-technical system. The notion that any one person (or any small group of people) invented it is ludicrous. Over a period of decades, the various technologies, protocols and conventions that make the internet work gradually evolved, through the efforts of a huge number of people (men and women), through a remarkable open design process. The people engaged in this endeavour had to invent new social processes for sharing and testing design ideas and getting feedback (for example, the RFC). It was these social processes, as much as any piece of technology, that made the internet possible.

But we should go further, because the concept of “invented” is even more problematic. If you study how any modern device came to be, the idea that there is a unique point in space and time that can be called its “invention” is really just a fiction. Henry Petroski does a great job of demonstrating this, through his histories of everyday objects such as pencils, cutlery, and so on. The technologies we rely on today all passed through a long history of evolution in the same way. Each new form is a variant of ones that have gone before, created to respond to a perceived flaw in its predecessors. Some of these new variants are barely different from others, others represent larger modifications. Many of these modifications are worse than the original, some are better for specific purposes (and hence may start a new niche), and occasionally a more generally useful improvement is made.

The act of pointing to these occasional, larger modifications, and choosing to label them as “the birth of the modern X”, or “the first X”, or “the invention of X”, is purely a social construct. We do it because we’re anchored in the present, seeing only the outcomes of these evolutionary processes, and we make the same mistake that creationists make, of being unable to conceive of the huge variety of intermediate forms that came before, and the massive process of trial and error that selected a particular form to survive and prosper. And, through continued operation of that bias, we’ve been conditioned to think in terms of unique moments of “invention” (often accompanied by a caricature of the lonely inventor working late at night in the lab).

And one of the biggest differences between men and women, in terms of social behaviour, is that men tend to boast about their successes and identify winners, while women tend to acknowledge group contributions and downplay their own efforts. So it’s hardly surprising that our history books are more full of male “inventors” than female inventors – the very idea of looking for a unique person to call the “inventor” is largely a male concept. Not only that, but it’s overwhelmingly a rich white guys’ way of looking at the world. The rich and powerful get to make decisions about who gets the credit for stuff. Not surprisingly, rich and powerful white men tend to pick other white men to designate as the “inventor”, and marginalize the contributions of others, no matter who else contributed to the idea during its gestation.

Update: Jan 9, 2014: Thanks to my student, Elizabeth, I now know the term for this: it’s the Matthew Effect. The wikipedia page has lots of examples.

29. May 2012 · 5 comments · Categories: education

A few people today have pointed me at the new paper by Dan Kahan & colleagues (1), which explores competing explanations for why lots of people don’t regard climate change as a serious problem. I’ve blogged about Dan’s work before – their earlier studies are well designed, and address important questions. If you’re familiar with their work, the new study isn’t surprising. They find that people’s level of concern over climate change doesn’t correlate with their level of science literacy, but does correlate with their cultural philosophies. In this experiment, there was no difference in science literacy between people who are concerned about climate change and those who are not. They use this to build a conclusion that giving people more facts is not likely to change their minds:

A communication strategy that focuses only on transmission of sound scientific information, our results suggest, is unlikely to do that.

Which is reasonable advice, because science communication must address how different people see the world, and how people filter information based on their existing worldview:

Effective strategies include use of culturally diverse communicators, whose affinity with different communities enhances their credibility, and information-framing techniques that invest policy solutions with resonances congenial to diverse groups

Naturally, some disreputable news sites spun the basic finding as “Global warming skeptics as knowledgeable about science as climate change believers, study says”. Which is not what the study says at all, because it didn’t measure what people know about the science.

The problem is that there’s an unacknowledged construct validity problem in the study. At the beginning of the paper, the authors talk about “the science comprehension thesis (SCT)”:

As members of the public do not know what scientists know, or think the way scientists think, they predictably fail to take climate change as seriously as scientists believe they should.

…which they then claim their study disproves. But when you get into the actual study design, they quickly switch to talking about science literacy:

We measured respondents’ science literacy with the National Science Foundation’s (NSF) Science and Engineering Indicators. Focused on physics and biology (for example, ‘Electrons are smaller than atoms [true/false]’; ‘Antibiotics kill viruses as well as bacteria [true/false]’), the NSF Indicators are widely used as an index of public comprehension of basic science.

But the problem is, science comprehension cannot be measured by asking a whole bunch of true/false questions about scientific “facts”! All that measures is the ability to do well in a pub trivia quiz.

Unfortunately, this mistake is widespread, and leads to an education strategy that fills students’ heads with a whole bunch of disconnected science trivia, and no appreciation for what science is really all about. When high school students learn chemistry, for example, they have to follow recipes from a textbook, and get the “right” results. If their results don’t match the textbook’s, they get poor marks. When they’re given science tests (like the NSF one used in this study), they’re given the message that there’s a right and wrong answer to each question, and you just gotta know it. But that’s about as far from real science as you can get! It’s when the experiment gives surprising results that the real scientific process kicks in. Science isn’t about getting the textbook answer, it’s about asking interesting questions, and finding sound methods for answering them. The myths about science are ground into kids from an early age by people who teach science as a bunch of facts (3).

At the core of the problem is a failure to make the distinction that Maienschein points out between science literacy and scientific literacy (2). The NSF instrument measures the former. But science comprehension is about the latter – it…

…emphasizes scientific ways of knowing and the process of thinking critically and creatively about the natural world.

So, to Kahan’s interpretation of the results, I would add another hypothesis: we should actually start teaching people about what scientists do, how they work, and what it means to collect and analyze large bodies of evidence. How results (yes, even published ones) often turn out to be wrong, and what matters is the accumulation of evidence over time, rather than any individual result. After all, with Google, you can now quickly find a published result to support just about any crazy claim. We need to teach people why that’s not science.

Update (May 30, 2012): Several people suggested I should also point out that the science literacy test they used is for basic science questions across physics and biology; they did not attempt to test in any way people’s knowledge of climate science. So that seriously dents their conclusion: The study says nothing about whether giving people more facts about climate science is likely to make a difference.

Update2 (May 31, 2012): Some folks on Twitter argued with my statement “concern over climate change doesn’t correlate with … level of science literacy”. Apparently none of them have a clue how to interpret statistical analysis in the behavioural sciences. Here’s Dan Kahan on the topic. (h/t to Tamsin).


(1) Kahan, D., Peters, E., Wittlin, M., Slovic, P., Ouellette, L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change. DOI: 10.1038/nclimate1547
(2) Maienschein, J. (1998). Scientific Literacy. Science, 281(5379), 917. DOI: 10.1126/science.281.5379.917
(3) McComas, W. F. (1998). The Principal Elements of the Nature of Science: Dispelling the Myths. In The Nature of Science in Science Education.

I’ve been following a heated discussion on twitter this past week about a planned protest on Sunday in the UK, in which environmentalists plan to destroy a crop of genetically modified wheat being grown as part of a scientific experiment at Rothamsted, in Hertfordshire (which is, incidentally, close to where I grew up). Many scientists I follow on twitter are incensed, calling the protest anti-science. And some worry that it’s part of a larger anti-science trend in which the science on issues such as climate change gets ignored too. In return, the protesters are adamant that the experiment should not be happening, no matter what potential benefits the research might bring.

I’m fascinated by the debate, because it seems to be a classic example of the principle of complementarity in action, with each group describing things in terms of different systems, and rejecting the others’ position because it makes no sense within their own worldview. So, it should make a great case study for applying boundary critique, in which we identify the system that each group is seeing, and then explore where they’ve chosen to draw the boundaries of that system, and why. I think this will make a great case study for my course next month.

I’ve identified eight different systems that people have talked about in the debate. This is still something of a work in progress (and I hope my students can extend the analysis). So here they are, and for each some initial comments towards a boundary critique:

  1. A system of scientists doing research. Many scientists see the protests as nothing more than irrational destruction of research. The system that motivates this view is a system of scientific experimentation, in which expert researchers choose problems to work on, based on their expectation that the results will be interesting and useful in some way. In this case, the GM trials are applied research – there is an expectation that the modified wheat might lead to agricultural improvements (e.g. through improved yield, or reduced need for fertilizer or pesticide). Within this system, science is seen as a neutral pursuit of knowledge, and therefore, attempts to disrupt experiments must be “anti-knowledge”, or “anti-science”. People who operate within this system tend to frame the discussion in terms of an attack on a particular group of researchers (on twitter, they’ve been using the hashtag #dontdestroyresearch), and they ask, pointedly, whether green politicians and groups condone or condemn the destruction. (The irony here is that the latter question is, itself, unscientific – it’s a rhetorical device used in wedge politics – but few of the people using it acknowledge this). Questions about whether certain kinds of research are ethical, or who might reap the benefits from this research, lie outside the boundary of this system, and so are not considered. It is assumed that the researchers themselves, as experts, have made those judgments well, and that the research itself is not, and cannot be, a political act.
  2. A system of research ethics and risk management. If we expand the boundaries of system 1 a little, we see a system of processes by which scientific experiments are assessed for how they manage the risks they pose to the public. Scientific fields differ in their sophistication for how they arrange this system. In the physical sciences, the question often doesn’t arise, because the research itself carries no risk. But in medical and social sciences, processes have arisen for making this judgement, sometimes in response to a disaster or a scandal. Most research institutions have set up Institutional Review Boards (IRBs) who must approve (or prevent) research studies that pose a risk to people or ecosystems. My own research often strays into behavioural science, so I frequently have to go through our ethics approval process. The approvals process is usually frustrating, and I’m often surprised at some of the modifications the ethics board asks me to make, because my assessment of the risk is different to theirs. However, if I take a step back, I can see that both the process and the restrictions it places on me are necessary, and that I’m definitely not the right person to make judgements about the risks I might impose on others in my research. The central question is usually one of beneficence: does the value of the knowledge gained outweigh any potential risk to participants or others affected by the study? Some research clearly should not happen, because the argument for beneficence is too weak. In this view, the Rothamsted protest is really about democratic control of the risk assessment process. If all stakeholders aren’t included, and the potential impact on them is not taken seriously, they lose faith in the scientific enterprise itself.
In the case of GMOs, there’s a widespread public perception (in the UK) that the interests of large corporations who stand to profit from this research are being allowed to drive the approvals process, and that the researchers themselves are unable to see this because they’re stuck in system 1. I’ve no idea how true this is for GMO research, but there’s plenty of evidence that it’s become a huge problem in pharmaceutical research. Medical research organizations have, in the last few years, taken significant steps to reduce the problem, e.g. by creating registers of trials to ensure negative results don’t get hidden. The biotech research community appears to be way behind on this, and much research still gets done behind the veil of corporate secrecy. (The irony here is that the Rothamsted trials are publicly funded, and results will be publicly available, making it perhaps the least troublesome biotech research with respect to corporate control. However, that visibility makes it an easy target, and hence, within this system, the protest is really an objection to how the government ran the risk assessment and approval process for this experiment).
  3. A system of ecosystems and contaminants that weaken them. Some of the protesters are focused more specifically on the threat that this and similar experiments might pose to neighbouring ecosystems. In this view, GMOs are a potential contaminant, which, once released into the wild, cannot ever be recalled. Ecosystems are complex systems, and we still don’t understand all the interactions that take place within them, and how changing conditions can damage them. Previous experimentation (e.g. the introduction of non-native species, culls of species regarded as pests, etc) has often been disastrous, because of unanticipated system interactions. Within this system, scientists releasing GMOs into the wild are potentially repeating these mistakes of the past, but on a grander scale, because a GMO represents a bigger step change within the system than, say, selective breeding. Because these ecosystems have non-linear dynamics, bigger step changes aren’t just a little more risky than small step changes; they risk hitting a threshold and causing ecosystem collapse. People who see this system tend to frame the discussion in terms of the likelihood of cross-contamination by the GMO, and hence worry that no set of safeguards by the researchers is sufficient to guarantee the GMO won’t escape. Hence, they object to the field trials on principle. This trial is therefore, potentially, the thin end of the wedge, a step towards lifting the wider ban on such trials. If this trial is allowed to go ahead, then others will surely follow, and sooner or later, various GMOs will escape with largely unpredictable consequences for ecosystems. As the GMOs are supposed to have a competitive advantage over other related species, once they’ve escaped, they’re likely to spread, in the same way that invasive species did.
So, although the researchers in this experiment may have taken extensive precautions to prevent cross-contamination, such measures will never be sufficient to guarantee protection, and indeed, there’s already a systematic pattern of researchers underestimating the potential spread of GMO seeds (e.g. through birds and insects), and of course, they routinely underestimate the likelihood of human error. Part of the problem here is that the researchers themselves are biased in at least two ways: they designed the protection measures themselves, so they tend to overestimate their effectiveness, and they believe their GMOs are likely to be beneficial (otherwise they wouldn’t be working on them), so they downplay the risk to ecosystems if they do escape. Within this system, halting this trial is equivalent to protecting the ecosystems from risky contamination. (The irony here is that a bunch of protesters marching into the field to destroy the crop is likely to spread the contamination anyway. The protesters might rationalize it by saying this particular trial is more symbolic, because the risk from any one trial is rather low; instead the aim is to make it impossible for future trials to go ahead.)
  4. A system of intellectual property rights and the corresponding privatization of public goods. Some see GMO research as part of a growing system of intellectual property rights, in which large corporations gain control of who can grow which seeds and when. In Canada, this issue became salient when Monsanto tried suing farmers who were found to have its genetically modified canola planted in their fields, despite the fact that those farmers had never planted it (it turned out the seeds were the result of cross-contamination from other fields, something that Monsanto officially denies is possible). By requiring farmers to pay a licence fee each year to re-plant their proprietary seeds, these companies create a financial dependency that didn’t exist when farmers were able to save seeds to be replanted. Across developing countries, there is growing concern that agribusiness is gaining too much control of local agriculture, creating a market in which only their proprietary seeds can be planted, and hence causing a net outflow of wealth from countries that can least afford it to large multi-national corporations. I don’t see this view playing a major role in the UK protests this week, although it does come up in the literature from the protest groups, and is implicit in the name of the protest group: Take The Flour Back.
  5. An economic system in which investment in R&D is expected to boost the economy. This is the basic capitalist system. Companies that have the capital invest in research into new technologies (GMOs) that can potentially bring big returns on investment for biotech corporations. This is almost certainly the UK government’s perspective on the trials at Rothamsted – the research should be good for the economy. It’s also perhaps the system that motivates some of the protesters, especially where they see this system exacerbating current inequalities (big corporations get richer, everyone else pays more for their food). Certainly, economic analysis of the winners and losers from GM technology demonstrates that large corporations gain, and small-scale farmers lose out.
  6. A system of global food supply and demand, in which a growing global population, and a fundamental limit on the land available for agriculture, pose serious challenges for how to achieve a better match of food consumption to food production. In the past, we solved this problem through two means: expanding the amount of land under cultivation, and through the green revolution, in which agricultural yields were increased by industrialization of the agricultural system and the wide-scale use of artificial fertilizers. GMOs are (depending on who you ask) either the magic bullet that will allow us to feed 9 billion people by mid-century, or, more modestly, one of many possible solutions that we should investigate. In this system, the research at Rothamsted is seen as a valuable step towards solving world hunger, and so protesting against it is irrational. The irony here is that improving agricultural yields is probably the least important part of the challenge of feeding 9 billion people: there is much more leverage to be had in solving problems of food distribution, reducing wastage, and reducing the amount of agricultural land devoted to non-foods.
  7. A system of potential threats to human health and well-being. Some see GMOs as a health issue. Potential human health effects include allergies, and cross-species genetic transfer, although scientists dismiss both, citing a lack of evidence. While there is some (disputed) evidence of such health risks already occurring, on balance this is more a concern about unpredictable future impacts, rather than what has already happened, which means an insistence on providing evidence is irrelevant: a bad outcome doesn’t have to have already occurred for us to take the risk seriously. If we rely on ever more GMOs to drive the global agricultural system, sooner or later we will encounter such health problems, most likely through increased allergic reaction. Allergies themselves have interesting systemic properties – they arise when the body’s normal immune system, doing its normal thing, ends up over-reacting to a stimulus (e.g. new proteins) that is otherwise harmless. The concern here, then, is that the reinforcing feedback loop of ever more GM plant variants means that, sooner or later, we will cross a threshold where there is an impact on human health. People who worry about this system tend to frame the discussion using terms such as “Frankenfoods”, a term that is widely derided by biotech scientists. The irony here is that by dismissing such risks entirely, the scientists reduce their credibility in the eyes of the general public, and end up seeming even more like Dr Frankenstein, oblivious to their own folly.
  8. A system of sustainable agriculture, with long time horizons. In this system, short term improvements in agricultural yield are largely irrelevant, unless the improvement can be demonstrated to be sustainable indefinitely without further substantial inputs to the system. In general, most technological fixes fail this test. The green revolution was brought about by a massive reliance on artificial fertilizer, derived from fossil fuels. As we hit peak oil, this approach cannot be sustained. Additionally, the approach has brought its own problems, including massive nitrogen pollution of lakes and coastal waters, poorer quality soils, and of course, the resulting climate change from the heavy use of fossil fuels. In this sense, technological fixes provide short term gains in exchange for a long term debt that must be paid by future generations. In this view, GMOs are seen as an even bigger step in the wrong direction, as they replace an existing diversity in seed-stocks and farming methods with industrialized mono-cultures, and divert attention away from the need for soil conservation, and long-term sustainable farming practices. In this system, small scale organic farming is seen as the best way of improving the resilience of global food production. While organic farming sometimes (but not always!) means lower yields, it reduces dependency on external inputs (e.g. artificial fertilizers and pesticides), and increases diversity. Systems with more diverse structures tend to be more resilient in the face of new threats, and the changing climates over the next few decades will severely test the resilience of our farming methods in many regions of the world. The people who worry about this system point to failures of GMOs to maintain their resistance to pests.
Here, you get a reinforcing feedback loop in which you need ever more advances in GMO technology to keep pace with the growth of resistance within the ecosystem, and with each such advance, you make it harder for non-GMO food varieties to survive. So while most proponents of GMOs see them as technological saviours, in the long term it’s likely they actually reduce the ability of the global agricultural system to survive the shocks of climate change.

Systems theory leads us to expect that these systems will interact in interesting ways, and indeed they do. For example, systems 6 and 8 can easily be confused as having the same goal, but in fact, because the systems have very different temporal scales, they can end up being in conflict: short-term improvements to agricultural yield can lead to long term reduction of sustainability and resilience. Systems 6 and 7 can also interfere – it’s been argued that the green revolution reduced world starvation and replaced it with widespread malnutrition, as industrialization of food production gives us fewer healthy food choices. Systems 1 and 4 are often in conflict, and are leading to ever more heated debates over open access to research results. And of course, one of the biggest worries of some of the protest groups is the interaction between systems 2 and 5: the existence of a large profit motive tends to weaken good risk management practices in biotech research.

Perhaps the most telling interaction is the opportunity cost. While governments and corporations, focusing on systems 5 & 6, pour funding and effort into research into GMOs, other, better solutions to long term sustainability and resilience, required in system 8, become under-invested. More simply: if we’re asking the wrong question about the benefit of GMOs, we’ll make poor decisions about whether to pursue them. We should be asking different questions about how to feed the world, and resources put into publicly funded GMO research tend to push us even further in the wrong direction.

So where does that leave the proposed protests? Should the trials at Rothamsted be allowed to continue, or do the protesters have the right to force an end to the experiment, by wilful destruction if necessary? My personal take is that the experiment should be halted immediately, preferably by Rothamsted itself, on the basis that it hasn’t yet passed the test for beneficence in a number of systems. The knowledge gain from this one trial is too small to justify creating this level of societal conflict. I’m sure some of my colleagues will label me anti-science for this position, but in fact, I would argue that my position here is strongly pro-science: an act of humility by scientists is far more likely to improve the level of trust that the public has in the scientific community. Proceeding with the trial puts public trust in scientists further at risk.

Let’s return to that question of whether there’s an analogy between people attacking the biotech scientists and people attacking climate scientists. If you operate purely within system 1, the analogy seems compelling. However, it breaks down as soon as you move to system 2, because the risks have opposite signs. In the case of GMO food trials, the research itself creates a risk; choosing not to do the research at all (or destroying it if someone else tries it) is an attempt to reduce risk. In the case of climate science, the biggest risks are on the business-as-usual scenario. Choosing to do the research itself poses no additional risk, and indeed reduces it, because we come to understand more about how the climate system works.

The closest analogy in climate science I can think of is the debate over geo-engineering. Many climate scientists objected to any research being done on geo-engineering for many years, for exactly the reason many people object to GMO research – because it diverts attention away from more important things we should be doing, such as reducing greenhouse gas emissions. A few years back, the climate science community seems to have shifted perspective, towards the view that geo-engineering is a desperate measure that might buy us more time to get emissions under control, and hence research is necessary to find out how well it works. A few geo-engineering field trials have already happened. As these start to gain more public attention, I would expect the protests to start in earnest, along with threats to destroy the research. And it will be for all the same reasons that people want to destroy the GM wheat trials at Rothamsted. And, unless we all become better systems thinkers, we’ll have all the same misunderstandings.

Update (May 29, 2012): I ought to collect links to thought provoking articles on this. Here are some: