Each time you encounter someone trying to claim human-induced global warming is a myth (e.g. because “Mars is warming too!”), you can save a lot of time and energy by just saying, oh yes, that’s myth #16 on the standard list of misunderstandings about climate change. Here’s the list, lovingly and painstakingly put together by John Cook.

Once you’ve got that out of the way, you can then challenge your assailant to identify a safe level of carbon dioxide in the atmosphere, and to justify that choice with evidence. If they don’t feel qualified to answer, you’ve reached a teachable moment. Take the opportunity to teach them the difference between greenhouse gas emissions and greenhouse gas concentrations. That’s the single most important thing they need to understand. Here’s why:

  • We know that the earth warms by somewhere between 2 and 4.5°C (with a best estimate of about 3°C) for each doubling of the CO2 concentration in the atmosphere. This was first calculated over 100 years ago; the number has been refined a little as we’ve come to understand the physical processes better, but only within a degree or two. (There’s a back-of-the-envelope sketch of this relationship right after this list.)
  • CO2 is unlike any other pollutant: once it’s in the atmosphere, it stays there for centuries. More precisely, it stays in the carbon cycle, being passed around between plants, soil, oceans, and atmosphere; it only ever goes away for good when it eventually gets laid down as a new fossil layer, e.g. at the bottom of the ocean.
  • The earth’s temperature only responds slowly to changes in the level of greenhouse gases in the atmosphere. That means that even though we’ve seen warming of around 0.7°C over the last century, we’re still owed at least that much again due to the CO2 we have already added to the atmosphere.
  • The temperature is not determined by the amount of CO2 we emit; it’s determined by the total accumulation in the atmosphere – i.e. how thick the “blanket” is.
  • Because the carbon stays there for centuries, all new emissions increase the concentration, thus compounding the problem. The only sustainable level of net greenhouse gas emissions from human activities is zero.
  • If we ever manage to get to the point where net emissions of greenhouse gases from human activities are zero, the planet will eventually (probably over centuries) return to pre-industrial atmospheric concentration levels (about 270 parts per million), as the carbon gets reburied. During this time, the earth will continue to warm.
  • Net emissions are, of course, the difference between gross emissions and any carbon we manage to remove from the system artificially. As no technology currently exists for reliably and permanently removing carbon from the system, it would be prudent to aim for zero gross emissions. And the quicker we do it, the less the planet will warm in the meantime.
  • And a 3°C change in global average temperature is about the difference between the last ice age (which ended about 12,000 years ago) and today’s climate. In the last ice age there were ice sheets 0.5km thick over much of North America and Europe. Now imagine how different the earth will be with a further 3°C of warming.
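
To make the emissions-vs-concentrations point concrete, here’s the back-of-the-envelope sketch promised above, in Python (my own illustration, not anyone’s published tool). It uses the standard logarithmic relationship between CO2 concentration and eventual equilibrium warming, with the numbers from the list: roughly 3°C per doubling, and a pre-industrial concentration of about 270 ppm.

```python
import math

# Back-of-the-envelope only: the equilibrium warming implied by a given CO2
# concentration, ignoring the (slow) response time of the climate system.
SENSITIVITY_PER_DOUBLING = 3.0   # best-estimate warming (deg C) per doubling of CO2
PREINDUSTRIAL_PPM = 270.0        # approximate pre-industrial concentration (ppm)

def equilibrium_warming(concentration_ppm):
    """Eventual warming (deg C) above pre-industrial for a given CO2 level."""
    doublings = math.log2(concentration_ppm / PREINDUSTRIAL_PPM)
    return SENSITIVITY_PER_DOUBLING * doublings

for ppm in (270, 390, 450, 540):
    print(f"{ppm:4d} ppm  ->  {equilibrium_warming(ppm):+.1f} deg C at equilibrium")
```

The point the numbers make: the warming we are committed to depends only on how high we let the concentration get, not on how fast we get there; and at 540 ppm (one doubling of pre-industrial levels) we’d be committed to the full 3°C.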

Okay, so that might be a little too much for one teachable moment. What we really need is a simple, elegant tool to illustrate all this. Anyone up for building an interactive visualization? John Sterman tried, but I don’t rate his tool highly on the usability scale.

This week I’m at OOPSLA, mainly for the workshop on software research and climate change, which went exceedingly well (despite some technical hiccups), and which I will blog once I get my notes together. Now I can relax and enjoy the rest of the conference.

Today, Tom Malone from MIT is giving a keynote talk to kick off the Onward! track. Tom and I chatted over dinner last night about his Climate Collaboratorium project, which is an attempt to meet many of the goals I’ve been discussing about creating tools to foster a constructive public discourse about climate change and its solutions. So I’m keen to hear what he has to say in his keynote.

13:32: Bernd Bruegge is giving an overview of what the Onward! conference (part of OOPSLA) is about. This year, Onward! has grown from a track within OOPSLA to a fully fledged co-located conference in its own right.

13:36: He’s now introducing Tom Malone. Which reminds me I ought to get his book, The Future of Work. Okay, now Tom’s up, and his talk is entitled “The Future of Collective Intelligence”. His opening question is “who here is happy?” – he got us to raise hands. Looks like the overwhelming majority of the audience are happy. His definition of collective intelligence deliberately dodges the question of what intelligence is: “Groups of individuals doing things collectively that seem intelligent”. Oh, and collective stupidity also happens, and one of the interesting research questions is to figure out why. By this definition, collective intelligence has existed for centuries, but new forms have arrived recently. For example, the way Google searches work; and of course, Wikipedia. For Wikipedia, the key enabler was the organisational design rather than the technology. More examples: Digg, YouTube, Linux, prediction markets,…

His core research question is: “How can people and computers be connected so that collectively they act more intelligently than any person, group or computer has ever done before?” It’s an aspirational question, but to answer it we need a systematic attempt to understand collective intelligence (rather than just marveling at the various instances). His first attempt was to identify and understand different species of collective intelligence. He then realised that a more productive metaphor was to look for individual genes that are common across several different species. Or, put another way: what are the design patterns?

Four questions underlie the activity in every design pattern: who is doing it? what are they doing? why are they doing it? and how? Tom challenged us to think about what percentage of the intelligence, energy, etc. of the people in the organisation you are in right now (e.g. the OOPSLA conference) is actually available to the organisation. Most people in the audience had low numbers: 30% or less, and lots said less than 10%. Then he showed us a video of an experiment in which many people in a large room collectively flew a simulated airplane. Everyone had a two-sided reflective wand (red on one side, green on the other). Half the people controlled up and down, the other half controlled left and right. The video was hilarious, but also surprising in how well the audience did.

So, in this example, the “who” is the crowd. The crowd gene is useful when the knowledge needed for a task is distributed across a crowd and you’re not sure a priori where it is, but it only works when attempts to subvert the task can be controlled in some way.

The why boils down to love, glory, or money. Appealing to love and glory, rather than money, can reduce costs (but not always): e.g. make the task fun and people will choose to do it anyway. More interestingly, you can influence the direction of the task by offering money or glory for certain actions. But most people get the motivational factors wrong (or just don’t think about them).

The what often boils down to Create or Decide. This gives us four situations, depending on whether the crowd does the pieces of the task independently or not:

  1. Collection: ‘create’ tasks where the pieces are independent. Examples include Wikipedia, SourceForge and YouTube. A special subcategory of the collection pattern is the competition pattern, where you only need a few of the pieces, e.g. TopCoder and the Netflix Prize. In the latter, the offer of a $1 million prize motivated many people to work on the problem for two years. Eventually, several competing teams combined their solutions, and collectively they met the goal of a 10% improvement in Netflix’s movie recommender. Another example: the Matlab programming contests. In these, the competing algorithms are made available to all teams, so they can take each other’s ideas and incorporate them. This mix of competition and collaboration appears to be strangely addictive to many people. The competition pattern is useful when only one (or a few) good solutions are needed and the motivation is clear.
  2. Collaboration: ‘create’ tasks where the pieces are dependent on one another. Wikipedia is also an example of this, because different edits to the same article are highly inter-dependent. These dependencies are coordinated in Wikipedia through the talk pages; in Linux, the coordination happens through the discussion forums. Tom’s Climate Collaboratorium is another example: plans are proposed, discussed and voted on, and by combining the plans, the aim is to create better plans than would be available without the collaboration. Managing the inter-dependencies turns out to be the hard part of collaboration projects. Most existing examples rely on manual coordination mechanisms, so an interesting question is what automated support can be provided; suggestions here include better explicit representations of the interdependencies. The collaboration pattern works when a large-scale task needs doing, there is no satisfactory way of dividing it into independent pieces, but there is a way to manage the interdependencies.
  3. Group Decision: ‘decide’ tasks where the pieces are inter-dependent. Simple mechanisms include:
    • voting. An interesting example is a baseball team where fans could vote online to decide the batting order, pitching rotation, starting line-up, etc. They did this for one season and lost most of their games, possibly because fans of other teams sabotaged the voting. A similar attempt by a UK soccer team requires you to be an “owner” of the team (35 pounds per year) to vote, and that team seems to be doing well. Another example: Kasparov vs. the World. It was expected that Kasparov would win easily; in fact, he later said it was the hardest game he ever played. One key was that the crowd could discuss their moves over a 24-hour period before voting on them.
    • consensus. This is what is used in Wikipedia when there are disagreements.
    • averaging. Useful in some group estimation tasks: the average of a large number of individual estimates is often more accurate than any individual estimate (see the sketch after this list). Another example: NASA Clickworkers, used to get crowds of people to identify craters in photos of astronomical bodies; the averaged judgments of many novices did pretty much as well as experts (and much more cheaply). Another example is prediction markets. A great example: Microsoft used prediction markets to assess the likely release date of an internal product. They quickly found that the official expected release date was way earlier than the participants thought achievable, and the product manager was then alerted to problems with the project that were known to some of the team but had not been communicated.
  4. Individual Decision: ‘decide’ tasks where the pieces are independent. For example:
    • the market pattern, where everyone makes individual decisions about when to buy things and at what price. The Amazon Mechanical Turk is another example. One of Tom’s students has written a toolkit for iteratively launching and then integrating Mechanical Turk tasks.
    • the social network pattern – people make individual decisions, but without any money changing hands. For example, the Amazon recommendation system.
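
As a quick illustration of why the averaging pattern works, here’s a toy example of my own (not from the talk): if individual estimates scatter around the true value with independent, unbiased errors, the crowd’s average cancels most of the noise.

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 100.0     # the quantity the crowd is trying to estimate
CROWD_SIZE = 1000

# Each person's guess is the truth plus independent, unbiased noise.
estimates = [TRUE_VALUE + random.gauss(0, 30) for _ in range(CROWD_SIZE)]

typical_individual_error = statistics.mean(abs(e - TRUE_VALUE) for e in estimates)
crowd_error = abs(statistics.mean(estimates) - TRUE_VALUE)

print(f"typical individual error:   {typical_individual_error:5.1f}")
print(f"error of the crowd average: {crowd_error:5.1f}")
```

Of course, this only works when the errors really are independent and unbiased; if the crowd shares a systematic bias, or gets sabotaged (as in the baseball example above), averaging won’t save you.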

Observations from this analysis: genes (patterns) don’t occur in isolation, but in particular combinations. For example, across the range of tasks involved in deciding which Wikipedia articles to keep, and editing those articles, many of the different patterns from all four quadrants are used. There are also families of similar combinations. E.g. InnoCentive and Threadless are almost identical in terms of the patterns they use, the only difference being that Threadless also includes a crowd vote.
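
To show how the “genome” idea composes, here’s a tiny sketch (my own encoding, not Tom’s) of the 2×2 quadrant map from the list above, plus one example combination mentioned in the talk:

```python
# The four quadrants: (what, pieces independent?) -> pattern name.
QUADRANTS = {
    ("create", True):  "Collection (incl. Competition)",
    ("create", False): "Collaboration",
    ("decide", False): "Group Decision (voting, consensus, averaging)",
    ("decide", True):  "Individual Decision (markets, social networks)",
}

# Example "genome": Threadless is essentially a competition over independently
# created designs, combined with a crowd vote on which ones to produce.
threadless = [("create", True), ("decide", False)]

for gene in threadless:
    print(QUADRANTS[gene])
```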

Tom finished with some speculative comments about seeing us, at some point in the future, as a single global brain, and closed with a quote from Kevin Kelly’s We Are the Web:

There is only one time in the history of each planet when its inhabitants first wire up its innumerable parts to make one large Machine. Later that Machine may run faster, but there is only one time when it is born.

You and I are alive at this moment.

PS: most of the ideas in the talk are in the paper Harnessing crowds.

I’ve been invited to give a talk to the Toronto HCI chapter as part of World Usability Day, for which the theme is designing for a sustainable world. Here’s what I have come up with as an abstract for my talk, to be entitled “Usable Climate Science”:

Sustainability is usually defined as “the ability to meet present needs without compromising the ability of future generations to meet their needs”. The current interest in sustainability derives partly from a general concern about environmental degradation and resource depletion, and partly from an awareness of the threat of climate change. But to many people, climate change is only a vague problem, and to some people (e.g. about half the US population) it isn’t regarded as a problem at all. There is a widespread lack of understanding of the core results of climate science, and of the methodology by which those results are obtained. This in turn means that the public discourse is dominated by ignorance, polarization, and political point scoring. In this environment, lobbyists can propagate misinformation on behalf of various vested interests, and people decide what to believe based on their political worldviews rather than on what the scientific evidence actually says. The chances of getting sound, effective policy in such an environment are slim. In this talk, I will argue that we cannot properly address the challenge of climate change unless this situation is fixed. Furthermore, I’ll argue that the core problem is a usability challenge: how do we make the science itself accessible to the general public? The numerical simulations of climate developed by climatologists are usable only by people with PhDs in climatology. The infographics used to explain climate change in the popular press tend to be high on design and low on information. What is missing is a concerted attempt to get the core science across to a general audience using software tools and visualizations in which usability is the primary design principle. In short, how do we make climate science usable? Unless we do this, journalists, politicians and the public will be unable to judge whether proposed policy solutions are viable, and unable to distinguish sound science from misinformation. I will illustrate the talk with some suggestions of how we might meet this goal.

Update: talk details have now been announced. It’s on Nov 12 at 7:15pm, in BA1220.