I’ve finally managed to post the results of our workshop on Software Research and Climate Change, held at Onward/Oopsla last month. We did lots of brainstorming, and attempted to cluster the ideas, as you can see in the photos of our sticky notes.
After the workshop, I attempted to boil down the ideas even further, and came up with three clusters of research:
1. Green IT, i.e. optimizing the power consumption of software and of all things controlled by software (also known as “make sure ICT is no longer part of the problem”). Examples of research in this space include:
    - Power-aware computing (better management of power in all devices, from mobile to massive installations).
    - Green controllers (smart software to optimize and balance power consumption in everything that consumes power).
    - Sustainability as a first-class requirement in software system design.
2. Computer-Supported Collaborative Science (also known as eScience – i.e. software to support and accelerate inter-disciplinary science in climatology and related disciplines). Examples of research in this space include:
    - Software engineering tools/techniques for climate modellers
    - Data management for data-intensive science
    - Open Notebook Science (electronic notebooks)
    - Social network tools for knowledge finding and expertise mapping
    - Smart ontologies
3. Software to improve global collective decision making (which includes everything from tools to improve public understanding of science through to decision support at multiple levels: individual, community, government, inter-governmental, …). Examples of research in this space include:
    - Simulations, games, and educational software to support public understanding of the science (usable climate science)
    - Massive open collaborative decision support
    - Carbon accounting for corporate decision making
    - Systems analysis of sustainability in human activity systems (requires multi-level systems thinking)
    - Better understanding of the processes of social epistemology
My personal opinion is that (1) is getting to be a crowded field, which is great, but will only yield up to about 15% of the 100% reduction in carbon emissions we’re aiming for. (2) has been mapped out as part of several initiatives in the UK and US on eScience, but there’s still a huge amount to be done. (3) is pretty much a green field (no pun intended) at the moment. It’s this third area that fascinates me the most.
On the “social epistemology” front, here’s an idea for a study I’ve been kicking around for a while, about how to measure and communicate confidence in scientific theories. It’s based on the observation that people are generally not good at interpreting quantitative confidence measures (e.g., >95% confident vs. >99% confident). On the other hand, people are generally good at interpreting comparisons (e.g., A is better than B). For evidence of this, witness the poor interpretation of DNA evidence by juries, compared to how comfortable people are with interpreting complex sports ladders.
The idea is to try to communicate the confidence in politically controversial scientific theories by comparing them to familiar but non-politically controversial theories.
First, you’d need to select a set of scientific theories that are familiar to the general public, and rank-order them based on the scientific community’s confidence that these theories are correct. The set would contain a very wide range of theories, some of which are politically controversial (e.g., big bang theory, smoking causes lung cancer, vaccines cause autism, plate tectonics, anthropogenic climate change, quantum theory, HIV causes AIDS, electromagnetic fields cause cancer, IQ tests measure intelligence, asteroids killed the dinosaurs, saturated fat causes heart disease, string theory, Darwinian evolution). Poll a bunch of scientists across all fields, and ask them all to rank these in order from most confident to least confident, indicating which areas they have personal expertise in.
Once you’ve done this poll, analyze the data to see how consistent these confidence assessments are across scientists, also checking whether personal expertise in an area affects the rankings. If the rankings are reasonably consistent across scientists, then you now have an ordinal scale for communicating confidence in a scientific theory.
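To make that analysis step concrete, here’s a minimal sketch of what the consistency check could look like, using Kendall’s coefficient of concordance (W), a standard measure of agreement among multiple rankers. The data layout, the function, and the example rankings are all my own invention, purely for illustration:

```python
# Minimal sketch (invented data): measure agreement across scientists'
# rankings with Kendall's coefficient of concordance, W.
import numpy as np

def kendalls_w(rankings):
    """rankings: (m scientists x n theories) array of rank positions,
    where 1 = the theory that scientist is most confident in.
    Assumes no tied ranks."""
    m, n = rankings.shape
    rank_totals = rankings.sum(axis=0)           # total rank for each theory
    s = ((rank_totals - rank_totals.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))      # 0 = no agreement, 1 = perfect

# Three fictional scientists each ranking the same five theories:
example = np.array([
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
])
print(kendalls_w(example))  # ~0.84: fairly strong agreement
```

A high W across the full pool of respondents would support treating the rankings as a stable ordinal scale; you could also compute it separately for experts and non-experts in each area, to check whether expertise shifts the picture.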
To actually test the idea, you’d have to run some controlled psychology experiments, where you give some information to a subject either as a confidence level (e.g., “scientists are >95% certain that…”), or by making reference to other theories, and see whether that affects their interpretation.
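One plausible (purely hypothetical) way to analyze such an experiment: have each subject rate, say on a 0–100 scale, how settled they believe the science is, and compare the two presentation conditions with a non-parametric test, since there’s no reason to assume the ratings are normally distributed. The group labels and scores below are invented:

```python
# Hypothetical analysis sketch for the proposed experiment: one group saw a
# quantitative confidence level, the other saw a comparison to a familiar
# theory. Each value is a subject's 0-100 rating of how settled they believe
# the science is (all numbers invented for illustration).
from scipy.stats import mannwhitneyu

confidence_level_group = [55, 60, 48, 70, 62, 58, 65, 50]  # saw ">95% certain"
comparison_group = [72, 80, 68, 75, 78, 70, 82, 74]        # saw "as settled as plate tectonics"

# Mann-Whitney U: a non-parametric test of whether one group tends to give
# higher ratings than the other.
stat, p_value = mannwhitneyu(confidence_level_group, comparison_group,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```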
You could, in fact, do the psychology experiments without the survey, but I’m personally interested in the survey part, to see whether there is broad agreement on the relative confidence in scientific theories, especially since nobody is an expert across all fields.
So, anybody want to collaborate on this?
Lorin,
That’s an interesting idea. I like the use of rankings instead of confidence levels as a way of understanding the status of scientific theories. One of the problems is that, as you suggest, expertise in a specific field often matters a lot. Here’s one survey of scientists which shows that confidence in the theory of anthropogenic global warming drops off the further you get from climatology:
http://tigger.uic.edu/~pdoran/012009_Doran_final.pdf
Pingback: What do we want Climate Informatics Tools to do? | Serendipity
Pingback: What makes software engineering for climate models different? | Serendipity