I posted some initial ideas for projects for our summer students a while back. I'm pleased to say that the students have been making great progress in the last few weeks (despite, or perhaps because of, the fact that I haven't been around much). Here's what they've been up to:

Sarah Strong and Ainsley Lawson have been exploring how to take the ideas on visualizing the social network of a software development team (as embodied in tools such as Tesseract), and apply them as simple extensions to code browsers and version control tools. The aim is to see if we can add some value in the form of better awareness of who is working on related code, but without asking the scientists to adopt entirely new tools. Our initial target users are the climate scientists at the UK Met Office Hadley Centre, who currently use SVN/Trac as their code management environment.
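
To make the idea a bit more concrete, here's a rough sketch (in Python, and not the students' actual design) of the kind of repository mining such an extension could build on: pull authorship out of Subversion's log, then ask who else has been changing files near a given one. The repository URL, file path, and the directory-based notion of "related code" are all placeholders.

```python
# Rough sketch (not the students' actual design): mine Subversion's log to see
# who else has been changing code near a given file. Uses only the standard
# library plus the `svn` command line client; the repository URL and the
# directory-based notion of "related code" are placeholders.
import subprocess
import xml.etree.ElementTree as ET
from collections import defaultdict

def authors_by_path(repo_url):
    """Map each file path to the set of people who have committed changes to it."""
    log_xml = subprocess.run(
        ["svn", "log", "-v", "--xml", repo_url],
        capture_output=True, text=True, check=True).stdout
    touched = defaultdict(set)
    for entry in ET.fromstring(log_xml).iter("logentry"):
        author = entry.findtext("author", default="unknown")
        for path in entry.iter("path"):
            touched[path.text].add(author)
    return touched

def people_near(touched, my_path):
    """Who has edited files in the same directory as my_path?"""
    directory = my_path.rsplit("/", 1)[0]
    return {a for p, authors in touched.items() if p.startswith(directory)
            for a in authors}

# Example (hypothetical repository and path):
# print(people_near(authors_by_path("https://example.org/svn/model"),
#                   "/trunk/atmosphere/radiation.f90"))
```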

Brent Mombourquette has been working on a Firefox extension that will capture the browsing history as a graph (pages and traversed links), which can then be visualized, saved, annotated, and shared with others. The main idea is to support the way in which scientists search and browse for resources (e.g. published papers on a particular topic), and to allow them to recall their exploration path, so they can remember the context in which they obtained these resources. I should mention that the key idea goes all the way back to Vannevar Bush's memex.
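
To give a flavour of the underlying data model (the extension itself would of course be JavaScript running inside Firefox), here's a minimal sketch in Python, using the networkx library: pages are nodes, traversed links are edges, and annotations hang off the nodes.

```python
# Minimal sketch of the browsing-trail data model (the actual extension would
# be JavaScript inside Firefox). Pages are nodes, traversed links are edges,
# and annotations hang off the nodes. networkx is used purely for illustration.
import networkx as nx
from datetime import datetime

trail = nx.DiGraph()

def record_visit(from_url, to_url, title=""):
    """Record that the user followed a link from one page to another."""
    trail.add_node(to_url, title=title, visited=datetime.now().isoformat())
    if from_url:
        trail.add_edge(from_url, to_url)

def annotate(url, note):
    """Attach a note to a page, e.g. why this paper was relevant."""
    trail.nodes[url]["note"] = note

def save_trail(filename):
    """Save the exploration path so it can be shared or revisited later."""
    nx.write_graphml(trail, filename)

record_visit(None, "https://example.org/paper1", "A paper on data assimilation")
record_visit("https://example.org/paper1", "https://example.org/paper2")
annotate("https://example.org/paper2", "Cited by paper1; explains the method")
```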

Maria Yancheva has been exploring the whole idea of electronic lab notebooks. She has been studying the workflows used by the climate scientists when they configure and run their simulation models, and considering how a more structured form of wiki might help them. She has selected OpenWetWare as a good starting point, and is looking at how to add extensions to MediaWiki to make OWW more suitable for computational science, especially to keep track of model runs.

Samar Sabie has also been looking at MediaWiki extensions, specifically to find a way to add visualizations to wiki pages and blogs as simply as possible. The problem is that, currently, adding something as simple as a table of data to a page requires extensive work with the markup language. The long term aim is to support the insertion of dynamic visualizations (such as those at ManyEyes), but the starting point is to try to make it as ridiculously simple as possible to insert a data table, link it to a graph, and select appropriate parameters to make the graph look good, with the idea that users can subsequently change the appearance in useful ways (which means cut and paste from Excel spreadsheets won't be good enough).
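
As a taste of what "ridiculously simple" might look like, here's a sketch of just the table-generation step: take tab-separated text (what you get when you copy cells out of a spreadsheet) and emit standard MediaWiki table markup. The real extension would live inside MediaWiki, in PHP, and would also wire the table up to a chart; this sketch covers only the markup.

```python
# Sketch of the table-generation step only; the real extension would be a
# MediaWiki (PHP) extension and would also link the table to a chart. Input is
# tab-separated text, i.e. what you get when copying cells from a spreadsheet.
def tsv_to_wikitable(tsv_text, caption=""):
    """Convert tab-separated rows (first row = headers) into MediaWiki table markup."""
    rows = [line.split("\t") for line in tsv_text.strip().splitlines()]
    header, body = rows[0], rows[1:]
    markup = ['{| class="wikitable"']
    if caption:
        markup.append("|+ " + caption)
    markup.append("! " + " !! ".join(header))
    for row in body:
        markup.append("|-")
        markup.append("| " + " || ".join(row))
    markup.append("|}")
    return "\n".join(markup)

print(tsv_to_wikitable("Year\tCO2 (ppm)\n1990\t354\n2000\t369", "Example data"))
```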

Oh, and they’ve all been regularly blogging their progress, so we’re practicing the whole open notebook science thingy.

Okay, I’ve had a few days to reflect on the session on Software Engineering for the Planet that we ran at ICSE last week. First, I owe a very big thank you to everyone who helped – to Spencer for co-presenting and lots of follow up work; to my grad students, Jon, Alicia, Carolyn, and Jorge for rehearsing the material with me and suggesting many improvements, and for helping advertise and run the brainstorming session; and of course to everyone who attended and participated in the brainstorming for lots of energy, enthusiasm and positive ideas.

The first action as a result of the session was to set up a Google group, SE-for-the-planet, as a starting point for coordinating further conversations. I've posted the talk slides and brainstorming notes there. Feel free to join the group, and help us build the momentum.

Now, I’m contemplating a whole bunch of immediate action items. I welcome comments on these and any other ideas for immediate next steps:

  • Plan a follow-up workshop at a major SE conference in the fall, and another at ICSE next year (waiting a full year was considered by everyone to be too slow).
  • I should give my part of the talk at U of T in the next few weeks, and we should film it and get it up on the web.
  • Write a short white paper based on the talk, and fire it off to NSF and other funding agencies, to get funding for community building workshops.
  • Write a short challenge statement, to which researchers can respond with project ideas to bring to the next workshop.
  • Write up a vision paper based on the talk for CACM and/or IEEE Software.
  • Take the talk on the road (a la Al Gore), and offer to give it at any university that has a large software engineering research group (assuming I can come to terms with the increased personal carbon footprint 😉).
  • Broaden the talk to a more general computer science audience and repeat most of the above steps.
  • Write a short book (pamphlet) on this, to be used to introduce the topic in undergraduate CS courses, such as computers and society, project courses, etc.

Phew, that will keep me busy for the rest of the week…

Oh, and I managed to post my ICSE photos at last.

As a fan of Edward Tufte's books on the power of beautiful visualizations of qualitative and quantitative data, I'm keen on the idea of exploring new ways of visualizing the climate change challenge, in part because many key policymakers are never likely to read the detailed reports on the science, whereas a few simple, compelling graphics might capture their attention.

I like the visualizations collected by the UNEP, especially their summary of climate processes and effects, their strategic options curve, the map of political choices, the summary of emissions by sector, a guide to emissions assessment, trends in sea level rise, and CO2 emissions per capita. I should also point out that the IPCC reports are full of great graphics too, but there's no easy visual index – you have to read the reports.

Now these are all very nice, and (presumably) the work of professional graphic artists. But they're all static. The scientist in me wants to play with them. I want to play around with different scales on the axes. I want to select from among different data series. And I want to do this in a web browser that's directly linked to the data sources, so that I don't have to mess around with the data directly, nor worry about how the data is formatted.

What I have in mind is something like Gapminder. This allows you to play with the data, create new views, and share them with others. Many Eyes is similar, but goes one step further in allowing a community to create entirely new kinds of visualization, and enhance each other's, in a social networking style. Now, if I can connect some of these up to the climate data sets collected by the IPCC, all sorts of interesting things might happen. Except that the IPCC data sets don't have enough descriptive metadata for non-experts to make sense of them. But fixing that's another project.

Oh, and the periodic table of visualization methods is pretty neat as a guide to what’s possible.

Update: (via Shelly): Worldmapper is an interesting way of visualizing international comparisons.

One interesting conversation I had at SciBarCamp was about how to get science fiction writers talking more to climate scientists, so they can take the latest science and turn it into compelling stories. The idea would be to tell it like it is: instead of techno-optimism or space opera, stories set in the current century that explain what the climate crisis will really do to us.

Several people talked about the need for some more positive visions, rather than the apocalyptic stuff. So, how about a set of stories from the latter half of the 21st century, set in a world in which we won the battle? We made it to a completely carbon-neutral world. There were heroic efforts along the way by colourful individuals. There were political battles, and maybe a few bloody revolutions. But we avoided burning the trillionth tonne. The world is a little warmer, and we lost a few coastlines, but we avoided the critical thresholds that trigger runaway warming. I'd like to read stories about how we made it.

Maybe a volume of short stories?

Summer projects: I posted yesterday on social network tools for computational scientists. Greg has posted a whole list of additional suggestions.

Here, I will elaborate on another of these ideas: the electronic lab notebook. For computational scientists, wiki pages are an obvious substitute for traditional lab notebooks, because each description of an experiment can then be linked directly with the corresponding datasets, configuration files, visualizations of results, scientific papers, related experiments, etc. (In the most radical version, Open Notebook Science, the lab notebook is completely open for anyone to see. But the toolset would be the same whether it was open to anyone, or just shared with select colleagues.)

In my study of the software practices at the UK Met Office last summer, I noticed that some of the scientists carefully document each experiment via a new wiki page, but the process is laborious in a standard wiki, involving a lot of cut-and-paste to create a suitable page structure. For this reason, many scientists don’t keep good records of their experiments. An obvious improvement would be to generate a basic wiki page automatically each time a model run is configured, and populate it with information about the run, and links to the relevant data files. The scientists could then add further commentary via a standard wiki editor.
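
Here's a minimal sketch of what such an auto-generated page might look like. The field names, paths, and parameters are made up for illustration; they're not the Met Office's actual configuration format.

```python
# A minimal sketch with made-up field names and paths (not the Met Office's
# actual configuration format): generate a stub wiki page for a model run,
# leaving a section for the scientist's own commentary.
from datetime import date

def run_page(run_id, config_file, output_dir, params):
    """Return MediaWiki markup describing one model run."""
    lines = [
        f"== Model run {run_id} ==",
        f"* Date: {date.today().isoformat()}",
        f"* Configuration file: {config_file}",
        f"* Output data: {output_dir}",
        "=== Parameters ===",
    ]
    lines += [f"* {name} = {value}" for name, value in sorted(params.items())]
    lines += ["=== Notes ===", "''(add commentary on the results here)''"]
    return "\n".join(lines)

# The generated markup could be posted automatically through the wiki's API,
# or simply dropped into a new page for the scientist to edit.
print(run_page("r1234", "/runs/r1234/config.nml", "/runs/r1234/output",
               {"resolution": "N96", "run_length_years": 30}))
```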

Of course, an even better solution is to capture all information about a particular run of the model (including subsequent commentary on the results) as meta-data in the configuration file, so that no wiki pages are needed: lab notebook pages are just user-friendly views of the configuration file. I think that’s probably a longer term project, and links in with the observation that existing climate model configuration tools are hard to use anyway and need to be re-invented. Let’s leave that one aside for the moment…

A related problem is better support for navigating and linking existing lab book pages. For example, in the process of writing up a scientific paper, a scientist might need to search for the descriptions of a number of individual experiments, select some of the data, create new visualizations for use in the paper, and so on. Recording this trail would improve reproducibility, by capturing the necessary links to source data in case the visualizations used in the paper need to be altered or recreated. Some of this requires a detailed analysis of the specific workflows used in a particular lab (which reminds me I need to write up what I know of the Met Office's workflows), but I think some of it can be achieved by simple generic tools (e.g. browser plugins) that help capture the trail as it happens, and perhaps edit and annotate it afterwards.

I’m sure some of these tools must exist already, but I don’t know of them. Feel free to send me pointers…

This summer, we have a group of undergrad students working with us, who will try building some of the tools we have identified as potentially useful for climate scientists. We’re just getting started this week, so it’s not clear what we’ll actually build yet, but I think I can guarantee we’ll end up with one of two outcomes: either we build something that is genuinely useful, or we learn a lot about what doesn’t work and why not.

Here's the first project idea. It responds to the observation that large climate models (and indeed any large-scale scientific simulation) undergo continuous evolution, as a variety of scientists contribute code over a long period of time (decades, in some cases). There is no well-defined specification for the system, nor do the scientists even know ahead of time exactly what the software should do. Coordinating contributions to this code then becomes a problem. If you want to make a change to some particular routine, it can be hard to know who else is working on related code, what potential impacts your change might have, and sometimes it is hard even to know who to go and ask about these things – who's the expert?

A similar problem occurs in many other types of software project, and there is a fascinating line of research that exploits the social network to visualize how the efforts of different people interact. It draws on work in sociology on social network analysis – basically the idea that you can treat a large group of people and their social interactions as a graph, which can then be visualized in interesting ways, and analyzed for its structural properties, to identify things like distance (as in six degrees of separation), and structural cohesion. For software engineering purposes, we can automatically construct two distinct graphs:

  1. A graph of social interactions (e.g. who talks to whom). This can be constructed by extracting records of electronic communication from the project database – email records, bug reports, bulletin boards, etc. Of course, this misses verbal interactions, which makes it more suitable for geographically distributed projects, but there are ways of adding some of this missing information if needed (e.g. if we can mine people’s calendars, meeting agendas, etc).
  2. A graph of code dependencies (which bits of code are related). This can include simply which routines call which other routines. More interestingly, it can include information such as which bits of code were checked into the repository at the same time by the same person, which bits of code are linked to the same bug report, etc.

Comparing these two graphs offers insight into socio-technical congruence – how well the social network (who talks to whom) matches the technical dependencies in the code. This then leads to all sorts of interesting ideas for tools.
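
To make the congruence idea concrete, here's a toy sketch (much simplified from the published measures): if two people own pieces of code that depend on each other, they ought to be talking; congruence is then the fraction of such pairs that actually appear in the communication graph.

```python
# A toy sketch of the congruence measure, much simplified: if two people own
# code that depends on each other, they ought to be communicating; congruence
# is the fraction of such pairs that actually do. The inputs (file owners,
# dependencies, communication pairs) would come from the two mined graphs above.
def coordination_needs(owners, dependencies):
    """Pairs of people who own files that depend on each other."""
    needs = set()
    for f1, f2 in dependencies:               # e.g. call graph or co-change links
        for a in owners.get(f1, set()):
            for b in owners.get(f2, set()):
                if a != b:
                    needs.add(frozenset((a, b)))
    return needs

def congruence(needs, communicates):
    """communicates: set of frozenset pairs mined from email, Trac tickets, etc."""
    return len(needs & communicates) / len(needs) if needs else 1.0

owners = {"radiation.f90": {"alice"}, "clouds.f90": {"bob", "carol"}}
needs = coordination_needs(owners, [("radiation.f90", "clouds.f90")])
print(congruence(needs, {frozenset(("alice", "bob"))}))  # 0.5: alice & carol never talk
```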

For added difficulty, we have to assume that our target users (climate scientists) are programming in Fortran, and are not using integrated development environments, although we can assume they have good version control tools (e.g. Subversion) and good bug tracking tools (e.g. Trac).

I’m going to SciBarCamp this Saturday. The theme is open science, although we’re free to interpret that as broadly as possible. So here’s my pitch for a session:

Climate change is the biggest challenge ever faced by humanity. In the last two years, it has become clear that climate change is accelerating, outpacing the IPCC's 2007 assessment. The paleontological record shows that the planet is "twitchy", with a number of tipping points at which feedback effects kick in, taking the planet to a dramatically different climate, which would have disastrous impacts on the human population. Some climate scientists think we've already hit some of these tipping points. However, the best available data suggests that if we can stop the growth of carbon emissions within the next five years, and then aggressively reduce them to zero over the next few decades, we stand a good chance of averting the worst effects of runaway warming.

It's now clear that we can't tackle this through volunteerism. Asking people to change their lightbulbs and turn off unnecessary appliances is nothing but a distraction: it conceals the real scale of the problem. We need a systematic rethinking of how energy is produced and used throughout society. We need urgent government action on emissions regulation and energy pricing. We need a massive investment in R&D on zero emissions technology (but through an open science initiative, rather than a closed, centralized Manhattan Project style effort). We need a massive R&D effort into how to adapt to those climate changes that we cannot now avoid: on a warmer planet, we will need to completely rethink food production, water management, disease control, population migration, urban planning, etc. And we will need to understand the potential impacts of the large scale geo-engineering projects that might buy us more time. We need an "all of the above" solution.

Put simply, we'll need all the brainpower that the planet has to offer to figure out how to meet this challenge. We'll need scientists and engineers from every discipline to come to the table, and figure out where their particular skills and experience can be most useful. We'll need to break out of our disciplinary straitjackets, and engage in new interdisciplinary and problem-oriented research programs, to help us understand this new world, and how we might survive in it.

Governments are beginning to recognize the scale of the problem, and are starting to devote research funding to address it. It’s too little, and too late, but it’s a start. This funding is likely to grow substantially over the next few years, depending on how quickly politicians grasp the scale and urgency of the problem. But, as scientists, we shouldn’t wait for governments to get it. We need to get together now, to help explain the science to policymakers and to the public, and to start the new research programmes that will fill the gaps in our current knowledge.

So, here’s what I would like to discuss:

  • How do we get started?
  • How can we secure funding and institutional support for this?
  • How can professional scientists redirect their research efforts to this (and how does this affect the career scientist)?
  • How can scientists from different disciplines identify where their expertise might be needed and identify opportunities to get involved?
  • How can we foster the necessary inter-disciplinary links and open data sharing?
  • What barriers exist, and how can they be overcome?

After that massive burst of liveblogging at the EGU, I took a week off from blogging. Which gave me time to reflect on the whole blogging experience, and what I want this blog to be. Some thoughts:

  • When I started this blog, I set myself the goal of writing something every (work) day. It’s been very good discipline: the act of writing stuff down on the blog helps me firm up my thinking, and means I have something to show at the end of each day – even if it’s just a couple of paragraphs. I wish I’d had this when I did my PhD.
  • I'm also using the blog to keep track of web links and published papers that I find interesting. For this alone, the blog is worth its weight in gold. (I used to write notes down on paper, but I found I would never look at them again!). I also find I'm keeping a long list of unpublished posts around for this too – I start a post when I find an interesting link, and a few weeks later, when I have something interesting to say about it, I finish it off and post it. Sometimes I save it until I have enough related material to make a post on a cluster of related items (usually involving a serendipitous relationship!). And some things seem to stay in my "unpublished post" stack forever, but at least I know where they are if I ever need them.
  • The blog turns out to be a great way of capturing and sharing ideas at conferences. I especially like it when people I talk to then go on to blog about some of the ideas later – it opens up the discussion in ways that otherwise aren’t possible.
  • I also like it when my students blog about their research ideas, especially when they’re not so sure about something. It helps me to get a good sense of where they’re at, and where I might be able to help with advice.
  • Liveblogging a conference was brilliant and crazy. It kept me focussed during talks, but perhaps too much so – after all, the main point of a conference is really the face-to-face discussions between talks. Finishing off my posts eats into the start of the coffee break, which definitely gets in the way of this. I need to find a better balance, but I do like the record I now have of all the ideas & links I encountered.

But there’s a bunch of stuff I don’t like, mainly to do with the linear structure of a blog. I miss having traditional navigation tools like an index and a contents list. The categories and tags are nice, but don’t really help me find the older material easily. If I want the posts to be accessible as an archive, I’ll need to impose some more organization on them. Many bloggers set up their blogs with no clear indication of who they are, and no easy way to browse their blogs other than scrolling through the linear sequence. And I still find it laborious to put weblinks into a blog post (drag’n’drop would be nice).

Finally, blogging is time consuming. Several people have told me this is why they don't blog. But actually, this doesn't seem to be an issue for me – each blog post represents a small chunk of research that I would do anyway; the only difference is that now I'm sharing my notes in the blog, rather than keeping them to myself. One of the hardest parts of doing research is that it's very easy to let the "playing with ideas" part get endlessly encroached on by things that have short term deadlines. The discipline of blogging daily means I don't let this happen.

Had an interesting conversation this afternoon with Brad Bass. Brad is a prof in the Centre for Environment at U of T, and was one of the pioneers of the use of models to explore adaptations to climate change. His agent based simulations explore how systems react to environmental change, e.g. exploring population balance among animals, insects, the growth of vector-borne diseases, and even entire cities. One of his models is Cobweb, an open-source platform for agent-based simulations. 

He’s also involved in the Canadian Climate Change Scenarios Network, which takes outputs from the major climate simulation models around the world, and extracts information on the regional effects on Canada, particularly relevant for scientists who want to know about variability and extremes on a regional scale.

We also talked a lot about educating kids, and kicked around some ideas for how you could give kids simplified simulation models to play with (along the line that Jon was exploring as a possible project), to get them doing hands on experimentation with the effects of climate change. We might get one of our summer students to explore this idea, and Brad has promised to come talk to them in May once they start with us.

Oh, and Brad is also an expert on green roofs, and will be demonstrating them to grade 5 kids at the Kids World of Energy Festival.

Computer Science, as an undergraduate degree, is in trouble. Enrollments have dropped steadily throughout this decade: for example at U of T, our enrollment is about half what it was at the peak. The same is true across the whole of North America. There is some encouraging news: enrollments picked up a little this year (after a serious recruitment drive, ours is up about 20% from its nadir, while across the US it's up 6.2%). But it's way too early to assume they will climb back up to where they were. Oh, and the percentage of women students in CS now averages 12% – the lowest ever.

What happened? One explanation is career expectations. In the 80's, it was common wisdom that a career in computers was an excellent move for anyone showing an aptitude for maths. In the 90's, with the birth of the web, computer science even became cool for a while, and enrollments grew dramatically, with a steady improvement in gender balance too. Then came the dotcom boom and bust, and suddenly a computer science degree was no longer a sure bet. I'm told by our high school liaison team that parents of high school students haven't got the message that the computer industry is short of graduates to recruit (although with the current recession that's changing again anyway).

A more likely explanation is perceived relevance. In the 80's, with the birth of the PC, and in the 90's, with the growth of the web, computer science seemed like the heart of an exciting revolution. But now that computers are ubiquitous, they're no longer particularly interesting. Kids take them for granted, and only a few über-geeks are truly interested in what's inside the box. But computer science departments continue to draw boundaries around computer science and its subfields in a way that just encourages the fragmentation of knowledge that is so endemic in modern universities.

Which is why an experiment at Georgia Tech is particularly interesting. The College of Computing at Georgia Tech has managed to buck the enrollment trend, with enrollment numbers holding steady throughout this decade. The explanation appears to be a radical re-design of their undergraduate degree into a set of eight threads. For a detailed explanation, there's a white paper, but the basic aim is to get students to take more ownership of their degree programs (as opposed to waiting to be spoonfed), and to re-describe computer science in terms that make sense to the rest of the world (computer scientists often forget that the field is impenetrable to the outsider). The eight threads are: Modeling and simulation; Devices (embedded in the physical world); Theory; Information internetworks; Intelligence; Media (use of computers for more creative expression); People (human-centred design); and Platforms (computer architectures, etc). Students pick any two threads, and the program is designed so that any combination covers most of what you would expect to see in a traditional CS degree.

At first sight, it seems this is just a re-labeling effort, with the traditional subfields of CS (e.g. OS, networks, DB, HCI, AI, etc) mapping on to individual threads. But actually, it’s far more interesting than that. The threads are designed to re-contextualize knowledge. Instead of students picking from a buffet of CS courses, each thread is designed so that students see how the knowledge and skills they are developing can be applied in interesting ways. Most importantly, the threads cross many traditional disciplinary boundaries, weaving a diverse set of courses into a coherent theme, showing the students how their developing CS skills combine in intellectually stimulating ways, and preparing them for the connected thinking needed for inter-disciplinary problem solving.

For example, the People thread brings in psychology and sociology, examining the role of computers in the human activity systems that give them purpose. It explores the perceptual and cognitive abilities of people, as well as design practices for practical socio-technical systems. The Modeling and Simulation thread explores how computational tools are used in a wide variety of sciences to help understand the world. Following this thread will require consideration of the epistemology of scientific knowledge, as well as mastery of the technical machinery by which we create models and simulations, and the underlying mathematics. The thread includes a big dose of both continuous and discrete math, data mining, and high performance computing. Just imagine what graduates of these two threads would be able to do for our research on SE and the climate crisis! The other thing I hope it will do is to help students to know their own strengths and passions, and be able to communicate effectively with others.

The good news is that our department decided this week to explore our own version of threads. Our aim is to learn from the experience at Georgia Tech and avoid some of the problems they have experienced (for example, by allowing every possible combination of the 8 threads, it appears they have created too many constraints on timetabling and provisioning individual courses). I'll blog this initiative as it unfolds.

Okay, here’s a slightly different modeling challenge. It might be more of a visualization challenge. Whatever. In part 1, I suggested we use requirements analysis techniques to identify stakeholders, and stakeholder goals, and link them to the various suggested “wedges“.

Here, I want to suggest something different. There are several excellent books that attempt to address the "how will we do it?" challenge. They each set out a set of suggested solutions, add up the contribution of each solution to reducing emissions, assess the feasibility of each solution, add up all the numbers, and attempt to make some strategic recommendations. But each book makes different input assumptions, focusses on slightly different kinds of solutions, and ends up with different recommendations (though they also agree on many things).

Here are the four books:

George Monbiot, Heat: How to Stop the Planet from Burning. This is probably the best book I have ever read on global warming. It's brilliantly researched, passionate, and doesn't pull its punches. Plus it's furiously upbeat – Monbiot takes on the challenge of how we get to 90% emissions reduction, and shows that it is possible (although you kind of have to imagine a world in which politicians are willing to do the right thing).

Joseph Romm, Hell and High Water: Global Warming – the Solution and the Politics – and What We Should Do. While lacking Monbiot's compelling writing style, Romm makes up for it by being an insider – he was an energy policy wonk in the Clinton administration. The other contrast is that Monbiot is British, and focusses mainly on British examples, while Romm is American and focusses on US examples. The cultural contrasts are interesting.

David MacKay, Sustainable Energy – Without the Hot Air. Okay, so I haven't read this one yet, but it got a glowing write-up on Boing Boing. Oh, and it's available as a free download.

Lester Brown, Plan B 3.0: Mobilizing to Save Civilization. This one's been on my reading list for a while; I'll read it soon. It has a much broader remit than the others: Brown wants to solve world poverty, cure disease, feed the world, and solve the climate crisis. I'm looking forward to this one. And it's also available as a free download.

Okay, so what's the challenge? Model the set of solutions in each of these books so that it's possible to compare and contrast them, compare their assumptions, and easily identify areas of agreement and disagreement. I've no idea yet how to do this, but a related challenge would be to come up with compelling visualizations that explain to a much broader audience what these solutions look like, and why they're perfectly feasible. Something like this (my current favourite graphic):

Graph of cost/benefit of climate mitigation strategies
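
For what it's worth, here's a sketch of the kind of data model that might support such a comparison; the entries and numbers below are placeholders, not figures taken from the books.

```python
# Not an analysis, just a sketch of a data model for lining the books up
# against each other. The example entries and numbers are placeholders, not
# figures from the books themselves.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class Solution:
    name: str
    sector: str                      # e.g. electricity, transport, buildings
    share_of_cuts: float             # claimed share of the required reductions (%)
    assumptions: list = field(default_factory=list)
    source: str = ""                 # which book proposes it

solutions = [
    Solution("Offshore wind expansion", "electricity", 10.0,
             ["costs fall with scale"], "Monbiot, Heat"),
    Solution("Concentrating solar power", "electricity", 8.0,
             ["long-distance transmission"], "MacKay, Without the Hot Air"),
]

# Group proposals by sector so agreements and disagreements across the books
# stand out; a visualization could be built on top of the same structure.
by_sector = defaultdict(list)
for s in solutions:
    by_sector[s.sector].append((s.source, s.name, s.share_of_cuts))

for sector, proposals in by_sector.items():
    print(sector, proposals)
```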

I just spent the last two hours chewing the fat with Mark Klein at MIT and Mark Tovey at Carleton, talking about all sorts of ideas, but loosely focussed on how distributed collaborative modeling efforts can help address global change issues (e.g. climate, peak oil, sustainability).

MK has a project, Climate Interactive [update: Mark tells me I got the wrong project – it should be The Climate Collaboratorium; Climate Interactive is from a different group at MIT], which is exploring how climate simulation tools can be hooked up to discussions around decision making – one of the ideas we kicked around in our brainstorming sessions here.

MT has been exploring how you take ideas from distributed cognition and scale them up to much larger teams of people. He has put together a wonderful one-pager that summarizes many interesting ideas on how mass collaboration can be applied in this space.

This conversation is going to keep me going for days on stuff to explore and blog about:

And lots of interesting ideas for new projects…

At many discussions about the climate crisis that I’ve had with professional colleagues, the conversation inevitably turns to how we (as individuals) can make a difference by reducing our personal carbon emissions. So sure, our personal choices matter. And we shouldn’t stop thinking about them. And there is plenty of advice out there on how to green your home, and how to make good shopping decisions, and so on. Actually, there is way too much advice out there on how to live a greener life. It’s overwhelming. And plenty of it is contradictory. Which leads to two unfortunate messages: (1) we’re supposed to fix global warming through our individual personal choices and (2) this is incredibly hard because there is so much information to process to do it right.

The climate crisis is huge, and systemic. It cannot be solved through voluntary personal lifestyle choices; it needs systemic changes throughout society as a whole. As Bill McKibben says:

“the number one thing is to organize politically; number two, do some political organizing; number three, get together with your neighbors and organize; and then if you have energy left over from all of that, change the light bulb.”

Now, part of getting politically organized is getting educated. Another part is connecting with people. We computer scientists are generally not very good at political action, but we are remarkably good at inventing tools that allow people to get connected. And we’re good at inventing tools for managing, searching and visualizing information, which helps with the ‘getting educated’ part and the ‘persuading others’ part.

So, I don’t want to have more conversations about reducing our personal carbon footprints. I want to have conversations about how we can apply our expertise as computer scientists and software engineers in new and creative ways. Instead of thinking about your footprint, think about your delta (okay, I might need a better name for it): what expertise and skills do you have that most others don’t, and how can they be applied to good effect to help?

A group of us at the lab, led by Jon Pipitone, has been meeting every Tuesday lunchtime (well almost every Tuesday) for a few months, to brainstorm ideas for how software engineers can contribute to addressing the climate crisis. Jon has been blogging some of our sessions (here, here and here).

This week we attempted to create a matrix, where the rows are “challenge problems” related to the climate crisis, and the columns are the various research areas of software engineering (e.g. requirements analysis, formal methods, testing, etc…). One reason to do this is to figure out how to run a structured brainstorming session with a bigger set of SE researchers (e.g. at ICSE). Having sketched out the matrix, we then attempted to populate one row with ideas for research projects. I thought the exercise went remarkably well. One thing I took away from it was that it was pretty easy to think up research projects to populate many of the cells in the matrix (I had initially thought the matrix might be rather sparse by the time we were done).

We also decided that it would be helpful to characterize each of the rows a little more, so that SE researchers who are unfamiliar with some of the challenges would understand each challenge enough to stimulate some interesting discussions. So, here is an initial list of challenges (I added some links where I could). Note that I've grouped them according to who the immediate audience is for any tools, techniques, and practices.

  1. Help the climate scientists to develop a better understanding of climate processes.
  2. Help the educators to teach kids about climate science – how the science is done, and how we know what we know about climate change.
    • Support hands-on computational science (e.g. an online climate lab with building blocks to support construction of simple simulation models)
    • Global warming games
  3. Help the journalists & science writers to raise awareness of the issues around climate change for a broader audience.
    • Better public understanding of climate processes
    • Better public understanding of how climate science works
    • Visualizations of complex earth systems
    • Connect data generators (e.g. scientists) with potential users (e.g. bloggers)
  4. Help the policymakers to design, implement and adjust a comprehensive set of policies for reducing greenhouse gas emissions.
  5. Help the political activists who put pressure on governments to change their policies, or to get better leaders elected when the current ones don’t act.
    • Social networking tools for activists
    • Tools for persuasion (e.g. visualizations) and community building (e.g. Essence)
  6. Help individuals and communities to lower their carbon footprints.
  7. Help the engineers who are developing new technologies for renewable energy and energy efficiency systems.
    • green IT
    • Smart energy grids
    • waste reduction
    • renewable energy
    • town planning
    • green buildings/architecture
    • transportation systems (better public transit, electric cars, etc)
    • etc