Had an interesting conversation this afternoon with Brad Bass. Brad is a prof in the Centre for Environment at U of T, and was one of the pioneers of the use of models to explore adaptations to climate change. His agent-based simulations explore how systems react to environmental change, e.g. population balance among animals and insects, the spread of vector-borne diseases, and even the dynamics of entire cities. One of his models is Cobweb, an open-source platform for agent-based simulations.

He’s also involved in the Canadian Climate Change Scenarios Network, which takes outputs from the major climate simulation models around the world, and extracts information on the regional effects on Canada, particularly relevant for scientists who want to know about variability and extremes on a regional scale.

We also talked a lot about educating kids, and kicked around some ideas for how you could give kids simplified simulation models to play with (along the lines that Jon was exploring as a possible project), to get them doing hands-on experimentation with the effects of climate change. We might get one of our summer students to explore this idea, and Brad has promised to come talk to them in May once they start with us.
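To give a flavour of what I mean, here's a minimal sketch in Python of the kind of toy model kids could play with. The numbers are entirely made up and it has nothing to do with Cobweb's actual models; the point is just how few lines a playable "warming knob" needs:

```python
# A toy population model kids could play with (illustrative only; every
# number here is made up). Turning the `warming` knob up speeds up breeding
# but shrinks the habitat.

def simulate(warming=0.0, years=30):
    population = 50.0
    growth = 0.3 + 0.05 * warming          # breeding speeds up as it warms
    capacity = 1000.0 - 150.0 * warming    # habitat shrinks as it warms
    history = []
    for year in range(years):
        population += growth * population * (1 - population / capacity)
        history.append((year, round(population)))
    return history

for warming in (0.0, 1.0, 2.0, 3.0):
    _, final_pop = simulate(warming)[-1]
    print(f"+{warming}°C of warming -> population settles around {final_pop}")
```

Kids could change one number, re-run it, and see the population balance shift, which is exactly the kind of hands-on experimentation we were talking about.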

Oh, and Brad is also an expert on green roofs, and will be demonstrating them to grade 5 kids at the Kids World of Energy Festival.

Computer Science, as an undergraduate degree, is in trouble. Enrollments have dropped steadily throughout this decade: at U of T, for example, our enrollment is about half what it was at the peak. The same is true across the whole of North America. There is some encouraging news: enrollments picked up a little this year (after a serious recruitment drive, ours is up about 20% from its nadir, while across the US it's up 6.2%). But it's way too early to assume they will climb back up to where they were. Oh, and the percentage of women students in CS now averages 12% – the lowest ever.

What happened? One explanation is career expectations. In the 80's, it was common wisdom that a career in computers was an excellent move for anyone showing an aptitude for maths. In the 90's, with the birth of the web, computer science even became cool for a while, and enrollments grew dramatically, with a steady improvement in gender balance too. Then came the dotcom boom and bust, and suddenly a computer science degree was no longer a sure bet. I'm told by our high school liaison team that parents of high school students haven't got the message that the computer industry is short of graduates to recruit (although with the current recession that's changing again anyway).

A more likely explanation is perceived relevance. In the 80's, with the birth of the PC, and in the 90's, with the growth of the web, computer science seemed like the heart of an exciting revolution. But now that computers are ubiquitous, they're no longer particularly interesting. Kids take them for granted, and only a few über-geeks are truly interested in what's inside the box. Yet computer science departments continue to draw boundaries around computer science and its subfields in a way that just encourages the fragmentation of knowledge that is so endemic in modern universities.

Which is why an experiment at Georgia Tech is particularly interesting. The College of Computing at Georgia Tech has managed to buck the enrollment trend, with enrollment numbers holding steady throughout this decade. The explanation appears to be a radical re-design of their undergraduate degree, into a set of eight threads. For a detailed explanation, there's a white paper, but the basic aim is to get students to take more ownership of their degree programs (as opposed to waiting to be spoonfed), and to re-describe computer science in terms that make sense to the rest of the world (computer scientists often forget that the field is impenetrable to the outsider). The eight threads are: Modeling and simulation; Devices (embedded in the physical world); Theory; Information internetworks; Intelligence; Media (use of computers for more creative expression); People (human-centred design); and Platforms (computer architectures, etc). Students pick any two threads, and the program is designed so that any combination covers most of what you would expect to see in a traditional CS degree.

At first sight, it seems this is just a re-labeling effort, with the traditional subfields of CS (e.g. OS, networks, DB, HCI, AI, etc) mapping on to individual threads. But actually, it’s far more interesting than that. The threads are designed to re-contextualize knowledge. Instead of students picking from a buffet of CS courses, each thread is designed so that students see how the knowledge and skills they are developing can be applied in interesting ways. Most importantly, the threads cross many traditional disciplinary boundaries, weaving a diverse set of courses into a coherent theme, showing the students how their developing CS skills combine in intellectually stimulating ways, and preparing them for the connected thinking needed for inter-disciplinary problem solving.

For example, the People thread brings in psychology and sociology, examining the role of computers in the human activity systems that give them purpose. It explores the perceptual and cognitive abilities of people as well as design practices for practical socio-technical systems. The Modeling and Simulation thread explores how computational tools are used in a wide variety of sciences to help understand the world. Following this thread requires consideration of the epistemology of scientific knowledge, as well as mastery of the technical machinery by which we create models and simulations, and the underlying mathematics. The thread includes a big dose of both continuous and discrete math, data mining, and high performance computing. Just imagine what graduates of these two threads would be able to do for our research on SE and the climate crisis! The other thing I hope this approach will do is help students know their own strengths and passions, and be able to communicate effectively with others.

The good news is that our department decided this week to explore our own version of threads. Our aim is to learn from the experience at Georgia Tech and avoid some of the problems they have encountered (for example, by allowing every possible combination of the eight threads, they appear to have created too many constraints on timetabling and provisioning individual courses). I'll blog this initiative as it unfolds.

Okay, here's a slightly different modeling challenge. It might be more of a visualization challenge. Whatever. In part 1, I suggested we use requirements analysis techniques to identify stakeholders and stakeholder goals, and link them to the various suggested "wedges".

Here, I want to suggest something different. There are several excellent books that attempt to address the “how will we do it?” challenge. They each set out a set of suggested solutions, add up the contribution of each solution to reducing emissions, assess the feasibility of each solution, add up all the numbers, and attempt to make some strategic recommendations. But each book makes different input assumptions, focusses on slightly different kinds of solutions, and ends up with different recommendations (but they also agree on many things).

Here are the four books:

George Monbiot, Heat: How to Stop the Planet from Burning. This is probably the best book I have ever read on global warming. It's brilliantly researched, passionate, and doesn't pull its punches. Plus it's furiously upbeat – Monbiot takes on the challenge of how we get to a 90% emissions reduction, and shows that it is possible (although you kind of have to imagine a world in which politicians are willing to do the right thing).

Joseph Romm, Hell and High Water: Global Warming–the Solution and the Politics–and What We Should Do. While lacking Monbiot's compelling writing style, Romm makes up for it by being an insider – he was an energy policy wonk in the Clinton administration. The other contrast is that Monbiot is British and focusses mainly on British examples, while Romm is American and focusses on US examples. The cultural contrasts are interesting.

David MacKay, Sustainable Energy – Without the Hot Air. Okay, so I haven't read this one yet, but it got a glowing write-up on Boing Boing. Oh, and it's available as a free download.

Lester Brown, Plan B 3.0: Mobilizing to Save Civilization. This one's been on my reading list for a while; I'll read it soon. It has a much broader remit than the others: Brown wants to solve world poverty, cure disease, feed the world, and solve the climate crisis. I'm looking forward to this one. And it's also available as a free download.

Okay, so what’s the challenge? Model the set of solutions in each of these books so that it’s possible to compare and contrast their solutions, compare their assumptions, and easily identify areas of agreement and disagreement. I’ve no idea yet how to do this, but a related challenge would be to come up with compelling visualizations that explain to a much broader audience what these solutions look like, and why it’s perfectly feasible. Something like this (my current favourite graphic):

Graph of cost/benefit of climate mitigation strategies
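While I figure out how to tackle the challenge properly, here's one hypothetical way the raw material from the four books might be captured so it can be compared. This is a Python sketch of my own; the field names and the placeholder entry are mine, not taken from any of the books:

```python
from dataclasses import dataclass, field

# Hypothetical data model for comparing the books' proposals.
# Field names and the example entry are illustrative, not from the books.

@dataclass
class Solution:
    book: str                        # which book proposes it
    sector: str                      # e.g. "electricity", "transport", "buildings"
    description: str
    reduction_gtco2_per_year: float  # the book's claimed emissions reduction
    assumptions: list[str] = field(default_factory=list)

solutions = [
    Solution(
        book="Heat (Monbiot)",
        sector="buildings",
        description="Deep retrofit of existing housing stock",
        reduction_gtco2_per_year=0.0,   # placeholder: fill in from the book
        assumptions=["UK-only scope", "politically willing government"],
    ),
    # ... one entry per proposal, per book
]

# Group proposals by sector to see where the books agree and disagree.
by_sector: dict[str, list[Solution]] = {}
for s in solutions:
    by_sector.setdefault(s.sector, []).append(s)

for sector, proposals in by_sector.items():
    books = sorted({p.book for p in proposals})
    print(f"{sector}: proposed in {len(books)} of the books -> {books}")
```

Even something this crude would make it possible to line up, sector by sector, where Monbiot, Romm, MacKay and Brown agree and where they part company, and it would give the visualization challenge something concrete to work from.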

This is depressing.

We were at the bank this morning, setting up some investment plans for retirement and for the kids to go through University (In Canada-speak: RRSPs and RESPs). We started to pick out a portfolio of mutual funds into which we would be putting the investments, and our financial advisor was showing us one of the mutual funds he would recommend when I noticed the fund included a substantial investment in the Canadian Oil Sands. “No way” says I. So we went to his next pick. Same thing. And the next. And the next….

The oil sands have been described as the most destructive project on earth. They are the major reason that Canada will renege on its Kyoto treaty obligations. They will devastate a huge area of Alberta, and threaten clean water supplies and the wildlife of large parts of North America.

So, I was struck by the irony of funding the kids through University by investing in a project that will so thoroughly screw up the world in which they will have to live when they grow up.

But then I thought about it some more. Pretty much the entire middle class in Canada must have money invested in this project, if it shows up in most of the mutual funds commonly recommended for retirement and education savings plans. Most of them probably have no idea (after all, who actually looks closely at the contents of their mutual funds?) and of those that do know, most of them will prefer the high rate of return on this project because they have no real understanding of the extent of the climate crisis.

Those funds are being used to maximize the profit from the oil sands, by paying for lobbyists to fight environmental regulations, to fight caps on greenhouse gas emissions, and to fight against alternative energy initiatives (which would eat into the market for oil from the oil sands).

How on earth can we make any progress on fighting climate change when we all have a financial stake in not doing so?

We’re fucked.

As my son (grade 4) has started a module at school on climate and global change, I thought I’d look into books on climate change for kids. Here’s what I have for them at the moment:

Weird Weather by Kate Evans. This is the kids' favourite at the moment, probably because of its comic-book format. The narrative format works well – it involves the interplay between three characters: a businessman (playing the role of a denier), a scientist (who shows us the evidence) and an idealistic teenager, who gets increasingly frustrated that the businessman won't listen.

The Down-to-Earth Guide to Global Warming, by Laurie David and Cambria Gordon. Visually very appealing, with lots of interesting factoids for the kids, and particular attention to the kinds of questions kids like to ask (e.g. to do with methane from cow farts).

How We Know What We Know About Our Changing Climate by Lynne Cherry and Gary Braasch. Beautiful book (fabulous photos!), mainly focusing on sources of evidence (ice cores, tree rings, etc), and how they were discovered. Really encourages the kids to do hands-on data collection. Oh, and there's a teacher's guide as well, which I haven't looked at yet.

Global Warming for Dummies by Elizabeth May and Zoe Caron. Just what we’d expect from a “Dummies Guide…” book. I bought it because I was on my way to a bookstore on April 1, when I heard an interview on the CBC with Elizabeth May (leader of the Canadian Green Party) talking about how they were planning to reduce the carbon footprint of their next election campaign, by hitchhiking all over Canada. My first reaction was incredulity, but then I remembered the date, and giggled uncontrollably all the way into the bookstore. So I just had to buy the book.

Whom do you believe: The Cato Institute, or the Hadley Centre? Both cannot be right. Yet both claim to be backed by real scientists.

First, to get this out of the way, the latest ad from Cato has been thoroughly debunked by RealClimate, including a critical look at whether the papers that Cato cites offer any support for Cato’s position (hint: they don’t), and a quick tour through related literature. So I won’t waste my time repeating their analysis.

The Cato folks attempted to answer back, but it’s largely by attacking red herrings. However, one point from this article jumped out at me:

“The fact that a scientist does not undertake original research on subject x does not have any bearing on whether that scientist can intelligently assess the scientific evidence forwarded in a debate on subject x”.

The thrust of this argument is an attempt to bury the idea of expertise, so that the opinions of the Cato Institute's miscellaneous collection of people with PhDs can somehow be equated with those of actual experts. Now, of course it is true that a (good) scientist in another field ought to be able to understand the basics of climate science, and know how to judge the quality of the research, the methods used, and the strength of the evidence, at least at some level. But unfortunately, real expertise requires a great deal of time and effort to acquire, no matter how smart you are.

If you want to publish in a field, you have to submit yourself to the peer-review process. The process is not perfect (incorrect results often do get published, and, on occasion, fabricated results too). But one thing it does do very well is to check whether authors are keeping up to date with the literature. That means that anyone who regularly publishes in good quality journals has to keep up to date with all the latest evidence. They cannot cherry pick.

Those who don’t publish in a particular field (either because they work in an unrelated field, or because they’re not active scientists at all) don’t have this obligation. Which means when they form opinions on a field other than their own, they are likely to be based on a very patchy reading of the field, and mixed up with a lot of personal preconceptions. They can cherry pick. Unfortunately, the more respected the scientist, the worse the problem. The most venerated (e.g. prize winners) enter a world in which so many people stroke their egos, they lose touch with the boundaries of their ignorance. I know this first hand, because some members of my own department have fallen into this trap: they allow their brilliance in one field to fool them into thinking they know a lot about other fields.

Hence, given two scientists who disagree with one another, it’s a useful rule of thumb to trust the one who is publishing regularly on the topic. More importantly, if there are thousands of scientists publishing regularly in a particular field and not one of them supports a particular statement about that field, you can be damn sure it’s wrong. Which is why the IPCC reviews of the literature are right, and Cato’s adverts are bullshit.

Disclaimer: I don’t publish in the climate science literature either (it’s not my field). I’ve spent enough time hanging out with climate scientists to have a good feel for the science, but I’ll also get it wrong occasionally. If in doubt, check with a real expert.

I just spent the last two hours chewing the fat with Mark Klein at MIT and Mark Tovey at Carleton, talking about all sorts of ideas, but loosely focussed on how distributed collaborative modeling efforts can help address global change issues (e.g. climate, peak oil, sustainability).

MK has a project, Climate Interactive [update: Mark tells me I got the wrong project – it should be The Climate Collaboratorium; Climate Interactive is from a different group at MIT], which is exploring how climate simulation tools can be hooked up to discussions around decision making, which is one of the ideas we kicked around in our brainstorming sessions here.

MT has been exploring how you take ideas from distributed cognition and scale them up to much larger teams of people. He has put together a wonderful one-pager that summarizes many interesting ideas on how mass collaboration can be applied in this space.

This conversation is going to keep me going for days on stuff to explore and blog about:

And lots of interesting ideas for new projects…

In our brainstorm session yesterday, someone (Faraz?) suggested I could kick off the ICSE session with a short video. The closest thing I can think of is this:

Wake Up, Freak Out – then Get a Grip

It’s not too long, it covers the recent science very well, and it is exactly the message I want to give – climate change is serious, urgent, demands massive systemic change, but is not something we should despair over. It also comes with a full transcript with detailed references into the primary scientific literature, which is well worth a browse.

Except that it scares the heck out of me every time I watch it. Could I really show this to an ICSE audience?

One of the things that came up in our weekly brainstorming session today was the question of whether climate models can be made more modular, to permit distributed development, and distributed execution. Carolyn has already blogged about some of these ideas. Here’s a little bit of history for this topic.

First, a very old (well, 1989) paper by Kalnay et al. on Data Interchange Formats, in which they float the idea of "plug compatibility" for climate model components. For a long time, this idea seems to have been accepted as the long-term goal for the architecture of climate models. But no-one appears to have come close. In 1996, David Randall wrote an interesting introspective on how university teams can (or can't) participate in climate model building, in which he speculates that plug compatibility might not be achievable in practice because of the complexity of the physical processes being simulated, and the complex interactions between them. He also points out that all climate models (up to that point) had each been developed at a single site, and he talks a bit about why this appears to be necessarily so.
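To make the "plug compatibility" idea concrete, here's roughly the shape it would take. This is a sketch of my own in Python, not the interface from any of these papers or frameworks: each component exposes the same lifecycle calls and declares which fields it imports and exports, so a coupler can wire components together without knowing their internals.

```python
from abc import ABC, abstractmethod

# A rough illustration of "plug compatibility" for model components.
# This is NOT the ESMF or common component architecture API, just the
# general shape of the idea.

class ModelComponent(ABC):
    imports: set[str] = set()   # field names this component needs from others
    exports: set[str] = set()   # field names this component produces

    @abstractmethod
    def initialize(self, config: dict) -> None: ...

    @abstractmethod
    def run(self, inputs: dict, dt: float) -> dict:
        """Advance one coupling step; return the fields named in `exports`."""

    @abstractmethod
    def finalize(self) -> None: ...


class Coupler:
    """Wires components together by routing exported fields to importers."""

    def __init__(self, components):
        self.components = components
        self.state = {}          # shared pool of exported fields

    def step(self, dt):
        for comp in self.components:
            inputs = {k: v for k, v in self.state.items() if k in comp.imports}
            self.state.update(comp.run(inputs, dt))
```

The catch, as Randall's paper suggests, is that the hard part isn't the calling convention; it's agreeing on what sits behind those field names: grids, units, time-stepping, and the physical assumptions baked into each component.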

Fast forward to a paper by Dickinson et al. in 2002, which summarizes the results of a series of workshops on how to develop a better software infrastructure for model sharing, and talks about some prototype software frameworks. Then, a paper by Larson et al. in 2004, introducing a common component architecture for earth system models, and a bit about the Earth System Modeling Framework being developed at NCAR. And finally, Drake et al.'s Overview of the Community Climate System Model, which appears to use these frameworks very successfully.

Now, admittedly I haven't looked closely at the CCSM. But I have looked closely at the Met Office's Unified Model and the Canadian CCCma, and neither of them gets anywhere close to the ideal of modularity. In both cases, the developers have to invest months of effort to 'naturalize' code contributed from other labs, in the manner described in Randall's paper.

So, here’s the mystery. Has the CCSM really achieved the modularity that others are only dreaming of? And if so how? The key test would be how much effort it takes to ‘plug in’ a module developed elsewhere…

Well, here’s an interesting analysis of the lifecycle emissions of a computer. Turns out that computers require something like ten times their weight in fossil fuels to manufacture. Which is an order of magnitude higher than other durable goods, like cars and fridges, which only require about their own weight in fuel to manufacture. Oh, and the flat screen display accounts for the majority of it.

Okay, so I'll concede that my computer is an order of magnitude more useful to me than a fridge (which, after all, only does one thing). But it does mean that if we focus only on power consumption during use, we might be missing the biggest savings opportunities.
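A quick back-of-the-envelope calculation makes the point. All the numbers below are my own rough guesses (weight, power draw, usage pattern, fuel per kWh), not figures from the book:

```python
# Back-of-the-envelope comparison of the fuel needed to make a computer vs
# the fuel burned to power it. Every number here is an illustrative guess.

weight_kg = 25                               # assumed weight of PC + monitor
fuel_to_manufacture_kg = 10 * weight_kg      # "ten times its weight in fossil fuels"

avg_power_kw = 0.12                          # assumed average draw while on
hours_per_year = 2000                        # assumed usage pattern
lifetime_years = 4
fuel_per_kwh_kg = 0.3                        # very rough kg of fuel per kWh generated

fuel_in_use_kg = avg_power_kw * hours_per_year * lifetime_years * fuel_per_kwh_kg

print(f"to manufacture: ~{fuel_to_manufacture_kg} kg of fossil fuel")
print(f"to run for {lifetime_years} years: ~{fuel_in_use_kg:.0f} kg of fossil fuel")
```

With these made-up numbers the two come out in the same ballpark, which is exactly the point: an efficiency strategy that only looks at the electricity bill could be ignoring a comparable chunk of embodied energy.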

The analysis is from the book Computers and the Environment, edited by Kuehr and Williams, and here’s a brief review.

Well, this is a little off topic, but we (Janice, Dana, Peggy and I) have been invited to run this year’s International Advanced School of Empirical Software Engineering, in Florida in October. We’ve planned the day around the content of our book chapter on Selecting Empirical Research Methods for Software Engineering Research, which appeared in the book Guide to Advanced Empirical Software Engineering. It’s going to be a lot of fun!

There is no silver bullet for climate change, just as there’s no silver bullet for software engineering. To understand why this is, you need to understand the magnitude of the problem.

Firstly, there's the question of what a "safe" temperature rise would be. There's a broad consensus among climate scientists that a rise of around 2°C (above pre-industrial levels) is a sensible upper limit. I've asked a number of climate scientists why this particular threshold, and the answer is that above this level, scary feedback effects start to kick in, and then we're in serious trouble. If you look at the assessments from the IPCC, the lowest stabilization level they consider is 450 ppm (parts per million), but it's clear from their figures that even at this level we would overshoot the 2°C threshold. Since that report, some scientists have argued this is way too high, and that 350 ppm would be a better target. Worryingly, the last IPCC assessment was based on climate models that did not include feedback effects.

Then, there's the question of how to get there. Stabilizing at 350-450 ppm requires a reduction in greenhouse gas emissions of around 80% in industrialized nations by the year 2050. Monbiot argues that if you think in terms of a reduction per capita, you have to allow for population growth, so that really means a reduction more like 90% per person. And again, due to our uncertainty about feedback effects, the emissions targets may need to be even lower.
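The per-capita arithmetic is easy to reproduce. Here's a quick sketch; the population figures are round-number assumptions on my part, not Monbiot's exact inputs:

```python
# If total emissions must fall by `total_cut` while the population grows by a
# factor `pop_growth`, each person's share has to fall even further.
# The population figures below are round-number assumptions, not Monbiot's.

def per_capita_cut(total_cut, pop_growth):
    allowed_total = 1.0 - total_cut          # fraction of today's emissions still allowed
    allowed_per_person = allowed_total / pop_growth
    return 1.0 - allowed_per_person

pop_growth = 9.0 / 6.7                       # roughly 6.7 billion now, 9 billion by 2050
print(f"80% total cut -> {per_capita_cut(0.80, pop_growth):.0%} per person")
print(f"90% total cut -> {per_capita_cut(0.90, pop_growth):.0%} per person")
```

Population growth alone pushes the per-person figure well above the headline number; add the uncertainty about feedbacks, and 90% per person starts to look like a floor rather than a ceiling.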

How do we reduce emissions by 90% per person? The problem is that our emissions of greenhouse gases come from everything we do, and no one activity or industry dominates. I was looking for a good graphic for my ICSE talk to illustrate this point, when I came across this chart of sources of emissions:

 

World Greenhouse Gas Emissions by Sector

I think that's enough on its own to show there is not likely to be a silver bullet. The only way to solve the problem is a systemic analysis of sources of emissions, and we have to take into account a huge number of different options. If you want more detail on the figures, Jon Rynn at Grist has started to put together some spreadsheets to add up all the sources of emissions, and some specific contributors.

BTW, the IPCC’s frequently asked questions is a great primer for anyone new to the physics of climate change.

Having ranted yesterday about how discussions about our personal carbon footprints are a distraction, I now feel obliged to raise an exception. If we’re talking about how to use our software expertise to support personal footprint reduction on a grander scale, I’m all for it. Along those lines, here’s a list of ten green Internet startups – companies looking to leverage Internet technology, to support ‘greening’ initiatives. These ten were presented at a conference in SF yesterday called Green:Net09, and a panel of judges selected a winner – presumably the company most likely to succeed. The judges picked WattBot, while the audience favourite was FarmsReach.

At many discussions about the climate crisis that I’ve had with professional colleagues, the conversation inevitably turns to how we (as individuals) can make a difference by reducing our personal carbon emissions. So sure, our personal choices matter. And we shouldn’t stop thinking about them. And there is plenty of advice out there on how to green your home, and how to make good shopping decisions, and so on. Actually, there is way too much advice out there on how to live a greener life. It’s overwhelming. And plenty of it is contradictory. Which leads to two unfortunate messages: (1) we’re supposed to fix global warming through our individual personal choices and (2) this is incredibly hard because there is so much information to process to do it right.

The climate crisis is huge, and systemic. It cannot be solved through voluntary personal lifestyle choices; it needs systemic changes throughout society as a whole. As Bill McKibben says:

“the number one thing is to organize politically; number two, do some political organizing; number three, get together with your neighbors and organize; and then if you have energy left over from all of that, change the light bulb.”

Now, part of getting politically organized is getting educated. Another part is connecting with people. We computer scientists are generally not very good at political action, but we are remarkably good at inventing tools that allow people to get connected. And we’re good at inventing tools for managing, searching and visualizing information, which helps with the ‘getting educated’ part and the ‘persuading others’ part.

So, I don’t want to have more conversations about reducing our personal carbon footprints. I want to have conversations about how we can apply our expertise as computer scientists and software engineers in new and creative ways. Instead of thinking about your footprint, think about your delta (okay, I might need a better name for it): what expertise and skills do you have that most others don’t, and how can they be applied to good effect to help?

In honour of Ada Lovelace day, I decided to write a post today about Prof Julia Slingo, the new chief scientist at the UK Met Office. News of Julia’s appointment came out in the summer last year during my visit to the Met Office, coincidentally on the same day that I met her, at a workshop on the HiGEM project (where, incidentally, I saw some very cool simulations of ocean temperatures). Julia’s role at the meeting was to represent the sponsor (NERC – the UK equivalent of Canada’s NSERC), but what impressed me about her talk was both her detailed knowledge of the project, and the way she nurtured it – she’ll make a great chief scientist.

Julia’s research has focussed on tropical variability, particularly improving our understanding of the monsoons, but she’s also played a key role in earth system modeling, and especially in the exploration of high resolution models. But best of all, she’s just published a very readable account of the challenges in developing the next generation of climate models. Highly recommended for a good introduction to the state of the art in climate modeling.