There’s a fascinating piece up this week on The Grid on how to make Toronto a better city. They asked a whole bunch of prominent people for ideas, each in no more than 200 words. The ideas didn’t have to be practical; they just had to make us think. Some of them are wacky, some are brilliant, and some are both. My favourites are:

  • Give people alternative ways to pay their dues: instead of taxes, struggling artists could donate public art, for example (Seema Jethalal)
  • Hold a blackout holiday twice a year, to mimic the sense of connectedness we all got when the power grid went down in 2003 (Carlyle Jansen)
  • Use ranked ballots for all municipal elections (Dave Meslin)
  • Banish all outdoor commercial ads (Sean Martindale)
  • Ban parking on all main streets (Chris Hume)
  • Build a free wireless internet via decentralized network sharing (Jesse Hirsh)
  • Make the TTC (our public transit) free (David Mirvish)

Better yet, they asked for more suggestions from readers. Here are mine:

Safe bike routes to schools. Every school should be connected to a network of safe bike paths for kids. Unlike the city’s current bike network, these paths should avoid main roads as much as possible: bike lanes on main roads are not safe for kids. Instead they should run along residential streets, parks, and marginal spaces, and physically separate bikes from all vehicle traffic. These routes should provide uninterrupted links from sheltered bike parking at each school all the way through the residential neighbourhoods the school serves. They should also form a larger network linking each school with neighbouring schools, for families whose kids attend different local schools, or who use services (e.g. pools) at other local schools.

Advantages: kids get exercise biking to school, gain some independence from their parents, and become better connected with their environment. Traffic congestion and pollution at school drop-off and pick-up times would drop. To build such a network, we would have to sacrifice some on-street parking on residential streets. However, a complete network of such bike paths could become a safer alternative to the current bike lanes on main streets, freeing up space there.

and:

Car-free blocks on streetcar routes. On each streetcar route through the city, select individual blocks (i.e. stretches between adjacent cross-streets) at several points along the route, and close these stretches to all other motorized traffic. Such blocks would allow only pedestrians, bikes and streetcars, and the sidewalks would be extended for use as patios by cafes and restaurants. Delivery vehicles would be the one exception, perhaps permitted only at certain times of day.

The aim is to discourage other traffic from using the streets our streetcars run on as major commuting corridors through the city, and to speed up the flow of streetcars. The blocks selected for pedestrianization would be those that already have a lively street life, with existing cafes and so on. Such blocks would become desirable destinations for shoppers, diners and tourists.

I’ve been working for the past couple of months with the Cities Centre and the U of T Sustainability Office to put together a symposium on sustainability, where we pose the question “What role should the University of Toronto play in the broader challenge of building a sustainable city?”. We now finally have all the details in place:

  • An evening lecture on June 13, 6pm to 9pm, at Innis Town Hall, featuring Bob Willard, author of “The Sustainability Advantage”; Tanzeel Merchant, of the Ontario Growth Secretariat and Heritage Toronto; and John Robinson, Director of the UBC Sustainability Initiative and the Royal Canadian Geographical Society’s Canadian Environmental Scientist of the Year.
  • A full-day visioning workshop on June 14, 9am to 5pm, in the Debates Room, Hart House. With a mix of speakers and working-group sessions, the goal will be to map out a vision for sustainability at U of T that brings together research, teaching and operations at the University, and explores how we can use the University as a “living lab” to investigate challenges in urban sustainability.

And it’s free. Register here!

On my trip to Queen’s University last week, I participated in a panel session on the role of social media in research. I pointed out that tools like Twitter provide a natural extension to the kinds of conversations we usually only get to have at conferences – the casual interactions with other researchers that sometimes lead to new research questions and collaborations.

So, with a little help from Storify, here’s an example…

In which we see an example of how Twitter can enable interesting science, and understand a little about the role of existing social networks in getting science done.

At the CMIP5 workshop earlier this week, one of Ed Hawkins’ charts caught my eye, because he changed how we look at model runs. We’re used to seeing climate models used to explore the range of likely global temperature responses under different future emissions scenarios, and the results presented as a graph of changing temperature over time. For example, this iconic figure from the last IPCC assessment report (click for the original figure and caption at the IPCC site):

These graphs tend to focus too much on the mean temperature response in each scenario (where ‘mean’ means ‘the multi-model mean’). I tend to think the variance is more interesting – both within each scenario (showing how the various CMIP3 models differ on the same scenario), and across the different scenarios (showing how our future is likely to be affected by the energy choices implicit in each scenario). A few months ago, I blogged about the analysis that Hawkins and Sutton did of these variabilities, to explore how the different sources of uncertainty change as you move from the near term to the long term. The analysis shows that in the first few decades, the differences between the models dominate (which doesn’t bode well for decadal forecasting – the models are all over the place). But by the end of the century, the differences between the emissions scenarios dominate (i.e. the spread of projections across the different scenarios is significantly bigger than the disagreements between models). Ed presented an update of this analysis for the CMIP5 models this week, which looks very similar.
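As a rough illustration of the kind of partitioning Hawkins and Sutton do, here’s a minimal sketch in Python. The array of temperature anomalies is entirely made up (a stand-in for real CMIP output), and the simple standard-deviation split is only a caricature of their method:

```python
import numpy as np

# Illustrative only: a fake ensemble of global-mean temperature anomalies,
# indexed as temps[model, scenario, decade]. Real analyses use CMIP output.
rng = np.random.default_rng(42)
n_models, n_scenarios, n_decades = 20, 3, 10

trend = np.linspace(0.2, 2.0, n_decades)                           # common warming signal
scenario_offset = np.array([0.0, 0.6, 1.2])[:, None] * np.linspace(0.0, 1.0, n_decades)
model_bias = rng.normal(0.0, 0.3, size=(n_models, 1, 1))           # fixed per-model offset
noise = rng.normal(0.0, 0.1, size=(n_models, n_scenarios, n_decades))
temps = trend + scenario_offset + model_bias + noise

# Model uncertainty: spread across models within each scenario, averaged over scenarios.
model_spread = temps.std(axis=0).mean(axis=0)

# Scenario uncertainty: spread of the multi-model means across scenarios.
scenario_spread = temps.mean(axis=0).std(axis=0)

for d in range(n_decades):
    print(f"decade {d}: model spread {model_spread[d]:.2f} K, "
          f"scenario spread {scenario_spread[d]:.2f} K")
```

With these made-up numbers, the model spread stays roughly constant while the scenario spread grows and eventually dominates, which is the qualitative shape of the Hawkins and Sutton result.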

But here’s the new thing that caught my eye: Ed included a graph of temperature responses tipped on its side, to answer a different question: how soon will the global temperature exceed the policymakers’ adopted “dangerous” threshold of 2°C, under each emissions scenario? And, again, how big is the uncertainty? This idea was used in a paper last year by Joshi et al., entitled “Projections of when temperature change will exceed 2 °C above pre-industrial levels”. Here’s their figure 1:

Figure 1 from Joshi et al., 2011

By putting the dates on the Y-axis and the temperatures on the X-axis, and cutting off the graph at 2°C, we get a whole new perspective on what the model runs are telling us. For example, it’s now easy to see that in all these scenarios we pass the 2°C threshold well before the end of the century (a point the IPCC graph above completely obscures), and that under the higher emissions scenarios we reach 3°C by the end of the century.
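Here’s a minimal matplotlib sketch of that presentation trick, with temperature on the X-axis, year on the Y-axis, and the plot truncated at the 2°C threshold. The scenario names, warming rates, and “model runs” below are invented purely for illustration; they are not the Joshi et al. data:

```python
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2000, 2101)
rng = np.random.default_rng(1)

def fake_runs(rate_per_decade, n_runs=10):
    """Invented warming trajectories: a linear trend plus a small random walk."""
    trend = rate_per_decade * (years - 2000) / 10.0
    wiggles = rng.normal(0.0, 0.02, size=(n_runs, years.size)).cumsum(axis=1)
    return trend + wiggles

fig, ax = plt.subplots()
for name, rate, colour in [("low scenario", 0.25, "tab:blue"),
                           ("high scenario", 0.45, "tab:red")]:
    for i, run in enumerate(fake_runs(rate)):
        ax.plot(run, years, color=colour, alpha=0.3,      # temperature on x, year on y
                label=name if i == 0 else None)
ax.axvline(2.0, color="k", linestyle="--", label="2°C threshold")
ax.set_xlim(0.0, 2.0)                                     # cut the graph off at 2°C
ax.set_xlabel("Global temperature anomaly (°C)")
ax.set_ylabel("Year")
ax.legend(loc="lower right")
plt.show()
```

Reading up each bundle of curves to where it hits the dashed line immediately answers “when do we cross 2°C?”, which is exactly the question the conventional temperature-vs-time plot buries.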

A wonderful example of how much difference the choice of presentation makes. I guess I should mention, however, that the idea of a 2°C threshold is completely arbitrary. I’ve asked many different scientists where the idea came from, and they all suggest it’s something the policymakers dreamt up, rather than anything arising out of scientific analysis. The full story is available in Randalls, 2011, “History of the 2°C climate target”.

In the talk I gave this week at the workshop on the CMIP5 experiments, I argued that we should do a better job of explaining how climate science works, especially the day-to-day business of working with models and data. I think we have a widespread problem that people outside of climate science have the wrong mental models about what a climate scientist does. As with any science, the day-to-day work might appear to be chaotic, with scientists dealing with the daily frustrations of working with large, messy datasets, having instruments and models not work the way they’re supposed to, and of course, the occasional mistake that you only discover after months of work. This doesn’t map onto the mental model that many non-scientists have of “how science should be done”, because the view presented in school, and in the media, is that science is about nicely packaged facts. In reality, it’s a messy process of frustrations, dead-end paths, and incremental progress exploring the available evidence.

Some climate scientists I’ve chatted to are nervous about exposing more of this messy day-to-day work. They already feel under constant attack, and worry that allowing the public to peer under the lid (or, if you prefer, to see inside the sausage factory) will only diminish people’s respect for the science. I take the opposite view – the more we present the science as a set of nicely polished results, the more potential there is for the credibility of the science to be undermined when people do manage to peek under the lid (e.g. by publishing internal emails). I think it’s vitally important that we work to clear away some of the incorrect mental models people have of how science is (or should be) done, and give people a better appreciation for how our confidence in scientific results gradually emerges from a slow, messy, collaborative process.

Giving people a better appreciation of how science is done would also help to overcome the games of ping-pong you get in the media, where each new result in a published paper is presented as a startling new discovery, overturning previous research, and (preferably, if you’re in the business of selling newspapers) overturning an entire field. In fact, it’s normal for new published results to turn out to be wrong, and much of the interesting work in science lies in reconciling apparently contradictory findings.

The problem is that these incorrect mental models of how science is done are often well entrenched, and the best that we can do is to try to chip away at them, by explaining at every opportunity what scientists actually do. For example, here’s a mental model I’ve encountered from time to time about how climate scientists build models to address the kinds of questions policymakers ask about the need for different kinds of climate policy:

This view suggests that scientists respond to a specific policy question by designing and building software models (preferably testing that the model satisfies its specification), and then running the model to answer the question. This is not the only (or even the most common?) layperson’s view of climate modelling, but the point is that there are many incorrect mental models of how climate models are developed and used, and one of the things we should strive to do is dislodge some of these by doing a better job of explaining the process.

With respect to climate model development, I’ve written before about how models slowly advance through a process that roughly mimics the traditional view of “the scientific method” (I should acknowledge, for the philosophy of science buffs, that there really isn’t a single, “correct” scientific method, but let’s keep that discussion for another day). So here’s how I characterize the day-to-day work of developing a model:

Most of the effort is spent identifying and diagnosing where the weaknesses in the current model are, and looking for ways to improve them. Each possible improvement then becomes an experiment, in which the experimental hypothesis might look like:

“if I change <piece of code> in <routine>, I expect it to have <specific impact on model error> in <output variable> by <expected margin>, because of <tentative theory about climatic processes and how they’re represented in the model>”

The previous version of the model acts as a control, and the modified model is the experimental condition.
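In software terms, the evaluation step of each such experiment boils down to comparing an error metric for the control and experimental runs against observations. Here’s a minimal sketch; the fields, the choice of metric, and the numbers are all placeholders, not any centre’s actual evaluation suite:

```python
import numpy as np

def rmse(model_field, obs_field):
    """Root-mean-square error of a model field against observations."""
    return np.sqrt(np.mean((model_field - obs_field) ** 2))

# Placeholder fields: in practice these would be e.g. seasonal-mean precipitation
# on a lat-lon grid, loaded from the control run, the modified run, and observations.
rng = np.random.default_rng(7)
obs = rng.normal(3.0, 1.0, size=(96, 192))
control = obs + rng.normal(0.4, 0.5, size=obs.shape)      # current model: biased
experiment = obs + rng.normal(0.1, 0.5, size=obs.shape)   # candidate code change

improvement = rmse(control, obs) - rmse(experiment, obs)
print(f"control RMSE:    {rmse(control, obs):.3f}")
print(f"experiment RMSE: {rmse(experiment, obs):.3f}")
print(f"improvement:     {improvement:+.3f}  (the hypothesis predicts this is positive)")
```

If the measured improvement matches the expected margin, the change is a candidate for inclusion in the next model version; if not, the tentative theory behind it needs revisiting.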

But of course, this process isn’t just a random walk – it’s guided at the next level up by a number of influences, because the broader climate science community (and to some extent the meteorological community) is doing all sorts of related research, which in turn influences model development. In the paper we wrote about the software development processes at the UK Met Office, we portrayed it like this:

But I could go even broader, and place this within a context in which a number of longer-term observational campaigns (“process studies”) are collecting new types of observational data to investigate climate processes that are still poorly understood. This involves the interaction of several distinct communities. Christian Jakob portrays it like this:

The point of Jakob’s paper, though, is that the modelling and process-studies communities don’t currently do enough of this kind of interaction, so there’s room for improvement in how the modelling influences the kinds of process studies needed, and in how the results from process studies feed back into model development.

So, how else should we be explaining the day-to-day work of climate scientists?

I’m attending a workshop this week in which some of the initial results from the Fifth Coupled Model Intercomparison Project (CMIP5) will be presented. CMIP5 will form a key part of the next IPCC assessment report – it’s a coordinated set of experiments on the global climate models built by labs around the world. The experiments include hindcasts to compare model skill on pre-industrial and 20th Century climate, projections into the future for 100 and 300 years, shorter term decadal projections, paleoclimate studies, plus lots of other experiments that probe specific processes in the models. (For more explanation, see the post I wrote on the design of the experiments for CMIP5 back in September).

I’ve been looking at some of the data for the past CMIP exercises. CMIP1 originally consisted of one experiment – a control run with fixed forcings. The idea was to compare how each of the models simulates a stable climate. CMIP2 included two experiments, a control run like CMIP1, and a climate change scenario in which CO2 levels were increased by 1% per year. CMIP3 then built on these projects with a much broader set of experiments, and formed a key input to the IPCC Fourth Assessment Report.

There was no CMIP4: the numbering was resynchronised to match the IPCC report numbers, so CMIP5 will feed into the fifth assessment report (also, there was already a thing called the Coupled Carbon Cycle Climate Model Intercomparison Project, nicknamed C4MIP, so it’s probably just as well!).

So here’s what I have found so far on the vital statistics of each project. Feel free to correct my numbers and help me to fill in the gaps!

|                              | CMIP1 (1996 onwards) | CMIP2 (1997 onwards) | CMIP3 (2005-2006) | CMIP5 (2010-2011) |
|------------------------------|----------------------|----------------------|-------------------|-------------------|
| Number of Experiments        | 1                    | 2                    | 12                | 110               |
| Centres Participating        | 16                   | 18                   | 15                | 24                |
| # of Distinct Models         | 19                   | 24                   | 21                | 45                |
| # of Runs (Models × Expts)   | 19                   | 48                   | 211               | 841               |
| Total Dataset Size           | 1 Gigabyte           | 500 Gigabytes        | 36 Terabytes      | 3.3 Petabytes     |
| Total Downloads from archive | ??                   | ??                   | 1.2 Petabytes     |                   |
| Number of Papers Published   |                      | 47                   | 595               |                   |
| Users                        | ??                   | ??                   | 6700              |                   |

[Update:] I’ve added a row for the number of runs, i.e. the sum of the number of experiments run on each model (in CMIP3 and CMIP5, centres were able to pick a subset of the experiments to run, so you can’t just multiply models by experiments to get the number of runs). I also ought to calculate the total number of simulated years that represents (if a centre did all the CMIP5 experiments, I figure it would amount to at least 12,000 simulated years).
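To give a flavour of that calculation, here’s a back-of-envelope sketch. The experiment names, run lengths and ensemble sizes below are rough placeholders, not the official CMIP5 protocol, so the total it prints is only illustrative:

```python
# Back-of-envelope tally of simulated years for one centre. The experiment
# names, run lengths and ensemble sizes are illustrative placeholders only.
experiments = [
    # (name, simulated years per run, ensemble members)
    ("pre-industrial control",        500, 1),
    ("20th century hindcast",         150, 3),
    ("century-scale projections",     100, 4),
    ("extended projections",          300, 1),
    ("decadal hindcasts/forecasts",    10, 30),
    ("paleoclimate runs",            1000, 2),
]

total_years = sum(length * members for _, length, members in experiments)
print(f"total simulated years: {total_years}")    # 3950 with these placeholder numbers
```

The real tally would use the full CMIP5 experiment list, which is why it comes out several times larger.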

Oh, one more datapoint from this week: we came up with an estimate that by 2020, each individual experiment will generate an exabyte of data. I’ll explain how we got this number once we’ve given the calculations a more thorough check.

As today is the deadline for proposing sessions for the AGU fall meeting in December, we’ve submitted a proposal for a session to explore open climate modeling and software quality. If we get the go ahead for the session, we’ll be soliciting abstracts over the summer. I’m hoping we’ll get a lively session going with lots of different perspectives.

I especially want to cover the difficulties of openness as well as the benefits, as we often hear a lot of idealistic talk about how open science would make everything so much better. While I think we should always strive to be more open, it’s not a panacea. There’s evidence that open source software isn’t necessarily better quality, and of course, there are plenty of people using lack of openness as a political weapon, without acknowledging just how many hard technical problems have to be solved along the way, not least because there’s a lack of consensus over the meaning of openness among its advocates.

Anyway, here’s our session proposal:

TITLE: Climate modeling in an open, transparent world

AUTHORS (FIRST NAME INITIAL LAST NAME): D. A. Randall1, S. M. Easterbrook4, V. Balaji2, M. Vertenstein3

INSTITUTIONS (ALL): 1. Atmospheric Science, Colorado State University, Fort Collins, CO, United States. 2. Geophysical Fluid Dynamics Laboratory, Princeton, NJ, United States. 3. National Center for Atmospheric Research, Boulder, CO, United States. 4. Computer Science, University of Toronto, Toronto, ON, Canada.

Description: This session deals with climate-model software quality and transparent publication of model descriptions, software, and results. The models are based on physical theories but implemented as software systems that must be kept bug-free, readable, and efficient as they evolve with climate science. How do open source and community-based development affect software quality? What are the roles of publication and peer review of the scientific and computational designs in journals or other curated online venues? Should codes and datasets be linked to journal articles? What changes in journal submission standards and infrastructure are needed to support this? We invite submissions including experience reports, case studies, and visions of the future.

This week, I’m featuring some of the best blog posts written by the students on my first year undergraduate course, PMU199 Climate Change: Software, Science and Society. This post is by Harry, and it first appeared on the course blog on January 29.

Projections from global climate models indicate that continued 21st-century increases in greenhouse gas emissions will cause the temperature of the globe to rise by a few degrees. A global change of a few degrees could have a huge impact on our planet. Just as a few degrees of global cooling could lead to another ice age, a few degrees of global warming means the world will witness more of one of nature’s most terrifying phenomena: lightning.

According to Anthony D. Del Genio, as the surface of the earth heats up from sunlight and other thermal radiation, the energy it accumulates must be offset to maintain a stable temperature. Our planet does this by evaporating water, which condenses as it rises with buoyant warm air, carrying excess heat from the surface to higher altitudes. In powerful updrafts, the evaporated water droplets rise easily and become supercooled, to temperatures between -10 and -40°C. Collisions between these water droplets and soft ice crystals form a dense mixture of ice pellets called graupel. The densities of graupel and ice crystals, and the electrical charges they induce, are the two essential factors in producing what we see as lightning.

Differences in updrafts over ocean and land also drive differences in lightning frequency. Over the course of the day, heat is absorbed by the oceans, which therefore hardly warm up. Land surfaces, on the other hand, cannot store heat in the same way, and so they warm significantly from the beginning of the day. The air above land surfaces is thus warmer and more buoyant than that over the oceans, creating strong convective storms as the warm air rises. The powerful updrafts in these convective storms are more prone to generate lightning.

In experiments with the Goddard Institute for Space Studies general circulation model, a 4.2°C global warming produced a 30% increase in global lightning activity, while a second experiment showed that a 5.9°C global cooling would cause a 24% decrease in global lightning frequency. That works out to roughly 7% per °C in the first experiment and 4% per °C in the second, so taken together the experiments suggest a 5-6% change in global lightning frequency for every 1°C of global warming or cooling.

If 21st-century projections of carbon dioxide and other greenhouse gas emissions hold true, the earth will continue to warm and the oceans will evaporate more water. Because the drier land surface cannot evaporate water to the same extent as the oceans, the land warms more. This should create stronger convective storms and produce more frequent lightning.

Greater lightning frequency can, in turn, contribute to a warmer earth. Lightning provides an abundant source of nitrogen oxides, which are precursors for ozone production in the troposphere. Ozone in the upper troposphere acts as a greenhouse gas, absorbing some of the infrared energy emitted by the earth. Because tropospheric ozone traps some of this escaping heat, the earth warms further and lightning becomes even more frequent: lightning thus creates a positive feedback in the climate system. Molecule for molecule, ozone has a much stronger impact on the climate than carbon dioxide, with a radiative forcing effect approximately 1,000 times as powerful. Luckily, tropospheric ozone is far less prevalent than carbon dioxide on a global scale, and its atmospheric lifetime averages only 22 days.

"Climate simulations, which were generated from four Global General Circulation Models (GCM), were used to project forest fire danger levels with relation to global warming."

Lightning occurs more frequently all around the world, but its effects are felt on a very local scale, and it is this local effect that has the most impact on people. During a thunderstorm, an increase in lightning frequency puts areas with a high concentration of trees at high risk of forest fire. In Canada, the west-central and north-western woodland areas are major targets for ignition by lightning; in fact, lightning accounted for 85% of the total area burned from 1959-1999. To preserve habitats for animals, and forests for their function as a carbon sink, strong pressure must be put on governments to minimize forest fires in these regions. Under 21st-century estimates of increased temperature, that 85% figure could translate into dramatically larger areas of forest burned, because as temperatures rise, surfaces dry out, producing more “fuel” for the fires.

Although lightning has negative effects on our climate system and on people, it also has positive effects on the earth and on life. The ozone layer, located in the upper atmosphere, prevents ultraviolet light from reaching the earth’s surface. Lightning also drives a natural process known as nitrogen fixation, which plays a fundamental role for life because fixed nitrogen is required to construct the basic building blocks of life (e.g. nucleotides for DNA and amino acids for proteins).

Lightning is an amazing natural occurrence in our skies. Whether it’s a sight to behold or to fear, we’ll see more of it as our earth becomes warmer.

This week, I’m featuring some of the best blog posts written by the students on my first year undergraduate course, PMU199 Climate Change: Software, Science and Society. The first is by Terry, and it first appeared on the course blog on January 28.

A couple of weeks ago, Professor Steve was talking about the extra energy that we are adding to the earth system, during one of our sessions (and on his blog). He showed us this chart from the last IPCC report in 2007, which summarizes the radiative forcings from different sources:

Notice how aerosols account for most of the negative radiative forcing. But what are aerosols? What is their direct effect, what is their contribution to the cloud albedo effect, and do they have any other impacts?


Ever since I wrote about peak oil last year, I’ve been collecting references to “Peak X”. Of course, the key idea, Hubbert’s Peak, applies to the extraction of any finite resource. So it’s not surprising that Wikipedia now has entries on:

And here’s a sighting of a mention of Peak Gold.

Unlike peak oil, some of these curves can be dampened by appropriate recycling. But what about stuff we normally think of as endlessly renewable:

  • Peak Water – it turns out that we haven’t been managing the world’s aquifers and lakes sustainably, despite the fact that that’s where our freshwater supplies come from (See Peter Gleick’s 2010 paper for a diagnosis and possible solutions)
  • Peak Food – similarly, global agriculture appears to be unsustainable, partly because food policy and speculation have wrecked local sustainable farming practices, but also because of population growth (See Jonathan Foley’s 2011 paper for a diagnosis and possible solutions).
  • Peak Fish – although overfishing is probably old news to everyone now.
  • Peak Biodiversity (although here it’s referred to as Peak Nature, which I think is sloppy terminology)

Which also leads to pressure on specific things we really care about, such as:

Then there is a category of things that really do need to peak:

And just in case there’s too much doom and gloom in the above, there are some more humorous contributions:

And those middle two, by the wonderful Randall Munroe, make me wonder what he was doing here:

I can’t decide whether that last one is just making fun of the singularity folks, or whether it’s a clever ruse to get people to realize that Hubbert’s Peak must kick in somewhere!

I was talking with Eric Yu yesterday about a project to use goal modeling as a way of organizing the available knowledge on how to solve a specific problem, and we thought that geo-engineering would make an interesting candidate to try this out on: it’s controversial, a number of approaches have been proposed, many competing claims are made for them, and it’s hard to sort through those claims.

So, I thought I would gather together the various resources I have on geo-engineering:

Introductory Overviews:

Short commentaries:

Books:

Specific studies / papers:

Sometime in May, I’ll be running a new graduate course, DGC 2003 Systems Thinking for Global Problems. The course will be part of the Dynamics of Global Change graduate program, a cross-disciplinary program run by the Munk School of Global Affairs.

Here’s my draft description of the course:

The dynamics of global change are complex, and demand new ways of conceptualizing and analyzing inter-relationships between multiple global systems. In this course, we will explore the role of systems thinking as a conceptual toolkit for studying the inter-relationships between problems such as globalization, climate change, energy, health & wellbeing, and food security. The course will explore the roots of systems thinking, for example in General Systems Theory, developed by Ludwig von Bertalanffy to study biological systems, and in Cybernetics, developed by Norbert Wiener to explore feedback and control in living organisms, machines, and organizations. We will trace this intellectual history to recent efforts to understand planetary boundaries, tipping points in the behaviour of global dynamics, and societal resilience. We will explore the philosophical roots of systems thinking as a counterpoint to the reductionism used widely across the natural sciences, and look at how well it supports multiple perspectives, trans-disciplinary synthesis, and computational modeling of global dynamics. Throughout the course, we will use global climate change as a central case study, and apply systems thinking to study how climate change interacts with many other pressing global challenges.

I’m planning to get the students to think about issues such as the principle of complementarity, second-order cybernetics, and of course how to understand the dynamics of non-linear systems, and the idea of leverage points. We’ll take a quick look at how earth system models work, but not in any detail, because it’s not intended to be a physics or computing course; I’m expecting most of the students to be from political science, education, etc.

The hard part will be picking a good core text. I’m leaning towards Donella Meadows’s book, Thinking in Systems, although I just received my copy of the awesome book Systems Thinkers, by Magnus Ramage and Karen Shipp (I’m proud to report that Magnus was once a student of mine!).

Anyway, suggestions for material to cover, books & papers to include, etc. are most welcome.

Our paper on defect density analysis of climate models is now out for review at the journal Geoscientific Model Development (GMD). GMD is an open review / open access journal, which means the review process is publicly available (anyone can see the submitted paper, the reviews it receives during the process, and the authors’ response). If the paper is eventually accepted, the final version will also be freely available.

The way this works at GMD is that the paper is first published to Geoscientific Model Development Discussions (GMDD) as an un-reviewed manuscript. The interactive discussion is then open for a fixed period (in this case, 2 months). At that point the editors will make a final accept/reject decision, and, if accepted, the paper is then published to GMD itself. During the interactive discussion period, anyone can post comments on the paper, although in practice, discussion papers often only get comments from the expert reviewers commissioned by the editors.

One of the things I enjoy about the peer-review process is that a good, careful review can help improve the final paper immensely. As I’ve never submitted before to a journal that uses an open review process, I’m curious to see how the open reviewing will help – I suspect (and hope!) it will tend to make reviewers more constructive.

Anyway, here’s the paper. As it’s open review, anyone can read it and make comments (click the title to get to the review site):

Assessing climate model software quality: a defect density analysis of three models

J. Pipitone and S. Easterbrook
Department of Computer Science, University of Toronto, Canada

Abstract. A climate model is an executable theory of the climate; the model encapsulates climatological theories in software so that they can be simulated and their implications investigated. Thus, in order to trust a climate model one must trust that the software it is built from is built correctly. Our study explores the nature of software quality in the context of climate modelling. We performed an analysis of defect reports and defect fixes in several versions of leading global climate models by collecting defect data from bug tracking systems and version control repository comments. We found that the climate models all have very low defect densities compared to well-known, similarly sized open-source projects. We discuss the implications of our findings for the assessment of climate model software trustworthiness.
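The core metric in the paper is defect density, conventionally measured as defects per thousand lines of code (KLOC). Here’s a minimal sketch of the calculation; the project names and counts are placeholders for illustration, not figures from the paper:

```python
# Defect density = reported defects per thousand source lines of code (KSLOC).
# The project names and counts below are placeholders, not figures from the paper.
def defect_density(defect_count: int, sloc: int) -> float:
    """Defects per thousand source lines of code."""
    return defect_count / (sloc / 1000.0)

projects = {
    "climate model A":       (250, 400_000),   # (defects logged, source lines of code)
    "open-source project B": (9000, 500_000),
}

for name, (defects, sloc) in projects.items():
    print(f"{name}: {defect_density(defects, sloc):.2f} defects/KSLOC")
```

The interesting (and hard) part, of course, is deciding what counts as a defect when all you have is bug trackers and version-control comments, which is much of what the paper discusses.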

I was talking to the folks at our campus sustainability office recently, and they were extolling the virtues of Green Revolving Funds. The idea is ridiculously simple, but turns out to be an important weapon in making sure that the savings from energy efficiency don’t just disappear back into the black hole of university operational budgets. Once the fund is set up, it provides money for the capital costs of energy efficiency projects, so that they don’t have to compete with other kinds of projects for scarce capital. The money saved through reduced utility bills is then ploughed back into the fund to support more such projects. And the beauty of the arrangement is that you don’t have to go through endless bureaucracy to get each new project going. According to Wikipedia, this arrangement is increasingly common on university campuses in the US and Canada.
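Here’s a toy simulation of the mechanism, just to show why the fund is self-replenishing. All the dollar figures are made up:

```python
# Toy green-revolving-fund simulation. All numbers are made up; the point is
# only to show how utility savings flow back in and keep funding new projects.
fund = 1_000_000          # initial capital ($)
project_cost = 250_000    # capital cost of one energy-efficiency retrofit ($)
annual_saving = 50_000    # utility savings per completed project ($/year)
completed = 0

for year in range(1, 11):
    while fund >= project_cost:        # fund as many projects as the balance allows
        fund -= project_cost
        completed += 1
    fund += completed * annual_saving  # savings from all completed projects return to the fund
    print(f"year {year:2d}: {completed:2d} projects completed, balance ${fund:,.0f}")
```

Even with these arbitrary numbers, the balance keeps recovering each year, so new retrofits get funded indefinitely without going back to the capital budget.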

So I’m delighted to see the Toronto District School Board (TDSB) is proposing a revolving fund for this too. Here’s the motion to be put to the TDSB’s Operations and Facilities Management Committee next week. Note the numbers in there about savings already realised:

Motion – Environmental Legacy Fund

Whereas in February 2010, the Board approved the Go Green: Climate Change Action Plan that includes the establishment of an Environmental Advisory Committee and the Environmental Legacy Fund;

Whereas the current balance of the Environmental Legacy Fund includes revenues from the sale of carbon credits accrued through energy efficiency projects and from Feed-in-Tariff payments accruing from nine Ministry-funded solar electricity projects;

Whereas energy efficiency retrofit projects completed since 1990/91 have resulted in an estimated 33.9% reduction in greenhouse gas emissions to date and lowered the TDSB’s annual operating costs significantly, saving the Board $22.43 million in 2010/11 alone; and

Whereas significant energy efficiency and renewable energy opportunities remain available to the TDSB which can provide robust annual operational savings, new revenue streams as well as other benefits including increasing the comfort and health of the Board’s learning spaces;

Therefore, the Environmental Advisory Committee recommends that:

  1. The Environmental Legacy Fund be utilized in a manner that advances projects that directly and indirectly reduce the Board’s greenhouse gas (GHG) emissions and lower the TDSB’s long-term operating costs;
  2. The Environmental Legacy Fund be structured and operated as a revolving fund;
  3. The Environmental Legacy Fund be replenished and augmented from energy cost savings achieved, incentives and grant revenues secured for energy retrofit projects and renewable energy projects, and an appropriate level of renewable energy revenue as determined by the Board;
  4. The TDSB establish criteria for how and how much of the Environmental Legacy Fund can be used to advance environmental initiatives that have demonstrated GHG reduction benefits but may not provide a short-term financial return and opportunity for replenishing the Fund.
  5. To ensure transparency and document success, the Board issue an annual financial statement on the Environmental Legacy Fund, along with a report on the energy and GHG savings attributable to projects financed by the Fund.

The 12th Annual Weblog (Bloggies) awards shortlists are out. This year, they have merged the old categories of “Best Science Weblog” and “Best Computer or Technology Weblog” into a single category, “Best Science or Technology Weblog”. And the five candidates on the shortlist? Four technology blogs and one rabid anti-science blog.

Not that this award ever had any track record for being able to distinguish science from pseudo-science; the award is legendary for vote-stuffing. But this year it has sunk to new depths.