One of the things that strikes me about discussions of climate change, especially from those who dismiss it as relatively harmless, is a widespread lack of understanding of how non-linear systems behave. Indeed, this seems to be one of the key characteristics separating those who are alarmed at the prospect of a warming climate from those who are not.

At the AGU meeting this month, Kerry Emanuel presented a great example of this in his talk on “Hurricanes in a Warming Climate”. I only caught his talk by chance, as I was slipping out of the session in the next room, but I’m glad I did, because he made an important point about how we think about the impacts of climate change, and in particular, showed two graphs that illustrate the point beautifully.

Kerry’s talk was an overview of a new study that estimates changes in damage from tropical cyclones with climate change, using a new integrated assessment model. The results are reported in detail in a working paper at the World Bank. The report points out that the link between hurricanes and climate change remains controversial. So, while Atlantic hurricane power has more than doubled over the last 30 years, and model forecasts show an increase in the average intensity of hurricanes in a warmer world, there is still no clear statistical evidence of a trend in damages caused by these storms, and hence a great deal of uncertainty about future trends.

The analysis is complicated by several factors:

  • Increasing insurance claims from hurricane damage in the US have a lot to do with growing economic activity in vulnerable regions. Indeed, expected economic development in the regions subject to tropical storm damage means there are certain to be big increases in damage even if there were no warming at all.
  • The damage is determined more by when and where each storm makes landfall than it is by the intensity of the storm.
  • There simply isn’t enough data to detect trends. More than half of the economic damage due to hurricanes in the US since 1870 was caused by just 8 storms.

The new study by Emanuel and colleagues overcomes some of these difficulties by simulating large numbers of storms. They took the outputs of four different Global Climate Models, using the A1B emissions scenario, and fed them into a cyclone generator model to simulate thousands of storms, comparing the characteristics of these storms with those that have caused damage in the US in the last few decades, and then adjusting the damage estimates according to anticipated changes in population and economic activity in the areas impacted (for details, see the report).
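
To make the structure of that approach concrete, here’s a toy sketch of the two key steps (all numbers and distributions are my own invention for illustration; the real method is in the working paper): draw synthetic storm damages from a heavy-tailed distribution, then scale them for projected growth in exposure.

```python
import random

# Toy sketch only: numbers and distributions invented for illustration,
# not taken from Emanuel et al. Step 1: draw synthetic storm damages from
# a heavy-tailed distribution. Step 2: scale for growth in exposure.

def synthetic_damage(tail_shape=1.5, scale=1e8):
    """One synthetic storm's damage in USD: most storms are cheap,
    a few are catastrophic (Pareto-like tail)."""
    return scale * random.paretovariate(tail_shape)

def exposure_adjusted(damage, annual_growth=0.03, years=90):
    """Scale damage for anticipated economic growth in vulnerable regions."""
    return damage * (1 + annual_growth) ** years

random.seed(1)
damages = sorted(exposure_adjusted(synthetic_damage()) for _ in range(10_000))
total = sum(damages)
print(f"median storm: ${damages[len(damages) // 2]:,.0f}")
print(f"share of total damage from the 10 biggest storms: "
      f"{sum(damages[-10:]) / total:.0%}")
```

Even this crude version reproduces the qualitative point above: a handful of storms account for a large share of the total damage.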

The first thing to note is that the models forecast only a small change in hurricanes, typically a slight decrease in medium-strength storms and a slight increase in more intense storms. For example, at first sight, the MIROC model indicates almost no difference:

Probability density for storm damage on the US East Coast, generated from the MIROC model for the current climate vs. the year 2100, under the A1B scenario, for which this model forecasts a global average temperature increase of around 4.5°C. Note that the x-axis is logarithmic: 8 means $100 million, 9 means $1 billion, 10 means $10 billion, etc. (source: Figure 9 in Mendelsohn et al, 2011)

Note particularly that at the peak of the graph, the model shows a very slight reduction in the number of storms (consistent with a slight decrease in the overall frequency of hurricanes), while on the upper tail, the model shows a very slight increase (consistent with a forecast that there’ll be more of the most intense storms). The other three models show slightly bigger changes by the year 2100, but overall, the graphs seem very comforting. It looks like we don’t have much to worry about (at least as far as hurricane damage from climate change is concerned). Right?

The problem is that the long tail is where all the action is. The good news is that there appears to be a fundamental limit on storm intensity, so the tail doesn’t really get much longer. But the problem is that it only takes a few more of these very intense storms to make a big difference in the amount of damage caused. Here’s what you get if you multiply the probability by the damage in the above graph:

Changing risk of hurricane damage due to climate change. Calculated as probability times impact. (Source: courtesy of K. Emanuel, from his AGU 2011 talk)

That tiny change in the long tail generates a massive change in the risk, because the system is non-linear. If most of the damage is done by a few very intense storms, then you only need a few more of them to greatly increase the damage. Note, in particular, what happens at 12 on the damage scale – these are trillion-dollar storms. [Update: Kerry points out that the total hurricane damage is proportional to the area under the curves of the second graph].
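
A back-of-the-envelope version of that multiplication makes the effect obvious. The probabilities below are made up purely to illustrate the arithmetic, but they mimic the shape of the graphs: a tiny shift of probability into the top log-damage bins swamps everything else.

```python
# Probabilities invented purely to illustrate the arithmetic of the
# graphs above. Expected damage per storm = sum(p * damage) over bins,
# where bins are on the log10 damage scale (8 = $100M, 12 = $1T).
bins = [8, 9, 10, 11, 12]
p_current = [0.50, 0.30, 0.15, 0.04, 0.01]
p_warmer  = [0.48, 0.30, 0.15, 0.05, 0.02]  # tiny shift into the tail

def expected_damage(probs):
    return sum(p * 10 ** b for p, b in zip(probs, bins))

print(f"current: ${expected_damage(p_current):,.0f}")  # ~$15.9 billion
print(f"warmer:  ${expected_damage(p_warmer):,.0f}")   # ~$26.8 billion
```

Shifting just one percentage point of probability into each of the top two bins nearly doubles the expected damage per storm, even though the two probability curves would look almost indistinguishable when plotted.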

The key observation here is that the things that matter most to people (e.g. storm damage) do not change linearly as the climate changes. That’s why people who understand non-linear systems tend to worry much more about climate change than people who do not.

Here’s the call for papers for a workshop we’re organizing at ICSE next June:

The First International Workshop on Green and Sustainable Software (GREENS’2012)

(In conjunction with the 34th International Conference on Software Engineering (ICSE 2012), Zurich, Switzerland, June 2-9, 2012)

Important Dates:

  • 17th February 2012 – paper submission
  • 19th March 2012 – notification of acceptance
  • 29th March 2012 – camera-ready
  • 3rd June 2012 – workshop

Workshop theme and goals: The focus of the GREENS workshop is the engineering of green and sustainable software. Our goal is to bring together academics and practitioners to discuss research initiatives, challenges, ideas, and results in this critically important area of the software industry. To this end, GREENS will both discuss the state of the practice, especially at the industrial level, and define a roadmap, both for academic research and for technology transfer to industry. GREENS seeks contributions addressing, but not limited to, the following topics:

Concepts and foundations:

  • Definition of sustainability properties (e.g. energy and power consumption, greenhouse gas emissions, waste and pollutant production), their relationships, their units of measure, their measurement procedures in the context of software-intensive systems, and their relationships with other properties (e.g. response time, latency, cost, maintainability);
  • Green architectural knowledge, green IT strategies and design patterns;

Greening domain-specific software systems:

  • Energy-awareness in mobile software development;
  • Scalability of mobile software systems in low-power situations;
  • Energy-efficient techniques aimed at optimizing battery consumption;
  • Large and ultra-large scale green information systems design and development (including inter-organizational effects)

Greening of IT systems, data and web centers:

  • Methods and approaches to improve sustainability of existing software systems;
  • Customer co-creation strategies to motivate behavior changes;
  • Virtualization and offloading;
  • Green policies, green labels, green metrics, key indicators for sustainability and energy efficiency;
  • Data center and storage optimization;
  • Analysis, assessment, and refactoring of source code to improve energy efficiency;
  • Workload balancing;
  • Lifecycle extension;

Greening the process:

  • Methods to design and develop greener software systems;
  • Managerial and technical risks for a sustainable modernization;
  • Quality & risk assessments, tradeoff analyses between energy efficiency, sustainability and traditional quality requirements;

Case studies, industry experience reports and empirical studies:

  • Empirical data and analysis about sustainability properties, at various granularity levels: complete infrastructure, or nodes of the infrastructure (PCs, servers, and mobile devices);
  • Studies to define technical and economic models of green aspects;
  • Return on investment of greening projects, reasoning about the triple bottom line of people, planet and profits;
  • Models of energy and power consumption, at various granularity levels;
  • Benchmarking of power consumption in software applications;

Guidelines for Submission: We are soliciting papers in two distinct categories:

  1. Research papers describing innovative and significant original research in the field (maximum 8 pages);
  2. Industrial papers describing industrial experience, case studies, challenges, problems and solutions (maximum 8 pages).

Please submit your paper online through EasyChair (see the GREENS website). Submissions should be original and unpublished work. Each submitted paper will undergo a rigorous review process by three members of the Program Committee. All types of papers must conform to the ICSE submission format and guidelines. All accepted papers will appear in the ACM Digital Library.

Workshop Organizers:

  • Patricia Lago (VU University Amsterdam, The Netherlands)
  • Rick Kazman (University of Hawaii, USA)
  • Niklaus Meyer (Green IT SIG, Swiss Informatics Society, Switzerland)
  • Maurizio Morisio (Politecnico di Torino, Italy)
  • Hausi A. Mueller (University of Victoria, Canada)
  • Frances Paulisch (Siemens Corporate Technology, Germany)
  • Giuseppe Scanniello (Università della Basilicata, Italy)
  • Olaf Zimmermann (IBM Research, Zurich, Switzerland)

Program committee:

  • Marco Aiello, University of Groningen, Netherlands
  • Luca Ardito, Politecnico di Torino, Italy
  • Ioannis Athanasiadis, Democritus Univ. of Thrace, Greece
  • Rami Bahsoon, University College London, UK
  • Ivica Crnkovic, Mälardalen University, Sweden
  • Steve Easterbrook, University of Toronto, Canada
  • Hakan Erdogmus, Things Software
  • Anthony Finkelstein, University College London, UK
  • Matthias Galster, University of Groningen, Netherlands
  • Ian Gorton, Pacific Northwest National Laboratory, USA
  • Qing Gu, VU University Amsterdam, Netherlands
  • Wolfgang Lohmann, Informatics and Sustainability Research, Swiss Federal Laboratories for Materials Science and Technology, Switzerland
  • Lin Liu, School of Software, Tsinghua University, China
  • Alessandro Marchetto, Fondazione Bruno Kessler, Italy
  • Henry Muccini, University of L’Aquila, Italy
  • Stefan Naumann, Trier University of Applied Sciences, Environmental Campus, Germany
  • Cesare Pautasso, University of Lugano, Switzerland
  • Barbara Pernici, Politecnico di Milano, Italy
  • Giuseppe Procaccianti, Politecnico di Torino, Italy
  • Filippo Ricca, University of Genova, Italy
  • Antony Tang, Swinburne University of Tech., Australia
  • Antonio Vetrò, Fraunhofer IESE, USA
  • Joost Visser, Software Improvement Group and Knowledge Network Green Software, Netherlands
  • Andrea Zisman, City University London, UK

A number of sessions at the AGU meeting this week discussed projects to improve climate literacy among different audiences:

  • The Climate Literacy and Energy Awareness Network (CLEANET) is developing concept maps for use in middle school and high school, along with a large set of pointers to educational resources on climate and energy for use in the classroom.
  • The Climate Literacy Zoo Education Network (CliZEN). Michael Mann (of Hockey Stick Fame) talked about this project, which was a rather nice uplifting change from hearing about his experiences with political attacks on his work. This is a pilot effort, currently involving ten zoos, mainly in the north east US. So far, they have completed a visitor survey across a network of zoos, plus some aquaria, exploring the views of visitors on climate change, using the categories of the Six Americas report. The data they have collected show that zoo visitors tend to be skewed more towards the “alarmed” category than the general US population. [Incidentally, I’m impressed with their sample size: 3,558 responses. The original Six Americas study had only 981, and most surveys in my field have much smaller sample sizes]. The next steps in the project are to build on this audience analysis to put together targeted information and education material that links what we know about climate change with its impact on specific animals at the zoos (especially polar animals).
  • The Climate Interpreter Project. Bill Spitzer from the New England Aquarium talked about this project. Bill points out that aquaria (and museums, zoos, etc.) have an important role to play, because people come for the experience, which must be enjoyable, but they do expect to learn something, and they trust museums and zoos to provide them with accurate information. This project focusses on the role of interpreters and volunteers, who are important because they tend to be more passionate, more knowledgeable, and come into contact with many people. But many interpreters are not yet comfortable talking about issues around climate change. They need help and training. Interpretation isn’t just transmission of information: it’s about translating science in a way that’s meaningful and resonates with an audience, and it requires a systems perspective. The strategy adopted by this project is to begin with audience research, to understand people’s interests and passions; connect this with the cognitive and social sciences on how people learn and how they make sense of what they’re hearing; and finally to make use of strategic framing, which gets away from the ‘crisis’ frame that dominates most news reporting (on crime, disasters, fires) – a frame that tends to leave people feeling overwhelmed, and so leads them to treat it as someone else’s problem. Thinking explicitly about framing allows you to connect information about climate change with people’s values, with what they’re passionate about, and even with their sense of self-identity. The website climateinterpreter.org describes what they’ve learnt so far.
    (As an aside, Bill points out that it can’t just be about training the interpreters – you need institutional support and leadership, if they are to focus on a controversial issue. Which got me thinking about why science museums tend to avoid talking much about climate change – it’s easy for the boards of directors to avoid the issue, because of worries about whether it’s politically sensitive, and hence might affect fundraising.)
  • The WorldViews Network. Rachel Connolly from Nova/WGBH presented this collaboration between museums, scientists and TV networks. Partners include planetariums and groups interested in data visualization, GIS data, and mapping, many from an astronomy background. Their 3-pronged approach, called TPACK, identifies three types of knowledge: technological, pedagogical, and content knowledge. The aim is to take people from seeing, to knowing, to doing. For example, they might start with a dome presentation, but bring into it live and interactive web resources, and then move on to community dialogues. Storylines use visualizations that move seamlessly across scales: cosmic, global, bioregional. They draw a lot on the ideas from Rockstrom’s planetary boundaries, of which they’re focussing on three: climate, biodiversity loss and ocean acidification. A recent example from Denver, in May, focussed on water. On the cosmic scale, they look at where water comes from as planets are formed. They eventually bring this down to the bioregional scale, looking at the watershed for Denver, and the pressures on the Colorado River. Good visual design is a crucial part of the project. Rachel showed a neat visualization comparing all the water on the planet with just the fresh water and the frozen water. Another fascinating example was a satellite picture of the border of Egypt and Israel, where different water management strategies produce a starkly visible difference on either side of the border. (Rachel also recommended Sciencecafes.org and the Buckminster Fuller Challenge.)
  • ClimateCommunication.org. There was a lot of talk through the week about this project, led by Susan Hassol and Richard Somerville, especially their recent paper in Physics Today, which explores the use of jargon, and how it can mislead the general public. The paper went viral on the internet shortly after it was published, and they used an open Google doc to collect many more examples. Scientists are often completely unaware that non-specialists attach different meanings to jargon terms, which can then become a barrier to communication. My favourite examples from Susan’s list are “aerosol”, which to the public means a spray can (leading to a quip by Glenn Beck, who had heard that aerosols cool the planet); “enhanced”, which the public understands as “made better”, so the “enhanced greenhouse effect” sounds like a good thing; and “positive feedback”, which also sounds like a good thing, as it suggests a reward for doing something good.
  • Finally, slightly off topic, but I was amused by the Union of Concerned Scientists’ periodic table of political interferences in science.

On Thursday, Kaitlin presented her poster at the AGU meeting, which shows the results of the study she did with us in the summer. Her poster generated a lot of interest, especially her visualizations of the different model architectures. Click on the thumbnail to see the full poster at the AGU site:

A few things to note when looking at the diagrams:

  • Each diagram shows the components of a model, scaled to their relative size by lines of code. However, the models are not to scale with one another, as the smallest, UVic’s, is only a tenth of the size of the biggest, CESM. Someone asked what accounts for that difference. Well, the UVic model is an EMIC rather than a GCM. It has a very simplified atmosphere model that does not include atmospheric dynamics, which makes it easier to run for very long simulations (e.g. to study paleoclimate). On the other hand, CESM is a community model, with a large number of contributors across the scientific community. (See Randall and Held’s point/counterpoint article in last month’s IEEE Software for a discussion of how these fit into different model development strategies.)
  • The diagrams show the couplers (in grey), again sized according to number of lines of code. A coupler handles data re-gridding (when the scientific components use different grids) and temporal aggregation (when the scientific components run on different time steps), along with other data handling; a toy sketch of these two jobs appears after this list. Couplers are often invisible in the diagrams scientists create of their models, because they are part of the infrastructure code; however, Kaitlin’s diagrams show how substantial they are in comparison with the scientific modules. The European models all use the same coupler, following a decade-long effort to develop this as a shared code resource.
  • Note that there are many different choices associated with the use of a coupler, as sometimes it’s easier to connect components directly rather than through the coupler, and the choice may be driven by performance impact, flexibility (e.g. ‘plug-and-play’ compatibility) and legacy code issues. Sea ice presents an interesting example, because its extent varies over the course of a model run. So somewhere there must be code that keeps track of which grid cells have ice, and then routes the fluxes from ocean and atmosphere to the sea ice component for those grid cells. This could be done in the coupler, or in any of the three scientific modules. In the GFDL model, sea ice is treated as an interface to the ocean, so all atmosphere-ocean fluxes pass through it, whether there’s ice in a particular cell or not.
  • The relative size of the scientific components is a reasonable proxy for functionality (or, if you like, scientific complexity/maturity). Hence, the diagrams give clues about where each lab has placed its emphasis in terms of scientific development, whether by deliberate choice, or because of availability (or unavailability) of different areas of expertise. The differences between the models from different labs show some strikingly different choices here, for example between models that are clearly atmosphere-centric, versus models that have a more balanced set of earth system components.
  • One comment we received in discussions around the poster was about the places where we have shown sub-components in some of the models. Some modeling groups are more explicit than others about naming the sub-components and indicating them in the code. Hence, our ability to identify these may depend more on naming practices than on any fundamental architectural differences.
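
Since the couplers turned out to be such a substantial chunk of each model, here’s the toy sketch promised above of their two main jobs, re-gridding and temporal aggregation. It’s invented purely for illustration; real couplers (such as the shared European one) are vastly more general.

```python
import numpy as np

# Invented for illustration only; real couplers are far more general.
# Two jobs are sketched: temporal aggregation of fluxes from a component
# with a short time step, and re-gridding onto a coarser grid.

def regrid_average(field, factor):
    """Average blocks of `factor` fine-grid cells onto a coarser 1-D grid
    (a crude stand-in for conservative re-gridding)."""
    return field.reshape(-1, factor).mean(axis=1)

class ToyCoupler:
    def __init__(self):
        self.buffer = []

    def accumulate(self, flux):
        """Collect a flux field from the fast component at each of its steps."""
        self.buffer.append(flux)

    def exchange(self, grid_factor):
        """At the slow component's step: average over time, then re-grid."""
        mean_flux = np.mean(self.buffer, axis=0)
        self.buffer = []
        return regrid_average(mean_flux, grid_factor)

# e.g. an atmosphere on 360 cells with 20-minute steps feeding an ocean
# on 120 cells with 1-hour steps:
coupler = ToyCoupler()
for _ in range(3):  # three atmosphere steps per ocean step
    coupler.accumulate(np.random.rand(360))
ocean_flux = coupler.exchange(grid_factor=3)
assert ocean_flux.shape == (120,)
```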

I’m sure Kaitlin will blog more of her reflections on the poster (and AGU in general) once she’s back home.

I’m at the AGU meeting in San Francisco this week. The internet connections in the meeting rooms suck, so I won’t be twittering much, but will try and blog any interesting talks. But first things first! I presented my poster in the session on “Methodologies of Climate Model Evaluation, Confirmation, and Interpretation” yesterday morning. Nice to get my presentation out of the way early, so I can enjoy the rest of the conference.

Here’s my poster, and the abstract is below (click for the full sized version at the AGU ePoster site):

A Hierarchical Systems Approach to Model Validation

Introduction

Discussions of how climate models should be evaluated tend to rely on either philosophical arguments about the status of models as scientific tools, or on empirical arguments about how well runs from a given model match observational data. These lead to quantitative measures expressed in terms of model bias or forecast skill, and ensemble approaches where models are assessed according to the extent to which the ensemble brackets the observational data.

Such approaches focus the evaluation on the models per se (or, more specifically, on the simulation runs they produce), as if the models could be isolated from their context. In doing so, they may overlook a number of important aspects of the use of climate models:

  • the process by which models are selected and configured for a given scientific question.
  • the process by which model outputs are selected, aggregated and interpreted by a community of expertise in climatology.
  • the software fidelity of the models (i.e. whether the running code is actually doing what the modellers think it’s doing).
  • the (often convoluted) history that begat a given model, along with the modelling choices long embedded in the code.
  • variability in the scientific maturity of different components within a coupled earth system model.

These omissions mean that quantitative approaches cannot assess whether a model produces the right results for the wrong reasons, or conversely, the wrong results for the right reasons (where, say, the observational data is problematic, or the model is configured to be unlike the earth system for a specific reason).

Furthermore, quantitative skill scores only assess specific versions of models, configured for specific ensembles of runs; they cannot reliably make any statements about other configurations built from the same code.
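
For concreteness, the quantitative measures referred to above are typically computed along these lines (textbook definitions sketched in Python; nothing here is specific to any particular model or study):

```python
import numpy as np

# Textbook definitions, sketched for concreteness; not specific to any
# particular model, ensemble, or study.

def bias(model, obs):
    """Mean error of a model run against observations."""
    return np.mean(model - obs)

def skill(model, obs, reference):
    """1 - RMSE(model)/RMSE(reference): 1 is a perfect score,
    0 means no better than the reference forecast."""
    def rmse(x):
        return np.sqrt(np.mean((x - obs) ** 2))
    return 1.0 - rmse(model) / rmse(reference)

def bracketing_fraction(ensemble, obs):
    """Fraction of observations lying inside the ensemble envelope
    (ensemble: members x points)."""
    lo, hi = ensemble.min(axis=0), ensemble.max(axis=0)
    return np.mean((obs >= lo) & (obs <= hi))
```

Note how each of these scores a particular set of runs against a particular set of observations; none says anything about the process that produced the runs, which is exactly the gap this poster addresses.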

Quality as Fitness for Purpose

The problem is that there is no such thing as “the model”. The body of code that constitutes a modern climate model actually represents an enormous number of possible models, each corresponding to a different way of configuring that code for a particular run. Furthermore, this body of code isn’t a static thing. The code is changed on a daily basis, through a continual process of experimentation and model improvement. This applies even to any specific “official release”, which again is just a body of code that can be configured to run as any of a huge number of different models, and again, is not unchanging – as with all software, there will be occasional bugfix releases applied to it, along with improvements to the ancillary datasets.
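
As a crude illustration (the keys below are invented; real models are typically configured through large sets of Fortran namelists), “one model” is really a single point in an enormous configuration space:

```python
# Keys invented for illustration; real configurations run to hundreds of
# namelist parameters. Each distinct combination is, in effect, a
# different model built from the same body of code.
run_config = {
    "resolution": "T63L31",     # horizontal / vertical resolution
    "ocean": "dynamic",         # vs. "slab" or "none"
    "aerosols": "interactive",  # vs. prescribed climatology
    "entrainment_coeff": 1.2,   # one of many tunable parameters
    "scenario": "A1B",          # forcing scenario for the run
    "start_date": "1850-01-01",
}
```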

Evaluation of climate models should not be about “the model”, but about the relationship between a modelling system and the purposes to which it is put. More precisely, it’s about the relationship between particular ways of building and configuring models and the ways in which the runs produced by those models are used.

What are the uses of a climate model? They vary tremendously:

  • To provide inputs to assessments of the current state of climate science;
  • To explore the consequences of a current theory;
  • To test a hypothesis about the observational system (e.g. forward modeling);
  • To test a hypothesis about the calculational system (e.g. to explore known weaknesses);
  • To provide homogenized datasets (e.g. re-analysis);
  • To conduct thought experiments about different climates;
  • To act as a comparator when debugging another model;

In general, we can distinguish three separate systems: the calculational system (the model code), the theoretical system (current understandings of climate processes), and the observational system. In the most general sense, climate models are developed to explore how well our current understanding (i.e. our theories) of climate explains the available observations. And of course the inverse: what additional observations might we make to help test our theories?

We're dealing with relationships between three different systems

Validation of the Entire Modeling System

When we ask questions about likely future climate change, we don’t ask the question of the calculational system, we ask it of the theoretical system; the models are just a convenient way of probing the theory to provide answers.

When society asks climate scientists for future projections, the question is directed at climate scientists, not their models. Modellers apply their judgment to select appropriate versions & configurations of the models to use, set up the runs, and interpret the results in the light of what is known about the models’ strengths and weaknesses, and about any gaps between the computational models and the current theoretical understanding. And they add all sorts of caveats to the conclusions they draw from the model runs when they present their results.

Validation is not a post-hoc process to be applied to an individual “finished” model, to ensure it meets some criteria for fidelity to the real world. In reality, there is no such thing as a finished model, just many different snapshots of a large set of model configurations, steadily evolving as the science progresses. Knowing something about the fidelity of a given model configuration to the real world is useful, but not sufficient to address fitness for purpose. For this, we have to assess the extent to which climate models match our current theories, and the extent to which the process of improving the models keeps up with theoretical advances.

Summary

Our approach to model validation extends current approaches:

  • down into the detailed codebase, to explore the processes by which the code is built and tested. Thus, we build up a picture of the day-to-day practices by which modellers make small changes to the model and test the effect of such changes (both in isolated sections of code, and on the climatology of a full model). The extent to which these practices improve confidence in and understanding of the model depends on how systematically this testing process is applied, and how many of the broad range of possible types of testing are applied. We also look beyond testing to other software practices that improve trust in the code, including automated checking for conservation of mass across the coupled system (see the sketch after this list), and various approaches to spin-up and restart testing.
  • up into the broader scientific context in which models are selected and used to explore theories and test hypotheses. Thus, we examine how features of the entire scientific enterprise improve (or impede) model validity, from the collection of observational data, creation of theories, use of these theories to develop models, choices for which model and which model configuration to use, choices for how to set up the runs, and interpretation of the results. We also look at how model inter-comparison projects provide a de facto benchmarking process, leading in turn to exchanges of ideas between modelling labs, and hence advances in the scientific maturity of the models.
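
As an example of the first bullet, the automated conservation checking mentioned there might look roughly like this (a minimal sketch; the function name and tolerance are invented):

```python
import numpy as np

# Minimal sketch; function name and tolerance invented. Over a single
# coupling exchange, the total mass flux leaving the sending components
# should equal the total arriving at the receivers, to within rounding.

def check_mass_conservation(sent_fluxes, received_fluxes, rel_tol=1e-12):
    """sent_fluxes, received_fluxes: area-weighted mass fluxes (kg/s)
    crossing the coupler in one exchange."""
    sent = float(np.sum(sent_fluxes))
    received = float(np.sum(received_fluxes))
    limit = rel_tol * max(abs(sent), abs(received), 1.0)
    assert abs(sent - received) <= limit, (
        f"mass leak: sent {sent:.6e} kg/s, received {received:.6e} kg/s")
```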

This layered approach does not attempt to quantify model validity, but it can provide a systematic account of how the detailed practices involved in the development and use of climate models contribute to the quality of modelling systems and the scientific enterprise that they support. By making the relationships between these practices and model quality more explicit, we expect to identify specific strengths and weaknesses of the modelling systems, particularly with respect to structural uncertainty in the models, and to better characterize the “unknown unknowns”.

I’ve spent much of the last month preparing a major research proposal for the Ontario Research Fund (ORF), entitled “Integrated Decision Support for Sustainable Communities”. We’ve assembled a great research team, with professors from a number of different departments, across the schools of engineering, information, architecture, and arts and science. We’ve held meetings with a number of industrial companies involved in software for data analytics and 3D modeling, consultancy companies involved in urban planning and design, and people from both provincial and city government. We started putting this together in September, and were working to a proposal deadline at the end of January.

And then this week, out of the blue, the province announced that it was cancelling the funding program entirely, “in light of current fiscal challenges”. The best bit in the letter I received was:

The work being done by researchers in this province is recognized and valued. This announcement is not a reflection of the government’s continued commitment through other programs that provides support to the important work being done by researchers.

I’ve searched hard for the “other programs” they mention, but there don’t appear to be any. It’s increasingly hard to get any funding for research, especially trans-disciplinary research. Here’s the abstract from our proposal:

Our goal is to establish Ontario as a world leader in building sustainable communities, through the use of data analytics tools that provide decision-makers with a more complete understanding of how cities work. We will bring together existing expertise in data integration, systems analysis, modeling, and visualization to address the information needs of citizens and policy-makers who must come together to re-invent towns and cities as the basis for a liveable, resilient, carbon-neutral society. The program integrates the work of a team of world-class researchers, and builds on the advantages Ontario enjoys as an early adopter of smart grid technologies and open data initiatives.

The long-term sustainability of Ontario’s quality of life and economic prosperity depends on our ability to adopt new, transformative approaches to urban design and energy management. The transition to clean energy and the renewal of urban infrastructure must go hand-in-hand, to deliver improvements across a wide range of indicators, including design quality, innovation, lifestyle, transportation, energy efficiency and social justice. Design, planning and decision-making must incorporate a systems-of-systems view, to encompass the many processes that shape modern cities, and the complex interactions between them.

Our research program integrates emerging techniques in five theme areas that bridge the gap between decision-making processes for building sustainable cities and the vast sources of data on social demographics, energy, buildings, transport, food, water and waste:

  • Decision-Support and Public Engagement: We begin by analyzing the needs of different participants, and develop strategies for active engagement;
  • Visualization: We will create collaborative and immersive visualizations to enhance participatory decision-making;
  • Modelling and Simulation: We will develop a model integration framework to bring together models of different systems that define the spatio-temporal and socio-economic dynamics of cities, to drive our visualizations;
  • Data Privacy: We will assess the threats to privacy that arise when detailed data about citizens’ everyday activities is mined for patterns, and identify appropriate techniques for protecting privacy when such data is used in the modeling and analysis process;
  • Data Integration and Management: We will identify access paths to the data sources needed to drive our simulations and visualizations, and incorporate techniques for managing and combining very large datasets.

These themes combine to provide an integrated approach to intelligent, data-driven planning and decision-making. We will apply the technologies we develop in a series of community-based design case studies, chosen to demonstrate how our approach would apply to increasingly complex problems such as energy efficiency, urban intensification, and transportation. Our goal is to show how an integrated approach can improve the quality and openness of the decision-making process, while taking into account the needs of diverse stakeholders, and the inter-dependencies between policy, governance, finance and sustainability in city planning.

Because urban regions throughout the world face many of the same challenges, this research will allow Ontario to develop a technological advantage in areas such as energy management and urban change, enabling a new set of creative knowledge-based services that address the needs of communities and governments. Ontario is well placed to develop this as a competitive advantage, due to its leadership in the collection and maintenance of large datasets in areas such as energy management, social well-being, and urban infrastructure. We will leverage this investment and create a world-class capability not available in any other jurisdiction.

Incidentally, we spent much of last fall preparing a similar proposal for the previous funding round. That was rejected on the basis that we weren’t clear enough what the project outcomes would be, and what the pathways to commercialization were. For our second crack at it, we were planning to focus much more specifically on the model integration part, by developing a software framework for coupling urban system models, based on a detailed requirements analysis of the stakeholders involved in urban design and planning, with case studies on neighbourhood re-design and building energy retro-fits. Our industrial partners have identified a number of routes to commercial services that would make use of such software. Everything was coming together beautifully. *Sigh*.
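
Purely speculatively (the proposal never got as far as fixing an API, and every name below is invented), the kind of coupling framework we had in mind would give each urban system model a common step-and-exchange interface, so that, say, a transport model could consume the outputs of an energy model:

```python
from typing import Dict, Protocol

# Every name here is invented; this is a speculative sketch, not a design
# from the proposal. Each urban system model (energy, transport, water,
# ...) exposes the same interface so a framework can run them coupled.

class UrbanModel(Protocol):
    def step(self, inputs: Dict[str, float]) -> Dict[str, float]:
        """Advance one time step; return named outputs for other models."""
        ...

def run_coupled(models: Dict[str, UrbanModel], steps: int) -> Dict[str, float]:
    """Round-robin coupling: each model sees the latest shared state."""
    state: Dict[str, float] = {}
    for _ in range(steps):
        for model in models.values():
            state.update(model.step(state))
    return state
```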

Now we have to find some other source of funding for this. Contributions welcome!