Like most universities, U of T had a hiring freeze for new faculty for the last few years, as we struggled with budget cuts. Now, we’re starting to look at hiring again, to replace faculty we lost over that time, and to meet the needs of rapidly growing student enrolments. Our department (Computer Science) is just beginning the process of deciding what new faculty positions we wish to argue for, for next year. This means we get to engage in a fascinating process of exploring what we expect to be the future of our field, and where there are opportunities to build exciting new research and education programs. To get a new faculty position, our department has to make a compelling case to the Dean, and the Dean has to balance our request with those from 28 other departments and 46 interdisciplinary groups. So the pitch has to be good.

So here’s my draft pitch:

(1) Create a joint faculty position between the Department of Computer Science and the new School of Environment.

Last summer U of T’s Centre for Environment was relaunched as a School of Environment, housed wholly within the Faculty of Arts and Science. As a school, it can now hold up to a 49% share of faculty appointments. [The idea is that to do interdisciplinary research, you need a base in a home department/discipline, where your tenure and promotion will be evaluated, but you would spend roughly half your time engaged in interdisciplinary research and teaching at the School. Hence, a joint position for us would be 51% CS and 49% School of Environment.]

A strong relationship between Computer Science and the School of Environment makes sense for a number of reasons. Most environmental science research makes extensive use of computational modelling as a core research tool, and the environmental sciences are among the biggest producers of big data. As an example, the Earth System Grid currently stores more than 3 petabytes of data from climate models, and this is expected to grow to the point where, by the end of the decade, a single climate model experiment could generate an exabyte of data. This creates a number of exciting opportunities for applying CS tools and algorithms in a domain that will challenge our capabilities. At the same time, this research is increasingly important to society, as we seek ways to feed 9 billion people, protect vital ecosystems, and develop strategies to combat climate change.

There are a number of directions we could go with such a collaboration. My suggestion is to pick one of:

  • Climate informatics. A small but growing community is applying machine learning and data mining techniques to climate datasets. Two international workshops have been held in the last two years, and the field has had a number of successes in knowledge discovery that have established its importance to climate science. For a taste of what the field covers, see the agenda of the last CI Workshop, or the sketch just after this list.
  • Computational Sustainability. Focuses on the decision support needed to allocate resources and develop sustainable solutions in large-scale complex adaptive systems. This could be viewed as a field of applied artificial intelligence, but doing it properly requires strong interdisciplinary links with ecologists, economists, statisticians, and policy makers. This growing community has run an annual conference, CompSust, since 2009, as well as tracks at major AI conferences for the last few years.
  • Green Computing. Focuses on the large environmental footprint of computing technology, and how to reduce it. Energy-efficient computing is a central concern, although I believe it is even more interesting to take a systems approach to understanding how and why we consume energy (whether in IT equipment directly, or in devices that IT can monitor and optimize). Again, a series of workshops in the last few years has brought together an active research community (see, for example, GREENS’2013).
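
To make the first of these concrete, here’s a minimal sketch of the kind of analysis the climate informatics community does: clustering grid cells of a temperature anomaly dataset by the similarity of their time series. The file name and variable name are placeholders, and this is my own toy illustration of the technique, not code from any of the workshop papers.

```python
# Toy climate informatics example: group grid cells of a (time, lat, lon)
# temperature anomaly field into regions with similar temporal behaviour,
# using k-means. "anomalies.nc" and the variable "tas" are placeholders.
import numpy as np
from netCDF4 import Dataset
from sklearn.cluster import KMeans

ds = Dataset("anomalies.nc")               # hypothetical NetCDF file
tas = np.asarray(ds.variables["tas"][:])   # shape: (time, lat, lon)
ntime, nlat, nlon = tas.shape

# Treat each grid cell's time series as one sample, standardised so that
# clustering picks up the shape of the variability rather than its magnitude.
samples = tas.reshape(ntime, nlat * nlon).T
samples = (samples - samples.mean(axis=1, keepdims=True)) / samples.std(axis=1, keepdims=True)

# Group the cells into a handful of regions with similar behaviour.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(samples)
region_map = labels.reshape(nlat, nlon)    # one cluster label per grid cell
print(region_map)
```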

(2) Hire more software engineering professors!

Our software engineering group is now half the size it was a decade ago, as several of our colleagues retired. Here’s where we used to be, but that list of topics and faculty is now hopelessly out of date. A decade ago we had five faculty and plans to grow this to eight by now. Instead, because of the hiring freeze and the retirements, we’re down to three. We expected to grow the group for a number of reasons: for many years software engineering was our most popular undergraduate specialist program and we had difficulty covering all the teaching, and the SE group had proved very successful at bringing in research funding, winning research prizes, and supervising large numbers of grad students.

Where do we go from here? Deans generally ignore arguments that we should just hire more faculty to replace losses, largely because when faculty retire or leave, that’s the only point at which a university can re-think its priorities. Furthermore, some of our arguments for a bigger software engineering group at U of T went away. Our department withdrew the specialist degree in software engineering, and reduced the number of SE undergrad courses, largely because we didn’t have the faculty to teach them, and finding qualified sessional instructors was always a struggle. In effect, our department has gradually walked away from having a strong software engineering group, due to resource constraints.

I believe very firmly that our department *does* need a strong software engineering group, for a number of reasons. First, it’s an important part of an undergrad CS education. The majority of our students go on to work in the software industry, and for this it is vital that they have a thorough understanding of the engineering principles of software construction. Many of our competitors in North America run majors and/or specialist programs in software engineering, to feed the enormous demand from the software industry for more graduates. One could argue that this should be left to the engineering schools, but those schools tend to lack sufficient expertise in discrete math and computing theory. I believe that software engineering is rooted intellectually in computer science, and that a strong software engineering program needs the participation (and probably the leadership) of a strong computer science department. This argument suggests we should be re-building the strength in software engineering that we used to have in our undergrad program, rather than quietly letting it wither.

Secondly, the complexity of modern software systems makes software engineering research ever more relevant to society. Our ability to invent new software technology continues to outpace our ability to understand the principles by which that software can be made safe and reliable. Software companies regularly come to us seeking to partner with us in joint research and to engage with our grad students. Currently, we have to walk away from most of these opportunities. That means research funding we’re missing out on.

Here’s the call for papers for a workshop we’re organizing at ICSE next June:

The First International Workshop on Green and Sustainable Software (GREENS’2012)

(In conjunction with the 34th International Conference on Software Engineering (ICSE 2012), Zurich, Switzerland, June 2-9, 2012)

Important Dates:

  • 17th February 2012 – paper submission
  • 19th March 2012 – notification of acceptance
  • 29th March 2012 – camera-ready
  • 3rd June 2012 – workshop

Workshop theme and goals: The Focus of the GREENS workshop is the engineering of green and sustainable software. Our goal is to bring together academics and practitioners to discuss research initiatives, challenges, ideas, and results in this critically important area of the software industry. To this end GREENS will both discuss the state of the practice, especially at the industrial level, and define a roadmap, both for academic research and for technology transfer to industry. GREENS seeks contributions addressing, but not limited to, the following list of topics:

Concepts and foundations:

  • Definition of sustainability properties (e.g. energy and power consumption, greenhouse gas emissions, waste and pollutant production), their relationships, their units of measure, their measurement procedures in the context of software-intensive systems, their relationships with other properties (e.g. response time, latency, cost, maintainability);
  • Green architectural knowledge, green IT strategies and design patterns;

Greening domain-specific software systems:

  • Energy-awareness in mobile software development;
  • Mobile software systems scalability in low-power situations;
  • Energy-efficient techniques aimed at optimizing battery consumption;
  • Large and ultra-large scale green information systems design and development (including inter-organizational effects)

Greening of IT systems, data and web centers:

  • Methods and approaches to improve sustainability of existing software systems;
  • Customer co-creation strategies to motivate behavior changes;
  • Virtualization and offloading;
  • Green policies, green labels, green metrics, key indicators for sustainability and energy efficiency;
  • Data center and storage optimization;
  • Analysis, assessment, and refactoring of source code to improve energy efficiency;
  • Workload balancing;
  • Lifecycle Extension

Greening the process:

  • Methods to design and develop greener software systems;
  • Managerial and technical risks for a sustainable modernization;
  • Quality & risk assessments, tradeoff analyses between energy efficiency, sustainability and traditional quality requirements;

Case studies, industry experience reports and empirical studies:

  • Empirical data and analysis about sustainability properties, at various granularity levels: complete infrastructure, or nodes of the infrastructure (PCs, servers, and mobile devices);
  • Studies to define technical and economic models of green aspects;
  • Return on investment of greening projects, reasoning about the triple bottom line of people, planet and profits;
  • Models of energy and power consumption, at various granularity levels;
  • Benchmarking of power consumption in software applications;

Guidelines for Submission: We are soliciting papers in two distinct categories:

  1. Research papers describing innovative and significant original research in the field (maximum 8 pages);
  2. Industrial papers describing industrial experience, case studies, challenges, problems and solutions (maximum 8 pages).

Please submit your paper online through EasyChair (see the GREENS website). Submissions should be original and unpublished work. Each submitted paper will undergo a rigorous review process by three members of the Program Committee. All types of papers must conform to the ICSE submission format and guidelines. All accepted papers will appear in the ACM Digital Library.

Workshop Organizers:

  • Patricia Lago (VU University Amsterdam, The Netherlands)
  • Rick Kazman (University of Hawaii, USA)
  • Niklaus Meyer (Green IT SIG, Swiss Informatics Society, Switzerland)
  • Maurizio Morisio (Politecnico di Torino, Italy)
  • Hausi A. Mueller (University of Victoria, Canada)
  • Frances Paulisch (Siemens Corporate Technology, Germany)
  • Giuseppe Scanniello (Università della Basilicata, Italy)
  • Olaf Zimmermann (IBM Research, Zurich, Switzerland)

Program committee:

  • Marco Aiello, University of Groningen, Netherlands
  • Luca Ardito, Politecnico di Torino, Italy
  • Ioannis Athanasiadis, Democritus Univ. of Thrace, Greece
  • Rami Bahsoon, University College London, UK
  • Ivica Crnkovic, Malardalen University, Sweden
  • Steve Easterbrook, University of Toronto, Canada
  • Hakan Erdogmus, Things Software
  • Anthony Finkelstein, University College London, UK
  • Matthias Galster, University of Groningen, Netherlands
  • Ian Gorton, Pacific Northwest National Laboratory, USA
  • Qing Gu, VU University Amsterdam, Netherlands
  • Wolfgang Lohmann, Informatics and Sustainability Research, Swiss Federal Laboratories for Materials Science and Technology, Switzerland
  • Lin Liu, School of Software, Tsinghua University, China
  • Alessandro Marchetto, Fondazione Bruno Kessler, Italy
  • Henry Muccini, University of L’Aquila, Italy
  • Stefan Naumann, Trier University of Applied Sciences, Environmental Campus, Germany
  • Cesare Pautasso, University of Lugano, Switzerland
  • Barbara Pernici, Politecnico di Milano, Italy
  • Giuseppe Procaccianti, Politecnico di Torino, Italy
  • Filippo Ricca, University of Genova, Italy
  • Antony Tang, Swinburne University of Tech., Australia
  • Antonio Vetro’, Fraunhofer IESE, USA
  • Joost Visser, Software Improvement Group and Knowledge Network Green Software, Netherlands
  • Andrea Zisman, City University London, UK

Today Jonathan Lung released a new version of our open, shareable, web-based calculator, Inflo. We have a new screencast to explain what it is:

You can play with Inflo yourself (just say yes to accept the site certificate; you’ll need to register a new username if you want to save your calculations on the server). Or go see some of the calculations we’ve already built with it:

Or there’s always the tutorial for Inflo, in Inflo itself…

This only runs on Windows, so I’ll have to wait a while for the Mac version before I can try it myself. In the meantime, maybe someone else can play it and tell me what it’s like:

I’m particularly intrigued by the fact that Myles Allen (famous for climateprediction.net and the Trillionth Tonne study) was a consultant in the game design. Does this mean it brings on board some of the dynamics in the latest GCMs? The Guardian previewed the beta version of the game back in the fall, but they don’t appear to have actually played it. PC Gamer magazine did play it, and concludes that it really does succeed in its goal of making people think seriously about the issues.

Hmmm, almost makes me want to borrow a PC to try it…

I’ve mentioned the Clear Climate Code project before, but it’s time to give them an even bigger shout out, as the project is a great example of the kind of thing I’m calling for in my grand challenge paper. The project is building an open source community around the data processing software used in climate science. Their showcase project is an open source Python re-implementation of gistemp, and very impressive it is too.

Now they’ve gone one better, and launched the Climate Code Foundation, a non-profit organisation aimed at “improving the public understanding of climate science through the improvement and publication of climate science software”. The idea is for it to become an umbrella body that will nurture many more open source projects, and promote greater openness of the software tools and data used for the science.

I had a long chat with Nick Barnes, one of the founders of CCF, on the train to Exeter last night, and was very impressed with his enthusiasm and energy. He’s actively seeking more participants, more open source projects for the foundation to support, and of course, for funding to keep the work going. I think this could be the start of something beautiful.

The British Columbia provincial government has set up a Climate Change Data Catalogue, with open access to data such as GHG emissions inventories, records of extreme weather events, and data on energy use by different industrial sectors. They recently held a competition for software developers to create applications that make use of the data, and got some interesting submissions, which were announced this week. Voting for the people’s choice winner is open until Aug 31st.

(h/t to Neil for this)

This week we’re demoing Inflo at the Ontario Centres of Excellence Discovery Conference 2010. It’s given me a chance to play a little more with the demo, and create some new sample calculations (with Jonathan valiantly adding new features on the fly in response to my requests!). The idea of Inflo is that it should be an open source calculation tool – one that supports a larger community of people discussing and reaching consensus on the best way to calculate the answer to some (quantifiable) question.

For the demo this week, I re-did the calculation on how much of the remaining global fossil fuel reserves we can burn and still keep global warming within the target threshold of a +2°C rise over pre-industrial levels. I first did this calculation in a blog post back in the fall, but I’ve been keen to see if Inflo would provide a better way of sharing the calculation. Creating the model is still a little clunky (it is, after all, a very preliminary prototype), but I’m pleased with the results. Here’s a screenshot:

And here’s a live link to try it out. A few tips: the little grey circles under a node indicate that there are hidden subtrees. Double-clicking on one of these will expand it, while double-clicking on an expanded node will collapse everything below it, so you can explore the basis for each step in the calculation. The Node Editor toolbar on the left shows you the formula for the selected node, and any notes. Some of the comments in the “Description” field are hotlinks to data sources – mouseover the text to find them. Oh, and the arrows don’t always update properly when you change views – selecting a node in the graph should force them to update. Also, the units are propagated (and scaled for readability) automatically, which is why they sometimes look a little odd, e.g. “tonne of carbon” rather than “tonnes”. One of our key design decisions is to make the numbers as human-readable as possible, and always ensure correct units are displayed.
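
If you don’t want to click through, here’s a rough sketch of the shape of that calculation, in the spirit of the Trillionth Tonne analysis. The figures below are illustrative round numbers, not the values used in the Inflo model, so treat this as a sanity check of the structure of the argument rather than a result:

```python
# Back-of-envelope carbon budget sketch. All figures are illustrative round
# numbers, NOT the values used in the actual Inflo calculation.
budget_for_2C   = 1.0e12    # ~1 trillion tonnes of carbon, cumulative, is the
                            # ballpark budget for staying under +2°C
emitted_so_far  = 0.55e12   # rough cumulative emissions to date (tonnes C)
reserves        = 0.8e12    # rough carbon content of remaining fossil fuel
                            # reserves (tonnes C) -- placeholder value

remaining_budget  = budget_for_2C - emitted_so_far
burnable_fraction = remaining_budget / reserves

print(f"Remaining budget: {remaining_budget / 1e9:.0f} Gt C")
print(f"Fraction of reserves we could still burn: {burnable_fraction:.0%}")
```

The point of Inflo, of course, is that each of those numbers becomes a node you can drill into, challenge, and replace with a better-sourced value.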

The demo should get across some of what we’re trying to do. The idea is to create a visual, web-based calculator that can be edited and shared; eventually we hope to build wikipedia-like communities who will curate the calculations, to ensure that the appropriate sources of data are used, and that the results can be trusted. We’ll need to add more facilities for version management of calculations, and for linking discussions to (portions of) the graphs.

Here’s another example: Jono’s carbon footprint analysis of whether you should print a document or read it on the screen (double click the top node to expand the calculation).

On March 30, David MacKay, author of Sustainable Energy – Without the Hot Air, will be giving the J. Tuzo Wilson lecture in the Department of Physics (details of the time/location here). Here’s the abstract for his talk:

How easy is it to get off our fossil fuel habit? What do the fundamental limits of physics say about sustainable energy? Could a typical “developed” country live on its own renewables? The technical potential of renewables is often said to be “huge” – but we need to know how this “huge” resource compares with another “huge”: our huge power consumption. The public discussion of energy policy needs numbers, not adjectives. In this talk I will express power consumption and sustainable production in a single set of personal, human-friendly units. Getting off fossil fuels is not going to be easy, but it is possible.

The book itself is brilliant (and freely available online). But David’s visit is even more relevant, because it will give us a chance to show him a tool our group has been developing to facilitate and share the kinds of calculations that David does so well in the book.
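
To give a flavour of what I mean by “the kinds of calculations David does so well”, here’s a minimal sketch of the per-person unit conversion the book is built around. The inputs are rough, illustrative values (the book has the carefully sourced ones):

```python
# Convert a national annual energy figure into MacKay's human-friendly unit:
# kWh per day per person. The inputs are rough illustrative values only.
annual_energy_twh = 2700.0   # placeholder: rough UK primary energy use, TWh/year
population        = 61e6     # placeholder: rough UK population

kwh_per_day_per_person = (annual_energy_twh * 1e9) / (population * 365)
print(f"{kwh_per_day_per_person:.0f} kWh per day per person")   # roughly 120
```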

We started from the question of how to take “back of the envelope” calculations and make them explicitly shareable over the web. And not just shareable, but to turn them into structured objects that can be discussed, updated, linked to evidence and so on (in much the same way that wikipedia entries are). Actually, the idea started with Jono’s calculations for the carbon footprint of “paper vs. screen”. When he first showed me his results, we got into a discussion of how other people might validate his calculations, and customize them for different contexts (e.g. for different hardware setups, different parts of the world with different energy mixes, etc). He came up with a graphical layout for the calculations, and we speculated about how we would apply version control to this, make it a live calculator (so that changes in the input assumptions propagate like they would in a spreadsheet), and give each node its own URL, so that it can be attached to discussions, sources of evidence, etc. We brainstormed a long list of other features we’d want in such a tool, and we’re now busy creating a first prototype.
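
To show what I mean by a live, spreadsheet-like calculation graph, here’s a toy sketch of the core idea: nodes hold either an assumption or a formula over other nodes, and changing an assumption re-propagates through the graph. This is my own simplification for illustration, not Inflo’s actual (web-based) implementation, and the numbers are invented:

```python
# Toy sketch of a live calculation graph, spreadsheet-style. Not Inflo's
# real code; just an illustration of the propagation idea.
class Node:
    def __init__(self, name, value=None, formula=None, inputs=()):
        self.name = name          # could double as a URL fragment for linking
        self.value = value        # leaf nodes hold an assumption directly
        self.formula = formula    # interior nodes compute from their inputs
        self.inputs = list(inputs)
        self.notes = ""           # free-text rationale, links to evidence, etc.

    def evaluate(self):
        if self.formula is None:
            return self.value
        return self.formula(*[n.evaluate() for n in self.inputs])

# Example in the spirit of Jono's paper-vs-screen question (numbers invented):
pages = Node("pages_printed_per_year", value=2000)
kwh_per_page = Node("kwh_per_page", value=0.05)
printing_energy = Node("printing_kwh_per_year",
                       formula=lambda p, e: p * e,
                       inputs=[pages, kwh_per_page])

print(printing_energy.evaluate())   # 100.0
pages.value = 500                   # change an input assumption...
print(printing_energy.evaluate())   # ...and the answer updates: 25.0
```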

What kind of tool is it? My short description is that it’s a crowd-sourced carbon calculator. I find existing carbon calculators very frustrating because I can’t play with the assumptions behind the calculations; effectively, they are closed-source.

At the time we came up with these ideas, we were also working on modeling the analysis in David MacKay’s book (JP shows some preliminary results, here and here), to see if we could come up with a way of comparing his results with other books that also attempt to lay out solutions to climate change. We created a domain model (as a UML class diagram), which was big and ugly, and a strategic actor goal model (using i*), which helped to identify key stakeholders, but didn’t capture the main content of MacKay’s analysis. So we tried modeling a chapter of the book as a calculation in Jonathan’s style, and it worked remarkably well. At that point we realized we needed to actually build the tool. And the rest, as they say, is history. Or at least it will be, once we have a demo-able prototype…

I’m proposing a new graduate course for our department, to be offered next January (after I return from sabbatical). For the course calendar, I’m required to describe it in fewer than 150 words. Here’s what I have so far:

Climate Change Informatics

This introductory course will explore the contribution of computer science to the challenge of climate change, including: the role of computational models in understanding earth systems, the numerical methods at the heart of these models, and the software engineering techniques by which they are built, tested and validated; challenges in management of earth system data, such as curation, provenance, meta-data description, openness and reproducibility; tools for communication of climate science to broader audiences, such as simulations, games, educational software, collective intelligence tools, and the challenges of establishing reputation and trustworthiness for web-based information sources; decision-support tools for policymaking and carbon accounting, including the challenges of data collection, visualization, and trade-off analysis; the design of green IT, such as power-aware computing, smart controllers and the development of the smart grid.

Here’s the rationale:

This is an elective course. The aim is to bring a broad range of computer science graduate students together, to explore how their skills and knowledge in various areas of computer science can be applied to a societal grand challenge problem. The course will equip the students with a basic understanding of the challenges in tackling climate change, and will draw a strong link between the students’ disciplinary background and a series of inter-disciplinary research questions. The course crosscuts most areas of computer science.

And my suggested assessment modes:

  • Class participation: 10%
  • Term Paper 1 (essay/literature review): 40%
  • Term Paper 2 (software design or implementation): 40%
  • Oral Presentation or demo: 10%

Comments are most welcome – the proposal has to get through various committees before the final approval by the school of graduate studies. There’s plenty of room to tweak it in that time.

A few more late additions to my posts last month on climate science resources for kids:

  • NASA’s Climate Kids is a lively set of tools for younger kids, with games, videos (the ‘Climate Tales’ videos are wonderfully offbeat), and even information on future careers to help the planet.
  • Climate4Classrooms, put together by the British Council, includes a set of learning modules for kids ages 11+. Looks like a very nice set of resources.
  • And my kids came back from a school book fair last week with the DK Eyewitness book Climate Change, which is easily the best kids book I’ve seen yet (we have several other books in this series, and they’re all excellent). It’s a visual feast with photos and graphics, but it doesn’t skimp on the science, nor the policy implications, nor the available clean energy technologies – in fact it seems to cover everything! The parts that caught my eye (and are done very well) include a page on climate models, and a page entitled “What scares the scientists”, on climate tipping points.

I’ve been hearing about Wolfram Alpha a lot lately, and I finally got a chance to watch a demo screencast today, and I have to say, it looks really cool. It’s a combination of a search engine, a set of computational widgets, and a large, curated knowledge base. Exactly the kind of thing we need for playing with climate datasets, and giving a larger audience a glimpse into how climate science is done. The only thing I can see missing (and maybe it’s there, and I just didn’t look hard enough) is the idea of a narrative thread – I want to be able to create a narrated trail through a set of computational widgets to tell a story about how we build up a particular scientific conclusion from a multitude of sources of evidence…

[Update 19 May 2010: They’ve added some climate datasets to Wolfram Alpha]

Having posted last night about how frustrating it is to see the same old lies get recycled in every news report, this morning I’m greeted with the news that there’s now an app for that. I’ve posted before about the Skeptical Science site. Well, now it’s available as a free iPhone app. I’ve downloaded it and played with it, and it looks fabulous. Here are the screenshots:

Just perfect for bringing the science to the masses down the pub.

A while back I posted the introduction to a research proposal in climate change informatics. I also posted a list of potential research areas, and a set of criteria by which we might judge climate informatics tools. But I didn’t say what kinds of things we might want climate informatics tools to do. Here’s my first attempt, based on a slide I used at the end of my talk on usable climate science:

What do we want the tools to support?

What I was trying to lay out on this slide was a wide range of possible activities for which we could build software tools, combining good visualizations, collaborative support, and compelling user interface design. If we are to improve the quality of the public discourse on climate change, and support the kind of collective decision making that leads to effective action, we need better tools for all four of these areas:

  • Improve the public understanding of the basic science. Much of this is laid out in the IPCC reports, but to most people these are “dead tree science” – lots of thick books that very few people will read. So, how about some dynamic, elegant and cool tools to convey:
    • The difference between emissions and concentrations.
    • The various sources of emissions and how we know about them from detection/attribution studies.
    • The impacts of global warming on your part of the world – health, food and water, extreme weather events, etc.
    • The various mitigation strategies we have available, and what we know about the cost and effectiveness of each.
  • Achieve a better understanding of how the science works, to allow people to evaluate the nature of the evidence about climate change:
    • How science works, as a process of discovery, including how scientists develop theories, and how they correct mistakes.
    • What climate models are and how they are used to improve our understanding of climate processes.
    • How the peer-review process works, and why it is important, both as a filter for poor research, and a way of assessing the credentials of scientists.
    • What it means to be expert in a particular field, why expertise matters, and why expertise in one area of science doesn’t necessarily mean expertise in another.
  • Tools to support critical thinking, to allow people to analyze the situation for themselves:
    • The importance of linking claims to sources of evidence, and the use of multiple sources of evidence to test a claim.
    • How to assess the credibility of a particular claim, and the credibility of its source (desperately needed for appropriate filtering of ‘found’ information on the internet).
    • Systems Thinking – because reductionist approaches won’t help. People need to be able to recognize and understand whole systems and the dynamics of systems-of-systems.
    • Understanding risk – because the inability to assess risk factors is a major barrier to effective action.
    • Identifying the operation of vested interests. Because much of the public discourse isn’t about science or politics. It’s about people with vested interests attempting to protect those interests, often at the expense of the rest of society.
  • And finally, none of the above makes any difference if we don’t also provide tools to support effective action:
    • How to prioritize between short-term and long term goals.
    • How to identify which kinds of personal action are important and effective.
    • How to improve the quality of policy-making, so that policy choices are linked to the scientific evidence.
    • How to support consensus building and democratic action for collective decision making, at the level of communities, cities, and nations, and globally.
    • Tools to monitor effectiveness of policies and practices once they are implemented.

When I was visiting MPI-M earlier this month, I blogged about the difficulty of documenting climate models. The problem is particularly pertinent to questions of model validity and reproducibility, because the code itself embodies a series of methodological choices by the climate scientists, which become entrenched in the design and eventually inscrutable. And when the code gets old, we lose access to these decisions. I suggested we need a kind of literate programming, which sprinkles the code among the relevant human representations (typically bits of physics, formulas, numerical algorithms, published papers), so that the emphasis is on explaining what the code does, rather than preparing it for a compiler to digest.

The problem with literate programming (at least in the way it was conceived) is that it requires programmers to give up using the program code as their organising principle, and maybe to give up traditional programming languages altogether. But there’s a much simpler way to achieve the same effect: provide an organising structure for existing programming languages and tools, but mix in non-code objects in an intuitive way. Imagine you had an infinitely large sheet of paper, and could zoom in and out, and scroll in any direction. Your chunks of code are laid out on the paper, in a spatial arrangement that means something to you, such that the layout helps you navigate. Bits of documentation, published papers, design notes, data files, parameterization schemes, etc. can be placed on the sheet, near the code they are relevant to. When you zoom in on a chunk of code, the sheet becomes a code editor; when you zoom in on a set of math formulae, it becomes a LaTeX editor; and when you zoom in on a document, it becomes a word processor.
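
As a thought experiment, here’s a toy sketch of the dispatch idea: every item on the sheet carries a type and a position, and zooming in on one hands it to the appropriate editor. This is purely illustrative (the file names are made up), and it is not based on how Code Canvas or any other tool is actually built:

```python
# Toy sketch of an "infinite sheet" that mixes code, math and documents, and
# opens a type-appropriate editor when you zoom in on an item. Illustrative
# only; not based on Code Canvas internals.
class CanvasItem:
    def __init__(self, kind, content, x, y):
        self.kind, self.content, self.x, self.y = kind, content, x, y

EDITORS = {
    "code":  lambda item: f"opening code editor on {item.content}",
    "latex": lambda item: f"opening LaTeX editor on {item.content}",
    "doc":   lambda item: f"opening word processor on {item.content}",
}

class Canvas:
    def __init__(self):
        self.items = []

    def place(self, kind, content, x, y):
        self.items.append(CanvasItem(kind, content, x, y))

    def zoom_in(self, x, y):
        # Hand the item nearest the zoom target to its type-specific editor.
        item = min(self.items, key=lambda i: (i.x - x) ** 2 + (i.y - y) ** 2)
        return EDITORS[item.kind](item)

sheet = Canvas()
sheet.place("code",  "radiation_scheme.f90", 0, 0)          # hypothetical files
sheet.place("latex", "radiative_transfer_notes.tex", 1, 0)
sheet.place("doc",   "model_description_paper.pdf", 0, 1)
print(sheet.zoom_in(0.9, 0.1))   # -> opening LaTeX editor on radiative_transfer_notes.tex
```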

Well, Code Canvas, a tool under development in Rob DeLine’s group at Microsoft Research, does most of this already. The code is laid out as though it were one big UML diagram, but as you zoom in you move fluidly into a code editor. The whole thing appeals to me because I’m a spatial thinker. Traditional IDEs drive me crazy, because they separate the navigation views from the code, and force me to jump from one pane to another to navigate. In the process, they hide the inherent structure of a large code base, and constrain me to see only a small chunk at a time. This means these tools create an artificial separation between higher level views (e.g. UML diagrams) and the code itself, sidelining the diagrammatic representations. I really like the idea of moving seamlessly back and forth between the big picture views and actual chunks of code.

Code Canvas is still an early prototype, and doesn’t yet have the ability to mix in other forms of documentation (e.g. LaTeX) on the sheet (or at least not in any demo Microsoft are willing to show off), but the potential is there. I’d like to explore how we take an idea like this and customize it for scientific code development, where there is less of a strict separation of code and data than in other forms of programming, and where the link to published papers and draft reports is important. The infinitely zoomable paper could provide an intuitive unifying tool to bring all these different types of object together in one place, to be managed as a set. And the use of spatial memory to help navigate will be helpful when the set of things gets big.

I’m also interested in exploring the idea of using this metaphor for activities that don’t involve coding – for example, complex decision support for sustainability, where you need to move between spreadsheets, graphs & charts, model runs, and so on. I would lay out the basic decision task as a graph on the sheet, with sources of evidence connecting into the decision steps where they are needed. The sources of evidence could be text, graphs, spreadsheet models, live data feeds, etc. And as you zoom in over each type of object, the sheet turns into the appropriate editor. As you zoom out, you get to see how the sources of evidence contribute to the decision-making task. Hmmm. Need a name for this idea. How about DecisionCanvas?

Update: Greg also pointed me to CodeBubbles and Intentional Software