06. January 2024 · Categories: courses

I’m teaching a new course this term, called Confronting the Climate Crisis. As it’s the first time I’ve taught since the emergence of the latest wave of AI chatbots, I needed a policy on how students may use these tools in their assignments. Here’s what I came up with:

The assignments on this course have been carefully designed to give you meaningful experiences that build your knowledge and skills, and I hope you will engage with them in that spirit. If you decide to use any AI tools, you *must* include a note explaining what tools you used and how you used them, and include a reflection on how they have affected your learning process. Without such a note, use of AI tools will be treated as an academic offence, with the same penalties as if you had asked someone else (rather than a bot) to do the work for you.

Rationale for this policy: In the last couple of years, so-called Artificial Intelligence (AI) tools have become commonplace, particularly tools that use generative AI to create text and images. The underlying technology uses complex statistical models of typical sequences of words (and elements of images), which can instantly create very plausible responses to a variety of prompts. However, these tools have no understanding of the meanings that we humans attach to words and images, and no experience of the world in which those meanings reside. The result is that they are expert at mimicking how humans express themselves, but they are often factually wrong, and their outputs reflect the biases (racial, gender, socio-economic, geographic) that are inherent in the data on which the models were trained. If you choose to use AI tools to help you create your assignments for this course, you will still be responsible for any inaccuracies and biases in the generated content.
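To make the ‘statistical models of word sequences’ idea a little more concrete, here is a toy sketch (my own illustration, not how any real chatbot is built): a tiny bigram model that learns which words tend to follow which in a handful of invented sentences, then generates plausible-sounding text by sampling from those counts. Production systems use vastly larger neural networks, but the sketch makes the key point visible: nothing in the generation process involves the meaning of the words.

```python
import random
from collections import defaultdict

# Toy bigram model: learn which word tends to follow which, then generate
# text by sampling from those counts. It has no notion of what any of the
# words mean -- the corpus is just a few invented sentences.
corpus = (
    "the climate crisis demands urgent action "
    "the climate models project further warming "
    "urgent action on the climate crisis is overdue"
).split()

# For each word, record the words observed to follow it.
successors = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word].append(next_word)

def generate(start="the", length=8):
    """Produce a plausible-looking word sequence by sampling successors."""
    words = [start]
    for _ in range(length - 1):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate())   # fluent-sounding, but driven purely by word statistics
```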

More importantly, these AI tools raise deeper questions about the nature of learning in higher education. Unfortunately, we have built a higher education system that places far too much emphasis on deadlines and grades, rather than on learning and reflection. In short, we have built a system that encourages students to cheat. The AI industry promotes its products as helpful tools, perhaps no different from using a calculator in math, or a word processor when writing. And there are senses in which this is true – for example, if you suffer from writer’s block, an AI tool can quickly generate an outline or a first draft to get you started. But the crucial factor in deciding when and how to use such tools is a question of what, exactly, you are offloading onto the machine. If a tool helps you overcome some of the tedious, low-level steps so that you can move on faster to the important learning experiences, that’s great! If, on the other hand, the tool does all the work for you, so you never have to think or reflect on the course material, you will gain very little from this course other than (perhaps) a good grade. In that sense, most of the ways you might use an AI tool in your coursework are no different from other forms of ‘cheating’: they provide a shortcut to a good grade, by skipping the learning process you would experience if you did the work yourself.


This course policy is licensed under a Creative Commons Licence CC BY-NC-SA 4.0. Feel free to use and adapt for non-commercial purposes, as long as you credit me, and share alike any adaptations you make.

So, here’s an interesting thought that came up at the Michael Jackson festschrift yesterday. Michael commented in his talk that understanding is not a state, it’s a process. David Notkin then asked how we can know how well we’re doing in that process. I suggested that one of the ways you know is by discovering where your understanding is incorrect, which can happen if your model surprises you. I noticed that this is a basic mode of operation for earth system modelers. They put their current best understanding of the various earth systems (atmosphere, ocean, carbon cycle, atmospheric chemistry, soil hydrology, etc) into a coupled simulation model and run it. Whenever the model surprises them, they know they’re probing the limits of their understanding. For example, the current generation of models at the Hadley Centre don’t get the Indian Monsoon in the right place at the right time. So they know there’s something in that part of the model they don’t yet understand sufficiently.

Contrast this with the way we use (and teach) modeling in software engineering. For example, students construct UML models as part of a course in requirements analysis. They hand in their models, and we grade them. But at no point in the process do the models ever surprise their authors. UML models don’t appear to have the capacity for surprise, which is unfortunate, given what the same students experienced in their programming courses: they were constantly surprised. Their programs didn’t compile. Then they didn’t run. Then they kept crashing. Then they gave the wrong outputs. At every point, the surprise is a learning opportunity, because it means there was something wrong with their understanding, which they have to fix. This contrast explains a lot about the relative value students get from programming courses versus software modeling courses.

Now, of course, we do have some software engineering modeling frameworks that have the capacity for surprise. They allow you to create a model and play with it, and sometimes get unexpected results. For example, Alloy. And I guess model checkers have that capacity too. A necessary condition is that you can express some property that your model ought to have, and then automatically check that it does have it. But that’s not sufficient, because if the properties you express aren’t particularly interesting, or are trivially satisfied, you still won’t be surprised. For example, UML syntax checkers fall into this category – when your model fails a syntax check, that’s not surprising, it’s just annoying. Also, you don’t necessarily have to formally state the properties – but you do have to at least have clear expectations. When the model doesn’t meet those expectations, you get the surprise. So surprise isn’t just about executability, it’s really about falsifiability.
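To illustrate what that kind of check looks like, here is a minimal sketch (my own toy example in Python, not Alloy or any real model checker): a deliberately naive model of two traffic lights, a property we expect it to satisfy, and a bounded search for a counterexample. Finding the counterexample is exactly the useful kind of surprise – the property was stated, the check was automatic, and the model turned out not to do what we expected.

```python
from itertools import product

# Toy model: two traffic lights at a crossing, each able to toggle between
# red and green on any step. There is (deliberately) no interlock between
# them -- the point is to see whether the stated property catches that.

def step(state):
    """Transition relation: each light may or may not toggle in one step."""
    toggles = {"red": "green", "green": "red"}
    a, b = state
    return {(toggles[a] if flip_a else a, toggles[b] if flip_b else b)
            for flip_a, flip_b in product([False, True], repeat=2)}

def never_both_green(state):
    """The property we expect the model to satisfy."""
    return state != ("green", "green")

def bounded_check(initial, prop, depth=5):
    """Breadth-first search of reachable states for a property violation."""
    frontier, seen = {initial}, {initial}
    for _ in range(depth):
        for state in frontier:
            if not prop(state):
                return state          # counterexample found: the surprise
        frontier = {s for st in frontier for s in step(st)} - seen
        seen |= frontier
    return None                       # no violation found within the bound

print(bounded_check(("red", "red"), never_both_green))
# prints ('green', 'green') -- the naive model violates the property
```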

Having talked with some of our graduate students about how to get a more inter-disciplinary education while they are in grad school, I’ve been collecting links to collaborative grad programs at U of T:

The Dynamics of Global Change Doctoral Program, housed in the Munk Centre. The core course, DGC1000H, is very interesting – it starts with Malcolm Gladwell’s Tipping Point book, and then tours through money, religion, pandemics, climate change, the internet and ICTs, and development. What a wonderful journey.

The Centre for the Environment runs a Collaborative Graduate Program (MSc and PhD) in which students take some environmental science courses in addition to satisfying the degree requirements of their home department. The core course for this program is ENV1001, Environmental Decision Making, and it also includes an internship to get hands-on experience with environmental problem solving.

The Knowledge Media Design Institute (KMDI) also has a collaborative doctoral program, perfect for those interested in the design and evaluation of new knowledge media, with a strong focus on knowledge creation, social change, and community.

Finally, the Centre for Global Change Science has a set of graduate student awards, to help fund grad students interested in global change science. Oh, and they have a fascinating seminar series, mainly focussed on climate science (all done for this year, but get on their mailing list for next year’s seminars).

Are there any more I missed?

Had an interesting conversation this afternoon with Brad Bass. Brad is a prof in the Centre for Environment at U of T, and was one of the pioneers of the use of models to explore adaptations to climate change. His agent-based simulations explore how systems react to environmental change, e.g. exploring population balance among animals and insects, the growth of vector-borne diseases, and even the dynamics of entire cities. One of his models is Cobweb, an open-source platform for agent-based simulations.
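For anyone who hasn’t met agent-based modeling before, here is a minimal sketch of the general idea (my own toy illustration in Python, not Cobweb’s actual API, and all the numbers are invented): each agent carries a little state and a behaviour rule, and system-level patterns such as population balance emerge from many agents interacting over repeated time steps.

```python
import random

# Toy agent-based simulation: agents gain energy when they find food, pay a
# metabolic cost each tick, reproduce when well-fed, and die when energy runs
# out. The food supply is limited, so population size reflects that constraint.

class Agent:
    def __init__(self, energy=10):
        self.energy = energy

    def step(self, found_food):
        """One tick of behaviour; returns an offspring Agent, or None."""
        if found_food and random.random() < 0.6:
            self.energy += 4
        self.energy -= 2                      # metabolic cost
        if self.energy > 15:                  # reproduce when well-fed
            self.energy -= 8
            return Agent(energy=8)
        return None

def simulate(initial_agents=20, food_per_tick=15, ticks=50):
    agents = [Agent() for _ in range(initial_agents)]
    population = []
    for _ in range(ticks):
        random.shuffle(agents)                # food goes to a random subset each tick
        offspring = [child for i, agent in enumerate(agents)
                     if (child := agent.step(found_food=(i < food_per_tick)))]
        agents = [a for a in agents if a.energy > 0] + offspring
        population.append(len(agents))
    return population

print(simulate())   # population size at each tick
```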

He’s also involved in the Canadian Climate Change Scenarios Network, which takes outputs from the major climate simulation models around the world, and extracts information on the regional effects on Canada, particularly relevant for scientists who want to know about variability and extremes on a regional scale.

We also talked a lot about educating kids, and kicked around some ideas for how you could give kids simplified simulation models to play with (along the lines that Jon was exploring as a possible project), to get them doing hands-on experimentation with the effects of climate change. We might get one of our summer students to explore this idea, and Brad has promised to come talk to them in May once they start with us.

Oh, and Brad is also an expert on green roofs, and will be demonstrating them to grade 5 kids at the Kids World of Energy Festival.

Computer Science, as an undergraduate degree, is in trouble. Enrollments have dropped steadily throughout this decade: for example, at U of T, our enrollment is about half what it was at the peak. The same is true across the whole of North America. There is some encouraging news: enrollments picked up a little this year (after a serious recruitment drive, ours is up about 20% from its nadir, while across the US it’s up 6.2%). But it’s way too early to assume they will climb back up to where they were. Oh, and the percentage of women students in CS now averages 12% – the lowest ever.

What happened? One explanation is career expectations. In the 80’s, it was common wisdom that a career in computers was an excellent move for anyone showing an aptitude for maths. In the 90’s, with the birth of the web, computer science even became cool for a while, and enrollments grew dramatically, with a steady improvement in gender balance too. Then came the dotcom boom and bust, and suddenly a computer science degree was no longer a sure bet. I’m told by our high school liaison team that parents of high school students haven’t got the message that the computer industry is short of graduates to recruit (although with the current recession that’s changing again anyway).

A more likely explanation is perceived relevance. In the 80’s, with the birth of the PC, and in the 90’s with the growth of the web, computer science seemed like the heart of an exciting revolution. But now that computers are ubiquitous, they’re no longer particularly interesting. Kids take them for granted, and only a few über-geeks are truly interested in what’s inside the box. But computer science departments continue to draw boundaries around computer science and its subfields in a way that just encourages the fragmentation of knowledge that is so endemic in modern universities.

Which is why an experiment at Georgia Tech is particularly interesting. The College of Computing at Georgia Tech has managed to buck the enrollment trend, with enrollment numbers holding steady throughout this decade. The explanation appears to be a radical re-design of their undergraduate degree, into a set of eight threads. For a detailed explanation, there’s a white paper, but the basic aim is to get students to take more ownership of their degree programs (as opposed to waiting to be spoonfed), and to re-describe computer science in terms that make sense to the rest of the world (computer scientists often forget that the field is impenetrable to outsiders). The eight threads are: Modeling and simulation; Devices (embedded in the physical world); Theory; Information internetworks; Intelligence; Media (use of computers for more creative expression); People (human-centred design); and Platforms (computer architectures, etc). Students pick any two threads, and the program is designed so that any combination covers most of what you would expect to see in a traditional CS degree.

At first sight, it seems this is just a re-labeling effort, with the traditional subfields of CS (e.g. OS, networks, DB, HCI, AI, etc) mapping on to individual threads. But actually, it’s far more interesting than that. The threads are designed to re-contextualize knowledge. Instead of students picking from a buffet of CS courses, each thread is designed so that students see how the knowledge and skills they are developing can be applied in interesting ways. Most importantly, the threads cross many traditional disciplinary boundaries, weaving a diverse set of courses into a coherent theme, showing the students how their developing CS skills combine in intellectually stimulating ways, and preparing them for the connected thinking needed for inter-disciplinary problem solving.

For example, the People thread brings in psychology and sociology, examining the role of computers in the human activity systems that give them purpose. It explores the perceptual and cognitive abilities of people as well as design practices for practical socio-technical systems. The Modeling and Simulation thread explores how computational tools are used in a wide variety of sciences to help understand the world. Following this thread will require consideration of the epistemology of scientific knowledge, as well as mastery of the technical machinery by which we create models and simulations, and the underlying mathematics. The thread includes a big dose of both continuous and discrete math, data mining, and high performance computing. Just imagine what graduates of these two threads would be able to do for our research on SE and the climate crisis! The other thing I hope it will do is to help students to know their own strengths and passions, and be able to communicate effectively with others.

The good news is that our department decided this week to explore our own version of threads. Our aim is to learn from the experience at Georgia Tech and avoid some of the problems they have experienced (for example, by allowing every possible combination of the 8 threads, it appears they have created too many constraints on timetabling and provisioning individual courses). I’ll blog this initiative as it unfolds.