Scientists have a tendency to deal with nonsense by ignoring it. Papers that are obviously wrong but make it into the peer-reviewed literature are usually left to die a quiet death. They don’t get cited, they don’t get replicated, they don’t even get talked about (at least among the experts). It’s not worth anyone’s time (or career) to publish response papers demonstrating that nonsense is nonsense. And of course, nonsense that doesn’t even get into peer-reviewed papers is even easier to ignore. The mainstream media and internet discussions are so full of it that many scientists just tune it out.

But outside of a particular scientific field, lay observers find it hard to tell nonsense from sound science. So the nonsense spreads insidiously, and the public discourse diverges ever further from the scientific one.

Luckily, there are a few people who are willing to devote themselves to tackling the nonsense head on. Ben Goldacre is my favourite example – he writes a newspaper column, a blog, and a book, all called Bad Science. It helps that he’s a witty writer and an even wittier speaker. (It probably also helps that he’s British).

Of course, climate science gets more than its fair share of nutters spouting nonsense, so it’s good to see at last a more coordinated effort among science communicators to counter it. SkepticalScience has been doing a wonderful job over the past couple of years at documenting all the false memes about climate change floating around on the internet, and countering them with actual science. Now they’ve ratcheted it up a notch, with a complete round-up of all the nonsense spouted by a certain Christopher Monckton. I sure hope this becomes a series, as there are plenty of other serial nutcases out there spreading misinformation about climate science.

After perusing the list of Monckton’s Myths, I don’t have much more to add. Except to note that, after all, this is the man who argues that Christianity is likely to be a better arbiter than science of what’s true about the real world:

Perhaps, therefore, no one should be allowed to practice in any of the sciences […] unless he can certify that he adheres to one of those major religions – Christianity outstanding among them – that preach the necessity of morality, and the reality of the distinction between that which is so and that which is not. [Christopher Monckton, Jan 13, 2010]

Mr. Monckton’s grasp of epistemology seems to be as bad as his grasp of climate science. (Unfortunately, Monckton is British too, so there goes my theory about that…)

For my new undergrad course, we’ve started a course blog. We’ll be using it to discuss the course material, and also for assignments – the first assignment (due today!) is to write a post for the blog. Actually, what they’ve been asked to do is publish a draft blog post by today, so we can give them feedback on it, and the final, polished version is due next Friday. So, head on over, see what the students have been writing, and give them some feedback:

Now that I’m back in Toronto, my calendar’s filling up already. Here’s a bunch of upcoming seminars on campus that I hope to get to:

Phew, that’ll keep me busy. For more details on the various seminar series see the CGCS seminar schedule, the Centre for Environment seminar schedule and the EHJIC seminar schedule.

Oh I do hate seeing blog posts with titles like “Easterbrook’s Wrong (Again)“. Luckily, it’s not me they’re talking about. It’s some other dude who, as far as I know, is completely unrelated to me. And that’s a damn good thing, as this Don Easterbrook appears to be a serial liar. Apparently he’s an emeritus geology prof from some university in the US. And because he’s ready to stand up and spout scientific-sounding nonsense about “global cooling”, he gets invited to talk to journalists all the time. His misinformation then gets duly repeated on blog threads all over the internet, despite the efforts of a small group of bloggers trying to clean up the mess:

In a way, this is another instance of the kind of denial-of-service attack I talked about last year. One retired professor fakes a few graphs, and they spread so widely over the internet that many good honest science bloggers have to stop what they’re doing, research the fraud, and expose it. And still they can’t stop the nonsense from spreading (just google “Don Easterbrook” to see how widely he’s quoted, usually in glowing terms).

The depressing thing is that he’s not the only Easterbrook doing this. I appear to be outnumbered: ClimateProgress, Sept 13, 2008: Gregg Easterbrook still knows nothing about global warming — and less about clean energy.

Last summer, when I was visiting NCAR, Warren Washington gave me some papers to read on the first successful numerical weather prediction, done on ENIAC in 1950, by a team of meteorologists put together by John von Neumann. von Neumann was very keen to explore new applications for ENIAC, and saw numerical weather prediction as an obvious thing to try, especially as he’d been working with atmospheric simulations for modeling nuclear weapons explosions. There was, of course, a military angle to this – the cold war was just getting going, and weather control was posited as a potentially powerful new weapon. Certainly that was enough to get the army to fund the project.

The original computation took about 24 hours (including time for loading the punched cards) to produce a 24-hour forecast. This was remarkable, as it meant that with a faster machine, useful weather prediction was possible. There’s a very nice account of the ENIAC computations in:

…and a slightly more technical account, with details of the algorithm in:

So having read up on this, I thought it would be interesting to attempt to re-create the program in a modern programming language, as an exercise in modeling, and as way of better understanding this historic milestone. At which point Warren pointed out to me that it’s already been done, by Peter Lynch at University College Dublin:

And not only that, but Peter then went one better, and re-implemented it again on a mobile phone, as a demonstration of how far Moore’s law has brought us. And he calls it PHONIAC. The forecast took less than a second to compute on PHONIAC (remember, the original computation needed a room-sized machine, a bunch of operators, and 24 hours).
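For the curious, here’s a minimal sketch of the kind of model that 1950 forecast solved – the barotropic vorticity equation – written in modern Python with NumPy. To be clear about what’s assumed: the grid size, domain, time step, beta value and initial wave below are made-up illustrative numbers, and I’ve used a doubly periodic domain, an FFT-based Poisson solver and simple forward-Euler time stepping for brevity, whereas the original worked with analysed 500 hPa heights over North America, finite differences and a relaxation solver. Peter Lynch’s re-creation is the place to go for a faithful version.

```python
import numpy as np

# Barotropic vorticity equation: d(zeta + f)/dt = 0 following the flow,
# where zeta = laplacian(psi) and the winds come from the streamfunction psi.
# All numbers below are illustrative, not the 1950 setup.

N = 64                  # grid points in each direction (assumed)
L = 6.0e6               # domain size in metres (assumed)
dx = L / N
dt = 1800.0             # 30-minute time step (assumed)
beta = 1.6e-11          # df/dy on a mid-latitude beta-plane (assumed)

x = np.arange(N) * dx
X, Y = np.meshgrid(x, x, indexing="ij")

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0          # avoid dividing by zero for the mean mode

def ddx(a): return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(a)))
def ddy(a): return np.real(np.fft.ifft2(1j * KY * np.fft.fft2(a)))

def invert(zeta):
    """Solve laplacian(psi) = zeta for the streamfunction psi."""
    psi_hat = -np.fft.fft2(zeta) / K2
    psi_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(psi_hat))

# initial condition: a single large-scale wave (purely illustrative)
psi = 1.0e7 * np.sin(2 * np.pi * X / L) * np.cos(2 * np.pi * Y / L)
zeta = ddx(ddx(psi)) + ddy(ddy(psi))

for step in range(48):          # 48 x 30 minutes = a 24-hour "forecast"
    q = zeta + beta * Y         # absolute vorticity on the beta-plane
    # advection term J(psi, q) = psi_x * q_y - psi_y * q_x
    zeta = zeta - dt * (ddx(psi) * ddy(q) - ddy(psi) * ddx(q))
    psi = invert(zeta)          # recover the streamfunction for the next step

print("24-hour toy forecast done; max |zeta| =", float(np.abs(zeta).max()))
```

Even on a laptop this runs in a fraction of a second, which is the point Peter Lynch makes so vividly with PHONIAC.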

One thought I heard repeated several times at the AGU meeting last month (e.g. see Michael Oppenheimer’s talk) is that the scientific evidence on climate change isn’t enough on its own to justify action – we also have to identify the values that we care about. Even though this has always been clear in the IPCC’s mandate (e.g. see P2 of the TAR Synthesis Report), many people ignore this point, arguing that the scientific evidence on its own compels us to take swift and drastic action to mitigate climate change (and indeed, I’ve made exactly this argument in the past). And some people argue that the science isn’t strong enough (yet), or isn’t certain enough, or even that the science is wrong, and therefore that the action we should take is to carry on as we have been doing, and worry later (if ever) about impacts on the climate.

The missing piece in such arguments is a clarification of the value judgments we’re making in calling for different types of action. Different value systems lead people to react differently when hearing about the scientific evidence for climate change.

I’ve always considered myself to be an environmentalist. As a student, I was active in campaigning for recycling programs and alternative energy back in the 1980s. I’ve been a member of the Green Party on and off for most of my adult life. I can’t claim to have been as active since I got married, got a job, had kids, and so on. But I’ve always supported environmentalist groups as an armchair activist. Strangely, somehow or other, I remained largely ignorant about climate change until much more recently. Back in the 1980s, we were motivated by concerns about pollution, air quality, damage to ecosystems, resource mismanagement, etc. And while the key scientific ideas of climate change date back well over a century, widespread concern about it didn’t really get going until around 1990. This was the year the IPCC produced its first assessment report, and it led, two years later, to the creation of the UN Framework Convention on Climate Change. Somehow, though, despite my environmental leanings, I missed all this at the time. There were other pressing matters, including many local environmental problems, and of course a life to live and a career in computer science research that was beckoning.

But when I first realised the seriousness of climate change, in the mid-2000s, I was easily persuaded that this was a serious environmental problem, and that drastic action was needed to reduce emissions. As I read more and more of the science, and as I talked to more and more climate scientists, it became obvious to me that climate change wasn’t just another problem that needed solving – it was THE problem. The one that will dominate this century, and define our generation. The scale and urgency of the problem were so clear to me that it was unthinkable for anyone to argue otherwise.

What I hadn’t noticed was how strongly my pre-existing values prepared the way for this line of reasoning. And not just my environmentalist leanings, but my entire value system. For example, I’m also somewhat of an egalitarian, certainly left-leaning in conventional political terms, and rather libertarian when it comes to the personal and social spheres. To pin it down even more, I’m right down in the left-hand corner on the political compass, beyond even Gandhi and Nelson Mandela.

For an egalitarian, it’s quite obvious that it is not right for us in the rich countries of the world to build our economic prosperity by plundering the world’s natural resources, and dumping our pollution on everyone else. Equally obviously, it’s wrong to leave the world in a seriously damaged state for future generations. Climate science shows that future generations will most likely have to deal with increased drought, serious shortages of food and water, sea level rise, ocean acidification, etc. But it is our values (of fairness, responsibility, justice, etc) that tell us this is wrong, and that we should take action to prevent it. The stronger we adhere to some of these values, the more adamant we will be that drastic action is needed.

Now consider someone who doesn’t hold with egalitarianism. Someone who believes that an unequal distribution of wealth is not only right, but perhaps even in the best interests of everyone. For example, Adam Smith argued that allowing some people to take what were once shared natural resources (land, mineral rights, etc) and call them personal property actually benefits everyone in the long run. The argument is that some people are able to exploit such resources in a way that leads to a general improvement in the quality of life for everyone. Of course, the people doing the exploiting might get seriously rich. But this is okay (according to this line of reasoning) because the benefits trickle down, so that even the average person gets to live in better housing, have better food, and so on. If you follow this reasoning, then inequality is not only justified, but actually morally right, because avoiding such inequalities would also deny many people access to economic development.

With these values, one could argue that curbing greenhouse gas emissions is wrong. Even though climate change will have serious consequences for people in many regions of the earth, that could be okay, if some people can still get rich by continuing to burn fossil fuels, and then use that wealth to alleviate the suffering of more people, and perhaps to invent better and better technology to cure climate change (rather than prevent it). This is, of course, the core of Lomborg’s argument.

And of course, for many libertarians, any intervention by governments (or worse, international agencies) must be resisted. So the idea of governments regulating the use of fossil fuels is itself wrong. This then might lead to the conclusion that whatever ought to be done about greenhouse gases, it must not be via governmental control. Unfortunately, it also often leads to some serious backwards reasoning – if the answer to climate change is more governmental regulation, then the science itself must be wrong. Or if not wrong, then perhaps just not strong enough. Strong libertarians might demand no governmental regulation of greenhouse gases ever. More moderate libertarians might just demand higher and higher standards of evidence from the science before they are convinced. Either way, they do not agree that the science gives us a compelling reason to act now.

The more alarming projections of climate change, along the business-as-usual path, lead to a world that can only sustain a small fraction of the current human population. Billions of people will die. But no matter how certain the science is on this point, it still requires a value judgment (e.g on the value of human lives), to say that the world (and the developed countries in particular) need to drastically change course to avoid this.

So people with different value systems can listen to the same scientific evidence, and draw dramatically different conclusions about what should be done about it. Some of these value systems will even cause a bias towards ignoring or disbelieving the scientific evidence. All of which puts the question of what to do about climate change out of the hands of science, and squarely into the hands of the political process, where (at least in the ideal case) actions are chosen based on shared values.

Unfortunately, the science now indicates that the scale of the impacts of climate change is likely to be so large that none of our old value systems can be trusted to guide us:

  • Environmentalism might suggest principles such as “the polluter pays”, and that vital ecosystems must be protected. But we are all polluters, and so were our parents, and their parents before them. Climate change implicates everyone across all regions of the world, and across history for at least the last century. Figuring out who is responsible, and who should fix the problem, is way beyond normal environmental reasoning. And the damage to all ecosystems will be so significant that the idea of protecting some of them is futile.
  • Traditional ideas about wealth creation in modern economics also break down, because they are based primarily on the assumption of access to unlimited energy. The capitalist philosophy that everyone will benefit in the long run from economic development is no longer sound, as the misery and suffering caused by climate change will be so widespread and so serious that it will soon vastly outweigh all of the benefits of our wealth. When the entire world’s food production system falters, and whole societies collapse, no amount of wealth will protect people.

So while our actions on climate change must certainly be driven by an understanding of our common values, it’s also the case that we will need new value systems to understand decisions we now face, as individuals and as societies. The worrying part is that for many people, their existing value systems work to prevent them acknowledging the scale of the problem in the first place.

I’m currently reading James Garvey’s book “The Ethics of Climate Change“. He tackles this failure of existing value systems, and proposes a new ethical framework for motivating the choices we have to make about climate change. It’s fascinating stuff, and I’ll write more about it when I’m further into the book…

Occasionally I come across blog posts that I wish I’d written myself, because they capture so well some of the ideas I’ve been thinking about. Such is the case with Ricky Rood’s series on open climate models, over at Weather Underground (which itself is an excellent resource – particularly Jeff Masters’ Wunderblog):

  1. Greening of the Desert: Open Climate Models
  2. Stickiness and Climate Models
  3. Open Source Communities, What are the problems?

I’ve nothing really to add, other than to note that the points Ricky makes in the third post, on the need for governance, are crucial. Wikipedia is a huge success, but not because the technology is right (quite frankly, wikis rather suck from a usability point of view), nor because people are inherently good at massive collaborative projects. Wikipedia is a success because they got the social processes right that govern editing and quality control. Open source communities do the same. They’re not really as open as most people think – an inner core of people impose tight control over the vision for the project and the quality control of the code. And they sometimes struggle to keep the clueless newbies out, to stop them messing things up.

I spent some time this week explaining to my undergraduate class the ideas of thermal equilibrium (loosely speaking, the point at which the planet’s incoming solar radiation and outgoing blackbody radiation are in balance) and climate sensitivity (loosely speaking, how much warmer the earth will get per doubling of CO2, until it reaches a new equilibrium). I think some of my students might prefer me to skip the basic physics, and get on quicker to the tough questions of what solutions there are to climate change, whether geo-engineering will work, and the likely impacts around the world.
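For anyone who wants the back-of-envelope version of that equilibrium argument, here’s a tiny Python calculation. The numbers are standard approximate textbook values, not anything specific to my course notes:

```python
# Toy radiative balance: incoming solar energy = outgoing blackbody radiation.
S = 1361.0          # solar constant, W/m^2 (approximate textbook value)
albedo = 0.3        # fraction of sunlight reflected straight back to space
sigma = 5.67e-8     # Stefan-Boltzmann constant, W/m^2/K^4

absorbed = S * (1 - albedo) / 4         # incoming sunlight averaged over the whole sphere
T_eff = (absorbed / sigma) ** 0.25      # temperature at which outgoing radiation balances it
print(round(T_eff, 1), "K")             # about 255 K
```

The actual surface is roughly 33°C warmer than that effective temperature, and that gap is the greenhouse effect.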

So it’s nice to be reminded that a good grasp of the basic science is important. A study produced by the Argentinean group Federacion Ecologia Universal, and published on the American Association for the Advancement of Science website, looked at the likely impact of climate change on global food supplies by the year 2020, concluding that global food prices will rise by up to 20%, and some countries, such as India, will see crop yields drop by as much as 30%. The study claims to have used IPCC temperature projections of up to a 2.4°C rise in global average temperatures by 2020 on a business-as-usual scenario.

The trouble is the IPCC doesn’t have temperature projections anywhere near this high for 2020. As Scott Mandia explains, it looks like the author of the report made a critical (but understandable) mistake, confusing the two ways of understanding “climate sensitivity”:

  • Equilibrium Climate Sensitivity (ECS), which means the overall eventual global temperature rise that would result if we double the concentration of CO2 in the atmosphere.
  • Transient Climate Response (TCR), which means the actual temperature rise the planet will have experienced at the time this doubling happens.

These are different quantities because of lags in the system. It takes many years (perhaps decades) for the earth to reach a new equilibrium whenever we increase the concentrations of greenhouse gases, because most of the extra energy is initially absorbed by the oceans, and it takes a long time for the oceans and atmosphere to settle into a new balance. By global temperature, scientists normally mean the average air temperature measured just above the surface (which is probably where temperature matters most to humans).

BTW, calculating the temperature rise “per doubling of CO2” makes sense because the greenhouse effect is roughly logarithmic – each doubling produces about the same temperature rise. For example, the pre-industrial concentration of CO2 was about 280ppm (parts per million), so a doubling would take us to 560ppm (we’re currently at 390ppm).
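Just to make that concrete, here’s the back-of-envelope arithmetic. The sensitivity value below is an assumed illustrative mid-range number, not a figure quoted anywhere in this post:

```python
import math

ECS = 3.0      # assumed equilibrium climate sensitivity, °C per doubling (illustrative)
C0 = 280.0     # pre-industrial CO2 concentration, ppm
C = 390.0      # roughly today's concentration, ppm

# logarithmic relationship: eventual warming ~ ECS * log2(C / C0)
warming = ECS * math.log2(C / C0)
print(round(warming, 2), "°C of eventual warming from the CO2 added so far")   # ~1.4 °C
```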

To estimate how quickly the earth will warm, and where the heat might go, we need good models of how the earth systems (ocean, atmosphere, ice sheets, land surfaces) move heat around. In earth system models, the two temperature responses are estimated from two different types of experiment:

  • equilibrium climate sensitivity is calculated by letting CO2 concentrations rise steadily over a number of years, until they reach double the pre-industrial levels. They are then held steady after this point, and the run continues until the global temperature stops changing.
  • transient climate response is calculated by increasing CO2 concentrations by 1% per year until they reach double the pre-industrial levels, and taking the average temperature at that point.

Both experiments are somewhat unrealistic, and should be thought of as thought experiments rather than predictions. For example, in the equilibrium experiment, it’s unlikely that CO2 concentrations would stop rising and then remain constant from that point on. In the transient experiment, the annual rise of 1% is a little unrealistic – CO2 concentrations rose by less than 1% per year over the last decade. On the other hand, knowing the IPCC figures for equilibrium sensitivity tells you very little about the eventual temperature change if (when) we do reach 560ppm, because when we reach that level, it’s unlikely we’ll be able to prevent CO2 concentrations going even higher still.
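To see how the two experiments differ, here’s a toy zero-dimensional energy balance model in Python. It’s a sketch, not anything a real modelling centre would use: the assumed sensitivity and effective ocean heat capacity are illustrative numbers chosen just to show the lag between the transient and equilibrium responses.

```python
import math

# Toy energy balance model:  C_heat * dT/dt = F(t) - lam * T
F2x = 5.35 * math.log(2)   # ~3.7 W/m^2 of forcing per doubling of CO2
ECS = 3.0                  # assumed equilibrium sensitivity, °C per doubling
lam = F2x / ECS            # feedback parameter, W/m^2 per °C
C_heat = 40.0              # assumed effective ocean heat capacity, W·yr/m^2 per °C

dt = 0.1                   # time step, years
T, years = 0.0, 0.0
conc, conc0 = 280.0, 280.0

# transient experiment: CO2 rises 1% per year until it doubles
while conc < 2 * conc0:
    F = 5.35 * math.log(conc / conc0)        # logarithmic forcing
    T += dt * (F - lam * T) / C_heat
    conc *= 1.01 ** dt
    years += dt
print("TCR ~", round(T, 1), "°C at year", round(years))     # ~1.8 °C at year ~70

# equilibrium experiment: hold concentrations fixed and let the model settle
for _ in range(20000):                       # another 2000 years
    F = 5.35 * math.log(conc / conc0)
    T += dt * (F - lam * T) / C_heat
print("ECS ~", round(T, 1), "°C at the new equilibrium")    # approaches the assumed 3 °C
```

Run it and the transient number comes out well below the equilibrium one, purely because of the heat going into the ocean – which is exactly the distinction the report’s author appears to have missed.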

Understanding all this matters for many reasons. If people confuse the two types of sensitivity, they’ll misunderstand what temperature changes are likely to happen when. More importantly, failure to understand these ideas means a failure to understand the lags in the system:

  • there’s a lag of decades between increasing greenhouse gas concentrations and the eventual temperature response. In other words, we’re always owed more warming than we’ve had. Even if we stopped using fossil fuels immediately, temperatures would still rise for a while.
  • there’s another lag, also decades long, between peak emissions and peak concentrations. If we get greenhouse gas emissions under control and then start to reduce them, atmospheric concentrations will continue to rise for as long as the emissions exceed the rate of natural removal of CO2 from the atmosphere.
  • there’s another lag (and evidence shows it’s also decades long) between humans realising climate change is a serious problem, and any coordinated attempts to do something about it.
  • and yet another lag (probably also decades long, hopefully shorter) between the time we implement any serious international climate policies and the point at which we reach peak emissions, because it will take a long time to re-engineer the world’s energy infrastructure to run on non-fossil fuel energy.

Add up these lags, and it becomes apparent that climate change is a problem that will stretch most people’s imaginations. We’re not used to having to plan decades ahead, and we’re not used to the idea that any solution will take decades before it starts to make a difference.

And of course, if people who lie about climate change for a living merely say “ha, ha, a scientist made a mistake so global warming must be a myth!” we’ll never get anywhere. Indeed, we may even have already caused the impacts on food supply described in the withdrawn report. It’s just that it’s likely to take until after 2020 before we see them play out.

This post contains lots of questions and few answers. Feel free to use the comment thread to help me answer them!

Just about every scientist I know would agree that being more open is a good thing. But in practice, being fully open is fraught with difficulty, and most scientists fall a long way short of the ideal. We’ve discussed some of the challenges for openness in computational sciences before: the problem of hostile people deliberately misinterpreting or misusing anything you release; the problem of ontological drift, whereby even honest collaborators won’t necessarily interpret what you release in the way you intended; and, for software, all the extra effort it takes to make code ready for release, along with the fact that there is no reward system in place for those who put in such effort.

Community building is a crucial success factor for open source software (and presumably, by extension, for open science). The vast majority of open source projects never build a community, so, while we often think of the major successes of open source (after all, that’s how the internet was built), these successes are vastly outnumbered by the detritus of open source projects that never took off.

Meanwhile, any lack of openness (whether real or perceived) is a stick with which to beat climate scientists, and those wielding this stick remain clueless about the technical and institutional challenges of achieving openness.

Where am I going with this? Well, I mentioned the Climate Code Foundation a while back, and I’m delighted to be serving as a member of the advisory board. We had the first meeting of the advisory board in the fall (in the open), and talked at length about organisational and funding issues, and how to get the foundation off the ground. But we didn’t get much time to brainstorm ideas for new modes of operation – for what else the foundation can do.

The foundation does have a long list of existing initiatives, and a couple of major successes, most notably, a re-implementation of GISTEMP as open source Python, which helped to validate the original GISTEMP work, and provide an open platform for new research. Moving forward, things to be done include:

  • outreach, lobbying, etc. to spread the message about the benefits of open source climate code;
  • more open source re-implementations of existing code (building on the success of ccc-GISTEMP);
  • directories / repositories of open source codes;
  • advice – e.g. white papers offering guidance to scientists on how to release their code, benefits, risks, licensing models, pitfalls to avoid, tools and resources;
  • training – e.g. workshops, tutorials etc at scientific conferences;
  • support – e.g. code sprints, code reviews, process critiques, etc.

All of which are good ideas. But I feel there’s a bit of a chicken-and-egg problem here. Once the foundation is well-known and respected, people will want all of the above, and will seek out the foundation for these things. But the climate science community is relatively conservative, and doesn’t talk much about software, coding practices, sharing code, etc in any systematic way.

To convince people, we need some high profile demonstration projects. Each such project should showcase a particular type of climate software, or a particular route to making it open source, and offer lessons learnt, especially on how to overcome some of the challenges I described above. I think such demonstration projects are likely to be relatively easy (!?) to find among the smaller data analysis tools (ccc-GISTEMP is only a few thousand lines of code).

But I can’t help but feel the biggest impact is likely to come with the GCMs. Here, it’s not clear yet what CCF can offer. Some of the GCMs are already open source, in the sense that the code is available free on the web, at least to those willing to sign a basic license agreement. But simply being available isn’t the same as being a fully fledged open source project. Contributions to the code are tightly controlled by the modeling centres, because they have to be – the models are so complex, and run in so many different configurations, that deep expertise is needed to successfully contribute to the code and test the results. So although some centres have developed broad communities of users of their models, there is very little in the way of broader communities of code contributors. And one of the key benefits of open source code is definitely missing: the code is not designed for understandability.

So where do we start? What can an organisation like the Climate Code Foundation offer to the GCM community? Are there pieces of the code that are ripe for re-implementation as clear code? Even better, are there pieces that an open source re-implementation could be useful to many different modeling centres (rather than just one)? And would such re-implementations have to come from within the existing GCM community (as is the case with all the code at the moment), or could outsiders accomplish this? Is re-implementation even the right approach for tackling the GCMs?

I should mention that there are already several examples of shared, open source projects in the GCM community, typically concerned with infrastructure code: couplers (e.g. OASIS) and frameworks (e.g. the ESMF). Such projects arose when people from different modelling labs got together and realized they could benefit from a joint software development project. Is this the right approach for opening up more of the GCMs? And if so, how can we replicate these kinds of projects more widely? And, again, how can the Climate Code Foundation help?

I mentioned before that I’m teaching a new course this term, a first-year undergraduate seminar course on climate computing. Course starts next week, so I’m busy putting together some material. It’s intended to be pretty open ended, as it’s a small group seminar, and we can jump into topics that the students are interested in. So I put together a core set of topics I want to cover, and then a long list of other possible topics. Here’s what I have so far:

Core topics:

Part 1: Background & History

Week 1: Climate Science BC (Before Computers), in which we cover Fourier, Tyndall, Arrhenius, Callendar, and the discovery of global warming. We’ll introduce the key concepts for understanding how the physical climate system works.

Week 2: Taming Chaos, in which we talk about Bjerknes and Lorenz, look at the basic equations for modeling the atmosphere, and get a gentle introduction to chaos theory.

Week 3: The Giant Brain in which we talk about von Neumann, Charney, ENIAC and the first general circulation models.

Part 2: Basics of Climate Modeling

Week 4: Inside the simulation (also known as “Computational Fluid Dynamics on a Rotating Sphere for beginners”), where we look at the major elements of a climate model, including grids, dynamics, radiation, parameterizations, etc.

Week 5: What happens at the Boundaries? in which we look at the physical boundaries (land and ocean, the boundary layer), spatial boundaries (subgrid processes), temporal boundaries (start states, long runs, etc), and climate state boundaries (forcings, emissions scenarios, etc), and talk about the difference between Initial Value and Boundary Value problems.

Week 6: Towards Earth System Models in which we look at the growing complexity of modern GCMs, and the trade-offs between more resolution, more earth system processes, and more complexity.

Part 3: Choosing and Using Models

Week 8: On the Catwalk in which we look at the vast range of different types of models that are available, and what they are used for.

Week 9: Experimentation in which we look at what’s involved in running a model, and the kinds of experiments you might do with one. If we’re lucky, we’ll get to try running some experiments for real.

Week 10: How good are the models? in which we look at how to test and validate models, what it means to “do science” with a model, what model inter-comparison projects tell us, and some of the weaknesses of current earth system models.

Part 4: What we can know, and what we can do

Week 11: Knowledge and Uncertainty, in which we talk about what we know and what we don’t know about climate change, sources of uncertainty, and whether we can predict the future. We’ll also explore how climate models interact with other types of knowledge about global climate change.

Week 12: Decisions, Decisions, Decisions in which we look at what policymakers need, and what they get. We’ll talk about the IPCC process, and maybe a bit about some of the policy options. We’ll talk about the need for better predictions of climate extremes, and regional impacts. And we’ll look at the difference between GCMs and IAMs.

Week 13: Enough talk, time for action! in which we face up to the question: given that we now have to learn how to manage the earth’s climate systems, what should we be doing about climate change, and what other tools do we need in order to be successful?

Additional Topics:

We can include any of these based on interest and enthusiasm (but probably not all of them!). Some of these stray from the “computing” theme of the course, so we might need to agree on some criteria for which ones to include. In no particular order:

  • Carbon calculators (and other software for personal/community decision support);
  • Data Collection for weather and climate: how the global observing system works;
  • Supercomputers and the path to exascale computing;
  • Media portrayals of climate models and their projections;
  • The role of Free/Open source software in climate science;
  • The carbon footprint of computing;
  • Other sources of evidence about climate change: paleoclimate, observational data, measurable impacts;
  • Controversy and disinformation – how do you know who to trust? Is there really a debate, and if so, what about?
  • How bad will it get?
  • Climate Change and other global issues: over-population, peak oil, conservation of ecosystems, international conflict, food security, renewable energy, etc.;
  • Geo-engineering;
  • Climate Ethics: inter-nation and inter-generational issues.

Hope everyone had a relaxing holiday and a great new year. I still have a whole pile of notes from the AGU meeting to polish up and post, but unfortunately they’re on a disk that crashed during my travels back from the meeting, so I’m keeping my fingers crossed that I can recover them.

In the meantime, here are three interesting upcoming workshops this year: