Last year I wrote a Climate Rocks post, with lots of songs about climate change. It’s time for part 2, with a whole load more songs!

1) G. T., “How Dare You”

Last year’s post ended with The 1975, and a song that put a Greta Thunberg speech to music, as a sort of spoken poem. But The 1975 are not the only ones to set Greta Thunberg’s words to music. Here’s a wonderful(!) one by John Meredith (aka G.T.) – the drummer for the band Suaka. He morphs Greta’s “How Dare You” speech into a Swedish Death Metal song…

Yeah, I know, but hear me out. It works. It’s worth a listen! For more on this song, see this article in Rolling Stone.

2) Delusion Squared, “In My Time of Dying”

Greta Thunberg isn’t the only youth climate activist to have her words set to music. Back in 1992, Severn Suzuki gave a speech to the Rio Earth Summit when she was 12 years old. The song “In My Time of Dying” by Delusion Squared (who also featured in the last Climate Rocks post) sets this speech to music.

It’s a song about denial: “They were truly frightening / The things we were denying / In my time of dying”

Severn’s speech made headlines around the world, particularly for the phrase “If you don’t know how to fix it, please stop breaking it.” It was hailed as the speech that silenced the world. 

“Parents should be able to comfort their children by saying, “Everything’s going to be all right; it’s not the end of the world, and we’re doing the best we can.” But I don’t think you can say that to us anymore. Are we even on your list of priorities?”

3) Childish Gambino, “Feels Like Summer”

Next up is a subtle one. It’s Childish Gambino, with an innocuous-sounding song called “Feels Like Summer”. At first, it sounds like one of those upbeat Earth, Wind & Fire songs celebrating happy things. And the video really plays this up. But listen carefully to the lyrics, and watch for small clues in the video that something is wrong…

The lyrics to “Feels Like Summer” start out harmless enough:
“You can feel it in the streets /
On a day like this, the heat /
It feel like summer”

And then it shifts gears without skipping a beat:
“Seven billion souls that move around the sun /
Rolling faster, faster, not a chance to slow down”

Till we get to:
“Every day gets hotter than the one before /
Running out of water, it’s about to go down /
Go down”
Still in the same upbeat tone. 

Brutal. The video really picks up this juxtaposition. It depicts Donald Glover walking through his neighbourhood, but every character is a celebrity: they’re all famous rappers or other Black celebrities. The video distracts you from the lyrics, just as we’re all so focussed on celebrity gossip that we ignore the little signs of environmental collapse all around us.

BTW if you want a full breakdown of who all the characters in the video are, Wikipedia has you covered: https://en.wikipedia.org/wiki/Feels_Like_Summer_(Childish_Gambino_song)

Most of the cultural references went straight over my head, but I did laugh at the one of Kanye West in a MAGA hat crying his eyes out while being comforted by Michelle Obama…

4) Anohni, “4 Degrees”

Next up is Anohni, with “4 Degrees”. I wasn’t sure about this one when I first heard it: 
“I wanna burn the sky, I wanna burn the breeze / I wanna see the animals die in the trees /
Oh, let’s go, let’s go, it’s only four degrees”.

The song was written just before the Paris Agreement in 2015, when climate projections were suggesting the world would warm by 4 degrees by the end of the century. The song seems to be saying: bring it on…

Really, it’s a piece of reverse psychology. In interviews about the song, Anohni describes her deep concern about climate change, and her struggle to come to terms with her own carbon footprint. The song is about being accountable: it expresses the implications of how we behave, rather than our intent.

It’s a pretty powerful song.

5) Paul McCartney, “Despite Repeated Warnings”

Next up is a song from Paul McCartney, written in 2018, called “Despite Repeated Warnings”, inspired by a newspaper article about climate change containing that phrase.

In interviews, McCartney confirmed he was thinking of Trump: 
“The captain’s crazy /
But he doesn’t let them know it /
He’ll take us with him /
If we don’t do something /
Soon to slow it”

6) Bad Religion, “Kyoto Now!”

It’s time for some punk: Bad Religion’s song “Kyoto Now!”, released in 2002. 

The Kyoto protocol, of course, was the first ever international agreement to reduce emissions of greenhouse gases. Signed in Dec ’97, it didn’t enter into force until enough industrialized countries had ratified it to cover 55% of their collective 1990 CO2 emissions. That took until 2005.

Maybe an unusual topic for a punk song, but it captures the anger people felt at the time. A lot of that anger was directed at the US. Although the Clinton administration signed the Kyoto agreement in 1998, Clinton never sent it to the US Senate for ratification, because the Senate had already passed a resolution (95-0) saying the US should not sign any agreement unless developing countries were also required to reduce emissions. After his election, George W. Bush made it clear he would never agree to it. So the US never ratified it.

Not to be outdone, Canada – under Conservative prime minister Stephen Harper – eventually withdrew from the Kyoto protocol in 2011. By that point, the decision was largely irrelevant: Canada’s emissions had risen by 17% since 1990, rather than falling. Being a huge petro-state, Canada was never going to act on it anyway.

All these political maneuverings are captured well in the song:
“The media parading /
Disjointed politics /
Founded on petrochemical plunder /
And we’re its hostages”

And because this is punk, the song even acknowledges its own futility:
“You might not think it matters now /
But what if you were wrong? /
You might not think there’s any wisdom /
In a fucked up punk rock song”

7) Gojira, “Global Warming”

Let’s continue to explore different music genres. This one might be termed progressive death metal (it certainly has the growling in it). The song is “Global Warming”, by the French band Gojira, written for their environmentally themed concept album “From Mars to Sirius”, in 2005. Fast and furious, once again, a song that channels anger at the state of the planet. Worth a listen, even if you never listen to Death Metal.

The title is obvious, the lyrics perhaps less so. The singer is channelling the view of the planet itself:
“And when I see the smoke around /
I feel like I’m not from humankind down there /
I feel like glaciers are my eyes /
And mountains are my head, my heart is ocean”

But rather than descending into misanthropy, the song bends towards hope:

“I had this dream, our planet surviving”, and ending on the repeated line:
“We will see our children growing”.

8) Steel Pulse, “Global Warning”

Continuing our tour of music genres, this one is a reggae song from Steel Pulse, called “Global Warning”. It’s from their 2004 album African Holocaust, which tackles a range of themes around racial oppression, African Nationalism and Rastafarianism. This song focusses on the link between colonialism and environmental destruction, and the need to stand up for change. 

Interestingly, the song isn’t specifically about climate change (note the spelling of the title); instead it name-checks wildlife extinction, acid rain, pollution in the water, and deforestation. It elegantly connects the clearing of forests with the need for political resistance:
“Stand up and be counted / Don’t ever let them chop us down, hey.” 

And it couches the whole message in Rastafarianism:
“Destroying earth was not Jah’s plan / 
It’s the work of man”

Of relevance: this article on the symbolism of “Babylon” in reggae. Babylon is taken as a symbol both of the decadent culture of colonial oppressors and a target for pan-African consciousness, as in the term “Beating down Babylon”. The song also offers a biting critique of western democracy as a sham (a “de mockroicy”), in which politicians are scam artists, pretending to represent the people while really just enriching themselves.

9) Pitbull, “Global Warming”

Still continuing our genre tour, this one is a rap song called “Global Warming” by Pitbull, from his 2012 album, also called Global Warming. It’s a very short song (1:24), and acts as an introduction to the album. So even if you don’t normally dig rap, take a listen…

The song starts with a very clear message:
“Category 6’s are stormin’ /
Take this as a, take this as a warning /
Welcome to, welcome to global warming”

And then he takes aim at the things rappers usually sing about, with a (perhaps too subtle) critique of the obsession with glamour and the lifestyles of the rich, with their private jets:

“It’s all about them billionaires /
I’m so fucking serious /
Look, I love them zeros, they looking like Cheerios”

Interestingly, Pitbull’s songs rarely contain environmental messages; he just names his albums that way:
2012: “Global Warming”
2014: “Globalization”
2017: “Climate Change”

In interviews, he says: “If I made a record about [climate change], nobody would listen to it. I make records for people to have a good time, […] but with the titles, they start to connect the dots like a treasure hunt that I put together for them”

10) Amarok, “Hero”

This one is a lovely melodic rock song called “Hero”, from the Polish band Amarok. It starts out with what sounds like a fragment from a Greta Thunberg speech: 
“Now the eyes of all generations are upon you. 
The planet is dying, destroyed, sick of consumerism. 
Most people don’t even notice it”.

It’s not actually Greta, but channels her style almost perfectly. The voice is Marta Wojtas, who does backing vocals and writes all the lyrics for the band.

The song is about our need for heroes, and the internal struggles faced by those in the climate movement that we treat as heroes.

It’s perhaps my favourite song in the entire thread so far. Take a listen – it’s a gorgeous song…

11) Ela Minus, “Megapunk”

To mark the passing of another useless COP meeting, we need a protest song: Ela Minus with Megapunk, from her 2020 album “Acts of Rebellion”.

On the album version, there are no voices other than Ela’s own (“You won’t make us stop”), but when she plays it live, she mixes in other voice samples, so I’m sharing this version, which includes samples from a Greta Thunberg speech. Amazing how many climate songs Greta has inspired!

Ela’s music is usually described as electro-pop, but I much prefer the term DIY techno-punk I saw in one review. Or, as she describes it: “bright music for dark times”.

Megapunk is the perfect upbeat protest song: 
“We can’t seem to find / 
A reason to stay quiet / 
We’re afraid we’ll run out of time”

and
“You don’t want to understand / 
You’re choosing to lead us apart /
But against all odds /
You still won’t make us stop”

12) Macy Gray, “All I Want For Christmas”

Time for a festive tune from Macy Gray, called “All I Want for Christmas”. It’s packed full of sensible Christmas wishes:
“All I want for Christmas is a whole bunch of stuff /
But anything that you can buy me won’t be enough /
‘Cause everything I’m hoping for is intangible /
Like free health care and gun control”

Makes me wonder why there aren’t more seasonal songs like this. Too earnest?

Macy’s verse on climate change is pretty straightforward:

“All I want for Christmas is to have a chance /
So please take care of the environment /
Take Mr. Gore more seriously /
And do what you can to stop global warming”

The Mr Gore reference dates it a bit, but then it praises “Barack”, and:

“I hope that your successor
Does the things he or she should /
That Mr. Trump, he’s an entertaining guy /
But let’s face it, really is he qualified?”

Understated, perhaps?

13) Midnight Oil, “Rising Seas”

Last #ClimateRocks for this post. After a year of record temperatures, and useless climate policy negotiations, what could be more fitting than Midnight Oil’s “Rising Seas”, from their 2022 album Resist (featuring the warming stripes on the cover).

“Every child put down your toys/
And come inside to sleep/
We have to look you in the eye and say we sold you cheap/
Let’s confess we did not act
With serious urgency/
So open up the floodgates
To the rising seas”

I included a Midnight Oil song in my previous Climate Rocks post: Beds are Burning, which is often assumed to be about climate change, but is really about Indigenous Land rights. Given their legacy of protest songs, it’s not surprising the band have turned their full attention to climate change. 
Nearly all the songs on the new album deal with the climate crisis in some way, so give the whole thing a listen. 

It’s the first week of term here in Toronto, and I’m busy launching a new course. It’s a small-seminar, first-year undergraduate course, open to any student in the Faculty of Arts and Science, called Confronting the Climate Crisis. I’m running it as a pilot project this year, with the aim of going big next year, turning it into a much larger lecture course, open to hundreds (and maybe thousands) of students. I’ll have to think about how to make it scale.

The idea for this course arose in response to an initiative at the University of Barcelona to create a mandatory course on the climate crisis for all undergraduate students, to meet one of the demands of a large-scale student protest in the fall of 2022. The University of Barcelona expects to launch such a course later this year. Increasingly, our students are demanding that Universities respond to declarations of a climate emergency (e.g. by the Federal Government and the City of Toronto) by re-thinking how our programs are preparing them with the resilience and skills needed in a world that will be radically re-shaped by climate change in the coming decades.

The design of this course responds directly to the challenge posed at U Barcelona: If every student were required to take (at least!) one course on the climate crisis, what would such a course look like? Climate change is a complex, trans-disciplinary problem, and needs to be viewed through multiple lenses to create an integrated understanding of how we arrived at this moment in history, and what paths we now face as a society to stabilize the climate system and create a just transition to a sustainable society. The course needs to give the students a clear-eyed understanding of how serious and urgent the crisis is, but also needs to give them the tools to deal with that understanding, psychologically, politically, and sociologically. So it needs to balance the big-picture view with a very personal response: what do you do to avoid falling into despair, once you understand?

It’s not clear to me that in any reasonable amount of time we could get the University of Toronto to agree to make such a course mandatory for every student, given the complex and devolved governance structure of the second largest university in North America. But we can make a lot of progress by starting bottom up: by launching the course now, we intend to provoke a much wider response across the University: How are we preparing all of our students for a climate changed world? What do different departments and programs need to do in response? If other departments want to add this course to their undergrad programs, I’ll be delighted. Or if they want to create versions of the course that are more specifically tailored to their own students’ needs, I’ll be equally delighted.

Alright, enough preamble. Here’s the syllabus entry:

This course is a comprehensive, interdisciplinary introduction to the climate crisis, suitable for any undergraduate student at U of T. The course examines the climate crisis from scientific, social, economic, political, and cultural perspectives, from the physical science basis through to the choices we now face to stabilize the climate system. The course uses a mixture of lectures, hands-on activities, group projects, online discussion, and guest speakers to give students a deeper understanding of climate change as a complex, interconnected set of problems, while equipping them with a framework to evaluate the choices we face as a society, and to cultivate a culture of hope in the face of a challenging future.

And here’s the outline I’ve developed for a 12-week course:

  1. How long have we known?
    • Course intro and brief overview of the history of climate science
  2. What causes climate change?
    • Greenhouse gases – where they come from and what they do
    • Sources of data about climate change
    • How scientists use models to assess climate sensitivity
  3. How bad is it?
    • Future projections of climate change
    • Understanding targets: 350ppm, 1.5°C & 2°C; Net Zero
    • Irreversibility, overshoot, long-term implications, and emergency measures (geoengineering)
  4. Who does it affect?
    • Key impacts: extreme weather, sea level rise, ocean acidification, ecosystem collapse, etc
    • Regional disparities in climate impacts and adaptation, and the rise of climate migrants
    • Inequities in responsibility and impacts – the role of climate justice.
  5. Do we have the technology to fix it?
    • Decarbonization pathways
    • Sectoral analysis: energy, buildings, transport, food systems, waste, etc
    • Interaction effects among climate solutions
  6. Can we agree to fix it?
    • International policymaking: UNFCCC, IPCC, Kyoto, Paris, etc.
    • Policy tools: carbon taxes, carbon trading, subsidies, direct investment, etc.
    • Barriers to political action
  7. What will it cost to fix it?
    • Intro to climate economics
    • Costs and benefits of adaptation and mitigation
    • Ecomodernism vs. Degrowth
  8. What’s stopping us?
    • Climate communication and climate disinformation
    • The role of political lobbying
    • How we talk about climate change and the role of framing
  9. What are we afraid of?
    • The psychology of climate change
    • Affective responses to climate change: ecoanxiety, doomerism, denial, etc.
    • Maintaining mental health in the climate crisis
  10. How can we make our voices heard?
    • Protest movements and climate activism
    • Theories of Change
    • Modes of activism and the ethics of disruptive protest
  11. What gives us hope?
    • Constructive hope as a response to eco-anxiety
    • The role of worldviews, culture, and language
    • Reconnecting with nature
  12. Where do we go from here?
    • Importance of systems thinking and multisolving.
    • The role of storytelling in creating a narrative of hope
    • Making your studies count: the role of universities in a climate emergency.

I’m teaching a new course this term, called Confronting the Climate Crisis. As it’s the first time I’ve taught since the emergence of the latest wave of AI chatbots, I needed a course policy on the use of AI tools. Here’s what I came up with:

The assignments on this course have been carefully designed to give you meaningful experiences that build your knowledge and skills, and I hope you will engage with them in that spirit. If you decide to use any AI tools, you *must* include a note explaining what tools you used and how you used them, and include a reflection on how they have affected your learning process. Without such a note, use of AI tools will be treated as an academic offence, with the same penalties as if you had asked someone else (rather than a bot) to do the work for you.

Rationale for this policy: In the last couple of years, so-called Artificial Intelligence (AI) tools have become commonplace, particularly tools that use generative AI to create text and images. The underlying technology uses complex statistical models of typical sequences of words (and elements of images), which can instantly create very plausible responses to a variety of prompts. However, these tools have no understanding of the meanings that we humans attach to words and images, and no experience of the world in which those meanings reside. The result is that they are expert at mimicking how humans express themselves, but they are often factually wrong, and their outputs reflect the biases (racial, gender, socio-economic, geographic) that are inherent in the data on which the models were trained. If you choose to use AI tools to help you create your assignments for this course, you will still be responsible for any inaccuracies and biases in the generated content.

More importantly, these AI tools raise important questions about the nature of learning in higher education. Unfortunately, we have built a higher education system that places far too much emphasis on deadlines and grades, rather than on learning and reflection. In short, we have built a system that encourages students to cheat. The AI industry promotes its products as helpful tools, perhaps no different from using a calculator in math, or a word processor when writing. And there are senses in which this is true – for example if you suffer from writer’s block, an AI tool can quickly generate an outline or a first draft to get you started. But the crucial factor in deciding when and how to use such tools is a question of what, exactly, you are offloading onto the machine. If a tool helps you overcome some of the tedious, low-level steps so that you can move on faster to the important learning experiences, that’s great! If on the other hand, the tool does all the work for you, so you never have to think or reflect on the course material, you will gain very little from this course other than (perhaps) a good grade. In that sense, most of the ways you might use an AI tool in your coursework are no different from other forms of ‘cheating’: they provide a shortcut to a good grade, by skipping the learning process you would experience if you did the work yourself.


This course policy is licensed under a Creative Commons Licence CC BY-NC-SA 4.0. Feel free to use and adapt for non-commercial purposes, as long as you credit me, and share alike any adaptations you make.

As my book has now been published, it’s time to provide a little more detail. My goal in writing the book was to explain what climate models do, how they are built, and what they tell us. It’s intended to be very readable for a general audience, in the popular-science genre. The title is “Computing the Climate: How we know what we know about climate change”.

You can order it here.

The first half of the book focusses on the history of climate models.

In Chapter 1, Introduction, I begin with a key moment in climate modeling: the 1979 Charney report, commissioned by President Jimmy Carter, which developed a new framework to evaluate climate models. The key idea was a benchmark experiment that could be run in each climate model, to study what the models agree on, how they differ, and where the uncertainties lie.

Chapter 2, The First Climate Model, tells the story of Svante Arrhenius’s climate model, developed in Stockholm in the 1890s. I explain in some detail how Arrhenius’s model worked, where he obtained the data, and how well his results stand up. Along the way, I explain the greenhouse effect, how it was first discovered, and why adding more greenhouse gases warms the planet.

Chapter 3, The Forecast Factory, tells the story of the first numerical weather forecasting program, which ran on the first electronic programmable computer, ENIAC, in 1950, and traces the history of the ideas on which it was based. I then explore how this led to the development of “global circulation models”, which simulate the dynamics of the atmosphere.

Chapter 4, Taming Chaos, describes how experiments with weather forecast models led to the discovery of chaos theory, with big implications for predictability of weather and climate. Weather is a chaotic process, so inaccuracies in the initial measurements grow exponentially as a weather model runs, making it hard to predict weather beyond a week or two. Climate prediction doesn’t suffer this problem because it focuses instead on how overall weather patterns change.
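
That exponential growth of small errors is easy to see for yourself. Here is a minimal sketch (my own, not anything from the book) using the classic Lorenz-63 equations (the toy system Lorenz distilled from a simplified convection model), in which two runs started a billionth apart soon bear no resemblance to each other:

```python
# A minimal sketch (mine, not from the book) of sensitivity to initial
# conditions, using the classic Lorenz-63 system. Two runs that start
# one part in a billion apart end up completely different.
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Time derivative of the Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-9, 0.0, 0.0])   # a tiny "measurement error"

for step in range(1, 5001):
    a, b = rk4_step(a), rk4_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}   separation = {np.linalg.norm(a - b):.2e}")
```

The separation grows by many orders of magnitude and then saturates at the size of the attractor, which is exactly the behaviour that limits useful weather forecasts to a week or two.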

The second half of the book describes my visits to a number of different climate modeling labs, and focusses on the work of the scientists I met at these labs.

Chapter 5, The Heart of the Machine, explains the key design ideas at the core of a modern climate model, examining the design choices and computational limitations that shape it, with the UK Met Office’s Unified Model as a case study.

Chapter 6, The Well-Equipped Physics Lab, explores the experiments that climate scientists do with their models, and how these have changed over their history. It features the Community Earth System Model, developed at NCAR in Boulder, Colorado as a case study.

Chapter 7, Plug and Play, explores why it is hard to couple together models of different parts of the Earth’s system – oceans, atmosphere, ice sheets, vegetation, etc. I describe how, with the right architecture, a climate model supports a new kind of cross-disciplinary collaboration in the earth sciences through shared models, and I use the Earth System Model developed at the Institut Pierre-Simon Laplace in Paris as a case study.

Chapter 8, Sound Science, explores how modern climate models are tested, and how modelers maintain software quality. I also explore how well they can simulate recent climate change, as well as climates of the distant past, and discuss how we know what the models get right, and where they still have problems. The Earth System Model developed at the Max Planck Institute in Hamburg features as a case study.

The last chapter, Choosing a Future, concludes the book with a summary of what climate models tell us about likely future climate change, where the remaining uncertainties are, and what pathways we might choose to avert catastrophic climate change. I also explore why action on climate change has been so slow, what we can still do, and why there are reasons for hope.


Last year, I posted a regular Friday twitter thread #ClimateRocks featuring songs I’ve been listening to over the last few years that address climate change in some way. As I’m no longer on Twitter, I’m putting the whole thing here. Enjoy!

First up has to be… 

1) Marillion, “Seasons End”

Incredibly, it was written in 1988 – the first time I ever heard climate change referenced in a song. Love the guitar solo, but the lyrics are what make this truly sublime:
“We’ll tell our children’s children why…”.
There’s also a reference to the ozone hole there: 
“We grew so tall and reached so high / We left our footprints in the earth / And punched a hole right through the sky”.

2) Billie Eilish, “All the Good Girls Go To Hell”

The next entry for my Friday #ClimateRocks series just has to be Billie Eilish with All the Good Girls Go to Hell. Now at first sight, this might not seem to be about climate change. But pay attention to the lyrics. It’s a damning indictment of humanity for what we’ve done to the planet, framed as a debate between good and evil. It’s quite a contrast to last week’s song. In the late 80s, in an earlier generation, Marillion were writing about climate change as a future threat, a curious news item (“I heard somebody say…”). Fast forward to 2019, and Eilish captures the angst of her generation—who have grown up with climate change as a fact all around them: “There’s nothing left to save now”. The song is both gorgeous and nihilistic, an anthem for our current predicament.

While we’re at it, the visuals in the video are very apt too: Eilish is dressed as a fallen angel, rising from a pool of oil, struggling through a barren landscape trailing streams of black oil behind her, while the world around her burns.

3) Buffy Ste Marie, “Carry It On”

The next #ClimateRocks song has to be Buffy Ste Marie with Carry it On, from her 2015 album, Power in the Blood. In contrast to Billie Eilish last week, this song is full of hope, a call to action. Carry it On represents only the latest in a long series of protest songs from Buffy, covering Indigenous rights, peace, and environmental justice, going all the way back to her 1960s anti-war anthem Universal Soldier. Buffy donated the song Carry it On to the global climate movement, and it works brilliantly as the protest song we need, inviting us to take heart and join together in collective action to protect the Earth. It’s all about hope.

4) Neil Young, “Green is Blue” and “Shut it Down”

Buffy Ste Marie has been singing environmental protest songs since the 1960s, so it only seems right to pair her with Neil Young. Two songs on Neil Young’s 2019 album Colorado are explicitly about climate change, so I’m going to include them both. The first is “Green is Blue”, a gorgeous song that laments all the things we didn’t do despite the warnings:
“We heard the warning calls, ignored them
We watched the weather change, we saw the fires and floods
We saw the people rise, divided
We fought each other while we lost our coveted prize
There’s so much we didn’t do
That we knew we had to do” 

And it’s immediately followed by a grungy protest song, Shut it Down, which really gets to the heart of the matter:
“They’re all wearing climate change/As cool as they can be/Have to shut the whole system down” 

Interestingly, Neil Young released a new video for Shut it Down in 2020, during the first wave of the pandemic, which deftly re-purposes the song for the collective effort to flatten the curve. Actually, watching that video almost seems nostalgic. That was long before the anti-vaxxers and lockdown protesters destroyed the collective sense that we are all responsible for protecting one another. 

As a bonus, here’s a whole essay about how Neil Young has addressed climate change and environmental issues during his long career.

5) Miley Cyrus, “Wake Up America”

Okay, next up is, of course, Miley Cyrus, with her 2008 song Wake Up America. I wasn’t aware of this song until after I started this thread, but it’s just perfect for the series. Incredibly, Miley was only 16 when she wrote it, but it captures perfectly the bewilderment of a 2000s teen discovering the climate crisis:
“I want to learn what it’s all about but
Everything I read’s global warming, going green
I don’t know what all this means
But it seems to be saying…”
Miley wrote this during one of the big waves of activism on climate change, around the time of Al Gore’s Inconvenient Truth, and the lead-up to the 2009 Copenhagen talks, which were supposed to usher in a new global agreement, but were completely derailed by fossil-fuel-funded disinformation. Today’s teens, of course, grew up in a very different world. They do know what all this means, and are (rightfully) a lot more angry about it.

6) Melissa Etheridge, “I Need to Wake Up”

This one picks up on the same theme as last week’s. It’s Melissa Etheridge with “I Need to Wake Up”, written in 2006, specifically for the movie An Inconvenient Truth, and an Academy Award winner for best original song. It was written a couple of years before Miley Cyrus’s Wake Up America, and, I think, it’s a lot more convincing, because it’s focussed inwards on shaking oneself out of denial, rather than berating others to do so. And realizing connections matter:
“I am not an island / I am not alone”. 
Oh, and I listened a lot to Melissa Etheridge in the 90s, so I have a soft spot for her voice. 

7) Midnight Oil, “Beds Are Burning”

Next up, three songs that appear to be about climate change, but are not. Interestingly, each is about a different global issue that connects to the climate crisis in important ways. The first is Midnight Oil from 1987 with Beds are Burning. The chorus is instantly familiar:
“How can we dance when our earth is turning? / How do we sleep while our beds are burning?” 
But Beds are Burning was actually written to raise awareness of the need to return stolen land to Indigenous communities, in this case, the return of Uluru (previously Ayers Rock) in Australia to the Anangu peoples, to whom it is a sacred site.

The connection to climate change is deep: colonialism spread the idea that we can buy and sell (and steal) land, own the rights to natural resources, and extract them for profit without taking responsibility for the long-term consequences. It’s no coincidence colonialists work to eradicate Indigenous cultures. Those cultures offer radical alternatives to extractivism: e.g. land should belong to the community, not the individual, and it must be safeguarded for the benefit of future generations.

Naturally, in 2009, a large group of celebrities repurposed Beds are Burning as a climate protest song, with updated lyrics. And in the process, they eradicated the Indigenous land issues the song was written about.

8) Peter Gabriel, “Here Comes The Flood”

Our second example is Peter Gabriel’s 1977 song Here Comes the Flood.
“Lord, here comes the flood / We’ll say goodbye to flesh and blood / If again the seas are silent / In any still alive…” 
It could be about sea level rise and flash floods from extreme weather events under global warming. It’s not. Gabriel wrote the song after a dream “in which the psychic barriers which normally prevent us from seeing into each others’ thoughts had been completely eroded producing a mental flood”. Such a flood would sweep away those who prefer to cut themselves off as islands—concealing their innermost thoughts—just as much as those who are open and honest. He might as well have been describing the mental flood of social media, especially the way it sweeps away rational argument and stokes divisions, as everyone’s innermost thoughts are broadcast to the world. It’s ironic that the flood of disinformation on social media has come to dominate public discourse at exactly the moment in history when we most need to come together to address the climate crisis.
“There’s no point in direction / We cannot even choose a side” 

9) Chris Rea, “The Road to Hell”

Our third example is Chris Rea with The Road to Hell from 1989:
“Well, I’m standing by a river but the water doesn’t flow / It boils with every poison you can think of […] This ain’t no technological breakdown, oh no, this is the road to hell”
The inspiration for this song came when Rea was stuck in traffic on London’s M25 orbital motorway, a road that goes nowhere, and is notoriously congested. The locals call it the M25 car park. This is the “river that doesn’t flow” in the first line of the song. But the song is really much darker than just bad traffic.

It’s about being stuck on a technological path that inevitably leads to our destruction, and the feeling there’s nothing we can do about it. The M25 epitomizes what urban planners have learned the hard way: building more roads only increases traffic congestion. So, it does work very well as a metaphor for climate change. The question is: what are the exits from the road to hell, and will we take any of them?

10) Muse, “The 2nd Law: Unsustainable”

This one isn’t subtle at all: it’s Muse with “The 2nd Law: Unsustainable”. Everything about this song is over-the-top, so if you’ve never heard it before, turn up the volume and listen… Musically, the song fuses Muse’s trademark bombastic stadium rock with dubstep (inspired by the band listening to Skrillex, apparently). It shouldn’t work, but it does.

But of course, we’re here for the lyrics. Delivered by a faux newsreader, they start out literally with the 2nd law of thermodynamics:
“All natural and technological processes proceed in such a way that the availability of the remaining energy decreases”… 
And then use this as a metaphor for climate change and environmental destruction:
“The fundamental laws of thermodynamics will place fixed limits on technological innovation and human advancement. […] A species set on endless growth is unsustainable”
I already wrote about how this song links the 1972 Limits to Growth study with more recent work from ecological economists on our obsession with economic growth.

As a bonus, the next song on the album is a haunting piece called “The 2nd Law: Isolated System”, weaving lyrics from the previous song with snippets of news reports documenting a collapsing economy. (It was also used in the zombie movie World War Z.)

11) Delusion Squared, “An Ominous Way Down”

Next is a French band called Delusion Squared, with the song “An Ominous Way Down”. Like the previous songs from Muse, it weaves snippets of news and speeches into the music. This song is from their album “Anthropocene”. The first few songs tackle climate change, war, food crises and the collapse of the biosphere. Partway through the album it moves into a post-apocalyptic wasteland, with songs about the survivors. There’s even a song, Under Control, about a desperate geoengineering plan to save the planet:
“We have a plan to cure the biosphere / There’s nothing that cannot be engineered /The loss of sunlight is so small a price to pay / And you’ll see everything will be okay” 

I could have picked almost any song from this album, but An Ominous Way Down is the one that first captivated me, because of the sampled voices. I’ve managed to track a few of them down. It’s quite a collection!

  • “Even now, man may be unwittingly changing the world’s climate through the waste products of its civilization” is a quote from the 1958 Frank Capra movie The Unchained Goddess. Yes, 1958!
  • “If you actually believe that global warming is the biggest problem we face…you’re the dumbest son-of-a-bitch on the planet.” is Glenn Beck, talking about then-President Obama in 2015. In the song, this quote helps set the scene for why we let the apocalypse happen. 
  • “The ocean is not Republican. It is not Democrat. All it knows how to do is rise.” is Mayor Philip Levine of Miami Beach, in the National Geographic documentary Before the Flood (narrated by Leonardo DiCaprio)
  • “No amount of technology and no amount of human ingenuity can possibly overturn the laws of physics” is Mike Ruppert in the documentary “Collapse”
  • And “We believed then and now: There are no limits to growth and human progress when men and women are free to follow their dreams” is Ronald Reagan in his inaugural address, ushering in the neo-liberal era in defiance of warnings of the Limits to Growth study. 

And all these quotes are woven through a lovely gentle song about a fight to save a majestic oak, which ends:
“There was a great old oak tree / We promised it would be regrown / While we were celebrating / The whole forest went down”. 
Not seeing the wood for the trees. 

12) The 1975, “The 1975”

Finally, The 1975 with their song “The 1975”, which sets a Greta Thunberg speech to ambient background music. It packs a punch. The song has a Wikipedia page, so I won’t explain the background, except to say the band’s intent was to document Greta’s words in popular culture. As a piece of art, I think it works brilliantly–it certainly makes you stop and think.

I’ve been thinking a lot this week about the role of Universities in the Climate Crisis. I plan to write more on this, but first, here’s a piece I scribbled on a napkin during a workshop prior to the pandemic, on bringing human values into computer science, and never got around to sharing…

We speak of research agendas, curriculum change, working with practitioners, etc, as if the future world (over the next decade or so) will be like the present world. It won’t be.

The next decade will be marked by a struggle for rapid transformational change throughout society, and the outcome of that struggle will determine the future of human civilization. Yet everything we’ve mapped out speaks of incremental change. It’s a gradualist agenda that talks about working with existing companies, existing curricula, existing research labs, nudging them to take human values a little more seriously in their work.

But if you take seriously the confluence of (at least) three serious and urgent crises, it’s clear we don’t have time for an incrementalist approach:

1) The climate crisis, in which digital technology is deeply implicated. The carbon footprint of computing is growing dramatically, because we’re putting the internet in everything, and it’s amplifying all the worst trends of our disposable, consumerist society. Silicon Valley’s model of innovation (“move fast, break things, and leave others to clear up the mess”) has focussed for so long on finding new ways to monetize our data that we’ve forgotten what innovation really looks like. A reminder: over the next decade or so, we need to completely transform our energy infrastructure to reach net zero global emissions. We can’t do this while Silicon Valley continues to hoover up all the available investment capital.

2) Automation and AI, which threatens to destroy any notion of a stable job for vast sectors of society, and which replaces human empathy with the cold, impenetrable injustice of algorithmic regulation (How do we just say “no” as a society to such technologies?).

3) The dismantling of democracy, through the use of ubiquitous digital surveillance by autocrats and corporatists, and the exploitation of (addictive) social media as a vector for extremist propaganda designed to pit us against one another.

So we should be striving for a much more radical agenda that envisages the wholesale transformation of the computing profession, putting an end to the technological solutionism of Silicon Valley, turning it into a humble enterprise that places human dignity first. We need to dismantle the stranglehold of the big five tech corporations, break the relationship between digital technology and consumerism, and give ourselves the power to ban some technologies completely. We should not put activism in a box. As academics, activism should infuse all of our teaching, all our research, all our community engagement. If we’re not working for transformational change, we’re reinforcing the status quo.

Put simply, we need to recognize the unique historical moment we find ourselves in, and the role computing has played in our current existential crises.

In honour of today’s announcement that Syukuro Manabe, Klaus Hasselmann and Giorgio Parisi have been awarded the Nobel prize in physics for their contributions to understanding and modeling complex systems, I’m posting here some extracts from my forthcoming book, “Computing the Climate”, describing Manabe’s early work on modeling the climate system. We’ll start the story with the breakthrough by Norman Phillips at Princeton University’s Institute for Advanced Study (IAS), which I wrote about in my last post.

The Birth of General Circulation Modeling

Phillips had built what we now acknowledge as the first general circulation model (GCM), in 1955. It was ridiculously simple, representing the earth as a cylinder rather than a globe, with the state of the atmosphere expressed using a single variable—air pressure—at two different heights, at each of 272 points around the planet (a grid of 16 x 17 points). Despite its simplicity, Phillips’ model did something remarkable. When started with a uniform atmosphere—the same values at every grid point—the model gradually developed its own stable jet stream, under the influence of the equations that describe the effect of heat from the sun and rotation of the earth. The model was hailed as a remarkable success, and inspired a generation of atmospheric scientists to develop their own global circulation models.

The idea of starting the model with the atmosphere at rest—and seeing what patterns emerge—is a key feature that makes this style of modelling radically different from how models are used in weather forecasting. Numerical weather forecasting had taken off rapidly, and by 1960, three countries—the United States, Sweden and Japan—had operational numerical weather forecasting services up and running. So there was plenty of expertise already in numerical methods and computational modelling among the meteorological community, especially in those three countries.

But whereas a weather model only simulates a few days starting from data about current conditions, a general circulation model has to simulate long-term stable patterns, which means many of the simplifications to the equations of motion that worked in early weather forecasting models don’t work in GCMs. The weather models of the 1950s all ignored fast-moving waves that are irrelevant in short-term weather forecasts. But these simplifications made the models unstable over longer runs: the simulated atmosphere would steadily lose energy—and sometimes air and moisture too—so that realistic climatic patterns never emerged. The small group of scientists interested in general circulation modelling began to diverge from the larger numerical weather forecasting community, choosing to focus on versions of the equations and numerical algorithms with conservation of mass and energy built in, to give stable long-range simulations.

In 1955, the US Weather Bureau established a General Circulation Research Laboratory, specifically to build on Phillips’ success. It was headed by Joseph Smagorinsky, one of the original members of the ENIAC weather modelling team. Originally located just outside Washington DC, the lab has undergone several name changes and relocations, and is now the Geophysical Fluid Dynamics Lab (GFDL), housed at Princeton University, where it remains a major climate modelling lab today.

In 1959, Smagorinsky recruited the young Japanese meteorologist, Syukuro Manabe from Tokyo, and they began work on a primitive equation model. Like Phillips, they began with a model that represented only one hemisphere. Manabe concentrated on the mathematical structure of the models, while Smagorinsky hired a large team of programmers to develop the code. By 1963, they had developed a nine-layer atmosphere model which exchanged water—but not heat—between the atmosphere and surface. The planet’s surface, however, was flat and featureless—a continuous swamp from which water could evaporate, but which had no internal dynamics of its own. The model could simulate radiation passing through the atmosphere, interacting with water vapour, ozone and CO2. Like most of the early GCMs, this model captured realistic global patterns, but had many of the details wrong. 

Meanwhile, at the University of California, Los Angeles (UCLA), Yale Mintz, the associate director of the Department of Meteorology, recruited another young Japanese meteorologist, Akio Arakawa, to help him build their own general circulation model. From 1961, Mintz and Arakawa developed a series of models, with Mintz providing the theoretical direction, and Arakawa designing the model, with help from the department’s graduate students. By 1964, their model represented the entire globe with a 2-layer atmosphere and realistic geography.

Computational limitations dominated the choices these two teams had to make. For example, the GFDL team modelled only the northern hemisphere, with a featureless surface, so that they could put more layers into the atmosphere, while the UCLA team chose the opposite route: an entire global model with realistic layout of continents and oceans, but with only 2 layers of atmosphere.

Early Warming Signals

Meanwhile, in the early 1950s, oceanographers at the Scripps Institution of Oceanography in California, under the leadership of their new director, Roger Revelle, were investigating the spread of radioactive fallout in the oceans from nuclear weapons testing. Their work was funded by the US military, who needed to know how quickly the oceans would absorb these contaminants, to assess the risks to human health. But Revelle had many other research interests. He had read about the idea that carbon dioxide from fossil fuels could warm the planet, and realized radiocarbon dating could be used to measure how quickly the ocean absorbs CO2. Revelle understood the importance of a community effort, so he persuaded a number of colleagues to do similar analysis, and in a coordinated set of three papers [Craig, 1957; Revelle & Suess, 1957; and Arnold & Anderson, 1957], published in 1957, the group presented their results.

They all found a consistent pattern: the surface layer of the ocean continuously absorbs CO2 from the atmosphere, so on average, a molecule of CO2 stays in the atmosphere only for about 7 years, before being dissolved into the ocean. But the surface waters also release CO2, especially when they warm up in the sun. So the atmosphere and surface waters exchange CO2 molecules continuously—any extra CO2 will end up shared between them.

All three papers also confirmed that the surface waters don’t mix much with the deeper ocean. So it takes hundreds of years for any extra carbon to pass down into deeper waters. The implications were clear—the oceans weren’t absorbing CO2 anywhere near as fast as we were producing it.

These findings set alarm bells ringing amongst the geosciences community. If this was correct, the effects of climate change would be noticeable within a few decades. But without data, it would be hard to test their prediction. At Scripps, Revelle hired a young chemist, Charles David Keeling, to begin detailed measurements. In 1958, Keeling set up an observing station on Mauna Loa in Hawaii, and a second station in the Antarctic, both far enough from any major sources of emissions to give reliable baseline measurements of CO2 in the atmosphere. Funding for the Antarctic station was cut a few years later, but Keeling managed to keep the recordings going at Mauna Loa, where they are still collected regularly today. Within two years, Keeling had enough data to confirm Bolin and Ericsson’s analysis: CO2 levels in the atmosphere were rising sharply.

Keeling’s data helped to spread awareness of the issue rapidly among the ocean and atmospheric science research communities, even as scientists in other fields remained unaware of the issue. Alarm at the implications of the speed at which CO2 levels were rising led some scientists to alert the country’s political leaders. When President Lyndon Johnson commissioned a report on the state of the environment, in 1964, the president’s science advisory committee invited a small subcommittee—including Revelle, Keeling, and Smagorinsky—to write an appendix to the report, focusing on the threat of climate change. And so, on February 8th, 1965, President Johnson became the first major world leader to mention the threat of climate change, in a speech to Congress: “This generation has altered the composition of the atmosphere on a global scale through…a steady increase in carbon dioxide from the burning of fossil fuels.”

Climate Modeling Takes Off

So awareness of the CO2 problem was spreading rapidly through the scientific community just as the general circulation modelling community was getting established. However, it wasn’t clear that global circulation models would be suited to this task. Computational power was limited, and it wasn’t yet possible to run the models long enough to simulate the decades or centuries over which climate change would occur. Besides, the first generation of GCMs had so many simplifications, it seemed unlikely they could simulate the effects of increasing CO2—that wasn’t what they were designed for.

To do this properly, the models would need to include all the relevant energy exchanges between the surface, atmosphere and space. That would mean a model that accurately captured the vertical temperature profile of the atmosphere, along with the processes of radiation, convection, evaporation and precipitation, all of which move energy vertically. None of these processes are adequately captured in the primitive equations, so they would all need to be added as parameterization schemes in the models.

Smagorinsky and Manabe at GFDL were the only group anywhere near ready to try running CO2 experiments in their global circulation model. Their nine-layer model already captured some of the vertical structure of the atmosphere, and Suki Manabe had built in a detailed radiation code from the start, with the help of a visiting German meteorologist, Fritz Möller. Manabe had a model of the relevant heat exchanges in the full height of the atmosphere working by 1967, and together with his colleague, Richard Wetherald, published what is now recognized as the first accurate computational experiment of climate change [Manabe and Wetherald, 1967].

Running the general circulation model for this experiment was still too computationally expensive, so they ignored all horizontal heat exchanges, and instead built a one dimensional model of just a single column of atmosphere. The model could be run with 9 or 18 layers, and included the effects of upwards and downwards radiation through the column, exchanges of heat through convection, and the latent heat of evaporation and condensation of water. Manabe and Wetherald first tested the model with current atmospheric conditions, to check it could reproduce the correct vertical distribution of temperatures in the atmosphere, which it did very well. They then doubled the amount of carbon dioxide in the model and ran it again. They found temperatures rose throughout the lower atmosphere, with a rise of about 2°C at the surface, while the stratosphere showed a corresponding cooling. This pattern—warming in the lower atmosphere and cooling in the stratosphere—shows up in all the modern global climate models, but wasn’t confirmed by satellite readings until the 2000s.
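
To give a flavour of what such a column calculation involves, here is a toy sketch of my own: a grey-gas column, far cruder than Manabe and Wetherald's radiative-convective model (no convection, no water vapour, no real spectroscopy), with a stack of partially absorbing layers above a surface, time-stepped until the radiative fluxes balance. The layer count and emissivities are arbitrary illustrative numbers, not values from the 1967 paper; raising the opacity, as a crude stand-in for adding CO2, warms the surface.

```python
# A toy grey-gas "single column", far simpler than Manabe and Wetherald's
# radiative-convective model: N partially absorbing layers over a surface,
# time-stepped to radiative equilibrium. The layer count, emissivities, and
# the "raise the opacity to mimic more CO2" step are illustrative assumptions,
# not numbers from the 1967 paper, and the absolute temperatures are not
# meant to be Earth-like; the point is that more opacity warms the surface.
import numpy as np

SIGMA = 5.67e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
SOLAR = 240.0      # globally averaged absorbed solar flux (W m^-2)

def equilibrium_surface_temp(n_layers=9, eps=0.30, n_steps=20000, dt=2.0e4):
    """Time-march a grey n-layer column to radiative equilibrium; return surface temp (K)."""
    heat_cap = 1.0e7                      # J m^-2 K^-1 (arbitrary; affects convergence speed only)
    T_s = 288.0                           # surface temperature (K), initial guess
    T = np.full(n_layers, 250.0)          # layer temperatures, index 0 = lowest layer

    for _ in range(n_steps):
        emit = eps * SIGMA * T**4         # each layer emits this much both up and down

        # upward longwave flux: up[0] leaves the surface, up[-1] escapes to space
        up = np.empty(n_layers + 1)
        up[0] = SIGMA * T_s**4
        for i in range(n_layers):
            up[i + 1] = up[i] * (1 - eps) + emit[i]

        # downward longwave flux: down[-1] comes in from space (zero), down[0] reaches the surface
        down = np.empty(n_layers + 1)
        down[n_layers] = 0.0
        for i in reversed(range(n_layers)):
            down[i] = down[i + 1] * (1 - eps) + emit[i]

        net_layer = eps * (up[:n_layers] + down[1:]) - 2 * emit   # absorbed minus emitted, per layer
        net_surf = SOLAR + down[0] - SIGMA * T_s**4               # surface energy balance

        T += net_layer * dt / heat_cap
        T_s += net_surf * dt / heat_cap

    return T_s

print("surface temperature, baseline opacity:  %.1f K" % equilibrium_surface_temp(eps=0.30))
print("surface temperature, increased opacity: %.1f K" % equilibrium_surface_temp(eps=0.33))
```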

By the mid 1970s, a broad community of scientists were replicating Manabe and Wetherald’s experiment in a variety of simplified models, although it would take nearly a decade before anyone could run the experiment in a full 3-dimensional GCM. But the community was beginning to use the term climate modelling to describe their work—a term given much greater impetus when it was used as the title of a comprehensive survey of the field by two NCAR scientists, Stephen Schneider and Robert Dickinson. Remarkably, their paper [Schneider and Dickinson, 1974] charts a massive growth of research, citing the work of over 150 authors who published work on climate modelling in the period from 1967-1975, after Manabe and Wetherald’s original experiment.

It took some time, however, to get the general circulation models to the point where they could also run a global climate change experiment. Perhaps unsurprisingly, Manabe and Wetherald were also the first to do this, in 1975. Their GCM produced a higher result for the doubled CO2 experiment—an average surface warming of 3°C—and they attributed this to the snow-albedo feedback, which is included in the GCM, but not in their original single column model. Their experiment [Manabe and Wetherald 1975] also showed an important effect first noted by Arrhenius: a much greater warming at the poles than towards the equator—because polar temperatures are much more sensitive to changes in the rate at which heat escapes to space. And their model predicted another effect—global warming would speed up evaporation and precipitation, and hence produce more intense rainfalls. This prediction has already been demonstrated in the rapid uptick of extreme weather events in the 2010s.

In hindsight, Manabe’s simplified models produced remarkably accurate predictions of future climate change. Manabe used his early experiments to predict a temperature rise of about 0.8°C by the year 2000, assuming a 25% increase in CO2 over the course of the twentieth century. Manabe’s assumption about the rate that CO2 would increase was almost spot on, and so was his calculation for the resulting temperature rise. CO2 levels rose from about 300ppm in 1900 to 370ppm in 2000, a rise of 23%. The change in temperature over this period, calculated as the change in decadal means in the HadCRUT5 dataset, was 0.82°C [Hausfather et al., 2020].
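
As a rough back-of-envelope check (my own arithmetic, not Manabe's method), combining the now-standard logarithmic CO2-forcing relation with the roughly 2.3°C-per-doubling sensitivity reported in the 1967 paper (the "about 2°C" mentioned above) gives a figure in the same ballpark:

```python
# Back-of-envelope check (my arithmetic, not Manabe's method): scale a
# per-doubling sensitivity by the logarithm of the CO2 rise over the century.
import math

sensitivity = 2.3                 # deg C per doubling of CO2, roughly the 1967 column-model result
c_1900, c_2000 = 300.0, 370.0     # atmospheric CO2 concentration, ppm

rise = 100 * (c_2000 - c_1900) / c_1900
warming = sensitivity * math.log(c_2000 / c_1900) / math.log(2.0)
print(f"CO2 rise over the century: {rise:.0f}%")       # about 23%
print(f"implied warming:           {warming:.2f} C")   # about 0.7 C, in the same ballpark
```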

References

Arnold, J. R., & Anderson, E. C. (1957). The Distribution of Carbon-14 in Nature. Tellus, 9(1), 28–32.

Craig, H. (1957). The Natural Distribution of Radiocarbon and the Exchange Time of Carbon Dioxide Between Atmosphere and Sea. Tellus, 9(1), 1–17.

Hausfather, Z., Drake, H. F., Abbott, T., & Schmidt, G. A. (2020). Evaluating the Performance of Past Climate Model Projections. Geophysical Research Letters, 47(1), 2019GL085378.

Manabe, S., & Wetherald, R. T. (1967). Thermal Equilibrium of the Atmosphere with a Given Distribution of Relative Humidity. Journal of the Atmospheric Sciences, 24(3), 241–259.

Manabe, S., & Wetherald, R. T. (1975). The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model. Journal of the Atmospheric Sciences, 32(1), 3–15.

Revelle, R., & Suess, H. E. (1957). Carbon Dioxide Exchange Between Atmosphere and Ocean and the Question of an Increase of Atmospheric CO2 during the Past Decades. Tellus, 9(1), 18–27.

Schneider, S. H., & Dickinson, R. E. (1974). Climate modeling. Reviews of Geophysics, 12(3), 447.

The meteorologist Norman Phillips died last week, at the grand old age of 95. As I’ve written about his work in my forthcoming book, Computing the Climate, I’ve extracted this piece from the manuscript, to honour his contribution to climate modelling—not only did he create the first ever general circulation model, but the ideas in his model sparked off a revolution in how we use computers to model the climate system. We join the story shortly after the success of the first numerical forecast model, developed by Jule Charney and his team of meteorologists at Princeton in the late 1940s. Among the team was a young Norman Phillips…

In the 1950s, a team of meteorologists led by Jule Charney at Princeton’s Institute for Advanced Study (IAS) had turned the equations of motion into a program that could compute the weather. Flushed with the success of a trial run of their forecast model on ENIAC in March 1950, they were keen to figure out how to extend the range of their forecasts. Within a couple of years, they had produced some reasonably good forecasts for 24 hours, and sometimes even 36 hours, although in the early 1950s, they couldn’t yet do this consistently. For better forecasts, they would need better models and better data.

Because of limited computing power, and limited observational data, their early models were designed to cover only a part of the globe—the region over North America. This meant they were simulating an “open” system. In the real world, the part of the atmosphere included in the model interacts with parts outside the model, exchanging mass and energy freely. If a storm system from elsewhere moved into the region, the model could not simulate this, as it had no information about what was happening beyond its boundaries.

In his initial models, Charney had ignored this problem, and treated the boundary conditions as fixed. He added an extra strip of grid points at each edge of the model’s main grid, where conditions were treated as constant. When the simulation calculated the next state of the atmosphere for each point within the grid, these edge points just kept their initial values. This simplification imposed a major limitation on the accuracy of the weather forecasts. As the simulation proceeded, the values at these edge points would become less and less like the real conditions, and these errors would propagate inwards, across the grid. To get longer forecasts—say for weeks, instead of days—a better solution was needed. For long-range forecasting, the computer would need to think outside the box.

The obvious way to do this was to extend the grid to cover the entire globe, making it a “closed” system. This would leave only two, simpler boundaries. At the top of the atmosphere, energy arrives from the sun, and is lost back to space. But no air mass crosses this boundary, which means there are no significant boundary disturbances. At the bottom, where the atmosphere meets the surface of the planet, things are more complicated, as both heat and moisture cross the boundary, with water evaporating from the land and oceans, and eventually being returned as rain and snow. But this effect is small compared to movements within the atmosphere, so it could be ignored, at least for the coarse-grained models of the 1950s—later models would incorporate this exchange between surface and atmosphere directly in the simulation.

Among the group at Princeton, Norman Phillips was the first to create a working global model. Because the available computer power was still relatively tiny, extending the grid for an existing forecast model wasn’t feasible. Instead, Phillips took a different approach. He removed so many of the features of the real planet that the model barely resembled the earth at all.

To simplify things, he treated the surface of the earth as smooth and featureless. He used a 17×16 grid, not unlike the original ENIAC model, but connected the cells on the eastern edge with the cells on the western edge, so that instead of having fixed boundaries to the east and the west, the grid wrapped around, as though it were a cylindrical planet [1]. At the north and south edges of the grid, the model behaved as if there were solid walls—movement of the atmosphere against the wall would be reflected back again. This overall shape simplified things: by connecting the east and west edges, the model could simulate airflows that circulate all the way around the planet, but Phillips didn’t have to figure out the complex geometry where grid cells converge at the poles.

The dimensions of this simulated cylindrical planet were similar to those of Charney’s original weather model, as it used the same equations. Phillips’ grid points were 375km apart in the east-west direction and 625km apart in the north-south. This gave a virtual planet whose circumference was less than 1/6th of the circumference of the earth, but whose height was almost the same as the distance from the earth’s equator to its pole. A tall, thin, cylindrical earth.

To simplify things even more, Phillips’ cylindrical model represented only one hemisphere of earth. He included a heating effect at the southern end of the grid, to represent the equator receiving the most energy from the sun, and a cooling effect at the northern end of the model, to represent the arctic cooling as it loses heat to space [2]. The atmosphere was represented as two layers of air, and each layer was a version of Charney’s original one-layer model. The grid therefore had 17×16×2 cells in total, and it ran on a machine with 5Kbytes of RAM and 10Kbytes of magnetic drum memory. The choice of this grid was not an accident: the internal memory of the IAS machine could store 1,024 numbers (it had 1,024 words, each 40 bits long). Phillips’ choice of grid meant a single state of the global atmosphere could be represented with about 500 variables [3], thus taking up just under half of the machine’s memory, leaving the other half available for calculating the next state.
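A quick tally shows how tight the fit was. The grid counts come from the text above and from note [3] below; the assumption that each stored value occupies one 40-bit word is mine:

```python
# One geopotential value per grid point per level, one word per value (my assumption).
points_per_level = 17 * 15      # wrap-around column stored only once (see note [3]) = 255
levels = 2
words_per_state = points_per_level * levels        # 510 values for one model state
memory_words = 1024                                # IAS internal memory: 1,024 40-bit words

print(f"{words_per_state} of {memory_words} words needed for one model state")
print(f"that is {words_per_state / memory_words:.0%} of memory, "
      f"leaving {memory_words - words_per_state} words for computing the next state")
```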

To initialize the model, Phillips decided not to bother with observational data at all. That would have been hard anyway, as the geometry of the model didn’t resemble planet earth. Instead, he started with a uniform atmosphere at rest. In other words, every grid point started with the same values, as though there was no wind anywhere. Starting a simulation model with the atmosphere at rest and hoping the equations would start to generate realistic weather patterns was a bold, and perhaps crazy idea.

It was also the ultimate test of the equations in the model: if they could get the virtual atmosphere moving in a realistic way, it would mean nothing important had been left out. Today, we call this a spin-up run. The ocean and atmosphere components of today’s global climate models are regularly started in this way. Spin-up runs for today’s models are expensive though, because they require a lot of time on the supercomputer, and until the model settles into a stable pattern the simulation results are unusable. Oceans in particular have tremendous inertia, so modern ocean models can take hundreds of years of simulation time to produce stable and realistic ocean currents, which typically requires many weeks to run on a supercomputer. Therefore, the spin-up is typically run just once, and the state at the end of this spin-up is used as a start state for all the science experiments to be run on the model.

By 1955, Phillips had his global simulation model running successfully. Once the run started, the simulated atmosphere didn’t stay at rest. The basic equations of the model included terms for forces that would move the atmosphere: gravity, the Coriolis force, expansion and contraction when air warms and cools, and the movement of air from high pressure areas to lower pressure areas. As heat entered the atmosphere towards the southern edge, the equations in the model made this air expand, rise and move northwards, just as it does in real life. Under the effect of the Coriolis force, this moving air mass slowly curled towards the east. The model developed its own stable jet stream.

In his early tests, Phillips was able to run the model for a month of simulation time, during which the model developed a realistic jet stream and gave good results for monthly and seasonal weather statistics. Unfortunately, getting the model to run longer than a month proved to be difficult, as numerical errors in the algorithms would accumulate. In later work, Phillips was able to fix these problems, but by then a whole generation of more realistic global climate models were emerging.

Phillips’ model wasn’t a predictive model, as it didn’t attempt to match any real conditions of the earth’s atmosphere. But the fact that it could simulate realistic patterns made it an exciting scientific model. It opened the door to the use of computer models to improve our understanding of the climate system. As the model could generate typical weather patterns from first principles, models like this could start to answer questions about the factors that shape the climate and drive regional differences. Clearly, long range simulation models were possible, for scientists who are interested in the general patterns—the climate—rather than the actual weather on any specific day.

Despite its huge over-simplifications, Phillips’ model was regarded as a major step forward, and is now credited as the first General Circulation Model. John von Neumann, who led the IAS computer project, was so excited that within a few months he persuaded the US Weather Bureau, Air Force, and Army to jointly fund a major new research program to develop the work further, at what in today’s money would be $2 million per year. The new research program, initially known as the General Circulation Research Section [4], and housed at the Weather Bureau’s computing facility in Maryland, eventually grew to become today’s Geophysical Fluid Dynamics Lab (GFDL), one of the world’s leading research labs for climate modelling. Von Neumann then convened a conference in Princeton, in October 1955, to discuss prospects for General Circulation Modelling. Phillips’ model was the highlight of the conference, but the topics also included the stability of the numerical algorithms, how to improve forecasting of precipitation (rain and snow), and the need to include the role of greenhouse gases in the models.

In his opening speech to the conference, von Neumann divided weather prediction into three distinct problems. Short term weather prediction, over the span of a few days, he argued, was completely dominated by the initial values. Better data would soon provide better forecasts. In contrast, long-term prediction, like Phillips’ model, is largely unaffected by initial conditions. Von Neumann argued that by modelling the general circulation patterns for the entire globe, an “infinite forecast” would be possible—a model that could reproduce the large scale patterns of the climate system indefinitely. But the hardest prediction problem, he suggested, lay in between these two: intermediate range forecasts, which are shaped by both initial conditions and general circulation patterns. His assessment was correct: short term weather forecasting and global circulation modelling both developed rapidly in the ensuing decades, whereas intermediate forecasting (on the scale of months) is still a major challenge today.

Unfortunately, von Neumann didn’t live long enough to see his prediction play out. That same year he was diagnosed with cancer, and died two years later in February 1957, at the age of 53. The meteorology team no longer had a champion on the faculty at Princeton. Charney and Phillips left to take up positions at MIT, where Phillips would soon be head of the Department of Meteorology. The IAS meteorology project that had done so much to kick-start computerized weather forecasting was soon closed. However, its influence lived on, as a whole generation of young meteorologists established new research labs around the world to develop the techniques.


Notes:

[1] Although the geometry of the grid could be considered a cylinder, Phillips used a variable Coriolis factor suitable for a spherical planet, which means his artificial planet didn’t spin like a cylinder – the Coriolis force would get stronger, the further north you moved. This is essential for the formation of a jet stream. Strictly speaking, a cylindrical planet, if it could exist at all, wouldn’t have a Coriolis effect on its horizontal winds, as that effect comes from the way the surface curves towards the poles. Phillips included it in the equations anyway, to see if it would still produce a jet stream. For details see: Lewis, J. M. (1998). Clarifying the Dynamics of the General Circulation: Phillips’s 1956 Experiment. Bulletin of the American Meteorological Society, 79(1), 39–60.

[2] This was implemented in the model using a heating parameter as a linear function of latitude, with maximum heating at the southern edge, and maximum cooling at the northern edge, with points in between scaled accordingly. As Phillips points out, this is not quite like the real planet, but it was sufficient to generate stable circulation patterns similar to those in the real atmosphere. See Phillips, N. A. (1956). The general circulation of the atmosphere: A numerical experiment. Quarterly Journal of the Royal Meteorological Society, 82(352), 123–164.

[3] Actually, the grid was only 17×15, because it wrapped around, with the westernmost grid points being the same as the easternmost ones. So each of the two atmospheric levels could be represented as a geopotential array of 255 elements. (See Lewis, 1998)

[4] Joseph Smagorinsky, another member of the team that had run the ENIAC forecasts, was appointed head of this project. See Aspray, W. (1990). John von Neumann and the Origins of Modern Computing. MIT Press. Note that von Neumann’s original proposal is reproduced in full in Smagorinsky, J. (1983). The beginnings of numerical weather prediction and general circulation modelling: Early recollections. Advances in Geophysics, 25, 3–38.

Here’s another excerpt from the draft manuscript of my forthcoming book, Computing the Climate.

The idea that the temperature of the planet could be analyzed as a mathematical problem was first suggested by the French mathematician, Joseph Fourier, in the 1820s. Fourier had studied the up-and-down cycles of temperature between day and night, and between summer and winter, and had measured how deep into the ground these heating and cooling cycles reach. It turns out they don’t go very deep. At about 30 meters below the surface, temperatures remain constant all year round, showing no sign of daily or annual change. Today, Fourier is perhaps best remembered for his work on the mathematics of such cycles, and the Fourier transform, a technique for discovering cyclic waveforms in complex data series, was named in his honour.

The temperature of any object is due to the balance of heat entering and leaving it. If more heat is entering, the object warms up, and if more heat is leaving, it cools down. For the planet as a whole, Fourier pointed out there are only three possible sources of heat: the sun, the earth’s core, and background heat from space. His measurements showed that the heat at the earth’s core no longer warms the surface, because the diffusion of heat through layers of rock is too slow to make a noticeable difference. He thought that the temperature of space itself was probably about the same as the coldest temperatures on earth, as that would explain the temperature reached at the poles in the long polar winters. On this point, he was wrong—we now know space is close to absolute zero, a couple of hundred degrees colder than anywhere on earth. But he was correct about the sun being the main source of heat at the earth’s surface.

Fourier also realized there must be more to the story than that, otherwise the heat from the sun would escape to space just as fast as it arrived, causing night-time temperatures to drop back down to the temperature of space—and yet they don’t. We now know this is what happens on the moon, where temperatures drop by hundreds of degrees after the lunar sunset. So why doesn’t this happen on Earth?

The solution lay in the behaviour of ‘dark heat’, an idea that was new and mysterious to the scientists of the early nineteenth century. Today we call it infra-red radiation. Fourier referred to it as ‘radiant heat’ or ‘dark rays’ to distinguish it from ‘light heat’, or visible light. But really, they’re just different parts of the electromagnetic spectrum. Any object that’s warmer than its surroundings continually radiates some of its heat to those surroundings. If the object is hot enough, say a stove, you can feel this ‘dark heat’ if you put your hand near it, although it has to get pretty hot before we can feel the infra-red it gives off. As you heat up an object, the heat it radiates spreads up the spectrum from infra-red to visible light—it starts to glow red, and then, eventually white hot.

Fourier’s theory was elegantly simple. Because the sun is so hot, much of its energy arrives in the form of visible light, which passes through the atmosphere relatively easily, and warms the earth’s surface. As the earth’s surface is warm, it also radiates energy. The earth is cooler than the sun, so the energy the earth radiates is in the form of dark heat. Dark heat doesn’t pass though the atmosphere anywhere near as easily as light heat, so this slows the loss of energy back to space.

The surface temperature of the earth is determined by the balance between the incoming heat from the sun (short-wave rays, mainly visible light and ultra-violet) and the outgoing infra-red, radiated in all directions from the earth. The incoming short-wave rays pass through the atmosphere much more easily than the outgoing long-wave infra-red.
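Fourier had no way to do this calculation (the radiation laws came later), but with modern numbers the balance can be written down in a couple of lines. The solar constant and reflectivity used here are present-day measured values, not anything from Fourier:

```python
SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0      # solar constant, W m^-2 (modern satellite measurement)
ALBEDO = 0.3        # fraction of sunlight reflected straight back to space

absorbed = SOLAR * (1 - ALBEDO) / 4     # spread the intercepted beam over the whole sphere
t_bare = (absorbed / SIGMA) ** 0.25     # temperature if the surface radiated freely to space
print(f"{t_bare:.0f} K")                # about 255 K, i.e. roughly -18 C
# The observed global mean surface temperature is about 288 K (15 C);
# the difference of roughly 33 C is the warming supplied by the atmosphere.
```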

To explain the idea, Fourier used an analogy with the hotbox, a kind of solar oven, invented by the explorer Horace Bénédict de Saussure. The hotbox was a very well-insulated wooden box, painted black inside, with three layers of glass in the lid. De Saussure had demonstrated that the sun would heat the inside of the box to over 100°C, and that this temperature remained remarkably consistent, even at the top of Mont Blanc, where the outside air is much colder. The glass lets the sun’s rays through, but slows the rate at which the heat can escape. Fourier argued that layers of air in the atmosphere play a similar role to the panes of glass in the hotbox, by trapping the outgoing heat; like the air in the hotbox, the planet would stay warmer than its surroundings. A century later, Fourier’s theory came to be called the ‘greenhouse effect’, perhaps because a greenhouse is more familiar to most people than a hotbox.

While Fourier had observed that air does indeed trap some of the dark heat from the ground, it wasn’t clear why, until the English scientist John Tyndall conducted a series of experiments in the 1850s to measure how well this ‘dark heat’ passes through different gases. Tyndall’s experiments used a four foot brass tube, sealed at both ends with transparent disks of salt crystal—glass was no good as it also blocks the dark heat. The tube could be filled with different kinds of gas. A tub of boiling water at one end provided a source of heat, and a galvanometer at the other compared the heat received through the tube with the heat from a second tub of boiling water.

Tyndall’s experimental equipment for testing the absorption properties of different gases. The brass tube was first evacuated, and the equipment calibrated by moving the screens until the temperature readings from the two heat sources were equal. Then the gas to be tested was pumped into the brass tube, and change in deflection of the galvanometer noted. (Figure adapted from Tyndall, 1861)

When Tyndall filled the tube with dry air, or oxygen, or nitrogen, there was very little change. But when he filled it with the hydrocarbon gas ethene, the temperature at the end of the tube dropped dramatically. This was so surprising that he first suspected something had gone wrong with the equipment—perhaps the gas had reacted with the salt, making the ends opaque? After re-testing every aspect of the equipment, he finally concluded that it was the ethene gas itself that was blocking the heat. He went on to test dozens of other gases and vapours, and found that more complex chemicals such as vapours of alcohols and oils were the strongest heat absorbers, while pure elements such as oxygen and nitrogen had the least effect.

Why do some gases allow visible light through, but block infra-red? It turns out that the molecules of each gas react to different wavelengths of light, depending on the molecule’s shape, similar to the way sound waves of just the right wavelength can cause a wine glass to resonate. Each type of molecule will vibrate when certain wavelengths of light hit it, making it stretch, contract, or rotate. So the molecule gains a little energy, and the light rays lose some. Scientists use this to determine which gases are in distant stars, because each gas makes a distinct pattern of dark lines across the spectrum from white light that has passed through it.

Tyndall noticed that gases made of more than one element, such as water vapour (H2O) or carbon dioxide (CO2), tend to absorb more energy from the infra-red rays than gases made of a single type of element, such as hydrogen or oxygen. He argued this provides evidence of atomic bonding: it wouldn’t happen if water was just a mixture of oxygen and hydrogen atoms. On this, he was partially right. We now know that what matters isn’t just the existence of molecular bonds, but whether the molecules are asymmetric—after all, oxygen gas molecules (O2) are also pairs of atoms bonded together. The more complex the molecular structure, the more asymmetries it has, and the more modes of vibration and spin the bonds have, allowing them to absorb energy at a wider range of wavelengths. Today, we call any gas that absorbs parts of the infra-red spectrum a greenhouse gas. Compounds such as methane (CH4) and ethene (C2H4) absorb energy at more wavelengths than carbon dioxide, making them stronger greenhouse gases.

Tyndall’s experiments showed that greenhouse gases absorb infra-red even when the gases are only present in very small amounts. Increasing the concentration of the gas increases the amount of energy absorbed, but only up to a point. Once the concentration is high enough, adding more gas molecules has no further effect—all of the rays in that gas’s absorption bands have been blocked, while rays of other wavelengths pass through unaffected. Today, we call this saturation.

Tyndall concluded that, because of its abundance in the atmosphere, water vapour is responsible for most of the heat trapping effect, with carbon dioxide second. Some of the other vapours he tested have a much stronger absorption effect, but are so rare in the atmosphere they contribute little to the overall effect. Tyndall clearly understood the implications of his experiments for the earth’s climate, arguing that it explains why, for example, temperatures in dry regions such as deserts drop overnight far more than in more humid regions. In the 1861 paper describing his experimental results, Tyndall argued that any change in the levels of water vapour and carbon dioxide, “must produce a change of climate”. He speculated that “Such changes in fact may have produced all the mutations of climate which the researches of geologists reveal”.

I was doing some research on Canada’s climate targets recently, and came across this chart, presented as part of Canada’s Intended Nationally Determined Contribution (INDC) under the Paris Agreement:

[Chart: Canada’s INDC emissions projections and 2030 target]

Looks good, right? Certainly it conveys a message that Canada’s well on track, and that the target for 2030 is ambitious (compared to a business-as-usual pathway). Climate change solved, eh?

But the chart is an epic example of misdirection. Here’s another chart that pulls the same trick, this time from the Government’s Climate Change website, and apparently designed to make the 2030 target look bravely ambitious:

[Chart: Canada’s GHG emissions trends and 2030 target]

So I downloaded the data and produced my own chart, with a little more perspective added. I wanted to address several ways in which the above charts represent propaganda, rather than evidence:

  • By cutting off the Y axis at 500 Mt, the chart hides the real long-term evidence-based goal for climate policy: zero emissions;
  • Canada has consistently failed to meet any of its climate targets in the past, while the chart seems to imply we’re doing rather well;
  • The chart conflates two different measures. The curves showing actual emissions exclude net removal from forestry (officially known as Land Use, Land-Use Change, and Forestry, or LULUCF), while Canada fully intends to include this in its accounting for achieving the 2030 target. So if you plot the target on the same chart with emissions, honesty dictates you should adjust the target accordingly.

Here’s my “full perspective” chart. Note that the first target shown here in grey was once Liberal party policy in the early 1990s; the remainder were official federal government targets. Each is linked to the year they were first proposed. The “fair effort” for Canada comes from ClimateActionTracker’s analysis:

[Chart: Canada’s emissions and climate targets, with full perspective]

The correct long-term target for carbon emissions is, of course, zero. Every tonne of CO2 emitted makes the problem worse, and there’s no magic fairy that removes these greenhouse gases from the atmosphere once we’ve emitted them. So until we get to zero emissions, we’re making the problem worse, and the planet keeps warming. Worse still, the only plausible pathways to keep us below the UN’s upper limit of 2°C of warming require us to do even better than this: we have to go carbon negative before the end of the century.

Misleading charts from the government of Canada won’t help us get on the right track.

This is an excerpt from the draft manuscript of my forthcoming book, Computing the Climate.

While models are used throughout the sciences, the word ‘model’ can mean something very different to scientists from different fields. This can cause great confusion. I often encounter scientists from outside of climate science who think climate models are statistical models of observed data, and that future projections from these models must be just extrapolations of past trends. And just to confuse things further, some of the models used in climate policy analysis are like this. But the physical climate models that underpin our knowledge of why climate change occurs are fundamentally different from statistical models.

A useful distinction made by philosophers of science is between models of phenomena, and models of data. The former include models developed by physicists and engineers to capture cause-and-effect relationships. Such models are derived from theory and experimentation, and have explanatory power: the model captures the reasons why things happen. Models of data, on the other hand, describe patterns in observed data, such as correlations and trends over time, without reference to why they occur. Statistical models, for example, describe common patterns (distributions) in data, without saying anything about what caused them. This simplifies the job of describing and analyzing patterns: if you can find a statistical model that matches your data, you can reduce the data to a few parameters (sometimes just two: a mean and a standard deviation). For example, the heights of any large group of people tend to follow a normal distribution—the bell-shaped curve—but this model doesn’t explain why heights vary in that way, nor whether they always will in the future. New techniques from machine learning have extended the power of these kinds of models in recent years, allowing more complex patterns to be discovered by “training” an algorithm to find more complex kinds of pattern.

Statistical techniques and machine learning algorithms are good at discovering patterns in data (e.g. “A and B always seem to change together”), but hopeless at explaining why those patterns occur. To get over this, many branches of science use statistical methods together with controlled experiments, so that if we find a pattern in the data after we’ve carefully manipulated the conditions, we can argue that the changes we introduced in the experiment caused that pattern. The ability to identify a causal relationship in a controlled experiment has nothing to do with the statistical model used—it comes from the logic of the experimental design. Only if the experiment is designed properly will statistical analysis of the results provide any insights into cause and effect.

Unfortunately, for some scientific questions, experimentation is hard, or even impossible. Climate change is a good example. Even though it’s possible to manipulate the climate (as indeed we are currently doing, by adding more greenhouse gases), we can’t set up a carefully controlled experiment, because we only have one planet to work with. Instead, we use numerical models, which simulate the causal factors—a kind of virtual experiment. An experiment conducted in a causal model won’t necessarily tell us what will happen in the real world, but it often gives a very useful clue. If we run the virtual experiment many times in our causal model, under slightly varied conditions, we can then turn back to a statistical model to help analyze the results. But without the causal model to set up the experiment, a statistical analysis won’t tell us much.

Both traditional statistical models and modern machine learning techniques are brittle, in the sense that they struggle when confronted with new situations not captured in the data from which the models were derived. An observed statistical trend projected into the future is only useful as a predictor if the future is like the past; it will be a very poor predictor if the conditions that cause the trend change. Climate change in particular is likely to make a mess of all of our statistical models, because the future will be very unlike the past. In contrast, a causal model based on the laws of physics will continue to give good predictions, as long as the laws of physics still hold.
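A toy illustration of that brittleness, with entirely invented numbers: generate data from a simple causal rule in which a quantity responds to an accelerating forcing, fit a straight-line trend to the first half of the record, and then compare the extrapolated trend with what the causal rule actually produces later on:

```python
import numpy as np

years = np.arange(100)
forcing = 0.01 * years ** 1.8          # made-up forcing that accelerates over time
response = 0.5 * forcing + np.random.default_rng(0).normal(0, 0.05, 100)  # causal rule + noise

# Statistical model: a linear trend fitted to the first 50 years only.
slope, intercept = np.polyfit(years[:50], response[:50], 1)

print(f"trend extrapolated to year 99: {slope * 99 + intercept:.1f}")
print(f"causal-rule value at year 99:  {0.5 * forcing[99]:.1f}")
# The trend describes the first 50 years well, but badly under-predicts year 99,
# because the conditions that produced the trend did not stay the same.
```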

Modern climate models contain elements of both types of model. The core elements of a climate model capture cause-and-effect relationships from basic physics, such as the thermodynamics and radiative properties of the atmosphere. But these elements are supplemented by statistical models of phenomena such as clouds, which are less well understood. To a large degree, our confidence in future predictions from climate models comes from the parts that are causal models based on physical laws, and the uncertainties in these predictions derive from the parts that are statistical summaries of less well-understood phenomena. Over the years, many of the improvements in climate models have come from removing a component that was based on a statistical model, and replacing it with a causal model. And our confidence in the causal components in these models comes from our knowledge of the laws of physics, and from running a very large number of virtual experiments in the model to check whether we’ve captured these laws correctly in the model, and whether they really do explain climate patterns that have been observed in the past.

One of the biggest challenges in understanding climate change is that the timescales involved are far longer than most people are used to thinking about. Garvey points out that this makes climate change different from any other ethical question, because both the causes and consequences are smeared out across time and space:

“There is a sense in which my actions and the actions of my present fellows join with the past actions of my parents, grandparents and great-grandparents, and the effects resulting from our actions will still be felt hundreds, even thousands of years in the future. It is also true that we are, in a way, stuck with the present we have because of our past. The little actions I undertake which keep me warm and dry and fed are what they are partly because of choices made by people long dead. Even if I didn’t want to burn fossil fuels, I’m embedded in a culture set up to do so.” (Garvey, 2008, p60)

Part of the problem is that the physical climate system is slow to respond to our additional greenhouse gas emissions, and similarly slow to respond to reductions in emissions. The first part of this is core to a basic understanding of climate change, as it’s built into the idea of equilibrium climate sensitivity (roughly speaking, the expected temperature rise for each doubling of CO2 concentrations in the atmosphere). The extra heat that’s trapped by the additional greenhouse gases builds up over time, and the planet warms slowly, but the oceans have such a large thermal mass, it takes decades for this warming process to complete.

Unfortunately, the second part, that the planet takes a long time to respond to reductions in emissions, is harder to explain, largely because of the common assumption that CO2 will behave like other pollutants, which wash out of the atmosphere fairly quickly once we stop emitting them. This assumption underlies much of the common wait-and-see response to climate change, as it gives rise to the myth that once we get serious about climate change (e.g. because we start to see major impacts), we can fix the problem fairly quickly. Unfortunately, this is not true at all, because CO2 is a long-lived greenhouse gas. About half of human CO2 emissions are absorbed by the oceans and soils, over a period of several decades. The remainder stays in the atmosphere. There are several natural processes that remove the remaining CO2 from the atmosphere, but they take thousands of years, which means that even with zero greenhouse gas emissions, we’re likely stuck with the consequences of life on a warmer planet for centuries.

So the physical climate system presents us with two forms of inertia: one that delays the warming due to greenhouse gas emissions, and one that delays the reduction in that warming in response to reduced emissions:

  1. The thermal inertia of the planet’s surface (largely due to the oceans), by which the planet can keep absorbing extra heat for years before it makes a substantial difference to surface temperatures. (scale: decades)
  2. The carbon cycle inertia by which CO2 is only removed from the atmosphere very slowly, and has a continued warming effect for as long as it’s there. (scale: decades to millennia)

For more on how these forms of inertia affect future warming scenarios, see my post on committed warming.

But these are not the only forms of inertia that matter. There are also various kinds of inertia in the socio-economic system that slow down our response to climate change. For example, Davis et al. attempt to quantify the emissions from all the existing energy infrastructure (power plants, factories, cars, buildings, etc. that already exist and are in use), because even under the most optimistic scenario, it will take decades to replace all this infrastructure with clean energy alternatives. Here’s an example of their analysis, under the assumption that things we’ve already built will not be retired early. This assumption is reasonable because (1) it’s rare that we’re willing to bear the cost of premature retirement of infrastructure and (2) it’s going to be hard enough building enough new clean energy infrastructure fast enough to replace stuff that has worn out while meeting increasing demand.


Expected ongoing carbon dioxide emissions from existing infrastructure. Includes primary infrastructure only – i.e. infrastructure that directly releases CO2 (e.g. cars & trucks), but not infrastructure that encourages the continued production of devices that emit CO2 (e.g. the network of interstate highways in the US). From Davis et al, 2010.

So that gives us our third form of inertia:

  3. Infrastructural inertia from existing energy infrastructure, as emissions of greenhouse gases will continue from everything we’ve built in the past, until it can be replaced. (scale: decades)

We’ve known about the threat of climate change for decades, and various governments and international negotiations have attempted to deal with it, and yet have made very little progress. Which suggests there are more forms of inertia that we ought to be able to name and quantify. To do this, we need to look at the broader socio-economic system that ought to allow us as a society to respond to the threat of climate change. Here’s a schematic of that system, as a systems dynamic model:

The socio-geophysical system. Arrows labelled ‘+’ are positive influence links (“A rise in X tends to cause a rise in Y, and a fall in X tends to cause a fall in Y”). Arrows labelled ‘-‘ represent negative links, where a rise in X tends to cause a fall in Y, and vice versa. The arrow labelled with a tap (faucet) is an accumulation link: Y will continue to rise even while X is falling, until X reaches net zero.

Broadly speaking, decarbonization will require both changes in technology and changes in human behaviour. But before we can do that, we have to recognize and agree that there is a problem, develop an agreed set of coordinated actions to tackle it, and then implement the policy shifts and behaviour changes to get us there.

At first, this diagram looks promising: once we realise how serious climate change is, we’ll take the corresponding actions, and that will bring down emissions, solving the problem. In other words, the more carbon emissions go up, the more they should drive a societal response, which in turn (eventually) will reduce emissions again. But the diagram includes a subtle but important twist: the link from carbon emissions to atmospheric concentrations is an accumulation link. Even as emissions fall, the amount of greenhouse gases in the atmosphere continues to rise. The latter rise only stops when carbon emissions reach zero. Think of a tap on the bathtub – if you reduce the inflow of water, the level of water in the tub still rises, until you turn the tap off completely.
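A toy "bathtub" sketch of that accumulation link, with all numbers invented: emissions are cut by the same amount every year and reach zero in year 30, yet the stock in the atmosphere keeps rising until that moment and only then stops growing (natural removal of CO2 is ignored here; as discussed earlier, it operates over millennia):

```python
stock = 0.0
for year in range(41):
    emissions = max(30.0 - year, 0.0)    # steady cuts, reaching zero at year 30
    stock += emissions                   # the accumulation link: the stock never falls
    if year % 5 == 0:
        print(f"year {year:2d}: emissions {emissions:4.1f}, stock {stock:6.1f}")
```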

Worse still, there are plenty more forms of inertia hidden in the diagram, because each of the causal links takes time to operate. I’ve given these additional sources of inertia names:

Sources of inertia in the socio-geophysical climate system

For example, there are forms of inertia that delay the impacts of increased temperatures, both on ecosystems and on human society. Most of the systems that are impacted by climate change can absorb smaller changes in the climate without much noticeable difference, but then reach a threshold whereby they can no longer be sustained. I’ve characterized two forms of inertia here:

  1. Natural variability (or “signal to noise”) inertia, which arises because initially, temperature increases due to climate change are much smaller than the internal variability of daily and seasonal weather patterns. Hence it takes a long time for the ‘signal’ of climate change to emerge from the noise of natural variability. (scale: decades)
  2. Ecosystem resilience. We tend to think of resilience as a good thing – defined informally as the ability of a system to ‘bounce back’ after a shock. But resilience can also mask underlying changes that push a system closer and closer to a threshold beyond which it cannot recover. So this form of inertia acts by masking the effect of that change, sometimes until it’s too late to act. (scale: years to decades)

Then, once we identify the impacts of climate change (whether in advance or after the fact), it takes time for these to feed into the kind of public concern needed to build agreement on the need for action:

  1. Societal resilience. Human society is very adaptable. When storms destroy our buildings, we just rebuild them a little stronger. When drought destroys our crops, we just invent new forms of irrigation. Just as with ecosystems, there is a limit to this kind of resilience, when subjected to a continual change. But our ability to shrug and get on with things causes a further delay in the development of public concern about climate change. (scale: decades?)
  2. Denial. Perhaps even stronger than human resilience is our ability to fool ourselves into thinking that something bad is not happening, and to look for other explanations than the ones that best fit the evidence. Denial is a pretty powerful form of inertia. Denial stops addicts from acknowledging they need to seek help to overcome addiction, and it stops all of us from acknowledging we have a fossil fuel addiction, and need help to deal with it. (scale: decades to generations?)

Even then, public concern doesn’t immediately translate into effective action because of:

  1. Individualism. A frequent response to discussions on climate change is to encourage people to make personal changes in their lives: change your lightbulbs, drive a little less, fly a little less. While these things are important in the process of personal discovery, by helping us understand our individual impact on the world, they are a form of voluntary action only available to the privileged, and hence do not constitute a systemic solution to climate change. When the systems we live in drive us towards certain consumption patterns, it takes a lot of time and effort to choose a low-carbon lifestyle. So the only way this scales is through collective political action: getting governments to change the regulations and price structures that shape what gets built and what we consume, and making governments and corporations accountable for cutting their greenhouse gas contributions. (scale: decades?)

When we get serious about the need for coordinated action, there are further forms of inertia that come into play:

  1. Missing governance structures. We simply don’t have the kind of governance at either the national or international level that can put in place meaningful policy instruments to tackle climate change. The Kyoto process failed because the short term individual interests of the national governments who have the power to act always tend to outweigh the long term collective threat of climate change. The Paris agreement is woefully inadequate for the same reason. Similarly, national governments are hampered by the need to respond to special interest groups (especially large corporations), which means legislative change is a slow, painful process. (scale: decades!)
  2. Bureaucracy. This hampers the implementation of new policy tools. It takes time to get legislation formulated and agreed, and it takes time to set up the necessary institutions to ensure it is implemented. (scale: years)
  3. Social Resistance. People don’t like change, and some groups fight hard to resist changes that conflict with their own immediate interests. Every change in social norms is accompanied by pushback. And even when we welcome change and believe in it, we often slip back into old habits. (scale: years? generations?)

Finally, development and deployment of clean energy solutions experience a large number of delays:

  1. R&D lag. It takes time to ramp up new research and development efforts, due to the lack of qualified personnel, the glacial speed at which research institutions such as universities operate, and the tendency, especially in academia, for researchers to keep working on what they’ve always worked on in the past, rather than addressing societally important issues. Research on climate solutions is inherently trans-disciplinary, and existing research institutions tend to be very bad at supporting work that crosses traditional boundaries. (scale: decades?)
  2. Investment lag. A wholesale switch from fossil fuels to clean energy and energy efficiency will require huge upfront investment. Agencies that have funding to enable this switch (governments, investment portfolio managers, venture capitalists) tend to be very risk averse, and so prefer things that they know offer a return on investment – e.g. more oil wells and pipelines rather than new cleantech alternatives. (scale: years to decades)
  3. Diffusion of innovation: new technologies tend to take a long time to reach large scale deployment, following the classic s-shaped curve, with a small number of early adopters, and, if things go well, a steadily rising adoption curve, followed by a tailing off as laggards resist new technologies. Think about electric cars: while the technology has been available for years, they still only constitute less than 1% of new car sales today. Here’s a study that predicts this will rise to 35% by 2040. Think about that for a moment – if we follow the expected diffusion of innovation pattern, two thirds of new cars in 2040 will still have internal combustion engines. (scale: decades)
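To see how slow that diffusion can be even when adoption is growing steadily, here is a toy logistic (s-shaped) adoption curve. The midpoint and steepness are my own choices, picked only so the curve roughly passes through the two figures quoted above (around 1% of new sales now, around 35% by 2040); the point is the shape, not the specific years:

```python
import math

def ev_share(year, midpoint=2044, steepness=0.17):
    """Toy logistic adoption curve for the share of new car sales (illustrative only)."""
    return 1 / (1 + math.exp(-steepness * (year - midpoint)))

for year in (2016, 2025, 2040, 2050, 2060):
    print(year, f"{ev_share(year):5.1%}")
# Growth looks slow for decades, then rapid around the midpoint, then levels off;
# even at 35% of new sales in 2040, most new cars sold that year are still conventional.
```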

All of these forms of inertia slow the process of dealing with climate change, allowing the warming to steadily increase while we figure out how to overcome them. So the key problem isn’t how to address climate change by switching from the current fossil fuel economy to a carbon-neutral one – we probably have all the technologies to do this today. The problem is how to do it fast enough. To stay below 2°C of warming, the world needs to cut greenhouse gas emissions by 50% by 2030, and achieve carbon neutrality in the second half of the century. We’ll have to find a way of overcoming many different types of inertia if we are to make it.

I’ve been exploring how Canada’s commitments to reduce greenhouse gas emissions stack up against reality, especially in the light of the government’s recent decision to stick with the emissions targets set by the previous administration.

Once upon a time, Canada was considered a world leader on climate and environmental issues. The Montreal Protocol on Substances that Deplete the Ozone Layer, signed in 1987, is widely regarded as the most successful international agreement on environmental protection ever. A year later, Canada hosted a conference on The Changing Atmosphere: Implications for Global Security, which helped put climate change on the international political agenda. This conference was one of the first to identify specific targets to avoid dangerous climate change, recommending a global reduction in greenhouse gas emissions of 20% by 2005. It didn’t happen.

It took another ten years before an international agreement to cut emissions was reached: the Kyoto Protocol in 1997. Hailed as a success at the time, it became clear over the ensuing years that with non-binding targets, the agreement was pretty much a sham. Under Kyoto, Canada agreed to cut emissions to 6% below 1990 levels by the 2008-2012 period. It didn’t happen.

At the Copenhagen talks in 2009, Canada proposed an even weaker goal: 17% below 2005 levels (which corresponds to 1.5% above 1990 levels) by 2020. Given that emissions have risen steadily since then, it probably won’t happen. By 2011, facing an embarrassing gap between its Kyoto targets and reality, the Harper administration formally withdrew from Kyoto – the only country ever to do so.

Last year, in preparation for the Paris talks, the Harper administration submitted a new commitment: 30% below 2005 levels by 2030. At first sight it seems better than previous goals. But it includes a large slice of expected international credits and carbon sequestered in wood products, as Canada incorporates Land Use, Land Use Change and Forestry (LULUCF) into its carbon accounting. In terms of actual cuts in greenhouse gas emissions, the target represents approximately 8% above 1990 levels.

The new government, elected in October 2015, trumpeted a renewed approach to climate change, arguing that Canada should be a world leader again. At the Paris talks in 2015, the Trudeau administration proudly supported both the UN’s commitment to keep global temperatures below 2°C of warming (compared to the pre-industrial average), and voiced strong support for an even tougher limit of 1.5°C. However, the government has chosen to stick with the Harper administration’s original Paris targets.

It is clear that the global commitments under the Paris agreement fall a long way short of what is needed to stay below 2°C, and Canada’s commitment has been rated as one of the weakest. Based on IPCC assessments, to limit warming below 2°C, global greenhouse gas emissions will need to be cut by about 50% by 2030, and eventually reach zero net emissions globally (which will probably mean zero use of fossil fuels, as assumptions about negative emissions seem rather implausible). As Canada has much greater wealth and access to resources than most nations, much greater per capita emissions than all but a few nations, and much greater historical responsibility for emissions than most nations, a “fair” effort would have Canada cutting emissions much faster than the global average, to allow room for poorer nations to grow their emissions, at least initially, to alleviate poverty. Climate Action Tracker suggests 67% below 1990 emissions by 2030 is a fair target for Canada.

Here’s what all of this looks like – click for bigger version. Note: emissions data from Government of Canada; the Toronto 1988 target was never formally adopted, but was Liberal party policy in the early 90’s. Global 2°C pathway 2030 target from SEI;  Emissions projection, LULUCF adjustment, and “fair” 2030 target from CAT.

Canada's Climate Targets

Several things jump out at me from this chart. First, the complete failure to implement policies that would have allowed us to meet any of these targets. The dip in emissions from 2008-2010, which looked promising for a while, was due to the financial crisis and economic downturn, rather than any actual climate policy. Second, the similar slope of the line to each target, which represents the expected rate of decline from when the target was proposed to when it ought to be attained. At no point has there been any attempt to make up lost ground after each failed target. Finally, in terms of absolute greenhouse gas emissions, each target is worse than the previous ones. Shifting the baseline from 1990 to 2005 masks much of this, and shows that successive governments are more interested in optics than serious action on climate change.

At no point has Canada ever adopted science-based targets capable of delivering on its commitment to keep warming below 2°C.

Today I’ve been tracking down the origin of the term “Greenhouse Effect”. The term itself is problematic, because it only works as a weak metaphor: both the atmosphere and a greenhouse let the sun’s rays through, and then trap some of the resulting heat. But the mechanisms are different. A greenhouse stays warm by preventing warm air from escaping. In other words, it blocks convection. The atmosphere keeps the planet warm by preventing (some wavelengths of) infra-red radiation from escaping. The “greenhouse effect” is really the result of many layers of air, each absorbing infra-red from the layer below, and then re-emitting it both up and down. The rate at which the planet then loses heat is determined by the average temperature of the topmost layer of air, where this infra-red finally escapes to space. So not really like a greenhouse at all.

So how did the effect acquire this name? The 19th century French mathematician Joseph Fourier is usually credited as the originator of the idea in the 1820s. However, it turns out he never used the term, and as James Fleming (1999) points out, most authors writing about the history of the greenhouse effect cite only secondary sources on this, without actually reading any of Fourier’s work. Fourier does mention greenhouses in his 1822 classic “Analytical Theory of Heat”, but not in connection with planetary temperatures. The book was published in French, so he uses the French “les serres”, but it appears only once, in a passage on properties of heat in enclosed spaces. The relevant paragraph translates as:

In general the theorems concerning the heating of air in closed spaces extend to a great variety of problems. It would be useful to revert to them when we wish to foresee and regulate the temperature with precision, as in the case of green-houses, drying-houses, sheep-folds, work-shops, or in many civil establishments, such as hospitals, barracks, places of assembly” [Fourier, 1822; appears on p73 of the edition translated by Alexander Freeman, published 1878, Cambridge University Press]

In his other writings, Fourier did hypothesize that the atmosphere plays a role in slowing the rate of heat loss from the surface of the planet to space, hence keeping the ground warmer than it might otherwise be. However, he never identified a mechanism, as the properties of what we now call greenhouse gases weren’t established until John Tyndall’s experiments in the 1850s. In explaining his hypothesis, Fourier refers to a “hotbox”, a device invented by the explorer de Saussure, to measure the intensity of the sun’s rays. The hotbox had several layers of glass in the lid which allowed the sun’s rays to enter, but blocked the escape of the heated air via convection. But it was only a metaphor. Fourier understood that whatever the heat trapping mechanism in the atmosphere was, it didn’t actually block convection.

Svante Arrhenius was the first to attempt a detailed calculation of the effect of changing levels of carbon dioxide in the atmosphere, in 1896, in his quest to test a hypothesis that the ice ages were caused by a drop in CO2. Accordingly, he’s also sometimes credited with inventing the term. However, he also didn’t use the term “greenhouse” in his papers, although he did invoke a metaphor similar to Fourier’s, using the Swedish word “drivbänk”, which translates as hotbed (Update: or possibly “hothouse” – see comments).

So the term “greenhouse effect” wasn’t coined until the 20th Century. Several of the papers I’ve come across suggest that the first use of the term “greenhouse” in this connection in print was in 1909, in a paper by Wood. This seems rather implausible though, because the paper in question is really only a brief commentary explaining that the idea of a “greenhouse effect” makes no sense, as a simple experiment shows that greenhouses don’t work by trapping outgoing infra-red radiation. The paper is clearly reacting to something previously published on the greenhouse effect, and which Wood appears to take way too literally.

A little digging produces a 1901 paper by Nils Ekholm, a Swedish meteorologist who was a close colleague of Arrhenius, which does indeed use the term ‘greenhouse’. At first sight, he seems to use the term more literally than is warranted, although in subsequent paragraphs, he explains the key mechanism fairly clearly:

The atmosphere plays a very important part of a double character as to the temperature at the earth’s surface, of which the one was first pointed out by Fourier, the other by Tyndall. Firstly, the atmosphere may act like the glass of a green-house, letting through the light rays of the sun relatively easily, and absorbing a great part of the dark rays emitted from the ground, and it thereby may raise the mean temperature of the earth’s surface. Secondly, the atmosphere acts as a heat store placed between the relatively warm ground and the cold space, and thereby lessens in a high degree the annual, diurnal, and local variations of the temperature.

There are two qualities of the atmosphere that produce these effects. The one is that the temperature of the atmosphere generally decreases with the height above the ground or the sea-level, owing partly to the dynamical heating of descending air currents and the dynamical cooling of ascending ones, as is explained in the mechanical theory of heat. The other is that the atmosphere, absorbing but little of the insolation and the most of the radiation from the ground, receives a considerable part of its heat store from the ground by means of radiation, contact, convection, and conduction, whereas the earth’s surface is heated principally by direct radiation from the sun through the transparent air.

It follows from this that the radiation from the earth into space does not go on directly from the ground, but on the average from a layer of the atmosphere having a considerable height above sea-level. The height of that layer depends on the thermal quality of the atmosphere, and will vary with that quality. The greater is the absorbing power of the air for heat rays emitted from the ground, the higher will that layer be, But the higher the layer, the lower is its temperature relatively to that of the ground ; and as the radiation from the layer into space is the less the lower its temperature is, it follows that the ground will be hotter the higher the radiating layer is.” [Ekholm, 1901, p19-20]

At this point, it’s still not called the “greenhouse effect”, but the metaphor does appear to have become a standard way of introducing the concept. Then in 1907, the English scientist John Henry Poynting confidently introduces the term “greenhouse effect” in his criticism of Percival Lowell’s analysis of the temperature of the planets. He uses it in scare quotes throughout the paper, which suggests the term is newly minted:

Prof. Lowell’s paper in the July number of the Philosophical Magazine marks an important advance in the evaluation of planetary temperatures, inasmuch as he takes into account the effect of planetary atmospheres in a much more detailed way than any previous writer. But he pays hardly any attention to the “blanketing effect,” or, as I prefer to call it, the “greenhouse effect” of the atmosphere. [Poynting, 1907, p749]

And he goes on:

The “greenhouse effect” of the atmosphere may perhaps be understood more easily if we first consider the case of a greenhouse with horizontal roof of extent so large compared with its height above the ground that the effect of the edges may be neglected. Let us suppose that it is exposed to a vertical sun, and that the ground under the glass is “black” or a full absorber. We shall neglect the conduction and convection by the air in the greenhouse. [Poynting, 1907, p750]

He then goes on to explore the mathematics of heat transfer in this idealized greenhouse. Unfortunately, he ignores Ekholm’s crucial observation that it is the rate of heat loss from the upper atmosphere that matters, so his calculations are mostly useless. But his description of the mechanism does appear to have taken hold as the dominant explanation. The following year, Frank Very published a response (in the same journal), using the term “Greenhouse Theory” in the title of the paper. He criticizes Poynting’s idealized greenhouse as far too simplistic, but suggests that a slightly better metaphor is a set of greenhouses stacked one above another, each of which traps a little of the heat from the one below:

It is true that Professor Lowell does not consider the greenhouse effect analytically and obviously, but it is nevertheless implicitly contained in his deduction of the heat retained, obtained by the method of day and night averages. The method does not specify whether the heat is lost by radiation or by some more circuitous process; and thus it would not be precise to label the retaining power of the atmosphere a “greenhouse effect” without giving a somewhat wider interpretation to this name. If it be permitted to extend the meaning of the term to cover a variety of processes which lead to identical results, the deduction of the loss of surface heat by comparison of day and night temperatures is directly concerned with this wider “greenhouse effect.” [Very, 1908, p477]

Between them, Poynting and Very are attempting to pin down whether the “greenhouse effect” is a useful metaphor, and how the heat transfer mechanisms of planetary atmospheres actually work. But in so doing, they help establish the name. Wood’s 1909 comment is clearly a reaction to this discussion, but one that fails to understand what is being discussed. It’s eerily reminiscent of any modern discussion of the greenhouse effect: whenever any two scientists discuss the details of how the greenhouse effect works, you can be sure someone will come along sooner or later claiming to debunk the idea by completely misunderstanding it.
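For anyone who wants to see what Poynting and Very were actually arguing over, here’s a minimal sketch of the idealized “panes of glass” model: each pane lets sunlight straight through, absorbs all the infra-red from below, and re-radiates it equally up and down. The set-up and the numbers are my own illustration rather than anything taken from either paper, and of course (as Ekholm’s argument makes clear) a real atmosphere is not a stack of discrete panes:

    # A toy version of the stacked-greenhouse picture: n panes of glass, each
    # transparent to sunlight but a perfect absorber and emitter of infra-red.
    # Balancing energy at each pane gives a surface temperature of
    # (n + 1)^(1/4) times the bare-rock temperature. Illustrative only.

    SIGMA = 5.67e-8   # Stefan-Boltzmann constant (W/m^2/K^4)

    def surface_temperature(absorbed_sunlight, n_panes):
        """Surface temperature (K) under n idealized panes of glass."""
        bare_rock = (absorbed_sunlight / SIGMA) ** 0.25
        return (n_panes + 1) ** 0.25 * bare_rock

    # roughly Earth-like absorbed sunlight of about 238 W/m^2 (an assumed figure)
    for n in range(4):
        print(f"{n} panes: {surface_temperature(238.0, n):.0f} K")

Each extra pane adds a diminishing amount of warming, which captures the spirit of Very’s stacked greenhouses, even if it misses the physics of a real atmosphere.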

In summary, I think it’s fair to credit Poynting as the originator of the term “greenhouse effect”, but with a special mention to Ekholm for both his prior use of the word “greenhouse”, and his much better explanation of the effect. (Unless I missed some others?)

References

Arrhenius, S. (1896). On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground. Philosophical Magazine and Journal of Science, 41(251). doi:10.1080/14786449608620846

Ekholm, N. (1901). On The Variations Of The Climate Of The Geological And Historical Past And Their Causes. Quarterly Journal of the Royal Meteorological Society, 27(117), 1–62. doi:10.1002/qj.49702711702

Fleming, J. R. (1999). Joseph Fourier, the “greenhouse effect”, and the quest for a universal theory of terrestrial temperatures. Endeavour, 23(2), 72–75. doi:10.1016/S0160-9327(99)01210-7

Fourier, J. (1822). Théorie Analytique de la Chaleur (“Analytical Theory of Heat”). Paris: Chez Firmin Didot, Pere et Fils.

Fourier, J. (1827). On the Temperatures of the Terrestrial Sphere and Interplanetary Space. Mémoires de l’Académie Royale Des Sciences, 7, 569–604. (translation by Ray Pierrehumbert)

Poynting, J. H. (1907). On Prof. Lowell’s Method for Evaluating the Surface-temperatures of the Planets; with an Attempt to Represent the Effect of Day and Night on the Temperature of the Earth. Philosophical Magazine, 14(84), 749–760.

Very, F. W. (1908). The Greenhouse Theory and Planetary Temperatures. Philosophical Magazine, 16(93), 462–480.

Wood, R. W. (1909). Note on the Theory of the Greenhouse. Philosophical Magazine, 17, 319–320. Retrieved from http://scienceblogs.com/stoat/2011/01/07/r-w-wood-note-on-the-theory-of/

This week I’m reading my way through three biographies, which neatly capture the work of three key scientists who laid the foundation for modern climate modeling: Arrhenius, Bjerknes and Callendar.

[Cover images of the three books]

Crawford, E. (1996). Arrhenius: From Ionic Theory to the Greenhouse Effect. Science History Publications.
A biography of Svante Arrhenius, the Swedish scientist who, in 1895, created the first computational climate model, and spent almost a full year calculating by hand the likely temperature changes across the planet for increased and decreased levels of carbon dioxide. The term “greenhouse effect” hadn’t been coined back then, and Arrhenius was more interested in the question of whether the ice ages might have been caused by reduced levels of CO2. Nevertheless, his model was a remarkably good first attempt, and produced the first quantitative estimate of the warming expected from humanity’s ongoing use of fossil fuels (a toy version of the rule of thumb he arrived at is sketched after these book notes).
Friedman, R. M. (1993). Appropriating the Weather: Vilhelm Bjerknes and the Construction of a Modern Meteorology. Cornell University Press.
A biography of Vilhelm Bjerknes, the Norwegian scientist who, in 1904, identified the primitive equations, a set of differential equations that form the basis of modern computational weather forecasting and climate models. The equations are, in essence, the equations of fluid flow and thermodynamics, adapted to represent the atmosphere as a fluid on a rotating sphere in a gravitational field. At the time, they were little more than a theoretical exercise, and we had to wait half a century for the early digital computers before it became possible to use them for quantitative weather forecasting.
Fleming, J. R. (2009). The Callendar Effect: The Life and Work of Guy Stewart Callendar (1898-1964). University of Chicago Press.
A biography of Guy S. Callendar, the British scientist, who, in 1938, first compared long term observations of temperatures with measurements of rising carbon dioxide in the atmosphere, to demonstrate a warming trend as predicted by Arrhenius’ theory. It was several decades before his work was taken seriously by the scientific community. Some now argue that we should use the term “Callendar Effect” to describe the warming from increased emissions of carbon dioxide, because the term “greenhouse effect” is too confusing – greenhouse gases were keeping the planet warm long before we started adding more, and anyway, the analogy with the way that glass traps heat in a greenhouse is a little inaccurate.
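As a footnote to the Arrhenius entry: his 1896 paper observed that the warming grows roughly in arithmetic progression as the CO2 concentration grows in geometric progression, i.e. roughly logarithmically. Here’s a toy version of that rule of thumb; the sensitivity value is an assumption I’ve picked for illustration, not Arrhenius’s own figure (his estimate of the warming per doubling was considerably higher than modern ones):

    # A toy version of the logarithmic CO2-temperature rule of thumb that grew out
    # of Arrhenius's 1896 calculations. The sensitivity value is my own illustrative
    # assumption, not Arrhenius's figure.

    import math

    def warming(co2_ratio, sensitivity_per_doubling=3.0):
        """Approximate warming (degrees C) for a given ratio of CO2 to its baseline."""
        return sensitivity_per_doubling * math.log2(co2_ratio)

    print(warming(2.0))   # a doubling of CO2
    print(warming(0.5))   # a halving: the ice-age scenario Arrhenius set out to test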

Not only do the three form a neat ABC, they also represent the three crucial elements you need for modern climate modelling: a theoretical framework to determine which physical processes are likely to matter, a set of detailed equations that allow you to quantify the effects, and comparison with observations as a first step in validating the calculations.