The meteorologist Norman Phillips died last week, at the grand old age of 95. As I’ve written about his work in my forthcoming book, Computing the Climate, I’ve extracted this piece from the manuscript, to honour his contribution to climate modelling—not only did he create the first ever global circulation model, but the ideas in his model sparked off a revolution in how we use computers to model the climate system. We join the story shortly after the success of the first numerical forecast model, developed by Jule Charney and his team of meteorologists at Princeton in the late 1940s. Among the team was a young Norman Phillips…
In the 1950s, a team of meteorologists led by Jule Charney at Princeton’s Institute for Advanced Study (IAS) had turned the equations of motion into a program that could compute the weather. Flushed with the success of a trial run of their forecast model on ENIAC in March 1950, they were keen to figure out how to extend the range of their forecasts. Within a couple of years, they had produced some reasonably good 24-hour forecasts, and sometimes even 36-hour ones, although in the early 1950s, they couldn’t yet do this consistently. For better forecasts, they would need better models and better data.
Because of limited computing power, and limited observational data, their early models were designed to cover only a part of the globe—the region over North America. This meant they were simulating an “open” system. In the real world, the part of the atmosphere included in the model interacts with parts outside the model, exchanging mass and energy freely. If a storm system from elsewhere moved into the region, the model could not simulate this, as it had no information on what was happening beyond its boundaries.
In his initial models, Charney had ignored this problem, and treated the boundary conditions as fixed. He added an extra strip of grid points at each edge of the model’s main grid, where conditions were treated as constant. When the simulation calculated the next state of the atmosphere for each point within the grid, these edge points just kept their initial values. This simplification imposed a major limitation on the accuracy of the weather forecasts. As the simulation proceeded, the values at these edge points would become less and less like the real conditions, and these errors would propagate inwards, across the grid. To get longer forecasts—say for weeks, instead of days—a better solution was needed. For long-range forecasting, the computer would need to think outside the box.
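The idea of holding an edge strip at constant values can be sketched in a few lines of code. This is an illustrative toy, not Charney’s actual scheme: a simple neighbour-averaging update stands in for the real dynamics, and only the interior points are recomputed, so the boundary ring keeps its initial values forever—exactly the simplification that lets errors creep in from the edges.

```python
import numpy as np

def step_with_fixed_boundaries(field):
    """Advance one timestep; only interior points are updated.

    A 4-neighbour average stands in for the real atmospheric dynamics
    (purely illustrative). The one-cell strip at each edge is never
    touched, mimicking Charney's constant boundary conditions.
    """
    new = field.copy()
    new[1:-1, 1:-1] = 0.25 * (
        field[:-2, 1:-1] + field[2:, 1:-1] +
        field[1:-1, :-2] + field[1:-1, 2:]
    )
    return new  # the edge strip retains its original values

grid = np.zeros((6, 6))
grid[3, 3] = 1.0   # a single disturbance in the interior
after = step_with_fixed_boundaries(grid)
# The disturbance spreads to its neighbours, but the edges never change.
```

However many steps you run, the edge cells stay frozen at their starting values—which is why, in the real forecasts, conditions at the boundary drifted further and further from reality as the simulation proceeded.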
The obvious way to do this was to extend the grid to cover the entire globe, making it a “closed” system. This would leave only two, simpler boundaries. At the top of the atmosphere, energy arrives from the sun, and is lost back to space. But no air mass crosses this boundary, which means there are no significant boundary disturbances. At the bottom, where the atmosphere meets the surface of the planet, things are more complicated, as both heat and moisture cross the boundary, with water evaporating from the land and oceans, and eventually being returned as rain and snow. But this effect is small compared to movements within the atmosphere, so it could be ignored, at least for the coarse-grained models of the 1950s—later models would incorporate this exchange between surface and atmosphere directly in the simulation.
Among the group at Princeton, Norman Phillips was the first to create a working global model. Because the available computer power was still relatively tiny, extending the grid for an existing forecast model wasn’t feasible. Instead, Phillips took a different approach. He removed so many of the features of the real planet that the model barely resembled the earth at all.
To simplify things, he treated the surface of the earth as smooth and featureless. He used a 17×16 grid, not unlike the original ENIAC model, but connected the cells on the eastern edge with the cells on the western edge, so that instead of having fixed boundaries to the east and the west, the grid wrapped around, as though it were a cylindrical planet [1]. At the north and south edges of the grid, the model behaved as if there were solid walls—movement of the atmosphere against the wall would be reflected back again. This overall shape simplified things: by connecting the east and west edges, the model could simulate airflows that circulate all the way around the planet, but Phillips didn’t have to figure out the complex geometry where grid cells converge at the poles.
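In code, the wrap-around geometry amounts to nothing more than modular arithmetic on the east-west index, while the north-south index simply stops at the walls. The sketch below is my own illustration of the idea (the indexing convention and grid counts are assumptions, not Phillips’ actual code):

```python
# Illustrative neighbour lookups for a cylindrical grid:
# east-west wraps around; north-south hits solid walls.
N_EW, N_NS = 16, 16   # grid counts in each direction (illustrative)

def east_neighbour(i):
    """Wraps from the easternmost column back to the westernmost."""
    return (i + 1) % N_EW

def west_neighbour(i):
    return (i - 1) % N_EW

def north_neighbour(j):
    """Solid wall: there is no cell beyond the northern edge."""
    return min(j + 1, N_NS - 1)

def south_neighbour(j):
    return max(j - 1, 0)
```

A disturbance drifting east off column 15 simply reappears at column 0—giving the model its circumnavigating airflows—whereas anything pushing north of the last row just presses against the wall.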
The dimensions of this simulated cylindrical planet were similar to those of Charney’s original weather model, as it used the same equations. Phillips’ grid points were 375km apart in the east-west direction and 625km apart in the north-south. This gave a virtual planet whose circumference was less than 1/6th of the circumference of the earth, but whose height was almost the same as the distance from the earth’s equator to its pole. A tall, thin, cylindrical earth.
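The arithmetic behind those proportions is easy to check. In the sketch below I assume 16 grid intervals in each direction (the exact interval counts are my assumption, since the wrap-around makes the bookkeeping slightly ambiguous—see note [3]):

```python
# Back-of-envelope check of the model's proportions.
# Grid spacings are from the text; 16 intervals per direction is assumed.
dx_km, dy_km = 375, 625
ew_intervals, ns_intervals = 16, 16

model_circumference = ew_intervals * dx_km    # 6,000 km
earth_circumference = 40_075                  # km, at the equator
ratio = model_circumference / earth_circumference  # about 0.15, under 1/6

north_south_extent = ns_intervals * dy_km     # 10,000 km, roughly
                                              # equator-to-pole on earth
```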
To simplify things even more, Phillips’ cylindrical model represented only one hemisphere of earth. He included a heating effect at the southern end of the grid, to represent the equator receiving the most energy from the sun, and a cooling effect at the northern end of the model, to represent the arctic cooling as it loses heat to space [2]. The atmosphere was represented as two layers of air, and each layer was a version of Charney’s original one-layer model. The grid therefore had 17×16×2 cells in total, and it ran on a machine with 5Kbytes of RAM and 10Kbytes of magnetic drum memory. The choice of this grid was not an accident: the internal memory of the IAS machine could store 1,024 numbers (it had 1,024 words, each 40-bits long). Phillips’ choice of grid meant a single state of the global atmosphere could be represented with about 500 variables [3], thus taking up just under half of the machine’s memory, leaving the other half available for calculating the next state.
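The memory budget works out neatly, using the effective 17×15 grid from note [3]:

```python
# The IAS machine's memory budget for a single atmospheric state.
words_total = 1024        # 1,024 words of 40 bits each
levels = 2                # two atmospheric layers
effective_points = 17 * 15   # east and west edges share grid points (note [3])

state_variables = effective_points * levels   # 510 — "about 500 variables"
fraction_used = state_variables / words_total # just under half the memory
```

Half the memory holds the current state; the other half receives the next one—a tight fit that helps explain why Phillips couldn’t simply extend an existing forecast grid to cover the globe.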
To initialize the model, Phillips decided not to bother with observational data at all. That would have been hard anyway, as the geometry of the model didn’t resemble planet earth. Instead, he started with a uniform atmosphere at rest. In other words, every grid point started with the same values, as though there were no wind anywhere. Starting a simulation model with the atmosphere at rest and hoping the equations would start to generate realistic weather patterns was a bold, and perhaps crazy idea.
It is also the ultimate test of the equations in the model: if they could get the virtual atmosphere moving in a realistic way, it would mean nothing important had been left out. Today, we call this a spin-up run. The ocean and atmosphere components of today’s global climate models are regularly started in this way. Spin-up runs for today’s models are expensive though, because they require a lot of time on the supercomputer, and until the model settles into a stable pattern the simulation results are unusable. Oceans in particular have tremendous inertia, so modern ocean models can take hundreds of years of simulation time to produce stable and realistic ocean currents, which typically requires many weeks to run on a supercomputer. Therefore, the spin-up is typically run just once, and the state at the end of this spin-up is used as a start state for all the science experiments to be run on the model.
By 1955, Phillips had his global simulation model running successfully. Once the run started, the simulated atmosphere didn’t stay at rest. The basic equations of the model included terms for forces that would move the atmosphere: gravity, the Coriolis force, expansion and contraction when air warms and cools, and the movement of air from high pressure areas to lower pressure areas. As heat entered the atmosphere towards the southern edge, the equations in the model made this air expand, rise and move northwards, just as it does in real life. Under the effect of the Coriolis force, this moving air mass slowly curled towards the east. The model developed its own stable jet stream.
In his early tests, Phillips was able to run the model for a month of simulation time, during which the model developed a realistic jet stream and gave good results for monthly and seasonal weather statistics. Unfortunately, getting the model to run longer than a month proved to be difficult, as numerical errors in the algorithms would accumulate. In later work, Phillips was able to fix these problems, but by then a whole generation of more realistic global climate models was emerging.
Phillips’ model wasn’t a predictive model, as it didn’t attempt to match any real conditions of the earth’s atmosphere. But the fact that it could simulate realistic patterns made it an exciting scientific model. It opened the door to the use of computer models to improve our understanding of the climate system. As the model could generate typical weather patterns from first principles, models like this could start to answer questions about the factors that shape the climate and drive regional differences. Clearly, long-range simulation models were possible, for scientists interested in the general patterns—the climate—rather than the actual weather on any specific day.
Despite its huge over-simplifications, Phillips’ model was regarded as a major step forward, and is now credited as the first General Circulation Model. The head of IAS, John von Neumann, was so excited that within a few months he persuaded the US Weather Bureau, Air Force, and Army to jointly fund a major new research program to develop the work further, at what in today’s money would be $2 million per year. The new research program, initially known as the General Circulation Research Section [4], and housed at the Weather Bureau’s computing facility in Maryland, eventually grew to become today’s Geophysical Fluid Dynamics Lab (GFDL), one of the world’s leading research labs for climate modelling. Von Neumann then convened a conference in Princeton, in October 1955, to discuss prospects for General Circulation Modelling. Phillips’ model was the highlight of the conference, but the topics also included stability of the numerical algorithms, how to improve forecasting of precipitation (rain and snow), and the need to include in the models the role of greenhouse gases.
In his opening speech to the conference, von Neumann divided weather prediction into three distinct problems. Short term weather prediction, over the span of a few days, he argued, was completely dominated by the initial values. Better data would soon provide better forecasts. In contrast, long-term prediction, like Phillips’ model, is largely unaffected by initial conditions. Von Neumann argued that by modelling the general circulation patterns for the entire globe, an “infinite forecast” would be possible—a model that could reproduce the large scale patterns of the climate system indefinitely. But the hardest prediction problem, he suggested, lay in between these two: intermediate range forecasts, which are shaped by both initial conditions and general circulation patterns. His assessment was correct: short term weather forecasting and global circulation modelling both developed rapidly in the ensuing decades, whereas intermediate forecasting (on the scale of months) is still a major challenge today.
Unfortunately, von Neumann didn’t live long enough to see his prediction play out. That same year he was diagnosed with cancer, and died two years later in February 1957, at the age of 53. The meteorology team no longer had a champion on the faculty at Princeton. Charney and Phillips left to take up positions at MIT, where Phillips would soon be head of the Department of Meteorology. The IAS meteorology project that had done so much to kick-start computerized weather forecasting was soon closed. However, its influence lived on, as a whole generation of young meteorologists established new research labs around the world to develop the techniques.

Notes:
[1] Although the geometry of the grid could be considered a cylinder, Phillips used a variable Coriolis factor suitable for a spherical planet, which means his artificial planet didn’t spin like a cylinder – the Coriolis force would get stronger, the further north you moved. This is essential for the formation of a jet stream. Strictly speaking, a cylindrical planet, if it could exist at all, wouldn’t have a Coriolis force, as the effect comes from the curvature towards the poles. Phillips included it in the equations anyway, to see if it would still produce a jet stream. For details see: Lewis, J. M. (1998). Clarifying the Dynamics of the General Circulation: Phillips’s 1956 Experiment. Bulletin of the American Meteorological Society, 79(1), 39–60.
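The variable Coriolis factor described in this note can be sketched as a linear function of the north-south coordinate—the standard “beta-plane” device, f = f0 + βy, for letting the Coriolis effect strengthen northwards on flat geometry. The specific values below are typical mid-latitude figures, not Phillips’ own:

```python
# Beta-plane sketch of a Coriolis parameter that grows northwards.
# Values are illustrative mid-latitude figures, not from Phillips (1956).
f0 = 1.0e-4     # s^-1, Coriolis parameter at the reference latitude
beta = 1.6e-11  # s^-1 m^-1, northward gradient of the Coriolis parameter

def coriolis(y_metres):
    """Coriolis parameter at distance y north of the reference latitude."""
    return f0 + beta * y_metres

# 1,000 km further north, the Coriolis parameter is noticeably stronger,
# which is what allows a jet stream to form on the flat grid.
```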
[2] This was implemented in the model using a heating parameter as a linear function of latitude, with maximum heating at the southern edge, and maximum cooling at the northern edge, with points in between scaled accordingly. As Phillips points out, this is not quite like the real planet, but it was sufficient to generate stable circulation patterns similar to those in the real atmosphere. See Phillips, N. A. (1956). The general circulation of the atmosphere: A numerical experiment. Quarterly Journal of the Royal Meteorological Society, 82(352), 123–164.
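The linear heating profile described in this note can be sketched as a simple ramp over the grid rows. The row count and heating amplitude below are illustrative placeholders, not Phillips’ actual parameter values:

```python
# Sketch of note [2]'s heating profile: a linear function of the
# north-south coordinate, with maximum heating at the southern edge
# and maximum cooling at the northern edge.
N_ROWS = 16   # north-south grid rows; row 0 is the southern edge (assumed)
H = 1.0       # maximum heating rate, in arbitrary units (illustrative)

def heating(row):
    """Linear ramp from +H at the south edge to -H at the north edge."""
    frac = row / (N_ROWS - 1)   # 0 at the south edge, 1 at the north edge
    return H * (1 - 2 * frac)

# heating(0) gives +H, heating(N_ROWS - 1) gives -H, and rows in
# between are scaled accordingly, passing through zero at the midpoint.
```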
[3] Actually, the grid was only 17×15, because it wrapped around, with the westernmost grid points being the same as the easternmost ones. So each of the two atmospheric levels could be represented as a geopotential array of 255 elements. (See Lewis, 1998)
[4] Joseph Smagorinsky, another member of the team that had run the ENIAC forecasts, was appointed head of this project. See Aspray, W. (1990). John von Neumann and the Origins of Modern Computing. MIT Press. Note that von Neumann’s original proposal is reproduced in full in Smagorinsky, J. (1983). The beginnings of numerical weather prediction and general circulation modelling: Early recollections. Advances in Geophysics, 25, 3–38.