{"id":2032,"date":"2010-11-30T00:03:56","date_gmt":"2010-11-30T05:03:56","guid":{"rendered":"http:\/\/www.easterbrook.ca\/steve\/?p=2032"},"modified":"2010-12-05T15:17:13","modified_gmt":"2010-12-05T20:17:13","slug":"validating-climate-models","status":"publish","type":"post","link":"http:\/\/www.easterbrook.ca\/steve\/2010\/11\/validating-climate-models\/","title":{"rendered":"Validating Climate Models"},"content":{"rendered":"<p>In my last two posts, I <a title=\"Do Climate Models Need Independent Verification and Validation?\" href=\"http:\/\/www.easterbrook.ca\/steve\/?p=1556\" target=\"_blank\">demolished<\/a> the idea that climate models need Independent Verification and Validation (IV&amp;V), and I described the idea of a <a title=\"The difference between verification and validation\" href=\"http:\/\/www.easterbrook.ca\/steve\/?p=2030\" target=\"_blank\">toolbox approach<\/a> to V&amp;V. Both posts were attacking myths: in the first case, the myth that an independent agent should be engaged to perform IV&amp;V on the models, and in the second, the myth that you can critique the V&amp;V of climate models without knowing anything about how they are currently built and tested.<\/p>\n<p>I now want to expand on the latter point, and explain how the day-to-day practices of climate modellers taken together constitute a robust validation process, and that the only way to improve this validation process is just to do more of it (i.e. give the modeling labs more funds to expand their current activities, rather than to do something very different).<\/p>\n<p>The most common mistake made by people discussing validation of climate models is to assume that a climate model is a thing-in-itself, and that the goal of validation is to demonstrate that some property holds of this thing. 
And whatever that property is, the assumption is that such measurement of it can be made without reference to its scientific milieu, and in particular without reference to its history and the processes by which it was constructed.<\/p>\n<p>This mistake leads people to talk of validation in terms of how well <em>&#8220;the model&#8221;<\/em> matches observations, or how well <em>&#8220;the model&#8221;<\/em> matches the processes in some real world system. This approach to validation is, as\u00a0<a title=\"Oreskes N, Shrader-Frechette K, Belitz K. Verification, validation, and confirmation of numerical models in the earth sciences. Science. 1994;263(5147):641.\" href=\"http:\/\/www.sciencemag.org\/cgi\/content\/abstract\/sci;263\/5147\/641\" target=\"_blank\">Oreskes <em>et al<\/em> pointed out<\/a>, quite impossible. The models are numerical approximations of complex physical phenomena. You can <em>verify<\/em> that the underlying equations are coded correctly in a given version of the model, but you can never <em>validate<\/em> that a given model accurately captures real physical processes, because it never will accurately capture them. Or as George Box summed it up: &#8220;<a title=\"George Box at wikipedia\" href=\"http:\/\/en.wikiquote.org\/wiki\/George_Box\" target=\"_blank\">All models are wrong&#8230;<\/a>&#8221; (we&#8217;ll come back to the second half of the quote later).<\/p>\n<p>The problem is that there is no such thing as <em>&#8220;the model&#8221;<\/em>. The body of code that constitutes a modern climate model actually represents an enormous number of possible models, each corresponding to a different way of configuring that code for a particular run. Furthermore, this body of code isn&#8217;t a static thing. The code is changed on a daily basis, through a continual process of experimentation and model improvement. 
Often these changes are done in parallel, so that there are multiple versions at any given moment, being developed along multiple lines of investigation. Sometimes these lines of evolution are merged, to bring a number of useful enhancements together into a single version. Occasionally, the lines diverge enough to cause a fork: a point at which they are different enough that it just becomes too hard to reconcile them\u00a0(see, for example,\u00a0<a title=\"GFDL ocean model genealogy\" href=\"http:\/\/climate.lanl.gov\/Models\/POP\/#history\" target=\"_blank\">this visualization of the evolution of ocean models<\/a>). A forked model might at some point be given a new name, but the process by which a model gets a new name is rather arbitrary.<\/p>\n<p>Occasionally, a modeling lab will label a particular snapshot of this evolving body of code as an &#8220;official release&#8221;. An official release has typically been tested much more extensively, in a number of standard configurations for a variety of different platforms. It&#8217;s likely to be more reliable, and therefore easier for users to work with. By more reliable here, I mean relatively free from coding defects. In other words, it is better <em>verified<\/em> than other versions, but not necessarily better <em>validated<\/em> (I&#8217;ll explain why shortly). In many cases, official releases also contain some significant new science (e.g. new parameterizations), and these scientific enhancements will be described in a set of published papers.<\/p>\n<p>However, an official release isn&#8217;t a single model either. Again it&#8217;s just a body of code that can be configured to run as any of a huge number of different models, and it&#8217;s not unchanging either &#8211; as with all software, there will be occasional bugfix releases applied to it. 
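<\/p>\n<p>To make this concrete: a single run is assembled from a particular code version plus a set of configuration choices and ancillary files. The fragment below is a made-up, namelist-style sketch &#8211; the specific names are invented for illustration, although real models are typically configured through Fortran namelists and NetCDF ancillary files in roughly this way:<\/p>\n<pre>&amp;run_config\n  code_version   = 'coupled_model_r1234'    ! which snapshot of the evolving code\n  resolution     = 'N96L38'                 ! horizontal grid and vertical levels\n  land_mask_file = 'ancil\/coastlines_v3.nc' ! ancillary data: coastlines and land surface\n  forcing_file   = 'ancil\/emissions_a1b.nc' ! carbon emissions scenario\n\/<\/pre>\n<p>Change any one of these settings and, in the sense I&#8217;m describing here, you are running a different model.<\/p>\n<p>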
Oh, and did I mention that to run a model, you have to make use of a huge number of ancillary datafiles, which define everything from the shape of the coastlines and land surfaces, to the specific carbon emissions scenario to be used? Any change to these effectively gives a different model too.<\/p>\n<p>So, if you&#8217;re hoping to validate &#8220;the model&#8221;, you have to say which one you mean: which configuration of which code version of which line of evolution, and with which ancillary files. I suppose the response from those clamouring for something different in the way of model validation would be &#8220;well, the one used for the IPCC projections, of course&#8221;. Which is a little tricky, because each lab produces a large number of different runs for the CMIP process that provides input to the IPCC, and each of these is likely to involve a different model configuration.<\/p>\n<p>But let&#8217;s say for the sake of argument that we could agree on a specific model configuration that ought to be &#8220;validated&#8221;. What will we do to validate it? What does validation actually mean? The Oreskes paper I mentioned earlier already demonstrated that comparison with real world observations, while interesting, does not constitute &#8220;validation&#8221;. The model will never match the observations exactly, so the best we&#8217;ll ever get along these lines is an argument that, on balance, given the sum total of the places where there&#8217;s a good match and the places where there&#8217;s a poor match, the model does better or worse than some other model. This isn&#8217;t validation, and furthermore it isn&#8217;t even a sensible way of thinking about validation.<\/p>\n<p>At this point many commentators stop, and argue that if validation of a model isn&#8217;t possible, then the models can&#8217;t be used to support the science (or more usually, they mean they can&#8217;t be used for IPCC projections). 
But this is a strawman argument, based on a fundamental misconception of what validation is all about. Validation isn&#8217;t about checking that a given instance of a model satisfies some given criteria. Validation is about fitness for purpose, which means it&#8217;s not about the model at all, but about the relationship <em>between<\/em> a model and the purposes to which it is put. Or more precisely, it&#8217;s about the relationship between <em>particular ways of building and configuring models<\/em> and the <em>ways in which runs produced by those models are used<\/em>.<\/p>\n<p>Furthermore, the purposes to which models are put and the processes by which they are developed co-evolve. The models evolve continually, and our ideas about what kinds of runs we might use them for evolve continually, which means validation must take this ongoing evolution into account. To summarize, validation isn&#8217;t about a property of some particular model instance; it&#8217;s about the whole\u00a0process of developing and using models, and how this process evolves over time.<\/p>\n<p>Let&#8217;s take a step back for a moment, and ask what the purpose of a climate model is. The second half of the George Box quote is &#8220;&#8230;but some models are useful&#8221;. Climate models are tools that allow scientists to explore their current understanding of climate processes, to build and test theories, and to explore the consequences of those theories. 
In other words we&#8217;re dealing with three distinct systems:<\/p>\n<div id=\"attachment_2041\" style=\"width: 560px\" class=\"wp-caption alignnone\"><a href=\"http:\/\/www.easterbrook.ca\/steve\/wp-content\/3systems.jpg\"><img aria-describedby=\"caption-attachment-2041\" decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-2041 \" title=\"3systems\" src=\"http:\/\/www.easterbrook.ca\/steve\/wp-content\/3systems.jpg\" alt=\"\" width=\"550\" height=\"103\" srcset=\"http:\/\/www.easterbrook.ca\/steve\/wp-content\/3systems.jpg 550w, http:\/\/www.easterbrook.ca\/steve\/wp-content\/3systems-300x56.jpg 300w\" sizes=\"(max-width: 550px) 100vw, 550px\" \/><\/a><p id=\"caption-attachment-2041\" class=\"wp-caption-text\">We&#39;re dealing with relationships between three different systems<\/p><\/div>\n<p>There does not need to be any clear relationship between the calculational system and the observational system &#8211; I didn&#8217;t include such a relationship in my diagram. For example, climate models can be run in configurations that don&#8217;t match the real world at all: e.g. a waterworld with no landmasses, or a world in which interesting things are varied: the tilt of the pole, the composition of the atmosphere, etc. These models are useful, and the experiments performed with them may be perfectly valid, even though they differ deliberately from the observational system.<\/p>\n<p>What really matters is the <em>relationship<\/em> between the <em>theoretical system<\/em> and the <em>observational system<\/em>: in other words, how well does our current understanding\u00a0(i.e. our theories)\u00a0of climate explain the available observations (and of course the inverse: what additional observations might we make to help test our theories). 
When we ask questions about likely future\u00a0climate\u00a0changes, we&#8217;re not asking this question of the calculational system, we&#8217;re asking it of the theoretical system; the models are just a convenient way of probing the theory to provide answers.<\/p>\n<p>By the way, when I use the term theory, I mean it in exactly the way it&#8217;s used throughout all sciences: <a title=\"McComas W. The principal elements of the nature of science: Dispelling the myths. In: The nature of science in science education. Kluwer Academic Publishers; 1998:53-70.\" href=\"http:\/\/coehp.uark.edu\/pase\/TheMythsOfScience.pdf\" target=\"_blank\">a theory is the best current explanation of a given set of phenomena<\/a>. The word &#8220;theory&#8221; doesn&#8217;t mean knowledge that is somehow more tentative than other forms of knowledge; a theory is actually the kind of knowledge with the strongest epistemological basis of any, because it is supported by the available evidence, and best explains that evidence. A theory might not be capable of providing quantitative predictions (but it&#8217;s good when it does), but it must have explanatory power.<\/p>\n<p>In this context, the calculational system is <em><strong>valid<\/strong><\/em> as long as it can offer insights that help to understand the relationship between the theoretical system and the observational system. A model is useful as long as it helps to improve our understanding of climate, and to further the development of new (or better) theories. So a model that might have been useful (and hence valid) thirty years ago might not be useful today. If the old approach to modelling no longer matches current theory, then it has lost some or all of its validity. The model&#8217;s correspondence (or lack thereof) to the observations hasn&#8217;t changed (*), nor has its predictive power. 
But its utility as a scientific tool has changed, and hence its validity has changed.<\/p>\n<p><em>[(*) except that the accuracy of the observations may have changed in the meantime, due to the ongoing process of discovering and resolving anomalies in the historical record.]<\/em><\/p>\n<p>The key questions for validation, then, are to do with how well the current generation of models (plural) support the discovery of new theoretical knowledge, and whether the ongoing process of improving those models continues to enhance their utility as scientific tools. We could focus this down to specific things we could measure by asking whether each individual change to the model is theoretically justified, and whether each such change makes the model more useful as a scientific tool.<\/p>\n<p>To do this requires a detailed study of day-to-day model development practices, and of the extent to which these are closely tied to the rest of climate science (e.g. field campaigns, process studies, etc.). It also takes in questions such as how modeling centres decide on their priorities (e.g. which new bits of science to get into the models sooner), and how each individual change is evaluated. In this approach, validation proceeds by checking whether the individual steps taken to construct and test changes to the code add up to a sound scientific process, and how good this process is at incorporating the latest theoretical ideas. And we ought to be able to demonstrate a steady improvement in the theoretical basis for the model. An interesting quirk here is that sometimes an improvement to the model from a theoretical point of view reduces its skill at matching observations; this happens particularly when we&#8217;re replacing bits of the model that were based on empirical parameters with an implementation that has a stronger theoretical basis, because the empirical parameters were tuned to give a better climate simulation, without necessarily being well understood. 
In the approach I&#8217;m describing, this would be an indicator of an improvement in validity, even while it reduces the correspondence with observations. If, on the other hand, we based our validation on some measure of correspondence with observations, such a step would reduce the validity of the model!<\/p>\n<p>But what does all of this tell us about whether it&#8217;s &#8220;valid&#8221; to use the models to produce projections of climate change into the future? Well, recall that when we ask for projections of future climate change, we&#8217;re not asking the question of the calculational system, because all that would result in is a number, or range of numbers, that are impossible to interpret, and therefore meaningless. Instead we&#8217;re asking the question of the theoretical system: given the sum total of our current theoretical understanding of climate, what is likely to happen in the future, under various scenarios for expected emissions and\/or concentrations of greenhouse gases? If the models capture our current theoretical understanding well, then running the scenario on the model is a valid thing to do. If the models do a poor job of capturing our theoretical understanding, then running the models on these scenarios won&#8217;t be very useful.<\/p>\n<p>Note what is happening here: when we ask climate scientists for future projections, we&#8217;re asking the question of the scientists, not of their models. The scientists will apply their judgement to select appropriate versions\/configurations of the models to use, they will set up the runs, and they will interpret the results in the light of what is known about the models&#8217; strengths and weaknesses and about any gaps between the computational models and the current theoretical understanding. 
And they will add all sorts of caveats to the conclusions they draw from the model runs when they present their results.<\/p>\n<p>And how do we know whether the models capture our current theoretical understanding? By studying the processes by which the models are developed (i.e. continually evolved) by the various modeling centres, and examining how good each centre is at getting the latest science into the models. And by checking that whenever there are gaps between the models and the theory, these are adequately described by the caveats in the papers published about experiments with the models.<\/p>\n<p>Summary: It is a mistake to think that validation is a post-hoc process to be applied to an individual &#8220;finished&#8221; model to ensure it meets some criteria for fidelity to the real world. In reality, there is no such thing as a finished model, just many different snapshots of a large set of model configurations, steadily evolving as the science progresses. And fidelity of a model to the real world is impossible to establish, because the models are approximations. In reality, climate models are tools to probe our current theories about how climate processes work. Validity is the extent to which climate models match our current theories, and the extent to which the process of improving the models keeps up with theoretical advances.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In my last two posts, I demolished the idea that climate models need Independent Verification and Validation (IV&amp;V), and I described the idea of a toolbox approach to V&amp;V. 
Both posts were attacking myths: in the first case, the myth that an independent agent should be engaged to perform IV&amp;V on the models, and in [&hellip;]<\/p>\n","protected":false},"author":392,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[10],"tags":[],"aioseo_notices":[],"jetpack_sharing_enabled":true,"jetpack_featured_media_url":"","_links":{"self":[{"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/posts\/2032"}],"collection":[{"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/users\/392"}],"replies":[{"embeddable":true,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/comments?post=2032"}],"version-history":[{"count":16,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/posts\/2032\/revisions"}],"predecessor-version":[{"id":2072,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/posts\/2032\/revisions\/2072"}],"wp:attachment":[{"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/media?parent=2032"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/categories?post=2032"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.easterbrook.ca\/steve\/wp-json\/wp\/v2\/tags?post=2032"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}