This will keep me occupied with good reads for the next few weeks – this month’s issue of the journal Studies in History and Philosophy of Modern Physics is a special issue on climate modeling. Here’s the table of contents:

Some very provocative titles there. I’m curious to see how much their observations cohere with my own…

3 Comments

  1. From the last sentence of the abstract on Spencer Weart’s article:
    By 2007 nearly all climate experts accepted that the climate simulations represented reality well enough to impel strong action to restrict gas emissions.

    This is good. It highlights some important parts of validation (in the AIAA/ASME sense, not the IEEE sense).
    1. Recognize an intended use: impel action to restrict gas emissions
    2. Identify a metric: represent reality well enough (of course, when we get down into the details, this is rigorous and quantitative)
    3. Decide on an accreditation authority: climate experts

    The problems with the approach summed up by that neat little sentence are twofold. First, the validation metrics (or credibility building) in climate modeling are based on a process of continuous calibration and hindcasting, but the intended use is forward-looking (you can’t act to change the past); the intended use and the validation metrics should match, and validation also usually implies “out of sample”, as the sketch below illustrates. Second, it is customary for the accreditation authority to be the party assuming the risk (the decision maker) rather than the party developing and running the model. I think those two basic flaws in approach are part of the reason for the “blow-up” of policy efforts in this arena.
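
    To make the “out of sample” point concrete, here is a minimal toy sketch (my own illustration with entirely synthetic data, not anyone’s actual validation procedure): fit a trend model on an early calibration period, then score it on a later held-out period.

    ```python
    # Toy illustration of in-sample calibration vs. out-of-sample skill.
    # All data and the model form here are synthetic and hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1900, 2021)
    # Synthetic "observations": slow trend + decadal-scale wiggle + noise
    obs = (0.01 * (years - 1900)
           + 0.3 * np.sin(2 * np.pi * (years - 1900) / 60)
           + rng.normal(0.0, 0.1, years.size))

    calib = years < 1980                    # calibrate on the past...
    coef = np.polyfit(years[calib], obs[calib], deg=1)
    pred = np.polyval(coef, years)

    rmse_in = np.sqrt(np.mean((pred[calib] - obs[calib]) ** 2))
    rmse_out = np.sqrt(np.mean((pred[~calib] - obs[~calib]) ** 2))  # ...score on the future
    print(f"in-sample RMSE:     {rmse_in:.3f}")
    print(f"out-of-sample RMSE: {rmse_out:.3f}")
    ```

    The gap between the two RMSE values is the point: continuous recalibration measures in-sample fit, while the intended use requires skill on conditions the model was never tuned against.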

  2. I think I understand the points j-stults raised.

    Forward-looking evaluation of climate models is possible for parts of the models (the process of radiative transfer, for example) with simulations of short duration, but it is intrinsically impossible for the temporal evolution of the climate system on the time scales directly relevant to the issue of climate change. Since we know that climate has decadal variability which seems chaotic, a 10-year simulation seems too short, but we cannot wait longer; the sketch below illustrates how quickly nearby trajectories diverge in a chaotic system. This is a situation of trans-science (in Alvin Weinberg’s terms): a question which science can ask but cannot answer. Still, I think having knowledge from simulations is better than nothing. But how much public expense is justified is a difficult problem.
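
    As a minimal sketch of that point, here is the Lorenz (1963) system (a standard toy for chaotic dynamics, not a climate model): two runs that differ by 1e-8 in their initial state separate at an exponential rate, which is why trajectory-level verification over long horizons is so hard. The step size and parameters are the usual textbook values.

    ```python
    # Toy sketch: sensitive dependence on initial conditions in the
    # Lorenz (1963) system. Not a climate model; purely illustrative.
    import numpy as np

    def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz-63 equations."""
        x, y, z = s
        return s + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-8, 0.0, 0.0])      # tiny initial perturbation
    for step in range(1, 3001):
        a, b = lorenz_step(a), lorenz_step(b)
        if step % 500 == 0:
            print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.2e}")
    ```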

    It is reasonable that evaluation should be done by people on the side of the decision makers rather than by the climate modelers. But I do not think it should be done by those who are trained as politicians or executives. The evaluators need special training, primarily in policy making and secondarily in climate science. In terms of Collins and Evans (2007), “Rethinking Expertise”, they should be contributory experts in policy making as well as interactional experts in climate science.

  3. Kooiti:

    How do you foresee outside experts evaluating climate models?
    One of the problems with climate models for forward planning (in high-CO2 environments, for example) is the assumptions ‘baked in’ to the models: parameterizations that have been tested against current or historical conditions, but may not be valid at doubled CO2, etc. Outside climate scientists may be aware of the _science_, but not knowledgeable about the assumptions within the models. Tempting tests, such as comparing a model’s output against observations, won’t work to check its inbuilt assumptions; the sketch below illustrates the worry.
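
    Here is a toy sketch of that extrapolation worry, with a made-up “true” relation and a hypothetical linear parameterization tuned only on historically observed forcing values (nothing here corresponds to any real model component):

    ```python
    # Toy sketch: a parameterization fitted in the observed regime can
    # look fine in-sample yet drift badly outside it. The "true" process
    # and all numbers are invented purely for illustration.
    import numpy as np

    rng = np.random.default_rng(1)

    def true_process(f):
        """Unknown nonlinear 'truth' the parameterization approximates."""
        return np.log1p(f) + 0.05 * f ** 2

    f_hist = rng.uniform(0.0, 1.0, 200)     # forcing values seen historically
    param = np.polyfit(f_hist, true_process(f_hist), deg=1)  # linear fit

    for f in (0.5, 1.0, 2.0):               # f = 2.0 ~ doubled-forcing regime
        fitted, truth = np.polyval(param, f), true_process(f)
        print(f"forcing={f:.1f}  param={fitted:.3f}  truth={truth:.3f}  error={fitted - truth:+.3f}")
    ```

    Comparing output against historical observations only exercises the in-sample regime (f ≤ 1 here); the error at f = 2.0 never shows up in such a test.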

    I think you do need climate modelling experience to evaluate the models. The best candidates I can think of at the moment are those involved in the intercomparison experiments: you need detailed knowledge of what went into each model, and of whether the same components and assumptions appear in two of the models under comparison, while remaining outside any given modelling team.
