Survey studies are hard to do well. I’ve been involved in some myself, and have helped many colleagues design them, and we nearly always end up with problems when it comes to the data analysis. They are a powerful way of answering base-rate questions (i.e. the frequency or severity of some phenomenon), or of exploring subjective opinion (which is, of course, what opinion polls do). But most people who design surveys don’t seem to know what they are doing. My checklist for determining if a survey is the right way to approach a particular research question includes the following:

  • Is it clear exactly what population you are interested in?
  • Is there a way to get a representative sample of that population?
  • Do you have resources to obtain a large enough sample?
  • Is it clear what variables need to be measured?
  • Is it clear how to measure them?

Most research surveys have serious problems getting enough people to respond to ensure the results really are representative, and the people who do respond are likely to be a self-selecting group with particularly strong opinions about the topic. Professional opinion pollsters put a lot of work into adjustments for sampling bias, and still often get it wrong. Researchers rarely have the resources to do this (and almost never repeat a survey, so never have the data to do such adjustments anyway). There are also plenty of ways to screw up the phrasing of the questions and answer modes, such that you can never be sure people have all understood the questions in the same way, or that the available response modes aren’t biasing their responses. (Kitchenham has a good how-to guide.)
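
To give a flavour of what those adjustments involve, here’s a minimal sketch of post-stratification weighting; the population shares and respondent counts are invented purely for illustration:

```python
# Illustrative sketch of post-stratification weighting (all numbers made up).
# Respondents in each stratum are up- or down-weighted so that the weighted
# sample matches the known population proportions for that stratum.

population_share = {"academia": 0.60, "industry": 0.30, "government": 0.10}
respondents      = {"academia": 150,  "industry": 30,   "government": 5}

total_respondents = sum(respondents.values())

weights = {}
for stratum, share in population_share.items():
    sample_share = respondents[stratum] / total_respondents
    weights[stratum] = share / sample_share  # > 1 means under-represented

for stratum, w in sorted(weights.items()):
    print(f"{stratum:>10}: weight = {w:.2f}")
```

The catch is that you need trustworthy population figures to weight against, which is exactly the kind of resource most research surveys lack.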

ClimateSight recently blogged about a fascinating, unpublished survey of whether climate scientists think the IPCC AR4 is an accurate representation of our current understanding of climate science. The authors themselves blog about their efforts to get the survey published here, here and here. Although they acknowledge some weaknesses to do with sample size and representativeness, they basically think the survey itself is sound. Unfortunately, it’s not. As I commented on ClimateSight’s post, methodologically, this survey is a disaster. Here’s why:

The core problem with the paper is the design of the question and response modes. At the heart of their design is a 7-point Likert scale to measure agreement with the conclusions of the IPCC AR4. But this doesn’t work as a design for many reasons:

1) The IPCC AR4 is a massive document, which makes a huge number of different observations. Any climate scientist will be able to point to bits that are done better and bits that are done worse. Asking about agreement with it, without spelling out which of its many conclusions you’re asking about, is hopeless. When people say they agree or disagree with it, you have no idea which of its many conclusions they are reacting to.

2) The response mode used in the study has a built-in bias. If the intent is to measure the degree to which scientists think the IPCC accurately reflects, say, the scale of the global warming problem (whatever that means), then the central position on the 7-point scale should be “the IPCC got it right”. In the study, this is point 5 on the scale, which immediately introduces a bias because there are twice as many response options to the left of this position (“IPCC overstates the problem”) as there are to the right (“IPCC understates the problem”). In other words, the scale itself is biased towards one particular pole.
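
To make the asymmetry concrete, here’s a toy sketch of such a scale; the labels are my paraphrase for illustration, not the study’s actual wording:

```python
# Toy illustration of an asymmetric response scale (labels paraphrased,
# not taken from the study). Point 5 is the "IPCC got it right" position,
# so the neutral point is not the numeric midpoint of the 1-7 scale.

scale = {
    1: "IPCC greatly overstates the problem",
    2: "IPCC overstates the problem",
    3: "IPCC somewhat overstates the problem",
    4: "IPCC slightly overstates the problem",
    5: "IPCC got it about right",
    6: "IPCC somewhat understates the problem",
    7: "IPCC understates the problem",
}

neutral = 5
overstate_options  = [p for p in scale if p < neutral]
understate_options = [p for p in scale if p > neutral]

print(len(overstate_options), "options on the 'overstates' side")    # 4
print(len(understate_options), "options on the 'understates' side")  # 2
print("numeric midpoint of the 1-7 scale:", (1 + 7) / 2)             # 4.0, not 5
```

Any summary statistic that treats the numeric midpoint of the scale as neutral (a mean, for instance) will be pulled towards the “overstates” pole, because the neutral point sits at 5, not 4.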

3) The study authors gave detailed descriptive labels to each position on the scale. Although it’s generally regarded as a good idea to give clear labels to each point on a Likert scale, the idea is that this should help respondents understand that the intervals on the scale are to be interpreted as roughly equivalent. The labels need to be very simple. The set of labels in this study ends up conflating a whole bunch of different ideas, each of which should be tested with a different question and a separate scale. For example, the labels include ideas such as:

  • fabrication of the science,
  • false hypotheses,
  • natural variation,
  • validity of models,
  • politically motivated scares,
  • diversion of attention,
  • uncertainties,
  • scientists who know what they’re doing,
  • urgency of action,
  • damage to the environment,

…and so on. Conflating all of these onto a single scale makes analysis impossible, because you don’t know which of the many ideas associated with each response mode each respondent is agreeing or disagreeing with. A good survey instrument would ask about only one of these issues at a time, as the sketch below illustrates.
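
Here’s a rough sketch of what splitting those ideas into single-construct items might look like; the item wording and the agreement scale are invented for illustration, not taken from the study:

```python
# Sketch of single-construct survey items, each rated on its own symmetric
# 7-point agreement scale. Wording is illustrative only.

items = [
    "The AR4 projections of warming are well supported by the evidence.",
    "The AR4 adequately characterises the uncertainties in the models.",
    "Observed warming is largely attributable to natural variation.",
    "Urgent action is needed to limit damage to the environment.",
]

agreement_scale = {
    1: "strongly disagree",
    2: "disagree",
    3: "somewhat disagree",
    4: "neither agree nor disagree",
    5: "somewhat agree",
    6: "agree",
    7: "strongly agree",
}

for i, item in enumerate(items, start=1):
    print(f"Q{i}. {item}")
    print("    " + " / ".join(agreement_scale.values()))
```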

4) Point 5 on the scale (the one interpreted as agreeing with the IPCC) includes the phrase “the lead scientists know what they are doing”. Yet the survey was sent out to a select group that includes many such lead scientists and their immediate colleagues. This wording immediately biases that group towards this response, regardless of what they think about the overall IPCC findings. Again, asking specifically about different findings in the IPCC report is much more likely to find out what they really think; this study is likely to mask the range of opinions.

5) And finally, as other people have pointed out, the sampling method is very suspect. Although the authors acknowledge that they didn’t do random sampling, and that this limits the kinds of analysis they can do, it also means that any quantitative summary of the responses is likely to be invalid. There’s plenty of reason to suspect that significant clusters of opinion chose not to participate because they regarded the questionnaire (especially given some of the wording) as suspect. Given the context for this questionnaire, within a public discourse where everything gets distorted sooner or later, many climate scientists would quite rationally refuse to participate in any such study. Which means we really have no idea whether the distribution shown in the study represents the general opinion of any particular group of scientists at all.
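
To see why self-selection matters, here’s a toy simulation, with all of the numbers invented: if willingness to respond depends on opinion, the distribution you observe can look very different from the distribution in the group you actually sampled.

```python
# Toy simulation of self-selection bias (all numbers invented).
# True opinions are symmetric around "got it right" (position 5 on a 1-7
# scale), but the probability of responding depends on opinion, so the
# observed sample is skewed.

import random

random.seed(1)

true_opinions = [5] * 6000 + [4] * 1500 + [6] * 1500 + [3] * 500 + [7] * 500

# Suppose people with strong views are more likely to return the
# questionnaire, and those near the middle often decline.
response_prob = {3: 0.8, 4: 0.4, 5: 0.2, 6: 0.4, 7: 0.8}

observed = [op for op in true_opinions if random.random() < response_prob[op]]

def share(values, position):
    return sum(v == position for v in values) / len(values)

for pos in range(3, 8):
    print(f"position {pos}: true {share(true_opinions, pos):.2f}  "
          f"observed {share(observed, pos):.2f}")
```

With these made-up response rates, the middle position shrinks and the tails swell, even though the underlying opinions haven’t changed.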

So, it’s not surprising no-one wants to publish it. Not because of any concerns about the impact of its findings, but simply because it’s not a valid scientific study. The only conclusions that can be drawn from this study are existence claims:

  1. there exist some people who think the IPCC underestimated (some unspecified aspect of) climate change;
  2. there exist some people who think the IPCC overestimated (some unspecified aspect of) climate change; and
  3. there exist some people who think the IPCC scientists know what they are doing.

The results really say nothing about the relative sizes of these three groups, nor even whether the three groups overlap!

Now, the original research question is very interesting, and worth pursuing. Anyone want to work on a proper scientific survey to answer it?

1 Comment

  1. Is this perhaps more typical of the product that would be sold to political groups rather than published in the journals? There’s a lot of what could be called ‘political advocacy science’ out there — probably done specifically for a client for a payment, not intended to go into a research journal.

    What little of that stuff I’ve seen, seems typically much like this particular poll — clearly biased to anyone outside the environment in which it’s normal.
