A few people today have pointed me at the new paper by Dan Kahan & colleagues (1), which explores competing explanations for why lots of people don’t regard climate change as a serious problem. I’ve blogged about Dan’s work before – the studies they do are very well designed, and address important questions. If you’re familiar with their work, the new study isn’t surprising. They find that people’s level of concern over climate change doesn’t correlate with their level of science literacy, but does correlate with their cultural philosophies. In the experiment, there was no difference in science literacy between people who are concerned about climate change and those who are not. The interesting conclusion is therefore that giving people more facts is not likely to change their minds:
A communication strategy that focuses only on transmission of sound scientific information, our results suggest, is unlikely to do that.
Instead we have to understand how different people see the world, and how they filter information based on how well it coheres with their existing worldview:
Effective strategies include use of culturally diverse communicators, whose affinity with different communities enhances their credibility, and information-framing techniques that invest policy solutions with resonances congenial to diverse groups
Naturally, some disreputable news sites spun the basic finding as “Global warming skeptics as knowledgeable about science as climate change believers, study says”. Which is not what the study says at all, because it didn’t measure what people know about the science.
The problem is that there’s an unacknowledged construct validity problem in the study. At the beginning of the paper, the authors talk about “the science comprehension thesis (SCT)”:
As members of the public do not know what scientists know, or think the way scientists think, they predictably fail to take climate change as seriously as scientists believe they should.
…which they then claim their study disproves. But when you get into the actual study design, they quickly switch to talking about science literacy:
We measured respondents’ science literacy with the National Science Foundation’s (NSF) Science and Engineering Indicators. Focused on physics and biology (for example, ‘Electrons are smaller than atoms [true/false]’; ‘Antibiotics kill viruses as well as bacteria [true/false]’), the NSF Indicators are widely used as an index of public comprehension of basic science.
But the problem is that science comprehension cannot be measured by asking a whole bunch of true/false questions about scientific “facts”! All that measures is the ability to do well in a pub trivia quiz.
Unfortunately, this mistake is widespread, and leads to an education strategy that fills students’ heads with a whole bunch of disconnected science trivia, and no appreciation for what science is really all about. When high school students learn chemistry, for example, they have to follow recipes from a textbook, and get the “right” results. If their results don’t match the textbook’s, they get poor marks. When they’re given science tests (like the NSF one used in this study), they’re given the message that there’s a right and wrong answer to each question, and you just gotta know it. But that’s about as far from real science as you can get! It’s when the experiment gives surprising results that the real scientific process kicks in. Science isn’t about getting the textbook answer; it’s about asking interesting questions, and finding sound methods for answering them. The myths about science are drilled into kids from an early age by people who teach science as a bunch of facts (3).
At the core of the problem is a failure to make the distinction that Maienschein points out between science literacy and scientific literacy (2). The NSF instrument measures the former. But science comprehension is about the latter – it…
…emphasizes scientific ways of knowing and the process of thinking critically and creatively about the natural world.
So, to Kahan’s interpretation of the results, I would add another hypothesis: we should actually start teaching people about what scientists do, how they work, and what it means to collect and analyze large bodies of evidence. We should teach them that results (yes, even published ones) often turn out to be wrong, and that what matters is the accumulation of evidence over time, rather than any individual result. After all, with Google, you can now quickly find a published result to support just about any crazy claim. We need to teach people why that’s not science.
Update (May 30, 2012): Several people suggested I should also point out that the science literacy test they used covers basic science questions across physics and biology; it did not attempt to test people’s knowledge of climate science in any way. That seriously dents their conclusion: the study says nothing about whether giving people more facts about climate science is likely to make a difference.
Update2 (May 31, 2012): Some folks on Twitter argued with my statement “concern over climate change doesn’t correlate with … level of science literacy”. Apparently none of them have a clue how to interpret statistical analysis in the behavioural sciences. Here’s Dan Kahan on the topic. (h/t to Tamsin).
(1) Kahan, D., Peters, E., Wittlin, M., Slovic, P., Ouellette, L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change. DOI: 10.1038/nclimate1547
(2) Maienschein, J. (1998). Scientific Literacy. Science, 281(5379), 917. DOI: 10.1126/science.281.5379.917
(3) McComas, W. F. (1998). The Principal Elements of the Nature of Science: Dispelling the Myths. In The Nature of Science in Science Education.