29. May 2012 · 5 comments · Categories: education

A few people today have pointed me at the new paper by Dan Kahan & colleagues (1), which explores competing explanations for why lots of people don’t regard climate change as a serious problem. I’ve blogged about Dan’s work before – their earlier studies are well designed, and address important questions. If you’re familiar with their work, the new study isn’t surprising. They find that people’s level of concern over climate change doesn’t correlate with their level of science literacy, but does correlate with their cultural philosophies. In this experiment, there was no difference in science literacy between people who are concerned about climate change and those who are not. They use this to draw the conclusion that giving people more facts is not likely to change their minds:

A communication strategy that focuses only on transmission of sound scientific information, our results suggest, is unlikely to do that.

Which is reasonable advice, because science communication must address how different people see the world, and how people filter information based on their existing worldview:

Effective strategies include use of culturally diverse communicators, whose affinity with different communities enhances their credibility, and information-framing techniques that invest policy solutions with resonances congenial to diverse groups

Naturally, some disreputable news sites spun the basic finding as “Global warming skeptics as knowledgeable about science as climate change believers, study says”. Which is not what the study says at all, because it didn’t measure what people know about the science.

The problem is that there’s an unacknowledged construct validity problem in the study. At the beginning of the paper, the authors talk about “the science comprehension thesis (SCT)”:

As members of the public do not know what scientists know, or think the way scientists think, they predictably fail to take climate change as seriously as scientists believe they should.

…which they then claim their study disproves. But when you get into the actual study design, they quickly switch to talking about science literacy:

We measured respondents’ science literacy with National Science Foundation’s (NSF) Science and Engineering Indicators. Focused on physics and biology (for example, ‘Electrons are smaller than atoms [true/false]’; ‘Antibiotics kill viruses as well as bacteria [true/false]’), the NSF Indicators are widely used as an index of public comprehension of basic science.

But the problem is, science comprehension cannot be measured by asking a whole bunch of true/false questions about scientific “facts”! All that measures is the ability to do well in a pub trivia quiz.

Unfortunately, this mistake is widespread, and leads to an education strategy that fills students’ heads with a whole bunch of disconnected science trivia, and no appreciation for what science is really all about. When high school students learn chemistry, for example, they have to follow recipes from a textbook, and get the “right” results. If their results don’t match the textbooks, they get poor marks. When they’re given science tests (like the NSF one used in this study), they’re given the message that there’s a right and wrong answer to each question, and you just gotta know it. But that’s about as far from real science as you can get! It’s when the experiment gives surprising results that the real scientific process kicks in. Science isn’t about getting the textbook answer, it’s about asking interesting questions, and finding sound methods for answering them. The myths about science are ground into kids from an early age by people who teach science as a bunch of facts (3).

At the core of the problem is a failure to make the distinction that Maienschein points out between science literacy and scientific literacy (2). The NSF instrument measures the former. But science comprehension is about the latter – it…

…emphasizes scientific ways of knowing and the process of thinking critically and creatively about the natural world.

So, to Kahan’s interpretation of the results, I would add another recommendation: we should actually start teaching people about what scientists do, how they work, and what it means to collect and analyze large bodies of evidence. How results (yes, even published ones) often turn out to be wrong, and how what matters is the accumulation of evidence over time, rather than any individual result. After all, with Google, you can now quickly find a published result to support just about any crazy claim. We need to teach people why that’s not science.

Update (May 30, 2012): Several people suggested I should also point out that the science literacy test they used is for basic science questions across physics and biology; they did not attempt to test in any way people’s knowledge of climate science. So that seriously dents their conclusion: The study says nothing about whether giving people more facts about climate science is likely to make a difference.

Update2 (May 31, 2012): Some folks on Twitter argued with my statement “concern over climate change doesn’t correlate with … level of science literacy”. Apparently none of them have a clue how to interpret statistical analysis in the behavioural sciences. Here’s Dan Kahan on the topic. (h/t to Tamsin).
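As an aside on that statistical point: with survey-sized samples, a correlation can come out “statistically significant” and yet be so small that it explains almost none of the variance – which is why “doesn’t correlate” and “correlates weakly” can both be fair summaries of the same result. Here’s a minimal sketch with entirely made-up numbers (nothing here is drawn from the study’s actual data):

```python
import math
import random

random.seed(42)
n = 1500  # hypothetical survey-sized sample, not the study's

# Made-up scores: concern depends only very weakly on literacy
literacy = [random.gauss(0, 1) for _ in range(n)]
concern = [0.1 * x + random.gauss(0, 1) for x in literacy]

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(literacy, concern)
# t statistic for H0: no correlation; |t| > 1.96 is "significant" at 5%
t = r * math.sqrt((n - 2) / (1 - r ** 2))
print(f"r = {r:.3f}, r^2 = {r * r:.4f}, t = {t:.2f}")
```

With a large n, the t statistic will typically clear the significance threshold even though r² shows literacy accounting for only a percent or so of the variance in concern – a “significant” effect of negligible practical size.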

References:

(1) Kahan, D., Peters, E., Wittlin, M., Slovic, P., Ouellette, L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change. DOI: 10.1038/nclimate1547
(2) Maienschein, J. (1998). Scientific Literacy. Science, 281(5379), 917. DOI: 10.1126/science.281.5379.917
(3) McComas, W. F. (1998). The Principal Elements of the Nature of Science: Dispelling the Myths. In The Nature of Science in Science Education.


5 Comments

  1. What shifts in views do you find when you survey people/students before vs. after they take a college climate science course?

    And how do we teach – and assess – scientific literacy? re the teaching, how do you amass and present scientific evidence that science does in fact progress? (Some members of the public really aren’t clear that there’s a meaningful difference between science and PR.)

    The existence of fringe leftie views (paranormal, GMO, homeopathy, astrology etc) gives us subject matter that conservatives would find congenial for debunking.

I’d really like to see “Wondering Mind” science societies doing the Café Scientifique thing, including actually running experiments, and reporting back. Maybe NPR’s Science Friday could be at the center of this?

    It needs to happen, and it needs to reach a wide spectrum of society.

  2. First we need widespread science literacy, then political leaders willing to set public policy, then a large enough social movement to launch changes, then international co-operation sufficient to follow through with trans-generational commitments.

So without science literacy, we will need to learn widespread stoic acceptance, etc…

  3. Pingback: The Blackboard » Statistical Parsomatics: Easterbrook Kahan.

  4. Any idea of what is driving the full court press from the usual suspects against you?

  5. @Eli Rabett : Probably because I keep getting into Twitter arguments with them. I’m amused that none of them are willing to come over here and comment – they prefer just to play to their own peanut galleries.

  6. I’ll be a little heretical and suggest that science literacy, however defined, is less significant than a commitment to reality-based decision making.

For science literacy, you’ve got two sides, neither of which exactly addresses the public policy issues. One part of science is the accumulated body of knowledge. So it isn’t unfair to test science literacy by seeing what/whether people know something from that store. It’s also incomplete, because science is also a method of working towards understanding the universe. Knowledge of that, and not in the jr. high sense, is also important and part of scientific literacy. This is seldom, if ever, tested, not least because it’s not obvious how you would go about doing so.

Perfect knowledge of both body and practice of science, perfect science literacy, would not (this is my heresy) change, say, North Carolina’s decision to outlaw using our best knowledge of science on sea level. Support for legislation like this isn’t because people don’t understand the science on sea level, it is because they don’t like the answer that science gives. And they have decided to give priority to their preference over what science has to say.
