I’ve been thinking a lot recently about why so few people seem to understand the severity of the climate crisis. Joe Romm argues that it’s a problem of rhetoric: the deniers tend to be excellent at rhetoric, while scientists are lousy at it. He suggests that climate scientists are pretty much doomed to lose in any public debates (and hence debates on this are a bad idea).
But even away from the public stage, it’s very frustrating trying to talk to people who are convinced climate change isn’t real, because, in general, they seem unable to recognize the fallacies in their own arguments. One explanation is the Dunning-Kruger effect – a cognitive bias in people’s subjective assessment of their (in)competence. The classic paper is “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments” (for which title, Dunning and Kruger were awarded an Ig Nobel Prize in 2000 🙂 ). There’s also a brilliant YouTube video that explains the paper. The bottom line is that the people who are most wrong are the least able to perceive it.
In a follow-up paper, they describe a number of experiments that investigate why people fail to recognize their own incompetence. It turns out one of the factors is that people take a “top-down” approach in assessing their competence: they tend to judge how well they did at some task based on their preconceived notions of how good they are at the skills involved, rather than on any reflection on their actual performance. For example, in one experiment, the researchers gave subjects a particular test. Some were told it was a test of abstract reasoning ability (which the subjects thought they were good at). Others were told it was a test of programming ability (which the subjects thought they were bad at). It was, of course, the same test, and the subjects in both groups did equally well on it. But their estimates of how well they had done depended on what kind of test they thought it was.
There’s also an interesting implication for why women tend to drop out of science and technology careers – women tend to rate themselves as less scientifically talented than men, regardless of their actual performance. This means that even when women are performing just as well as their male colleagues, they will still tend to rate themselves as doing less well, because of this top-down assessment bias.
To me, the most interesting part of the research is a whole bunch of graphs that look like this:

[Graph: perceived vs. actual performance by quartile – self-assessments cluster around the same level regardless of actual test score]
People who are in the bottom quartile of actual performance tend to dramatically over-estimate how well they did. People in the top quartile tend to slightly under-estimate how well they did.
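To see how a “top-down” self-assessment produces exactly this quartile pattern, here’s a purely illustrative sketch (made-up numbers, not data from the paper): if everyone anchors their self-estimate on a common prior belief about their standing and only partially adjusts it toward their true percentile, the bottom quartile overshoots and the top quartile undershoots. The anchor value and adjustment weight are my assumptions for illustration.

```python
# Toy model of top-down self-assessment (illustrative only; the PRIOR
# anchor and WEIGHT are invented parameters, not values from the paper).
import random

random.seed(0)
n = 1000
actual = [random.uniform(0, 100) for _ in range(n)]  # true percentile ranks

PRIOR = 60.0   # everyone's preconceived notion of where they stand
WEIGHT = 0.3   # how much actual skill shifts the self-estimate

# Self-estimate = anchor, partially adjusted toward the true percentile
perceived = [PRIOR + WEIGHT * (a - PRIOR) for a in actual]

# Report the mean actual vs. mean self-estimated percentile per quartile
order = sorted(range(n), key=lambda i: actual[i])
for q in range(4):
    idx = order[q * n // 4:(q + 1) * n // 4]
    mean_actual = sum(actual[i] for i in idx) / len(idx)
    mean_perceived = sum(perceived[i] for i in idx) / len(idx)
    print(f"Q{q + 1}: actual ~{mean_actual:.0f}, self-estimate ~{mean_perceived:.0f}")
```

With these numbers the bottom quartile rates itself far above its actual standing, while the top quartile rates itself somewhat below – the same shape as the graphs in the paper, without needing any difference in honesty between groups.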
This explains why scientists have great difficulty convincing the general public about important matters such as climate change. The most competent scientists will systematically under-estimate their ability, and will be correspondingly modest when presenting their work. The incompetent (e.g. those who don’t understand the science) will tend to vastly over-inflate their professed expertise when presenting their ideas. No wonder the public can’t figure out who to believe. Furthermore, people who just don’t understand basic science also tend not to realize it.
Which also leads me to suggest that if you want to find the most competent people, look for people who tend to underestimate their abilities. It’s rational to believe people who admit they might be wrong.
Disclaimer: psychology is not my field – I might have completely misunderstood all this.
“Disclaimer: psychology is not my field – I might have completely misunderstood all this.”
Ha!
I think our media conventions amplify this effect – we as the audience are given (and want to hear) both sides of every story, and we tend to think the truth is probably somewhere in between.
The most competent scientists will systematically under-estimate their ability, and will be correspondingly modest when presenting their work.
Psychology is obviously not my field either.
I’m not sure D-K would predict this half of the effect.
First of all, the effect of competent subjects’ underestimating their ranking mostly goes away after the grading task, and you’d expect that most climate scientists have had this sort of experience – they review peers’ papers and watch politicians on TV. (The effect of competent subjects’ underestimating their absolute rather than relative abilities doesn’t always show up, and is at least partially explainable by a regression effect when it does.)
I think there’s also an issue with conflating confidence in the science and confidence in oneself. A “top” climate scientist might think they’re, let’s say, in the 80th percentile when actually they’re in the 95th, but it’s not clear that they’ll be less sure of their results because of this – assessing the reliability of one’s results, unlike assessing one’s relative ranking, is a core competence in science. So it’s not clear, I think, that D-K predicts that they’ll be more modest when presenting their results (D-K would perhaps predict that they would be more modest than they need to be at the reception after the presentation).
That video is amazing…all should watch.