Overconfidence about overconfidence

Recently, someone I follow on Twitter linked to a story by Katie MacBride titled “Overconfidence kills: The CDC and WHO still haven’t learned how to effectively communicate uncertainty.” That seemed right to me, so I clicked ❤️, followed the link, and read the story. Although I agree with the thesis, the article gets so much wrong that I went back and withdrew my ❤️ from the Tweet linking to it. This isn’t something I do very often.

The first sentence is, “If there’s one communications lesson from the pandemic, it’s the importance of navigating uncertainty.” That seems entirely right to me. Science communicators face the challenge of conveying the subtle and provisional nature of science to a public that expects science to deliver the truth, in a political context where lots of leaders only want to use science as a post hoc justification for whatever they’ve already decided to do.

Yet MacBride’s piece gets it wrong in two important respects.

She asks why the CDC and WHO took so long to recognize that CoViD might be airborne. She quotes one infectious disease expert who says that “many assumptions that we had about this virus were proven false,” then objects that “the issue is not that the WHO made incorrect assumptions about the virus, it’s that they made assumptions about it in the first place.” Although approaching a novel virus with an open mind sounds commendable, approaching it with no assumptions at all is simply not an option. An important lesson from the philosophy of science is that scientific inference is always material: it always relies on assumptions about the objects of inference. One might object that the WHO made the wrong assumptions or clung to them too tightly, but that isn’t the claim MacBride makes.

Effectively communicating uncertainty requires acknowledging, even highlighting, the fact that scientific inference always involves assumptions.

MacBride makes another unfair complaint about the messaging. She writes, “No reasonable person could expect that the WHO would be able to speak with 100 percent confidence about a virus that hadn’t existed only a few months before. Yet that’s precisely what the WHO and CDC did: They spoke with complete confidence about how the virus was transmitted, all while evidence was piling up about the reality of airborne transmission.”

I don’t recall anyone ever using the phrase “100 percent confidence” in the early days of the pandemic, or at all, really. Moreover, the contrast in this passage is problematic: nobody in the early days could have been expected to speak with 100 percent confidence, she writes, which suggests that such confidence might reasonably be expected later, or perhaps even now.

Scientific results are never established with certainty. I would even say that, outside of cases with well-defined statistics, it is misleading to report confidence as a percentage.¹ Scientists can say more about the virus than they could two years ago. The evidence allows greater confidence. But let’s not pretend to certainty.

100% confidence is the wrong way to talk about science.

  1. In Bayesian models, confidence is a probability, which can always be reported as a percentage. But in cases without well-defined statistics, the precise number reflects subjective prior assumptions rather than merely reporting what the science tells us.
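To make the footnote concrete, here is a minimal sketch in Python. The numbers are my own toy values, not anything from MacBride’s article or from the WHO; the point is only that the same piece of evidence yields a different posterior percentage depending on the prior one assumes, and the prior is exactly where the assumptions, and the subjectivity, enter.

```python
# Toy illustration: the same evidence produces different posterior
# "confidence" percentages depending on the prior you assume.
# All numbers here are made up for illustration only.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Suppose the evidence is 5x more likely if the hypothesis
# (say, airborne transmission) is true than if it is false.
likelihood_ratio = 5.0

for prior in (0.1, 0.5, 0.9):
    print(f"prior {prior:.0%} -> posterior {posterior(prior, likelihood_ratio):.0%}")

# Output:
# prior 10% -> posterior 36%
# prior 50% -> posterior 83%
# prior 90% -> posterior 98%
```

The evidence (the likelihood ratio) is the same in every case; only the assumed prior changes, and with it the percentage one would report.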
