Civil wars of words are playing out across the pages of peer-reviewed journals every quarter, largely hidden from public view. Allow me to pull back the curtain on one of these, concerning the concept of scientific consensus.
Peer-reviewed journals are not just compilations of studies. In the academic world, they’re also slow-motion conversations: the snail-est of scholastic snail mail—if snail mail were on public display. Journals are the place to publish not only studies, but letters and essays: responses and rebuttals to other scholars’ studies and essays. The New England Journal of Medicine, one of the most prestigious medical journals in the country, regularly publishes editorials (opinion pieces). Other journals often include essays, review pieces, or even “advanced reviews”—peer-reviewed, article-length summaries of ongoing scientific conversation and controversy. Today, I’m exploring just such an advanced review by Dr. Asheley Landrum of Texas Tech and Dr. Matthew H. Slater of Bucknell University.
The authors begin by acknowledging the debate, both within the (web)pages of journals and behind the scenes, among researchers on the question of how scientific consensus messaging ought to be studied. That is, there’s considerable uncertainty around just how proclaiming that scientists have reached consensus on a particular issue affects consumers of that proclamation. It’s a phenomenon that still has a lot of creases to iron out. The science is in, the consensus reached, and the papers written on issues as diverse as climate change, vaccines, and genetically modified food; what’s left is to see how the public absorbs and reacts to this information. And on that point, there is anything but consensus.
Some schools of thought, the authors detail, have sought support for the “Gateway Belief Model” (GBM), which suggests that messages regarding scientific consensus create a gateway belief, “indirectly influencing public opinion and policy support by increasing public perceptions of scientific consensus.” Others suggest this model falls short by downplaying the crucial role of motivated reasoning in the processing of such consensus messages—partisanship is a formidable force that they argue is not accounted for in the GBM. The authors themselves take a step back to raise the questions that remain unresolved.
First, how should communicators and researchers convey scientific consensus? Typically, consensus messages are not only messages of agreement, but also implicit nods to authority and expertise—“yet research has demonstrated that authority commands like these can induce backfire effects,” with readers reacting against the appeal to authority. While most Americans tend to trust scientists—in general—“credibility is mutable and context-dependent,” and polarization is rampant, muddying the waters. The authors propose that one means of potentially sidestepping the backfire effect is to emphasize that consensus is distinct from agreement. “Agreement,” they write, “can be achieved in many ways. That all members of a cult agree that their spaceship arrives tomorrow should not increase our credence that it will, given our background beliefs about how such an agreement was likely formed. Agreement formed by more democratic or adversarial processes [like that of the scientific method], by contrast, is often indicative of something beyond mere social cohesion. In the case of scientific consensus, that something is presumably the weight of the evidence for a given proposition.”
A large part of the difficulty with proclaiming that a particular percentage of scientists agree on a given topic is that it sidesteps the more important point: that there is a compelling reason for that agreement—"the epistemic value of a consensus is in the way consensus is actually formed," the authors write. "As John Cook phrased the objection: 'our understanding of climate change is based on empirical measurements, not a show of hands' (2014)." It is therefore imperative that consensus messages, when they are deployed, not leave readers with the impression that consensus is a matter of scientists' shared opinions, but rather convey that it is an expression of shared understanding based upon the weight of the evidence. Science is defined, in no small part, by its rigorous process, including "organised skepticism (Merton, 1942), competition (Kuhn, 1962), and peer review." It's an environment where data are vigorously vetted, and one in which consensus is meaningful—but not in and of itself: it's meaningful because of the process of arriving at it—because of the rigors baked into the structure of science.
Another key question raised is: what criteria determine that a consensus message has been successful? The seemingly simple query is a can of worms disguised as chicken noodle soup. The authors point out, “Setting the real world aside, a minimum requirement for demonstrating potential efficacy [of a consensus message] would be favorable and replicable results in experiments, where exposure to a consensus message is controlled. However,” and here’s the kicker: “researchers disagree about what constitutes favorable results.” The effects observed thus far are inconsistent across groups, differ between attitudes and policy support, and persist for indeterminate lengths of time, so the debate over what constitutes success endures. “If there was clear, consistent evidence of a direct effect where participants exposed to a consensus message report greater concern about climate change and greater support for climate change mitigation policies (outcome variables) than those exposed to a control message,” the authors observe, “this debate over the effectiveness of consensus messaging would not exist.” As it stands, different subgroups respond differently to consensus messaging, some responses include negative effects, and whether or not a number is used to convey the consensus can alter results (saying “97% of scientists…” is categorically different from saying “an overwhelming majority of scientists…” for example). A further complication is that study participants do not themselves agree on what level of agreement would constitute a consensus (across widely disparate responses, the average threshold was 62%).
Negative reactions to consensus messaging do not necessarily constitute failures either: “reactance in response to consensus messages by certain subgroups does not necessarily invalidate the potential for consensus messaging to move public opinion,” the authors write. “It just underlines a well understood truism of communication: different people respond to messages differently.”
In short, the jury is still out, but hard at work on a verdict. We’re in the midst of discovery. As with all science, understanding, and eventually consensus, will take time and effort to reach.
Natasha (Strydhorst) Unsworth
is a first-year doctoral student at Texas Tech University’s College of Media & Communication. Her research is focused on science communication—specifically the factors contributing to and consequences associated with science illiteracy.