Anyone in the sciences (and most in any other field) could tell you: we will never run out of questions. There’s a lot we don’t know—but we do know the questions will keep coming. There are countless things to be curious about: amyloid proteins, biofilms, curiosity itself. Even having “focused” my studies on science illiteracy, there are myriad facets (yet more questions) to dive into: What makes science interesting to some and not others? What level of literacy would be sufficient to engage in the science issues most relevant to one’s community? How do we reconcile science education—which, for a vast majority, ends in high school or college—with the unending march of science and science knowledge among experts?
STEM communication researchers here at Texas Tech University are asking questions just like these every day—and chasing the answers for months or years. Highlighting their work is the heart of what this blog is about (with reflections on research, musings about science, and insights into doctoral student life thrown in).
This space is for questions and answers—we’ll highlight the publications and presentations of the College of Media & Communication’s STEM communication researchers—but it’s also a space for the wonder that accompanies those queries and discoveries. We’ll wax eloquent about fascinating findings in the science world and ponder the imponderables we come across along the way. We’ll cultivate a comfort with uncertainty while being certain of one thing: we’re not going to run out of questions any time soon.
In the words of one of my favorite authors of all time, Bill Bryson: “We live on a planet that has a more or less infinite capacity to surprise. What reasoning person could possibly want it any other way?”
It occurs to me that a blog about "STEM communication" ought to provide something of a rundown of what STEM actually is, so here goes:
“STEM” is something of a buzzword (buzz acronym?) in the educational milieu. It stands for “Science, Technology, Engineering, and Mathematics.” STEM communication, then, is the presentation (via articles, video clips, press releases, podcasts, TED Talks, etc.) of the research, findings, histories, and understandings making up each of these expansive and ever-developing fields.
STEM communication usually refers to efforts and techniques to communicate technical, dense, scientific material produced by experts in STEM fields to non-experts. That is the intention of this very blog, though some of the research we look forward to featuring here may well focus on STEM communication by and for experts in the fields of science and communication (and the juncture at which they meet); that is why much of what we will feature here will take the form of peer-reviewed journal articles—a gold standard of science communication among experts, but not designed to be perused by a general audience.
Even though STEM communication is largely understood as the communication of science to non-science audiences, a more holistic understanding is that STEM communication is the circulation of scientific information through the whole body of the system: it's scientists speaking to other scientists, scientists publishing in journals, foundations collaborating with scientists, journals disseminating science to members of the public (and those passing it along to other public arenas), scientists communicating with the media, and the media passing science along to the public—and all the other combinations of these stakeholders imaginable. As with any complex system, breakdowns can and do occur.
STEM and communication can go rather cheerfully hand-in-hand, though the gulf that may rise up between them has been the source of myriad problems in recent years. Perhaps the most concerning of these is the burgeoning scourge of misinformation (rapidly and widely—if ignorantly—disseminated falsehoods) and disinformation (a more sinisterly intentional spreading of false information). Misinformation and disinformation are hardly unique to scientific communication, but they do pose unique challenges to this field. Speculation and rank falsehoods about COVID-19’s origins, transmissibility, severity, and treatment have spread even more rampantly than the virus, resulting in widespread distrust of experts, disregard of public health guidelines, and distorted perceptions of science and its practitioners around the world. Other scientific fields—notably the environmental sciences—have similar stories. These stories are a particularly poignant reminder that STEM communication is vital to a society that depends so heavily on the developments arising from the science, technology, engineering, and math fields around the world.
Civil wars of words are playing out across the pages of peer-reviewed journals every quarter, largely hidden from public view. Allow me to pull back the curtain on one of these, concerning the concept of scientific consensus.
Peer-reviewed journals are not just compilations of studies. In the academic world, they’re also very slow-motion conversations: the snail-est of scholastic snail mail—if snail mail were on public display. Journals are the place to publish not only studies, but letters and essays: responses and rebuttals to other scholars’ studies and essays. The New England Journal of Medicine, one of the most prestigious medical journals in the country, regularly publishes editorials (opinion pieces). Other journals often include essays, reviewer explanation pieces, or even “advanced reviews”—peer-reviewed, article-length summaries of ongoing scientific conversation and controversy. Today, I’m exploring just such an advanced review by Texas Tech’s Dr. Asheley Landrum and Dr. Matthew H. Slater of Bucknell University.
The authors begin by acknowledging the debate, both within the (web)pages of journals and behind the scenes, among researchers on the question of how scientific consensus messaging ought to be studied. That is, there’s considerable uncertainty about just how proclaiming that scientists have reached consensus on a particular issue affects consumers of that proclamation. It’s a phenomenon that still has a lot of creases to iron out. The science is in, the consensus reached, and the papers written on issues as diverse as climate change, vaccines, and genetically modified food; what’s left is to see how the public absorbs and reacts to this information. And on that point, there is anything but consensus.
Some schools of thought, the authors detail, have sought support for the “Gateway Belief Model” (GBM), which suggests that messages regarding scientific consensus create a gateway belief, “indirectly influencing public opinion and policy support by increasing public perceptions of scientific consensus.” Others suggest this model falls short by downplaying the crucial role of motivated reasoning in the processing of such consensus messages—partisanship is a formidable force that, they argue, is not considered in the GBM. The authors themselves take a step back to raise the questions that have yet to be taken into account.
First, how should communicators and researchers convey scientific consensus? Typically, consensus messages are not only messages of agreement, but also implicit nods to authority and expertise—“yet research has demonstrated that authority commands like these can induce backfire effects,” with readers reacting against the appeal to authority. While most Americans tend to trust scientists—in general—“credibility is mutable and context-dependent,” and polarization is rampant, muddying the waters. The authors propose that one means of potentially sidestepping the backfire effect is to emphasize that consensus is distinct from agreement. “Agreement,” they write, “can be achieved in many ways. That all members of a cult agree that their spaceship arrives tomorrow should not increase our credence that it will, given our background beliefs about how such an agreement was likely formed. Agreement formed by more democratic or adversarial processes [like that of the scientific method], by contrast, is often indicative of something beyond mere social cohesion. In the case of scientific consensus, that something is presumably the weight of the evidence for a given proposition.”
A large part of the difficulty with proclaiming that a particular percentage of scientists agree on a given topic is that it sidesteps the more important point: that there is a compelling reason for that agreement—"the epistemic value of a consensus is in the way consensus is actually formed," the authors write. "As John Cook phrased the objection: 'our understanding of climate change is based on empirical measurements, not a show of hands' (2014)." It is therefore imperative that consensus messages, when they are deployed, not leave readers with the impression that consensus is a matter of scientists' shared opinions; it is, rather, an expression of shared understanding based upon the weight of the evidence. Science is defined, in no small part, by its rigorous process, including "organised skepticism (Merton, 1942), competition (Kuhn, 1962), and peer review." It's an environment where data is vigorously vetted, and one in which consensus is meaningful—but not in and of itself: it's meaningful because of the process of arriving at it—because of the rigors baked into the structure of science.
Another key question raised is: what criteria determine that a consensus message has been successful? The seemingly simple query is a can of worms disguised as chicken noodle soup. The authors point out, “Setting the real world aside, a minimum requirement for demonstrating potential efficacy [of a consensus message] would be favorable and replicable results in experiments, where exposure to a consensus message is controlled. However,” and here’s the kicker: “researchers disagree about what constitutes favorable results.” The effects observed thus far are inconsistent across groups, differ between attitudes and policy support, and persist for indeterminate lengths of time, so the debate over what constitutes success endures. “If there was clear, consistent evidence of a direct effect where participants exposed to a consensus message report greater concern about climate change and greater support for climate change mitigation policies (outcome variables) than those exposed to a control message,” the authors observe, “this debate over the effectiveness of consensus messaging would not exist.” As it stands, different subgroups respond differently to consensus messaging, some responses include negative effects, and whether or not a number is used to convey the consensus can alter results (saying “97% of scientists…” is categorically different from saying “an overwhelming majority of scientists…,” for example). A further complication is that study participants disagree about what level of agreement would even constitute a consensus: the average of their widely disparate responses was 62 percent.
Negative reactions to consensus messaging do not necessarily constitute failures either: “reactance in response to consensus messages by certain subgroups does not necessarily invalidate the potential for consensus messaging to move public opinion,” the authors write. “It just underlines a well understood truism of communication: different people respond to messages differently.”
In short, the jury is still out—but still deliberating. We’re in the midst of discovery. As with all science, reaching understanding and—eventually—consensus will take time and effort.
2017 marked the first International (one might say global) Flat Earth Conference. Participants arrived from around the world (though across might better fit the zeitgeist) to share camaraderie and speculation that the Earth is as flat as it appears from our limited perspective on this blue planet.
If those who read this are a representative sample, about 84 percent of you share the majority belief that the Earth is a globe, according to YouGov (2018), but around 5 percent are doubtful, and around 2 percent are flat-out convinced this big blue marble more resembles a pancake. Just think—if they were right, it would make cartographers’ jobs so much easier.
Science may have a lot to say about the shape of the Earth, but little research has investigated the group claiming otherwise—“Flat Earthers.” Some of Texas Tech’s science of science communication researchers set out to change that with a study published this last summer: “Flat-Smacked! Converting to Flat Eartherism.”
The authors—Alex Olshansky, Robert M. Peaslee, and Asheley R. Landrum—report previous work uncovering that upwards of 50 percent of the U.S. population is liable to latch onto at least one form of conspiracy theory—they have “conspiracy mentality.” This mentality is emotion-based, sprouting from paranoia, cynicism, distrust (particularly of government, institutions, and people in positions of power), and a sense of one’s own relative powerlessness. Flat Earthers are reported to have higher-than-average conspiracy mentalities and to have encountered Flat Earth-supporting YouTube videos in their recommended feed after first watching other conspiracy clips (such as those claiming the Sandy Hook Elementary School shooting and 1969 moon landing were fabricated).
To further investigate how conversion from spherical-Earth to flat-Earth belief takes place, the researchers interviewed 20 Flat Earthers at the 2018 Flat Earth International Conference. Conversion to Flat Eartherism was gradual for the interviewees. The most common path from mainstream theory to conspiracy theory runs through multiple Flat Earth videos (initially met with doubt), followed by a creeping awareness of one’s inability to debunk the videos’ claims.
Those with higher-than-average conspiracy mentalities and lower science knowledge are more vulnerable to credulity when they encounter Flat Earth videos, but what brings most people to the videos in the first place appears to be viewing other conspiracy theory videos, and subsequently having Flat Earth ones become recommended viewing.
“Once you perceive you’ve been lied to,” the authors write, “a natural instinct is to want to know what else you’ve been lied to about; you are then motivated to dig for the ‘truth.’”
Natasha (Strydhorst) Unsworth
is a first-year doctoral student at Texas Tech University’s College of Media & Communication. Her research is focused on science communication—specifically the factors contributing to and consequences associated with science illiteracy.