For all of the known, and frequently intentional, faults of our social media, schools, and the Web, far more information gets out than disinformation. Research now shows that the harms of disinformation are concentrated, and could be reined in. The biggest problem is among those who want to be misinformed, who want their prejudices and their inherited privilege confirmed.
I try to inform a classmate of mine via an alumni mailing list. He is a lawyer who doesn’t believe in the rules of evidence, who doubts that scientists can actually do science. I have had a lot of discussions with hard-shell Creationism Supremacists, too.
I was also one of many who got RFK Jr.'s bogus anti-vaxx children's charity bounced off Facebook.
What we do — and don’t — know about how misinformation spreads online
There are gaps in our understanding of how and why digital misinformation propagates. To help design effective interventions to minimize the spread of falsehoods, researchers need data and transparency from online platforms.
Yet common perceptions about misinformation and what well-grounded research tells us don't always agree, as Ceren Budak at the University of Michigan School of Information in Ann Arbor and her colleagues point out in a Perspective article. The degree to which people are exposed tends to be overestimated, as does the influence of algorithms in dictating this exposure. And a focus on social media often means that wider societal and technological trends that contribute to misinformation are ignored.
Misunderstanding the harms of online misinformation
The controversy over online misinformation and social media has opened a gap between public discourse and scientific research. Public intellectuals and journalists frequently make sweeping claims about the effects of exposure to false content online that are inconsistent with much of the current empirical evidence. Here we identify three common misperceptions:
- that average exposure to problematic content is high,
- that algorithms are largely responsible for this exposure and
- that social media is a primary cause of broader social problems such as polarization.
In our review of behavioural science research on online misinformation, we document a pattern of low exposure to false and inflammatory content that is concentrated among a narrow fringe with strong motivations to seek out such information.
In response, we recommend holding platforms accountable for facilitating exposure to false and extreme content in the tails of the distribution, where consumption is highest and the risk of real-world harm is greatest. We also call for increased platform transparency, including collaborations with outside researchers, to better evaluate the effects of online misinformation and the most effective responses to it. Taking these steps is especially important outside the USA and Western Europe, where research and data are scant and harms may be more severe.
The Social Media Typhoons aren't listening. We will need Congress to act, which should be possible next year. Lawsuits help in the most egregious cases, as with Alex Jones being forced to divest himself of InfoWars.
Misunderstanding Misinformation
By focusing narrowly on problematic content, researchers are failing to understand the increasingly sizable number of people who create and share this content, and also overlooking the larger context of what information people actually need. Academics are not going to effectively strengthen the information ecosystem until they shift their perspective from classifying every post to understanding the social contexts of this information, how it fits into narratives and identities, and its short-term impacts and long-term harms.
Navigating a Polluted Information Ecosystem
Rumors Have Rules
Decades-old research about how and why people share rumors is even more relevant in a world with social media.
Both uncertainty and significance are rooted in the “basic law of rumor” introduced by scholars Gordon W. Allport and Leo Postman in 1946: the strength of a rumor is proportional to its significance to the listener multiplied by the ambiguity of the evidence around it. The condition of diminished trust stems from an idea of sociologist Tamotsu Shibutani from 1966, that informal communication surges in the absence of timely official information. The familiarity/repetition dimension arises from the “illusory truth effect,” identified in the 1970s, that repetition increases believability. The seemingly contradictory feature of novelty tracks to work in 1990 showing that rumors lose value over time.
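Allport and Postman's "basic law" is usually written as a compact product; the symbols below are the conventional shorthand rather than anything taken from the passage above:

R ≈ i × a

where R is the strength or reach of the rumor, i is the importance of the subject to the listener, and a is the ambiguity of the available evidence. Because the relationship is multiplicative rather than additive, a claim that matters to nobody, or about which the evidence is unambiguous, should generate essentially no rumor at all.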
We initially developed this framework to guide our “rapid response” research. After conversations with local and state election officials who were struggling for guidance about when and how to address false claims about their processes and procedures, we adapted the framework for their perspective. Since then, we have presented it to a small number of local and state election officials for feedback. We aim to develop, deploy, and evaluate trainings based on the framework for 2024.