Dangerous denialism about election results, be they the 2020 election results or those yet ahead in the midterms, is sky-high among Republicans and that disinformation is buoyed on a near-daily basis by individuals like former President of the United States Donald Trump.
His efforts championing the Big Lie were aided by his toadies in the House and Senate and were often echoed and amplified by right-wing conspiracy theorists and charlatans on the news and, critically, on social media platforms.
The use of these platforms, including Facebook/Meta, YouTube, Twitter, and TikTok as a tool to amplify lies of this sort has become ubiquitous in the aftermath of Trump’s presidency. As a new study published Monday by New York University notes, "the claim that Joe Biden did not legitimately win the 2020 election has evolved from a backward-looking Big Lie to an article of faith in the Republican Party that elections in the United States are generally corrupt.”
Seemingly unable to defeat Democrats on the mere strength of their own ideas or actual policies, those on the right have turned toward denialism, allowing distrust to flourish. That distrust has “fueled harassment and death threats aimed at election administrators, causing many experienced officials to make plans to leave their jobs before the 2024 presidential election,” the 24-page study notes.
“If even a handful of Republican deniers are elected this year to state offices that oversee presidential elections—such as governor and secretary of state—the 2024 process could descend into chaos and violence, making the events of 2020-2021 seem tame by comparison,” the NYU study continues.
Right-wing officials, conspiracy theorists, and election deniers alike do not stand on street corners, largely ignored, yelling into a void. They use massive platforms to hawk disinformation and, intended or not, sow chaos in the real world. Trump’s use and abuse of social media to peddle lies from the topmost seat of power in the U.S. changed the digital dynamic for politicians, but it also spurred waves of intense criticism and scrutiny of the role each platform played in spreading disinformation or misinformation about election results.
The NYU study published Monday was intended not just to assess the role social media platforms play or played in spreading disinformation, but also to expose the still-lingering weaknesses of each company. Recommendations were provided for each.
The study found that Facebook, for example, still exempts politicians from the platform’s own fact-checking program.
According to NYU’s review, Facebook/Meta has conducted zero fact-checks of politicians’ statements since 2019.
Notably, when the platform banned Trump in January 2021 following his incitement of insurrection at the U.S. Capitol, it noted the “danger of his inciting violence” but failed to address the inherent danger of his lies about election fraud.
Twitter lackadaisically enforces its “civic integrity policy” and has allowed election denialism to flourish steadily by applying standards for users in an “on-again-off-again” approach.
A whistleblower report from former Twitter security chief Peiter Zatko made public this August alleged that Twitter lied to regulators about its security methods and overwhelmingly failed to stem disinformation on its platform because of poor management, a lack of resources, and an obsessive focus on “putting out public relations fires.”
TikTok has enforced some of its own political misinformation rules but does so “haphazardly” and “erratically,” researchers found.
“Research for this report found that the platform was blocking users from searching for the term ‘election fraud’ but allowing them to scour the site for videos about ‘ballot trafficking,’ which collectively attracted millions of views,” the study notes.
It wasn’t until after TikTok was questioned about this directly that the platform began blocking searches for ballot trafficking.
YouTube is not much better off. That platform is still rife with conspiracy theory content about elections in the U.S.
“YouTube, a subsidiary of Google, was the last major platform to reveal its midterms strategy—a continuation of its pattern of hanging back to allow rivals to absorb the brunt of critics’ attention. The video platform’s past election-season performance does not provide reason for optimism. For example, it belatedly announced in December 2020 that it would remove misleading claims that ‘widespread fraud or errors changed the outcome’ of that year’s election. But YouTube applied the policy only to content uploaded after December 9, which allowed untold numbers of denialist videos to remain available, contributing to the erosion of trust in democracy,” the study notes.
Republican nominees for political office who denied the 2020 election results are positioned to have significant influence. In election battleground states like Arizona, Pennsylvania, Michigan, Wisconsin, and Nevada, for example, almost two-thirds of the GOP’s midterm nominees are election deniers.
While Trump may deserve the lion’s share of the blame for the uptick in this latest cynical and dishonest approach to politics, social media platforms shoulder some of that responsibility too, Monday’s report noted.
“First, social media platforms are not the sole engine driving election denialism. Without Donald Trump’s uniquely corrosive attacks on U.S. elections, law enforcement, and other public institutions, we would not face such a dire situation. Republican leaders also have played a key role by echoing Trump’s shameless claims or remaining silent. This obeisance reflects fear of Trump’s wrath, but also a calculation that denialism helps justify restrictive voting laws that lower Democratic turnout and boost Republican success at the polls,” the report states. “Fox News and even more extreme right-wing cable outlets also have exacerbated denialism, as have pro-Trump talk radio, podcasts, and websites. Still, it’s important to remember that these other disinformation sources energetically seek to amplify their content via social media and that versions of the denialist narrative ricochet among platforms, gaining credibility on the political right by means of sheer repetition.”
The NYU study also acknowledges that it is no small feat to tackle an ever-moving target on platforms that refresh daily. Automated moderation can only go so far and there are even more limits on human moderators.
“But most of them work for third-party vendors for modest pay and under sometimes unsettling circumstances,” researchers note.
Nonetheless, the NYU study highlighted that the “malady of election denialism in the U.S. has become one of the most dangerous byproducts of social media, and it is past time for the industry to do more to address it.”
All of the social media platforms studied by NYU responded—to varying degrees—to inquiries.
YouTube and TikTok were the most cagey, however.
NYU noted that YouTube did, however, release a statement on Sept. 1 saying that its search function algorithm favored “authoritative news sources” like PBS, ABC, NBC, Univision, and The Wall Street Journal over others.
YouTube videos “encouraging interference in the democratic process, inciting violence or advancing certain types of election misinformation” are actively removed, too, the social media giant said. It promised to launch a “media literacy campaign” as well ahead of the midterms. A representative for YouTube did not immediately return a request for comment to Daily Kos.
Meta (formerly Facebook) spokesperson Tom Reynolds highlighted how the platform has since stopped recommending political Facebook Groups to users. The platform claims it is also tackling the “coordinated harassment and threats of violence against election officials and poll workers.”
“Beyond labeling and demoting content that has been deemed false by its outside fact-checking network, Meta notifies users before they try to share such content. The company also informs people if something they have shared is later determined to be false,” the study notes.
Twitter vowed in August to refine its algorithms so that misinformation-laden tweets are not recommended to users and the company announced too that it was going to use something it calls “prebunks” to get ahead of disinformation.
TikTok’s response was more canned when NYU came asking questions. In a general statement, the company said it was “committed to protecting the integrity of our platform and have dedicated teams working to safeguard TikTok during elections.”
“We prohibit and remove election misinformation and other violations of our policies, work with accredited fact-checkers who help us assess content, and partner with authoritative sources to provide access to election information,” TikTok said.
The platform is largely used to post and share viral, mostly apolitical content, but in August the company updated its election policies, including labeling content specifically related to the 2022 midterms and allowing users to click through for information about political races in their state.
TikTok also said it would “inform viewers of content that fact-checkers deem ‘unsubstantiated’ and prompt them to reconsider before sharing the potentially misleading information.”
According to NYU, the primary focus of disinformation on social media platforms today tends to revolve around the myth of ballot trafficking.
Ex-felon and Trump-pardon recipient Dinesh D’Souza’s “2000 Mules” film made much of this conspiracy theory, suggesting that paid “mules” illegally stuffed some 400,000 ballots into drop boxes in swing states. His “proof” relied on cell phone pings. They “purported to cross-reference cellphone data and surveillance tape to show that unnamed individuals repeatedly traveled from the offices of liberal nonprofits to drop boxes in Arizona, Georgia, Michigan, Pennsylvania, and Wisconsin.”
The film was widely panned and debunked.
Per NYU: “Ballot trafficking is a more ominous-sounding version of what has traditionally been referred to as ballot collecting or harvesting. The concern is that elections can be undermined by the bundling of substantial numbers of mail-in or absentee ballots and delivery of these ballots to polling places or drop boxes. The implication is that the named voters did not fill out the ballots themselves or may not even exist. Illegal ballot collecting does occur, but only rarely, and there is no evidence that it tends to favor Democrats.”
The “furor” over ballot trafficking, according to the University of Washington’s Center for an Informed Public, coincided directly with the term’s trending use on Twitter.
“From December 2020 through April 2022, they found that while intensifying online activity around the term may appear organic, in fact, it was ‘high-follower accounts from leaders, conservative activist organizations, and right-wing media outlets and pundits [that] participated in—and in many cases, helped to seed and/or catalyze—the spread of ballot trafficking claims.’ In other words, there was a concerted effort to amplify the allegation of ballot trafficking on social media,” the report states.
Republican National Committee chairwoman Ronna McDaniel amplified the ballot trafficking myth on Twitter in March 2021. Disinformation peddlers like Breitbart and other right-wing media outlets followed immediately, publishing articles about so-called ballot trafficking based on claims from the conservative-leaning True the Vote organization.
It was the True the Vote “data” that was also used by D’Souza in his film. Other right-wing outlets like The Gateway Pundit picked up the thread and the conspiracy theory-addled Q-Anon movement seized on the false messaging too, spreading it far and wide on the messaging app Telegram.
Speaking on D’Souza’s film during a sworn deposition before members of the Jan. 6 committee, former Trump-era Attorney General Bill Barr said that the ping data demonstrated “at most” that certain cellphone customers were in the vicinity of drop boxes at multiple times. This was likely due to the fact that many of the boxes were in pedestrian-friendly locations.
The Georgia Bureau of Investigation was equally "unimpressed with [the cellphone data], and I was similarly unimpressed with it,” Barr told the select committee.
“If you take two million cellphones and figure out where they are physically in a big city like Atlanta, just by definition, you’re going to find many hundreds of them have passed by and spent time in the vicinity of these boxes. The premise that if you go by a box, five boxes, or whatever it was, you know that’s a ‘mule,’ it’s just indefensible,” Barr said.
Sadly, the disinformation seized on by politicians and other opportunists has had an outsized impact on communities of color because it is used to rationalize overly strict voting laws. According to NYU’s Brennan Center for Justice, since Jan. 2021, Republicans in nearly 20 states have enacted some 34 laws that restrict access to the polls. That includes restrictions on early voting and additional voter ID requirements.
Social media platforms have historically been slow to self-regulate, and their commitments to manage dangerous disinformation on their sites have waffled. “Cynical calculations” are being made, according to some experts like Richard Hasen, a professor of election law at the University of California, Los Angeles.
In an interview with NYU, Hasen said Meta was making a calculation that Republicans would regain total control of Congress. CEOs like Mark Zuckerberg don’t enjoy being at odds with Republicans who may soon write laws regulating the industry.
An unidentified Twitter executive conceded in an interview with NYU that it was not ready to declare ‘mission accomplished’ around the spread of election denialism or other disinformation on its platform.
NYU’s Stern Center for Business and Human Rights has advanced a proposal that ups consumer protection measures by increasing the Federal Trade Commission’s powers to oversee social media. But an abundance of First Amendment questions must still be factored in.
With that in mind, researchers have recommended passage of the Platform Accountability and Transparency Act (PATA), a bipartisan bill that would require social media companies to provide independent researchers and the public with access to their platform data. The study also calls for the outright removal of “demonstrably false content” on Facebook, but with preservation methods in place so that researchers, journalists, and anyone else who may be curious can study the patterns and dissemination of disinformation.
Enhanced fact-checking is “not a panacea,” NYU researchers noted.
But some lies, like the claim of widespread fraud affecting the outcome of the 2020 election or that the Holocaust did not happen, are so demonstrably false that social media companies can assuredly do more.
“Research has shown that fact-checking has a positive effect on people’s ability to distinguish truth from lies. Research has also shown that fact-checking has an even more positive effect if people previously exposed to the lie in question are informed of that fact—a retrospective boost for the truth that social media platforms have the technological ability to accomplish,” Monday’s study highlighted.