Right-wing extremists have always made a kind of art out of gaming social media platforms’ attempts at moderating their hateful content and disinformation. For them, it’s a game—a relatively easy one—to devise simple workarounds to the platforms’ usually ham-handed attempts at weeding out their toxic contributions: altering prohibited hashtags by a single letter, shifting accounts to similar personas, evading bans by having followers post content created elsewhere by banned actors, and counting on inconsistent enforcement to find cracks. It has worked well at major platforms such as Twitter, Facebook, and Instagram.
The popular video platform TikTok is no exception. In recent months, white-nationalist “Groypers” from Nick Fuentes’ “America First” movement have been flooding TikTok with content created by Fuentes and his cohorts, even though Fuentes himself was banned from the platform in late June. Officials at the company, meanwhile, seem disinclined to attempt any solutions to the problem beyond the ineffective reactive measures they have taken.
The Groypers are not the only far-right extremists who are exploiting TikTok’s lax moderation. A recent study by the Institute for Strategic Dialogue (ISD) found that a broad range of white-supremacist and other extremist content (including ISIS recruitment videos) remains readily available on the platform.
“TikTok offers a platform for this explicit content to reach new audiences and potentially reach younger audiences, which was quite worrying,” ISD researcher Ciaran O’Connor told Politico.
Working from a core sample of 1,030 videos from accounts that featured extremist content, the study identified 312 videos—comprising 30% of their sample—promoting white supremacy, and 246 videos featuring support for extremist figures like Adolf Hitler or terrorists, including the 2019 Christchurch, New Zealand, mass killer. The videos included far-right topics such as “white genocide” and “replacement theory,” TikTok Sounds posts featuring white-power bands, as well as clips of extremists like Fuentes or white nationalist streamer Paul Miller (currently awaiting trial on weapons charges) hurling racial abuse.
One of these videos had over 2 million views. Three of Miller’s videos, all in the top 10 of the survey, had garnered a collective 3.5 million views.
The study noted that extremists love to leverage TikTok’s systemic organization—particularly its algorithm-driven “For You” video-recommendation feed, as well as its “Stitch” function, which allows people to incorporate others’ videos into their own content. Most of all, it observed that these extremists had relatively little trouble working around TikTok’s feeble attempts to clamp down.
Evasion tactics to avoid takedowns are simple but effective. They include banned users returning to TikTok under almost-identical usernames, using privacy settings and comment restrictions strategically, adopting alternative hashtag spellings, and exploiting profiles’ video-grid layout to promote hatred.
This, as Alex Kaplan recently reported for Media Matters, is exactly how the Groypers have latched on to TikTok as a platform of choice. After his ban in June, Fuentes went to work actively encouraging supporters to post his content there.
On his Epik-hosted podcast, Fuentes urged his audience to “get on TikTok, start chopping up my content, and spamming it on TikTok with hashtags.” He also instructed them in tactics for evading detection, such as intentionally misspelling hashtags, and urged them to “get me on the ‘For You’ page.”
Fuentes calls it “Operation NickTok,” and has boasted (without evidence) that content from his “team” on TikTok had gathered more than 15 million views.
TikTok’s detection mechanisms have proven inadequate against these simple evasion techniques. While searches for “Nick Fuentes” are banned on TikTok, Kaplan found that a search for “Nicholas J Fuentes” returns a banquet of white-nationalist content. The hashtag “#NickFuente” has drawn over 2.7 million views on TikTok.
Similarly, a “groypher” page on TikTok reveals a full menu of white-nationalist videos with a collective 138 million views. Its bio recommends a range of white-nationalist accounts to check out, along with a long list of related hashtags, such as “#gropher, #grypher, #goopher, #gopher, #groyperparty” and others.
As Kaplan observes:
Simply banning the search terms “groypers” or “Nick Fuentes” is a wholly insufficient solution to the complex problem of an emerging white nationalist ideology on TikTok, which has a new, young audience. TikTok’s “For You” page algorithm provides a uniquely dangerous radicalization pipeline, where far-right extremist videos are fed to users who interact with affiliated content. It’s not hard to see how niche content posted by small accounts, such as the ones that uploaded Fuentes’ videos, gets millions of views.
These strategies reflect the shift in tactics by far-right extremists since the Jan. 6, 2021, insurrection, a dynamic created by the decision of most major social media platforms to deplatform many of the far right’s most visible and toxic influencers.
“There’s always going to be this synergistic relationship between the content moderation failures of Facebook, Twitter and alt tech platforms like Parler. So we should absolutely expect that going into the 2022 midterms, especially in battleground states where things are extremely polarized, we will see a similar dynamic,” Candace Rondeaux, director of the Future Frontlines program at the think tank New America, told The Hill.
“A lot of the activity that is happening on those platforms is still reactive to things that are happening on mainstream platforms. So really understanding that dynamic and not treating it as this completely separate and distinct factor when we think about the internet I also think is important,” said Jared Holt, a resident fellow at the Digital Forensic Research Lab (DFRLab).
Paul Barrett of the NYU Stern Center for Business and Human Rights told The Hill that these platforms’ reactive strategies, long proven to be wholly inadequate, make it “much more likely that in 2022 and 2024 we’ll see renewed mayhem online and in the real world.”
“Even after Jan. 6, and even after banning President Trump either indefinitely or permanently, it strikes me that the platforms still are more prone to reacting to what they see as public relations crises than they are inclined to address these very serious problems in a forward-looking and comprehensive way,” Barrett said.
“There just hasn’t been any kind of concerted, industry-wide effort to say that, ‘Look, we are part of the problem. Not because we intend to be part of the problem, but it turns out, inadvertently, we have created tools that are being misused in connection with absolutely vital institutions and processes,’” he added.