Extremists are using gaming platforms to recruit young followers
08-01-2025

For many people, online gaming is a harmless escape – a chance to unwind with friends, trade strategy tips, and watch favorite streamers test new releases.

Yet a new study suggests that this same vibrant, hyper-connected space has become fertile ground for extremist recruiters, who see voice chat, livestreams, and in-game banter as open doors to fresh audiences.

Gaming platforms fly under the radar

Over the past decade, legislators have poured energy into policing social media platforms like Facebook, X, TikTok, and YouTube.

By comparison, gaming-adjacent platforms such as Discord, Steam communities, or Twitch “have largely flown under the radar, while at the same time becoming digital playgrounds for extremists to exploit,” said lead author William Allchorn from Anglia Ruskin University’s International Policing and Public Protection Research Institute.

These services enable real-time chat, meme sharing, and private voice channels – making moderation especially difficult. The research team – Allchorn and senior fellow Elisa Orofino – spoke with content moderators, tech-industry specialists, and practitioners who work on preventing violent extremism.

Interviewees repeatedly described a funneling process: a user first encounters edgy jokes or borderline rhetoric on mainstream services, then an extremist invites them to a more lightly policed space tied to gaming culture.

“That’s where you have matchmaking. It’s where you can build quick rapport with people,” one participant said. “But that’s the stuff that very quickly moves to adjacent platforms, where there’s sort of less monitoring.”

Extremist messages flood gaming platforms

According to the study, the most common ideology circulating in these spaces is far-right extremism – white-supremacist propaganda, neo-Nazi symbols, Holocaust denial, and an undercurrent of misogyny and anti-LGBTQ rhetoric.

Islamist material appears too, though less frequently, and moderators also reported “extremist-adjacent” content that glorifies school shootings or promotes conspiracy theories such as QAnon.

All of it violates platform terms of service, yet often evades detection because the language is coded or hidden behind memes.

Hyper-masculine titles – first-person shooters or combat games that prize aggression, quick reflexes, and leaderboards – offer particularly welcoming environments for recruiters.

Newcomers can slip into a team channel with strangers, bond over shared tactics, then encounter racist or antisemitic slurs delivered as “jokes.” If a young player laughs along, recruiters test deeper waters. They may share links to fringe forums or private Discord servers where rhetoric becomes more explicit.

Moderation systems overwhelmed

Most gaming companies rely on a mix of automated filters and volunteer or lightly paid moderators. Interviewees told Allchorn and Orofino they feel overwhelmed.

In-game voice chat lacks moderation, while text channels are flooded with slang and sarcasm that overwhelm AI filters.

A phrase like “I’m going to kill you” can be normal trash talk during a match but a genuine threat in a different context. Many users don’t know how to report extremist content, and those who do often feel their reports are ignored.
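To see why context defeats simple automation, consider a minimal sketch of a keyword-based filter – purely illustrative, not drawn from the study or from any platform's actual tooling. It flags the phrase whether it appears as match banter or as a targeted threat, because the words alone carry no context:

```python
# Illustrative only: a naive keyword filter cannot tell trash talk from a threat.
FLAGGED_PHRASES = ["i'm going to kill you"]  # hypothetical watchlist

def flag_message(message: str) -> bool:
    """Return True if the message contains any watch-listed phrase."""
    text = message.lower()
    return any(phrase in text for phrase in FLAGGED_PHRASES)

# Both messages trigger the same response, even though only one is a threat.
print(flag_message("lol I'm going to kill you next round"))       # True (harmless banter)
print(flag_message("I'm going to kill you, I know your school"))  # True (genuine threat)
```

Judging which of those flags matters still falls to human moderators, which is part of the burden the interviewees described.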

Moderators themselves expressed frustration at “inconsistent enforcement policies” and the heavy burden of deciding when to escalate a user to law enforcement.

Meanwhile, extremists exchange tips on how to dodge bans, such as substituting numbers or symbols for letters in slurs and encoding messages with historical references or Nordic runes.
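A moderation pipeline can counter the simplest of these substitution tricks by normalizing look-alike characters before matching, though the mapping has to be updated constantly as coded language shifts. The sketch below is a generic illustration under that assumption, not the approach of any platform named in the study, and the blocklist term is a placeholder:

```python
# Illustrative only: undo simple number/symbol substitutions before keyword matching.
SUBSTITUTIONS = {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"}

def normalize(text: str) -> str:
    """Map common look-alike characters back to letters so coded terms match a blocklist."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in text.lower())

BLOCKLIST = {"slurexample"}  # placeholder term; a real list would be curated and evolving

def is_coded_match(message: str) -> bool:
    """Return True if the message contains a blocklisted term after normalization."""
    normalized = normalize(message)
    return any(term in normalized for term in BLOCKLIST)

print(is_coded_match("slur3x@mple"))  # True: the obfuscated spelling is caught after normalization
```

Character maps cannot decode historical references or runic stand-ins, however, which is one reason the researchers stress pairing automated filters with human review.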

Younger gamers in the crosshairs

A recurrent anxiety among professionals interviewed was the vulnerability of minors. Livestreamers who weave extremist talking points into entertaining gameplay accrue followers in the thousands; teenage viewers may tune in daily, forming parasocial bonds.

Once the streamer invites them to a private server, those young fans are isolated from counter-speech.

Yet parents and teachers rarely recognize gaming platforms as potential vectors of radicalization. The authors argue that digital literacy campaigns must expand beyond traditional social media to cover the unique cultures of gaming.

Solutions span tech and parenting

Allchorn believes decisive action is possible. “Strengthening moderation systems, both AI and human, is essential,” he said, but technologies must be paired with clearer platform rules that target content that is harmful but technically lawful.

The paper urges companies to invest in symbol libraries that evolve as extremist language shifts, to give moderators rapid-response channels to local police, and to provide transparent feedback loops when users report violations.

Law enforcement agencies also need to learn how gaming subcultures operate so they don’t dismiss genuine threats as mere smack talk. Educators and parents can play a role by discussing with children why hateful messages spread and by encouraging the use of in-game reporting tools.

Ultimately, the study paints gaming platforms as a double-edged sword. They build global community, yet also allow fringe movements to thrive unnoticed.

“These gaming-adjacent platforms offer extremists direct access to large, often young and impressionable audiences,” Allchorn warned.

Recognizing the phenomenon and naming its tactics is the first step toward keeping virtual playgrounds safe for play rather than propaganda.

The study is published in the journal Frontiers in Psychology.
