Here’s the uncomfortable truth that most discussions about online extremism get wrong: the content isn’t the problem. We’ve spent a decade debating whether to deplatform, demonetize, or downrank extremist content — and radicalization keeps happening. Not because we haven’t removed enough content, but because we’ve been treating a systems design problem as a content moderation problem. The real danger isn’t what people see online. It’s how platforms structure the experience of seeing it.
- Online radicalization isn’t linear — it’s a feedback loop where algorithms, echo chambers, emotional engagement, and social rewards mutually reinforce each other
- Digital identity fluidity paradoxically accelerates convergence toward rigid extremist identities, especially for those with low self-complexity offline
- The critical risk factor is experience design, not content — platform interaction patterns shape identity trajectories more than any individual piece of content
We regulate content when we should be regulating interaction patterns. Algorithmic drift — the gradual, unintentional nudging of users toward more extreme content through engagement optimization — can create radicalization pathways without any extremist actor or intentional design. The algorithm doesn’t need to be malicious. It just needs to optimize for engagement.
🧠 The Identity Paradox: How Online Freedom Creates Rigid Extremism
The internet was supposed to liberate identity. Pseudonyms, avatars, and anonymous accounts let people experiment with multiple selves — trying on identities the way you’d try on clothes. And in many cases, it does exactly that. But here’s what the techno-optimists missed: the same fluidity that enables identity exploration also makes people vulnerable to identity capture.
When your sense of self is fluid, you’re open to new possibilities. You’re also open to being absorbed by communities that offer something increasingly rare in modern life: a clear, unambiguous answer to the question “Who am I?”
Identity fusion: a psychological state where the boundary between personal self and group identity becomes porous — “I am the group, the group is me.” Unlike standard group identification, identity fusion amplifies individual agency rather than suppressing it: the fused individual acts on behalf of the group with intensified personal conviction. Research by Swann et al. shows it is the strongest predictor of willingness to fight and die for a group.
Here’s the mechanism that makes this dangerous online: people with low self-complexity in their offline lives — those whose identity rests on few pillars (just their job, or just their political affiliation) — are disproportionately drawn to online communities that provide a clear “us versus them” narrative. And once inside, the digital identity doesn’t just supplement the offline self. It consumes it.
- Online pseudonymity allows experimentation with multiple selves — but also makes individuals vulnerable to identity capture by rigid communities.
- When the self-group boundary dissolves, personal agency fuels extreme group commitment — not despite individuality, but through it.
- The digital persona overtakes the offline self — what started as play becomes the primary identity, with the “real world” self becoming secondary.
As Cynthia Miller-Idriss has documented, extremist communities are masterful at lowering the entry barrier through memes, humor, and subcultural codes — while simultaneously building rigid identity boundaries on the inside. You laugh at a meme. You share it. You find yourself in a Discord server. Six months later, you’re using vocabulary that would have horrified you a year ago, and it feels completely natural.
🔍 The Staircase Gets an Elevator: Radicalization at Digital Speed
In 2005, psychologist Fathali Moghaddam proposed the “Staircase to Terrorism” — a model where radicalization unfolds floor by floor: perceived deprivation → external attribution of blame → moral justification → group membership → violent action. Each floor requires time, social contact, and reinforcement. The whole process, Moghaddam argued, takes months or years.
Then the internet gave the staircase an elevator.
“What once required years of face-to-face contact in underground cells can now happen in weeks through algorithmically curated content and parasocial community dynamics.”
— The Digital Compression of Radicalization
In digital environments, the path from grievance to radicalized identity can collapse from years to days or weeks. The sequence — grievance → search → algorithmic recommendation → extreme content exposure → community engagement → identity transformation — doesn’t require any human recruiter. The platform does the recruiting through its recommendation engine.
1. An individual experiencing anger, alienation, or perceived injustice searches for explanations. The search query itself signals emotional vulnerability to the algorithm.
2. Recommendation engines surface progressively more extreme content — not because they’re programmed to radicalize, but because extreme content drives engagement metrics. Each click trains the algorithm to go further.
3. The user enters a community (subreddit, Telegram group, Discord server) where identity fusion begins. Social rewards — belonging, status, shared language — reinforce the new identity framework.
4. The digital identity becomes primary. Moral disengagement mechanisms normalize previously unthinkable positions. The staircase has been traversed — and the individual genuinely believes they arrived here through free choice.
What RAND Europe’s research revealed in 2013 — and what remains underappreciated — is that “self-radicalization” is a misnomer. There is no purely individual radicalization. Even when no human recruiter is present, the algorithm constructs a pseudo-social environment that mimics the group dynamics of traditional radicalization. The user is never alone; they’re in an algorithmically curated community they didn’t consciously choose to join.
Think about your own content consumption. Have you ever noticed your feed gradually shifting in a particular direction after engaging with certain types of content? That’s not paranoia — that’s the algorithm learning your engagement patterns and optimizing for more of the same.

🚨 Algorithmic Drift: The Radicalization Nobody Designed
This is where it gets genuinely unsettling. Algorithmic drift is the process by which engagement-optimized recommendation systems gradually move users toward more extreme content — not through any deliberate design, but as an emergent property of optimizing for clicks, watch time, and shares.
Nobody at YouTube, TikTok, or Twitter sat in a meeting and said “let’s radicalize people.” They said “let’s increase engagement.” But engagement optimization and radicalization share the same fuel: strong emotional arousal. Content that makes you angry, outraged, or afraid keeps you on the platform longer. And the next piece of content the algorithm serves needs to be slightly more arousing than the last to maintain your attention. This is a ratchet that turns in only one direction.
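To make that ratchet concrete, here is a toy simulation and nothing more: the scoring function, the “intensity” scale, and every parameter are my own illustrative assumptions, not a description of how YouTube, TikTok, or any real recommender works. Notice that the word “extremism” appears nowhere in the code; the upward drift emerges purely from greedy engagement optimization plus a baseline that adapts to whatever the user last engaged with.

```python
import random

# Toy model of algorithmic drift (illustrative assumptions only).
# Premise: users engage most with content slightly more emotionally
# arousing than what they are already used to.

def engagement_probability(item_intensity, user_setpoint):
    """Milder-than-usual content feels boring; slightly more intense
    content is rewarded, up to a cap."""
    gap = item_intensity - user_setpoint
    if gap < 0:
        return 0.2
    return min(0.9, 0.4 + gap)

def recommend(candidates, user_setpoint):
    """Greedy engagement optimizer: serve whatever maximizes expected
    engagement for this user right now."""
    return max(candidates, key=lambda i: engagement_probability(i, user_setpoint))

random.seed(42)
setpoint = 0.1                                 # user starts with mild content
for step in range(10):
    candidates = [random.random() for _ in range(20)]   # intensity in [0, 1]
    chosen = recommend(candidates, setpoint)
    if random.random() < engagement_probability(chosen, setpoint):
        # Each engagement nudges the user's baseline toward the served item.
        setpoint = 0.8 * setpoint + 0.2 * chosen
    print(f"step {step}: served intensity {chosen:.2f}, baseline now {setpoint:.2f}")
```

Run it and the printed baseline creeps upward step after step, even though every individual recommendation looks locally reasonable.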
Echo chamber: users with homogeneous beliefs form closed information environments, structurally reinforcing confirmation bias. The group creates the filter. You know you’re in one — but you believe the other side has the echo chamber, not you.
Filter bubble: algorithms select information based on past behavior, reducing information diversity without user awareness. The system creates the filter. You don’t know you’re in one — that’s precisely what makes it effective.
Now here’s the number that should recalibrate this debate. Bisbee et al.’s 2024 study of YouTube found that only 1 in 100,000 users who started with moderate political content ended up consuming far-right extremist content. That sounds reassuring until you do the math: YouTube has over 2 billion monthly users. One in 100,000 is 20,000 people. And as Christchurch, El Paso, and Buffalo demonstrated, it takes exactly one radicalized individual to produce catastrophic real-world violence.
💥 The Affect Economy: How Platforms Weaponize Your Emotions
Let me be honest about something that I found deeply uncomfortable when I first encountered this research: ideological persuasion is not the primary driver of online radicalization. Emotional mobilization is. People don’t get radicalized because they read a compelling manifesto. They get radicalized because they feel something — anger, humiliation, victimhood, alienation — and then find a community that validates and amplifies those feelings.
Digital platforms are, at their core, affect economies. They extract emotional reactions — likes, anger reactions, shares, comments — and convert them into engagement metrics, which drive advertising revenue. The platform doesn’t care what you feel. It cares that you feel, because feeling keeps you scrolling.
“Extremist groups don’t create the anger. They harvest anger that platforms have already cultivated.”
Extremist organizations have learned to exploit this structure with surgical precision. They construct victimhood narratives that reframe individual dissatisfaction as collective identity threat: “You’re not just personally struggling — your people are under attack.” This framing converts private grievance into group-level rage, and group-level rage into identity fusion. The platform’s engagement algorithm amplifies the most emotionally charged versions of this narrative, creating a feedback loop between platform design and extremist strategy.
Author’s Note
“I want to be careful here not to suggest that platforms are solely responsible. Individual agency, pre-existing beliefs, mental health, and social context all matter. But what the research increasingly shows is that platform design creates the conditions under which existing vulnerabilities are exploited most efficiently. The question isn’t whether people have agency. It’s whether the environment they’re exercising that agency in has been designed to lead them somewhere they wouldn’t otherwise go.”
Next time you feel a surge of outrage while scrolling, pause. Ask yourself: am I angry because this is genuinely important, or because the platform has optimized my feed to serve me outrage-inducing content? The fact that you can’t easily tell the difference is exactly the problem.
🎯 What Actually Works — And What Doesn’t
If the problem is experience design rather than content, then the solutions need to target design rather than content. Here’s what the evidence says — and I’ll be blunt about what’s working and what’s theatre.
What the evidence supports:
- Algorithmic transparency requirements (EU DSA model)
- Friction-based design: adding delays, diverse source suggestions
- EXIT-style mentorship programs using former extremists
- System-level risk assessments of recommendation engines

What mostly amounts to theatre:
- Content-only moderation (whack-a-mole problem)
- Counter-narrative campaigns (often don’t reach target audience)
- Platform self-regulation (conflict of interest with engagement metrics)
- Individual-level “digital literacy” without system-level change
The most promising approaches share a common thread: they target the interaction pattern, not just the content. The EU’s Digital Services Act and the UK’s Online Safety Act represent a paradigm shift — requiring platforms to assess systemic risks in their recommendation algorithms, not just remove individual pieces of harmful content.
The regulatory framework must evolve from content-based to system-design-based. We don’t regulate cars by banning specific crash trajectories — we require seatbelts, airbags, and structural safety standards. Similarly, we need to regulate the structural properties of recommendation systems, not just the content they surface.
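What would a “seatbelt” for a recommendation engine actually look like? The sketch below is hypothetical: the Item fields, the thresholds, and the very idea of a per-slate intensity cap are my own illustrative assumptions, not anything the DSA or the Online Safety Act prescribes. It takes an engagement-ranked candidate list and enforces two structural properties, bounded drift from the user’s current baseline and a minimum share of out-of-cluster sources, without evaluating the content of any individual item.

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    intensity: float   # 0..1 emotional-arousal estimate (assumed to exist)
    cluster: str       # coarse topic/community label (assumed to exist)

def constrained_slate(ranked, user_baseline, slate_size=10,
                      max_step=0.15, min_diverse_share=0.3):
    """Apply two structural rules to an engagement-ranked candidate list:
    (1) no item far above the user's current baseline intensity,
    (2) at least `min_diverse_share` of the slate from other clusters."""
    dominant = max({i.cluster for i in ranked},
                   key=lambda c: sum(i.cluster == c for i in ranked))
    within = [i for i in ranked if i.intensity <= user_baseline + max_step]
    diverse = [i for i in within if i.cluster != dominant]
    core = [i for i in within if i.cluster == dominant]

    n_diverse = max(1, int(slate_size * min_diverse_share))
    slate = diverse[:n_diverse] + core[: slate_size - n_diverse]
    return slate[:slate_size]

# Example: a ranked list dominated by one progressively more intense cluster.
ranked = [Item(f"v{i}", intensity=0.2 + 0.02 * i,
               cluster="politics_a" if i % 3 else "local_news")
          for i in range(30)]
print([i.item_id for i in constrained_slate(ranked, user_baseline=0.3)])
```

The point is not this particular rule set; it is that properties like max_step and min_diverse_share are auditable at the system level, which is exactly what a risk-assessment regime can measure regardless of what any single piece of content says.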
At the individual level, the concept of “cognitive hygiene” offers a partial defense. This means deliberately monitoring your own information consumption patterns: intentionally seeking out opposing viewpoints, noting when your feed seems to be narrowing, and periodically resetting your algorithmic profile. It’s not a solution — individual hygiene can’t fix a contaminated system — but it’s a meaningful practice while we wait for systemic changes.
For one week, try this: after every social media session, spend 5 minutes deliberately searching for a perspective that contradicts what you just consumed. Notice how different the content feels — not just intellectually, but emotionally. That emotional dissonance is your confirmation bias making itself visible.

📚 References & Further Reading
- Moghaddam, F. M., “The Staircase to Terrorism,” American Psychologist, 60(2), 2005
  → The foundational model of gradual radicalization. Essential for understanding how digital environments compress this process.
- Miller-Idriss, C., Hate in the Homeland, Princeton University Press, 2022
  → How the far right uses memes, fashion, and subcultural codes to lower entry barriers while building rigid internal identity boundaries.
- Sageman, M., Leaderless Jihad, University of Pennsylvania Press, 2008
  → Demonstrated the critical role of social networks in radicalization — a role now performed by platform algorithms.
- RAND Europe, Radicalisation in the Digital Era, RAND Corporation, 2013
  → Established that “self-radicalization” is a misnomer — online radicalization is always a socially mediated process.
- Bisbee, J. et al., “Evaluating Echo Chambers and Radicalization Pathways on YouTube,” 2024
  → The 1-in-100,000 finding. Small probability, catastrophic potential — a number that should shape policy thinking.
What unsettles me most about this research isn’t the extremists — it’s the realization that the same psychological mechanisms that drive radicalization also drive my own daily media consumption. Identity fusion, emotional engagement, confirmation bias — these aren’t pathologies exclusive to extremists. They’re features of human psychology that platforms have learned to exploit at scale. The difference between a radicalized individual and the rest of us may be less about who we are and more about which algorithmic pathway we happened to stumble onto. That thought should make all of us uncomfortable — and maybe that discomfort is exactly the right starting point.
“The most dangerous algorithm isn’t one designed to radicalize. It’s one designed to engage — because in the attention economy, engagement and radicalization share the same neural pathway.”
— On the Convergence of Commerce and Extremism
Have you noticed algorithmic drift in your own content consumption? What strategies do you use to maintain information diversity? Share your experience in the comments below.