Engagement at Any Cost: The Algorithmic Path to Radicalization
When Systems Create Extremes
In the United States, political violence is increasingly common. Mass shootings, targeted attacks, and hate crimes punctuate the news cycle with alarming regularity. After each event, the conversation quickly shifts to blame. If the attacker is motivated by anti-immigrant rhetoric, progressives blame conservatives. If the attacker targets government agencies or law enforcement, conservatives blame progressives. This cycle of blame is predictable and emotionally satisfying for partisans, but it is also shallow. It obscures a deeper, systemic issue that cuts across ideology.
What if radicalization in America is not primarily orchestrated by shadowy groups or foreign agents, but is instead an unintended consequence of the very technologies that mediate our public sphere? What if the most effective radicalizer of our time is not a fiery demagogue or a hostile intelligence agency, but an algorithm designed to maximize engagement?
This is not a conspiracy theory. It is a hypothesis grounded in observation of how technology interacts with human psychology. The theory is simple: recommendation algorithms exploit deep-seated cognitive tendencies, nudging users toward more extreme beliefs because extremity generates stronger engagement. In this sense, radicalization may be an emergent property of the attention economy rather than the product of deliberate manipulation.
The Psychology of Attention and Engagement
To understand how this works, we must begin with psychology. Human attention is not evenly distributed across all types of information. Certain stimuli are more compelling, more memorable, and more likely to provoke action. Three well-established psychological principles are particularly relevant.
Negativity bias: Research shows that humans react more strongly to negative information than to positive information. Bad news spreads faster and sticks longer. In evolutionary terms, this bias made sense. For our ancestors, failing to notice a threat could be fatal, while ignoring a piece of good news carried little risk. Online, negativity bias means that alarming, threatening, or anger-inducing content is more likely to capture attention than calm, balanced reporting.
Novelty seeking: The human brain is wired to seek out new information. Novelty triggers dopamine release, creating a small reward for encountering something surprising. This is why people refresh news feeds, scroll endlessly, or chase conspiracy theories. Each new claim, even if dubious, satisfies the brain’s craving for stimulation. This craving makes users vulnerable to escalation. If yesterday’s content no longer feels surprising, the algorithm can offer something slightly more extreme to reignite the sense of novelty.
Emotional contagion: Emotions are socially contagious. Studies show that anger, outrage, and fear spread more quickly in groups than calm emotions. In online environments, where emotional cues are stripped of nuance and amplified through text and video, contagious anger can escalate rapidly. A community discussing grievances can spiral into a community demanding retribution.
Together, these psychological tendencies mean that people are primed to engage more deeply with extreme, emotionally charged, and novel content. None of these biases are new. What is new is the way algorithms systematically exploit them at scale.
The Algorithmic Logic: Optimization Without Ethics
Modern recommendation systems, such as those used by YouTube, TikTok, and Facebook, are not designed with civic health in mind. They are designed to maximize engagement, which typically means maximizing clicks, likes, shares, comments, and, above all, time spent on the platform.
The logic is simple. Every time you interact with content, the system learns something about you. It tests different recommendations, observes your responses, and refines its predictions. Over time, the system becomes remarkably good at keeping you engaged. This is reinforcement learning in action.
But engagement is a blunt metric. It does not distinguish between content that informs and content that inflames. It does not reward balance, accuracy, or civility. It rewards whatever keeps your eyes on the screen. And because emotionally intense, sensational, or extreme content tends to generate stronger responses, the system drifts toward amplifying extremity.
The result is a feedback loop. You click on a provocative post. The algorithm interprets that as a signal of interest. It offers something slightly more sensational next. You engage again, reinforcing the signal. Gradually, your information environment shifts from moderate to polarizing to extreme. What begins as curiosity can evolve into conviction, and sometimes into radicalization.
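To make the feedback loop concrete, consider a minimal sketch of an engagement-only recommender. Everything in it is a hypothetical assumption for illustration, not any platform's actual system: the intensity buckets, the simulated user's response rates, and the epsilon-greedy learning rule are all invented. The point is only that when engagement is the sole objective and more intense content reliably engages more, the learner converges on serving the most intense content.

```python
# Hypothetical sketch (not any platform's real system): an epsilon-greedy
# recommender that optimizes nothing but engagement. Content is bucketed by
# an assumed "intensity" score; the simulated user engages more often with
# more intense content, a stand-in for negativity bias.
import random

BUCKETS = 5                                    # 0 = calm explainer ... 4 = most extreme
engage_prob = [0.20, 0.30, 0.45, 0.60, 0.75]   # assumed user response rate per bucket

est = [0.0] * BUCKETS    # recommender's running estimate of engagement per bucket
shown = [0] * BUCKETS    # how often each bucket has been recommended

def recommend(eps=0.1):
    """Pick the bucket with the highest estimated engagement,
    with occasional random exploration."""
    if random.random() < eps:
        return random.randrange(BUCKETS)
    return max(range(BUCKETS), key=lambda b: est[b])

for step in range(5000):
    b = recommend()
    engaged = 1.0 if random.random() < engage_prob[b] else 0.0   # did the user click/watch?
    shown[b] += 1
    est[b] += (engaged - est[b]) / shown[b]                      # running-average update

# After enough interactions, the most-shown bucket is the most extreme one,
# purely because the objective is engagement.
print("impressions per intensity bucket:", shown)
```

Nothing in this toy system "wants" extremity. It simply discovers, through trial and error, which kind of content keeps the simulated user responding, and then serves more of it.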
From Curiosity to Rabbit Hole
Consider a user who searches for content about vaccines. The first recommendation may be a mainstream explainer. If they watch the full video, the system notes that interest and suggests more health content. But if the user clicks on a video claiming hidden dangers, engagement metrics spike. Viewers tend to watch conspiracy-laden videos to the end and often share them with friends. The algorithm notices this pattern and rewards it with higher visibility. Soon the user is recommended content about broader medical misinformation, government cover-ups, or global conspiracies. Each step seems incremental, but the overall trajectory leads down a rabbit hole.
This dynamic has been documented in studies of YouTube’s recommendation engine. Similar concerns have been raised about TikTok, where the “For You” page quickly adapts to initial interests and can escalate them toward harmful extremes. The system is not intentionally radicalizing users. It is simply following its optimization goal: maximize engagement at any cost.
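A toy model can illustrate the escalating trajectory itself. In the hypothetical sketch below, the user's baseline shifts toward whatever they last engaged with, and the recommender always serves the item with the highest predicted engagement, which in this model is content slightly more intense than the user's current baseline. Every number and function here is an invented assumption, not an empirical finding.

```python
# Hypothetical sketch of a "rabbit hole" session: the user's taste adapts toward
# whatever they last consumed, and the recommender always offers the item with
# the highest predicted engagement. All values are invented for illustration.
import random

def predicted_engagement(user_level: float, item_level: float) -> float:
    # Toy model: users respond best to items slightly more intense than what
    # they are used to (novelty), and poorly to items far from their baseline.
    return max(0.0, 1.0 - 2.0 * abs(item_level - (user_level + 0.1)))

user_level = 0.1                          # starts near mainstream content
catalog = [i / 20 for i in range(21)]     # item intensities 0.0, 0.05, ..., 1.0

history = []
for step in range(30):
    item = max(catalog, key=lambda x: predicted_engagement(user_level, x))
    if random.random() < predicted_engagement(user_level, item):
        # Engagement pulls the user's baseline toward the item they consumed.
        user_level += 0.5 * (item - user_level)
    history.append(round(item, 2))

print(history)   # recommended intensities drift steadily upward over the session
```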
The Social Dimension: Community and Identity
Exposure alone does not guarantee radicalization. What makes the process more powerful is the social reinforcement that follows. Online, users find communities that validate their emerging beliefs. Here, psychological dynamics amplify the effect.
Group polarization: When people who share similar views discuss an issue, their opinions tend to become more extreme. A forum of skeptics can evolve into a community of deniers.
Identity reinforcement: Beliefs become tied to group identity. Disagreement is no longer just a matter of opinion. It becomes a threat to belonging. The social cost of leaving such a group can be high, which makes beliefs more entrenched.
Confirmation bias: People actively seek information that confirms their views. Algorithms happily oblige, creating echo chambers where dissenting views are minimized.
The combination of algorithmic curation and social validation creates powerful echo chambers. Once a user is embedded in such a space, they are less likely to encounter contradictory information and less likely to leave voluntarily.
From Online Radicalization to Offline Violence
The tragic consequence of this cycle is that online radicalization sometimes spills into offline violence. The United States has seen a disturbing rise in politically or ideologically motivated attacks over the past decade.
It is important to be clear about the data. Studies by the Anti-Defamation League and the Center for Strategic and International Studies, along with repeated warnings from the FBI and the Department of Homeland Security, show that the overwhelming majority of extremist killings in the past decade have been carried out by right-wing extremists, especially white supremacists. High-profile cases include the Buffalo supermarket shooting in 2022 and the El Paso Walmart shooting in 2019. Both attackers were steeped in online radical content, including manifestos and forums where conspiracy theories and racial hatred were normalized.
Other ideological streams have also produced violence. In 2025, an attack on an ICE detention facility in Texas reflected anti-government and anti-ICE sentiment. That same year, a man shouting “Free Palestine” murdered Israeli embassy staff at a Jewish museum in Washington, D.C. In Palm Springs, a fertility clinic was bombed by an assailant motivated by fringe antinatalist ideology. In September 2025, a Mormon church in Michigan was attacked with gunfire and arson, killing several and injuring many.
These cases differ in ideology, but they share a common thread: individuals were exposed to extreme narratives, validated by online communities, and moved to violent action. The exact path varies, but the psychological and technological dynamics are similar.
Why Blame Misses the Mark
When such violence occurs, the political instinct is to blame the other side. Progressives point to right-wing extremism. Conservatives highlight left-wing violence. Each side uses tragedies to score points against opponents.
This framing is shortsighted. It treats radicalization as the fault of one ideology rather than recognizing that the system itself is optimized for extremity. Blame obscures the fact that we are all exposed to the same polluted information ecosystem. To focus solely on ideological blame is to fight over symptoms while ignoring the underlying disease.
A Theory, Not a Verdict
It is important to stress that what is presented here is a theory based on observation, not a definitive causal model. Radicalization is complex. Personal grievances, family history, offline networks, and mental health all play roles. Algorithms are not the only factor. But they appear to be a powerful accelerant.
A helpful analogy is smoking. Not everyone who smokes develops lung cancer, but smoking dramatically increases the risk. Similarly, not everyone exposed to radical content online becomes radicalized, but exposure increases the risk at a societal level.
Why It Is Hard to Fix
Even when technology companies acknowledge these risks, solutions are difficult.
Profit incentives: Engagement drives advertising revenue, so platforms are built to maximize it.
Measurement: Civic health, truth, and empathy are not easily quantified, which makes them hard to optimize for.
User behavior: People say they want balance and civility, but their clicks and shares reward outrage and spectacle.
Politicization: Any attempt to intervene becomes a political fight. Efforts to down-rank divisive content are quickly labeled censorship.
These constraints make genuine reform hard to achieve within the current economic and political system.
Possible Paths Forward
If radicalization is an emergent property of engagement-driven platforms, then solutions must be systemic. A few possibilities include:
Changing optimization goals: Platforms could prioritize long-term user well-being over short-term engagement, even if it means lower profits.
Diversifying recommendations: Algorithms could be designed to ensure exposure to a broader range of perspectives rather than reinforcing narrow silos (see the sketch after this list).
Introducing friction: Adding speed bumps, such as confirmation prompts before resharing, could slow viral outrage.
Increasing transparency: Platforms could open their algorithms to independent research and auditing, enabling public scrutiny.
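As one illustration of the diversification idea, the sketch below greedily re-ranks a candidate feed, penalizing each item by its similarity to items already selected, in the spirit of maximal marginal relevance. The Item fields, the Jaccard similarity over topic tags, and the diversity_weight parameter are all placeholder assumptions, not a real platform's API.

```python
# Hedged sketch: greedy re-ranking that trades predicted engagement against
# similarity to already-selected items. Scores and topic tags are placeholders.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    score: float          # predicted engagement (assumed to be given)
    topics: set[str]      # crude stand-in for a topic embedding

def similarity(a: Item, b: Item) -> float:
    """Jaccard overlap of topic tags, as a toy similarity measure."""
    if not a.topics or not b.topics:
        return 0.0
    return len(a.topics & b.topics) / len(a.topics | b.topics)

def rerank(candidates: list[Item], k: int, diversity_weight: float = 0.5) -> list[Item]:
    """Greedily pick k items, penalizing each candidate by its maximum
    similarity to anything already chosen."""
    chosen: list[Item] = []
    pool = list(candidates)
    while pool and len(chosen) < k:
        def adjusted(item: Item) -> float:
            penalty = max((similarity(item, c) for c in chosen), default=0.0)
            return item.score - diversity_weight * penalty
        best = max(pool, key=adjusted)
        chosen.append(best)
        pool.remove(best)
    return chosen

feed = rerank([
    Item("Outrage clip A", 0.90, {"politics", "outrage"}),
    Item("Outrage clip B", 0.88, {"politics", "outrage"}),
    Item("Local news explainer", 0.60, {"politics", "local"}),
    Item("Science explainer", 0.55, {"science"}),
], k=3)
print([i.title for i in feed])   # the near-duplicate outrage clip is pushed down
```

In this toy run, the second outrage clip drops out of the top three even though its raw engagement score is second highest; a single weight controls how much predicted engagement is traded away for variety.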
Each of these approaches has trade-offs. None are easy. But they shift the focus from partisan blame to systemic design.
Simply Put: Shared Responsibility in a Polarized Age
The rise of political violence in America is real and alarming. The data show that right-wing extremism has been more lethal overall, but violence from other ideological sources is also present. The deeper truth is that our technologies are optimized to divide us. Outrage, novelty, and extremity are rewarded because they maximize engagement.
If we continue to treat radicalization as solely the fault of “the other side,” we will miss the real driver: systems that exploit human psychology to fuel division. Recognizing this does not absolve individuals of responsibility, nor does it erase the real differences between ideologies. But it reframes the problem as one we all share.
The most dangerous radicalizer of our time may not be a charismatic leader or a foreign adversary. It may be an algorithm, optimized to keep us watching, sharing, and outraged, without any regard for the consequences. If that is true, then the only way forward is not blame, but cooperation. We must build systems that reward empathy and accuracy rather than outrage and extremity. Otherwise, we risk a future where the next act of violence is always just one click away.