Among the most concerning findings, the research identified posts supporting undemocratic practices as a key driver of polarization. When content expressing anti-democratic attitudes was amplified in users’ feeds, political animosity increased substantially, suggesting that algorithmic amplification of such content threatens not just political civility but democratic institutions themselves.
The study examined X during the 2024 presidential election, identifying posts that expressed support for undemocratic practices. This included content that advocated bypassing democratic norms, questioned election legitimacy without evidence, or supported authoritarian approaches to political problems. Such posts represented a subset of the divisive content researchers manipulated in users’ feeds.
More than 1,000 participants received feeds containing either slightly more or slightly less of this anti-democratic content. Those exposed to more of it showed an increase in polarization equivalent to three years of natural change, all within a single week. This rapid effect suggests that algorithmic amplification could accelerate not just partisan division but also the erosion of democratic commitment.
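The experimental manipulation described above can be sketched as a simple reweighting of a ranked feed. This is a minimal illustration, not the study's actual code: the function names, scores, and classifier labels are all hypothetical, and it assumes flagged posts have already been identified by some upstream classifier.

```python
def adjust_feed(posts, flagged_ids, multiplier):
    """Toy sketch: up- or down-weight flagged posts in a ranked feed.

    posts: list of (post_id, base_score) pairs.
    flagged_ids: posts labeled anti-democratic by a hypothetical classifier.
    multiplier > 1 increases exposure; < 1 decreases it.
    """
    rescored = [
        (pid, score * (multiplier if pid in flagged_ids else 1.0))
        for pid, score in posts
    ]
    # Higher adjusted score -> earlier in the feed.
    return [pid for pid, _ in sorted(rescored, key=lambda p: -p[1])]

feed = [("a", 0.9), ("b", 0.8), ("c", 0.7)]
# Doubling a flagged post's score moves it up the feed.
print(adjust_feed(feed, {"c"}, multiplier=2.0))  # ['c', 'a', 'b']
```

A multiplier slightly above or below 1.0 would correspond to the "slightly more or less" exposure conditions the study compared.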
The finding has implications beyond individual attitudes. When substantial portions of the population lose faith in democratic processes and institutions, those institutions become more fragile. If millions of citizens are regularly exposed to content supporting undemocratic practices through algorithmic amplification, the cumulative effect could significantly undermine democratic stability.
Platforms face crucial choices about how to handle anti-democratic content. Current engagement-based algorithms tend to amplify whatever provokes strong reactions, and challenges to democratic norms often generate intense engagement. But should platforms accept responsibility for the potential democratic consequences of amplifying such content, even if doing so might reduce engagement and revenue?
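The dynamic described above, in which any strong reaction boosts a post regardless of its valence, can be sketched with a toy engagement score. The field names and weights here are illustrative assumptions, not any platform's actual ranking formula:

```python
def engagement_score(post):
    """Toy engagement-based ranker: strong reactions, positive or
    negative, all raise a post's score. Weights are hypothetical."""
    return (post["likes"]
            + 2 * post["reshares"]
            + 3 * post["heated_replies"])  # outrage counts as engagement

posts = [
    {"id": "calm", "likes": 50, "reshares": 5, "heated_replies": 0},
    {"id": "divisive", "likes": 20, "reshares": 10, "heated_replies": 30},
]
ranked = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in ranked])  # ['divisive', 'calm']
```

Because angry replies and quote-dwell feed the same score as approval, a post that challenges democratic norms can outrank a less provocative one even with fewer likes, which is the tension the question in the paragraph above points at.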
Support for Undemocratic Practices Spreads Through Algorithmic Amplification