AI “swarms” could fake public consensus and quietly distort democracy, Science Policy Forum warns

January 22, 2026

A new Science Policy Forum article warns that the next generation of influence operations may not look like obvious “copy-paste bots,” but like coordinated communities: fleets of AI-driven personas that can adapt in real time, infiltrate groups, and manufacture the appearance of public agreement at scale. Writing in this week’s issue, the authors describe how the fusion of large language models (LLMs) with multi-agent systems could enable “malicious AI swarms” that imitate authentic social dynamics and, in doing so, threaten democratic discourse by counterfeiting social proof and consensus.

The article argues that the central risk is not only false content, but synthetic consensus: the illusion that “everyone is saying this,” which can influence beliefs and norms even when individual claims are contested. This risk compounds existing vulnerabilities in online information ecosystems shaped by engagement-driven platform incentives, fragmented audiences, and declining trust. 

A malicious AI swarm is a network of AI-controlled agents that can hold persistent identities and memory; coordinate toward shared objectives while varying tone and content; adapt to engagement and human responses; operate with minimal oversight; and deploy across platforms. Such systems can generate diverse, context-aware content that still moves in lockstep, making them far more difficult to detect than traditional botnets.

"In our research during COVID-19, we observed misinformation race across borders as quickly as the virus itself. AI swarms capable of manufacturing synthetic consensus could push this threat into an even more dangerous realm.”, says Prof. Meeyoung Cha, a scientific director at the Max Planck Institute for Security and Privacy in Bochum.

Instead of moderating posts one by one, the authors argue for defenses that track coordinated behavior and content provenance: detecting statistically unlikely coordination (backed by transparent audits), stress-testing social media platforms through simulations, offering privacy-preserving verification options, and sharing evidence through a distributed AI Influence Observatory. They also call for weakening the incentives behind such campaigns, for instance by limiting the monetization of inauthentic engagement and increasing accountability.
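The article does not prescribe a specific algorithm for spotting "statistically unlikely coordination." As a rough illustration of the idea, the sketch below flags pairs of accounts whose posts repeatedly land in the same short time window with near-duplicate wording; the account names, time bucket, and similarity thresholds are illustrative assumptions, not the authors' method.

```python
from collections import defaultdict
from difflib import SequenceMatcher
from itertools import combinations

# Illustrative input: (account_id, unix_timestamp, text) tuples.
# In practice this data would come from platform logs or researcher access programs.
POSTS = [
    ("acct_a", 1_700_000_000, "The new policy is a disaster for small businesses"),
    ("acct_b", 1_700_000_030, "This new policy is a disaster for small business owners"),
    ("acct_c", 1_700_050_000, "Enjoying the sunshine today"),
]

TIME_BUCKET = 300        # seconds; posts in the same 5-minute window count as co-timed
MIN_TEXT_SIM = 0.7       # illustrative threshold for near-duplicate wording
MIN_SHARED_HITS = 1      # how many co-timed, similar posts before a pair is flagged


def coordinated_pairs(posts):
    """Flag account pairs whose posts repeatedly fall in the same time window
    with highly similar wording -- a crude proxy for coordinated behavior."""
    by_bucket = defaultdict(list)
    for account, ts, text in posts:
        by_bucket[ts // TIME_BUCKET].append((account, text))

    pair_hits = defaultdict(int)
    for bucket_posts in by_bucket.values():
        for (acc1, txt1), (acc2, txt2) in combinations(bucket_posts, 2):
            if acc1 == acc2:
                continue
            similarity = SequenceMatcher(None, txt1.lower(), txt2.lower()).ratio()
            if similarity >= MIN_TEXT_SIM:
                pair_hits[tuple(sorted((acc1, acc2)))] += 1

    return {pair: hits for pair, hits in pair_hits.items() if hits >= MIN_SHARED_HITS}


if __name__ == "__main__":
    for (a, b), hits in coordinated_pairs(POSTS).items():
        print(f"Possible coordination between {a} and {b}: {hits} co-timed, near-duplicate post(s)")
```

A real deployment would replace these fixed thresholds with statistical baselines that estimate how often such overlaps occur among genuinely independent users, in line with the article's call for transparent, auditable detection.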
