Garbage in, garbage out
Democracy runs on informed citizens. AI bots are working very hard to make sure you aren't one.
The American constitutional order rests on a premise so foundational it rarely gets articulated: Citizens must be capable of making informed decisions (or at least, enough of us must be). While it wouldn’t be realistic to expect any of us to be perfectly informed or all-knowing, we have — until now — been operating in a public square where debate, however contentious, is conducted between real people with real convictions.
That premise is breaking.
Researchers at Indiana University, NYU, and Norway’s BI Business School have documented what many suspected but few could prove: coordinated swarms of AI-generated accounts, posing as ordinary Americans, can measurably shift the political beliefs of real people on social media. I wish I could tell you this was happening only at the margins, in edge cases. In controlled experiments, though, these artificial agents — thousands of them — held coherent conversations, adapted their arguments, targeted persuadable users, and moved opinion in statistically significant ways.
This is not a partisan issue. Treating it as one is a failure of seriousness.
Foreign adversaries — Russia, China, and Iran to name a few — have deployed bot networks against American political discourse for over a decade. What has changed is the technology. The bots of 2016 were typically crude, repetitive, and detectable. The agents being built now carry on extended conversations without breaking character. They learn, pivot, and are — to a degree that should alarm anyone who cares about national sovereignty — indistinguishable from your neighbor.
Informed Consent Is the Whole Game
Strip away the partisan noise and the constitutional logic is simple. Democratic self-governance requires — at its best — that citizens form genuine preferences, debate them honestly, and express them through free and fair elections. Manufactured consensus is the negation of that entire enterprise. When you cannot distinguish authentic public sentiment from a coordinated artificial campaign, accountability collapses. The signal drowns in noise.
This is not a “tech problem.” It is a sovereignty problem, with a constitutional dimension that progressives, moderates, and conservatives alike ought to recognize without much prompting.
The founders understood that a republic’s greatest vulnerability was not a foreign army — it was the corruption of civic deliberation from within. They built institutions, a free press, separation of powers, and independent courts — among other things — to prevent any single actor from seizing control of the information environment. AI bot swarms represent a new mechanism for precisely that kind of seizure, now available not just to governments but to any well-funded political operation.
What the Research Shows
The group sounding the loudest alarm is the furthest thing from a collection of partisan activists: It is an interdisciplinary team of computer scientists, AI researchers, cybersecurity experts, psychologists, and journalists. Their conclusion: We are moving from an era of individual bad actors posting disinformation to an era of coordinated artificial swarms reshaping public opinion at industrial scale.
Wired reports that, by expert consensus, these systems are likely to become a decisive factor in the 2028 presidential race. The timeline is shorter than it sounds.
The problem is also compounding: The researchers tracking this threat have lost access to the data they need. X, formerly Twitter, shut down the platform API that allowed independent researchers to monitor bot activity. Meta restricts similar access. We are fighting an adversary whose movements we can no longer track, at the precise moment the threat is accelerating. The platforms hosting this manipulation have made themselves opaque to oversight.
Action Is Happening — Just Not Enough of It
Some states are responding. Massachusetts passed two bills this month to regulate AI use in elections. Vermont now requires disclosure labels on AI-manipulated political content in campaigns, though research suggests that labels alone do little if voters cannot distinguish authentic from artificial in the first place.
But the federal response has been close to nonexistent.
This is the gap that deserves attention. Disclosure requirements are a start. Restoring independent researcher access to platform data is not a radical demand; it is the minimum condition for understanding a threat that is, by expert consensus, growing faster than any existing countermeasure.
What Voters Should Demand
If you believe in national security, foreign manipulation of American political opinion is an attack. If you believe in free markets, manufactured consensus distorts consumer and civic behavior alike, corrupting the information that markets and democracies both require to function. If you believe in the Constitution, you believe in a real public square — not a simulated one.
Supporting transparency requirements for AI-generated political content is not censorship. Demanding that platforms restore independent research access is not government overreach. These are the baseline conditions for a legitimate information environment, the kind the founders assumed would exist and built an entire system of government around.
Our strategic litigation partner Campaign Legal Center has been fighting for exactly this kind of disclosure requirement — demanding that AI-generated political content be labeled before it reaches voters. The question is whether the political will exists to follow the evidence and address the threat.
What the Founders Actually Feared
John Adams wrote that the Constitution was designed “only for a moral and religious people.” The broader point was that republican government requires citizens capable of self-governance, citizens who can reason, debate, and form honest judgments. The institutions were built for humans.
AI bot swarms are not a metaphor. They are literal armies of artificial agents designed to impersonate human deliberation and subvert it from the inside. The founders clashed with each other, in letters and pamphlets and convention halls, with a combativeness that modern politics has rarely matched. What they never contemplated was that one side of any debate could be manufactured wholesale, by a server farm, at a cost that keeps dropping.
We should fight to keep the public square real. Not for political convenience, and not for partisan advantage, but for the sake of a republic that functions as something more than a stage set.
The machinery is real. The stakes are real. The only thing we cannot afford is to pretend otherwise.


