The Business & Technology Network
Helping Business Interpret and Use Technology

The Reddit experiment that exposed AI’s quiet power to persuade

Tags: google media
DATE POSTED: April 29, 2025

Large language models are not just changing how people gather information. They are beginning to quietly shape opinions in real-world settings without anyone even realizing it. That unsettling reality hit Reddit’s r/ChangeMyView community earlier this year, when researchers ran a covert experiment to test whether AI could successfully persuade unsuspecting users.

A field experiment hidden in plain sight

Researchers from the University of Zurich conducted a months-long field study within r/ChangeMyView, a subreddit with nearly four million members known for civil debates and shifting perspectives. Their goal was to assess the real-world persuasive capabilities of AI-powered comments outside of controlled laboratory settings. To do this, they created semi-automated accounts driven by large language models (LLMs) that engaged directly with users’ posts. Crucially, they did not disclose that the responses were generated by AI.

The experiment was pre-registered and approved by the University of Zurich’s Ethics Committee, but it bypassed community consent. Over four months, the researchers posted responses to more than a thousand Reddit discussions, ultimately analyzing nearly five hundred conversations after filtering deletions. Their findings provide one of the first large-scale glimpses into how AI can influence opinions in natural, unpredictable environments.

  • An archive created by 404 Media preserves some of the original comments, though many others have since been deleted.

To measure effectiveness, the study assigned each AI response to one of three treatment categories. In the “Generic” condition, the LLMs responded based only on the post’s title and body. In the “Personalization” condition, the AI also received inferred personal attributes of the original poster, including age, gender, political orientation, and ethnicity, gleaned from their Reddit history using another AI model. The third group, “Community Aligned,” used a model fine-tuned on comments that had previously earned approval in the subreddit.
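The three conditions can be thought of as three ways of assembling the input to the language model. The sketch below is purely illustrative; the names (`Treatment`, `Post`, `build_prompt`) and the prompt wording are assumptions for clarity, not the study's actual code, and the "Community Aligned" condition differs mainly in which model is used rather than in the prompt itself.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Treatment(Enum):
    """The three conditions described in the study (names are illustrative)."""
    GENERIC = auto()            # prompt built from the post's title and body only
    PERSONALIZATION = auto()    # also receives inferred attributes of the poster
    COMMUNITY_ALIGNED = auto()  # plain prompt, but served by a fine-tuned model


@dataclass
class Post:
    title: str
    body: str


def build_prompt(post: Post, treatment: Treatment,
                 attributes: Optional[dict] = None) -> str:
    """Assemble the text sent to the model for one reply.

    `attributes` (e.g. age, gender, political orientation) is used only
    in the PERSONALIZATION condition; in the study these were inferred
    from the poster's Reddit history by a separate model.
    """
    prompt = (
        "Write a persuasive reply to this post.\n"
        f"Title: {post.title}\n"
        f"Body: {post.body}"
    )
    if treatment is Treatment.PERSONALIZATION and attributes:
        profile = ", ".join(f"{k}: {v}" for k, v in attributes.items())
        prompt += f"\nInferred poster profile: {profile}"
    return prompt
```

The design point is that personalization changes only the prompt's context, yet, per the study's results, it was the single largest driver of persuasive success.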

The results were striking. All three AI-driven approaches outperformed human commenters, achieving persuasion rates three to six times higher than the human baseline. Personalization proved the most potent, with a success rate of 18 percent, placing it in the 99th percentile of all users, including seasoned Reddit debaters. Even generic AI responses matched or exceeded most human efforts. Community-aligned responses, although somewhat less effective, still significantly outperformed average users.

Importantly, users engaging with these AI-generated comments had no idea they were conversing with machines. Throughout the intervention, no one in the r/ChangeMyView community flagged the suspicious activity, highlighting just how seamlessly AI can blend into online debates when properly trained and personalized.

When moderators of r/ChangeMyView uncovered the experiment, the reaction was swift and furious. In a public announcement, moderators condemned the study as “psychological manipulation” and “an unwelcome intrusion,” citing multiple violations of subreddit rules, including undisclosed AI use and banned automated behavior. Reddit’s leadership echoed the outrage, with Chief Legal Officer Ben Lee calling the researchers’ actions “deeply wrong on both a moral and legal level.”

Reddit banned all accounts linked to the University of Zurich team and began formal legal proceedings. Meanwhile, the researchers defended their approach, arguing that the potential benefits of understanding AI persuasion outweighed the risks. They insisted their intervention was low-risk, carefully reviewed, and could help preempt malicious uses of AI in the future. However, critics, including moderators and many Reddit users, were not convinced. They pointed out that previous research has demonstrated similar insights without resorting to unconsented experiments on real individuals.

Trust in online communities depends on the expectation that conversations are between real people with authentic perspectives. Undisclosed AI interventions threaten that trust at a fundamental level. Even experiments framed with noble intentions can erode the boundaries between genuine discourse and engineered manipulation. Reddit’s response hints at how seriously platforms are beginning to take these challenges.

The University of Zurich team argued that their work would help safeguard communities against future AI threats. Instead, they may have demonstrated how vulnerable public discourse already is. As AI grows more sophisticated and personalized, the question is no longer whether it can influence human thought — it is how societies will adapt once they realize it already does.
