Researchers deploy AI shitpoasters on Reddit, redditors lose their minds and
Date: April 30th, 2025 7:54 AM Author: scholarship
ban AI and threaten to sue
https://www.nbcnews.com/news/amp/rcna203597
Researchers secretly infiltrated a popular Reddit forum with AI bots, causing outrage
Reddit is considering legal action after a group of researchers had AI bots pose as real people without users’ knowledge. The researchers say they will no longer publish their study.
In recent months, a group of researchers conducted a secret experiment on Reddit to see how artificial intelligence can be used to influence human opinion. Now, Reddit says it is considering legal action.
Researchers from the University of Zurich deployed a slew of AI bots posing as real people and engaging with users without their knowledge or consent to try to change minds on the popular Reddit forum r/changemyview, where posts often ask users to challenge their views on contentious topics.
The bots, whose accounts are now banned, left more than 1,000 comments throughout the subreddit, taking on identities such as a rape victim, a Black man who opposes the Black Lives Matter movement and a trauma counselor who specializes in abuse.
One AI bot, under the username u/catbaLoom213, left a lengthy comment arguing against the opinion that AI should never interact with humans on social media, according to a full copy of the bots’ comments compiled by the subreddit’s moderators.
“AI in social spaces isn’t just about impersonation — it’s about augmenting human connection,” the bot wrote while impersonating a real user.
Another bot, u/genevievestrome, criticized the Black Lives Matter movement for being led by “NOT black people.”
“I say this as a Black Man, there are few better topics for a victim game / deflection game than being a black person,” the bot wrote.
Other bots gave themselves identities ranging from “a Roman Catholic who is gay” and a nonbinary person who feels “both trans and cis at the same time” to a Hispanic man who feels frustration “when people call me a white boy.”
While the results of the experiment are unclear, the project is the latest incident to fuel fears about the ability of AI to mimic humans online, adding to already prevalent concerns about the potential consequences of interacting with AI companions. Such bots, which have permeated social platforms like Instagram, are known to take on unique humanlike identities and personalities.
On Monday, Reddit’s chief legal officer, Ben Lee, wrote in a post that neither Reddit nor the r/changemyview mods knew about “this improper and highly unethical experiment” ahead of time. He added that Reddit was in the process of sending formal legal demands to the University of Zurich and the research team.
“What this University of Zurich team did is deeply wrong on both a moral and legal level,” Lee wrote. “It violates academic research and human rights norms, and is prohibited by Reddit’s user agreement and rules, in addition to the subreddit rules.”
A spokesperson for Reddit declined to share additional comment.
In an announcement to the community over the weekend, moderators of r/changemyview wrote that they filed an ethics complaint asking the university to advise against publishing the researchers’ findings, to conduct an internal review of the study’s approval and to commit to stronger oversight of such projects.
“Allowing publication would dramatically encourage further intrusion by researchers, contributing to increased community vulnerability to future non-consensual human subjects experimentation,” they wrote.
Melanie Nyfeler, a media relations officer, wrote in an email that relevant authorities at the university are aware of and will investigate the incident.
“In light of these events, the Ethics Committee of the Faculty of Arts and Social Sciences intends to adopt a stricter review process in the future and, in particular, to coordinate with the communities on the platforms prior to experimental studies,” Nyfeler wrote.
She confirmed that the researchers have decided “on their own accord” not to publish the results. For privacy reasons, she added, the university cannot disclose their identities.
Nyfeler said that because the study was considered “exceptionally challenging,” the ethics committee advised the researchers to inform the participants “as much as possible” and to fully comply with Reddit’s rules. But the recommendations are not legally binding, she wrote, and the researchers are responsible for their project.
Reached at an email address they set up for the experiment, the researchers directed all inquiries to the university.
The researchers, who answered questions from the community via their Reddit account, u/LLMResearchTeam, said online that the AI bots personalized their responses by using a separate model to collect demographic information about users — such as their ages, genders, ethnicities, locations and political orientations — based on their post histories.
Still, they wrote that their AI models included “heavy ethical safeguards and safety alignment” and that they explicitly prompted the models to avoid “deception and lying about true events.” A researcher also reviewed each AI-generated comment before it was posted, they wrote.
In response to the mods’ concerns, the researchers further said, “A careful review of the content of these flagged comments revealed no instances of harmful, deceptive, or exploitative messaging, other than the potential ethical issue of impersonation itself.”
In their post, the r/changemyview mods rejected the researchers’ claim that their experiment “yields important insights.” They also wrote that such research “demonstrates nothing new” that other, less intrusive studies have not already shared.
“Our sub is a decidedly human space that rejects undisclosed AI as a core value,” they wrote. “People do not come here to discuss their views with AI or to be experimented upon. People who visit our sub deserve a space free from this type of intrusion.”
(http://www.autoadmit.com/thread.php?thread_id=5718613&forum_id=2#48892738)
Date: April 30th, 2025 8:35 AM
Author: ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
“I say this as a Black Man, there are few better topics for a victim game / deflection game than being a black person,” the bot wrote.
====
I Say This As a Black Man tp
(http://www.autoadmit.com/thread.php?thread_id=5718613&forum_id=2#48892794)
Date: April 30th, 2025 9:38 AM
Author: ....,,....,.,.,.,.,.,.,.,.......,.,.,.,.,..,.
Who cares? It’s obvious a lot of the comments are from bots.
(http://www.autoadmit.com/thread.php?thread_id=5718613&forum_id=2#48892949)
Date: April 30th, 2025 9:57 AM Author: ,.,.,.,....,.,..,.,.,.
The cool thing about this sort of research is that it could allow better persuasion at scale. Most people don’t have much experience or success convincing others to change their views, but bots interacting with humans on social media could have millions of user interactions to learn from, with clear gradient signals from upvotes. Eventually the bots will likely understand persuasion far better than people, and we can use them at scale to deprogram liberals (libs will be reluctant to deploy this at scale because of made-up ethical issues).
(http://www.autoadmit.com/thread.php?thread_id=5718613&forum_id=2#48893018)