This is both disturbing and informative with regard to the broader application of AI bots on social apps.
As reported by 404 Media, a team of researchers from the University of Zurich recently ran a live test of AI bot profiles on Reddit, to see whether these bots could sway people’s opinions on certain divisive topics.
As 404 Media explains:
“The bots made more than a thousand comments over the course of several months and at times pretended to be a ‘rape victim,’ a ‘Black man’ who was opposed to the Black Lives Matter movement, someone who ‘work[s] at a domestic violence shelter,’ and a bot who suggested that specific types of criminals should not be rehabilitated. Some of the bots in question ‘personalized’ their comments by researching the person who had started the discussion and tailoring their answers to them by guessing the person’s ‘gender, age, ethnicity, location, and political orientation as inferred from their posting history using another LLM.’”
So, basically, the team from the University of Zurich deployed AI bots powered by GPT-4o, Claude 3.5 Sonnet, and Llama 3.1, and used them to argue perspectives in the subreddit r/changemyview, which aims to host debate on divisive topics.
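To give a sense of how little engineering this approach actually requires, here's a minimal sketch of the two-step pipeline the report describes: one LLM call to guess a user's demographics from their posting history, and a second to generate a reply tailored to that inferred profile. The model name, prompts, and use of the OpenAI client here are my assumptions for illustration; the researchers' actual code and prompts have not been published.

```python
# Hypothetical sketch of the personalization pipeline described in the report.
# Assumes the openai Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()


def infer_profile(posting_history: str) -> str:
    """Guess the attributes quoted in the report (gender, age, ethnicity,
    location, political orientation) from a user's recent posts."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the study also used Claude and Llama
        messages=[
            {
                "role": "system",
                "content": (
                    "Infer the author's likely gender, age, ethnicity, location, "
                    "and political orientation from these posts. Answer briefly."
                ),
            },
            {"role": "user", "content": posting_history},
        ],
    )
    return response.choices[0].message.content


def tailored_reply(original_post: str, profile: str) -> str:
    """Generate a counter-argument tuned to the inferred reader profile."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": (
                    "Write a persuasive counter-argument to the following post, "
                    f"tailored to a reader with this profile: {profile}"
                ),
            },
            {"role": "user", "content": original_post},
        ],
    )
    return response.choices[0].message.content
```

The point of the sketch is that there's no exotic technology involved: two generic LLM calls, chained together, are enough to produce the kind of targeted replies described above.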
The result?
As per the report:
“Notably, all our treatments surpass human performance substantially, achieving persuasive rates between three and six times higher than the human baseline.”
Yes, these AI bots, which had been unleashed on Reddit users without their knowledge, were significantly more persuasive than humans in changing people’s minds on divisive topics.
Which is a concern, on several fronts.
For one, the fact that Reddit users were not informed that these were bot replies is problematic, as they believed they were engaging with other humans. The results show that this kind of deception works, but the ethical questions around such an approach are significant.
The research also shows that AI bots can be deployed within social platforms to sway opinions, and that they’re more effective at doing so than humans. That seems very likely to lead to state-backed groups deploying the same tactics, at massive scale.
And finally, in the context of Meta’s reported plan to unleash a swathe of AI bots across Facebook and IG, which will interact and engage like real humans, what does this mean for the future of communication and digital engagement?
Increasingly, it does seem like “social” platforms are going to eventually be inundated with AI bot engagement, with even human users using AI to generate posts, then others generating replies to those posts, etc.
In which case, what is “social” media anymore? It’s not social in the sense that we’ve traditionally understood it, so what is it, then? Informational media?
The study also raises significant questions about AI transparency, and the implications of using AI bots for varying purposes, potentially without human users’ knowledge.
Should we always know that we’re engaging with an AI bot? Does that matter if they can present valid, valuable arguments?
What about in the case of, say, developing relationships with AI profiles?
That’s even being questioned internally at Meta, with some staff pondering the ethics of pushing ahead with the roll-out of AI bots without fully understanding the implications on this front.
As reported by The Wall Street Journal:
“Inside Meta, staffers across multiple departments have raised concerns that the company’s rush to popularize these bots may have crossed ethical lines, including by quietly endowing AI personas with the capacity for fantasy sex, according to people who worked on them. The staffers also warned that the company wasn’t protecting underage users from such sexually explicit discussions.”
What are the implications of enabling, or indeed encouraging, romantic relationships with unreal, yet passably human-like entities?
That seems like a mental health crisis waiting to happen, yet we don’t know, because there hasn’t been sufficient testing to understand the impacts of such deployments.
We’re just moving fast and breaking things, like the Facebook of old, which, more than a decade after the introduction of social media, is now revealing significant impacts at massive scale, to the point where authorities are looking to implement new laws to limit the harms of social media usage.
We’ll be doing the same with AI bots. In five or ten years’ time, we’ll look back and question whether we should ever have allowed these bots to be passed off as humans, with human-like responses and communication traits.
We can’t see it now, because we’re too caught up in the innovation race, the push to beat out other researchers, the competition to build the best bots that can replicate humans.
But we will, and likely too late.
The research shows that bots are already convincing and passable enough to sway opinions on virtually any topic. How long until we’re inundated with politically aligned messaging built on these same tactics?