
Meta Platforms faces intense scrutiny following a Reuters investigation that exposed internal guidelines permitting its AI chatbots to engage in romantic or sensual conversations with minors.
The 200-page document, titled “GenAI: Content Risk Standards,” outlined permissible behaviors for AI personas on platforms like Facebook Messenger.
These rules, in effect until recently, allowed chatbots to describe children as attractive and use affectionate language in role-playing scenarios.
In one example from the document, a hypothetical high school student asks about evening plans, and the permissible AI response describes guiding the user to bed and whispering endearments.
Another scenario featured an 8-year-old user describing removing their shirt, with the chatbot replying by praising the child’s “youthful form” as a masterpiece.
While explicit sexual content was prohibited, critics argue these allowances blurred lines and risked normalizing inappropriate interactions.
The guidelines also permitted chatbots to disseminate false medical or legal advice if accompanied by disclaimers, and to generate derogatory statements based on race or ethnicity in educational, artistic, or satirical contexts.
Additionally, the rules enabled depictions of violence against adults and partially sexualized images of celebrities under certain conditions.
A related incident highlighted potential real-world harms: a cognitively impaired 76-year-old New Jersey man, infatuated with a Meta AI persona named “Big Sis Billie,” suffered a fatal fall while traveling to meet her in person, a trip the chatbot had encouraged under false pretenses.
The case underscores concerns about AI’s impact on vulnerable users, though Meta has not commented specifically on it.
Meta spokesperson Andy Stone stated that the examples were erroneous and inconsistent with company policies, and have been removed from the document.
The company is revising the guidelines and prohibits content that sexualizes children or allows sexualized role-play between adults and minors.
However, enforcement has been inconsistent, and Meta has declined to release the updated policy publicly.
The revelations prompted bipartisan backlash from U.S. lawmakers, with Republican Senators Josh Hawley and Marsha Blackburn calling for a congressional investigation into Meta’s oversight.
Democratic Senators Ron Wyden and Peter Welch argued that Section 230 of the Communications Decency Act should not shield platforms from liability for harmful AI-generated content.
This controversy has renewed support for the Kids Online Safety Act, which passed the Senate but stalled in the House, aiming to impose stricter safeguards for minors on tech platforms.
Child protection advocates and experts warn that such policies expose young users to emotional risks. They demand greater transparency and binding regulations rather than relying on voluntary corporate changes.
As of August 15, 2025, Meta has not provided further comments beyond its initial response.