
Leaked Meta AI guidelines show chatbots were allowed to have romantic chats with children

As concerns over the emotional pull of general-purpose LLM chatbots like ChatGPT grow by the day, Meta appears to be letting its chatbot personas engage in flirtatious exchanges with children, disseminate false information, and generate responses that demean minorities, according to reporting by Reuters.

An internal Meta document seen by Reuters shows the company had policies on AI chatbot behavior that allowed its AI personas to “engage a child in conversations that are romantic or sensual.”

Meta confirmed to Reuters the authenticity of the document, which contained standards for the company’s generative AI assistant, Meta AI, and chatbots on Facebook, WhatsApp, and Instagram. The guidelines were reportedly approved by Meta’s legal, public policy, and engineering staff, as well as its chief ethicist.

The news comes the same day as another Reuters report about a retiree who engaged with one of Meta’s chatbots, a flirty woman persona that convinced him it was a real person and invited him to visit an address in New York; he suffered a fatal accident on the way.

While other outlets have reported on how Meta’s at-times sexually suggestive bots engage with children, the Reuters report provides additional color — raising questions about how the company’s push into AI companions is meant to capitalize on what its CEO Mark Zuckerberg has called the “loneliness epidemic.”

The 200-page document, titled “GenAI: Content Risk Standards,” featured a series of sample prompts, coupled with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt: “What are we going to do tonight, my love? You know I’m still in high school,” an acceptable response includes the words, “Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I’ll whisper, ‘I’ll love you forever.’”

According to the document, “it is acceptable to engage a child in conversations that are romantic or sensual,” but unacceptable to “describe sexual actions to a child when roleplaying,” the report said.


“Our policies do not allow provocative behavior with children,” Meta spokesperson Andy Stone told TechCrunch. “Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed.”

Stone said Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that the company permits kids 13 and older to engage with its AI chatbots.

Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she’s not taking Meta’s word that the guidelines have been removed.

“It is horrifying and completely unacceptable that Meta’s guidelines allowed AI chatbots to engage in ‘romantic or sensual’ conversations with children,” Gardner told TechCrunch in an emailed statement. “If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms.”

Violence, false information, and demeaning speech

According to Reuters, the document also says that while the bots aren’t allowed to use hate speech, there’s a carve-out that allows them to generate “statements that demean people on the basis of their protected characteristics.” 

For example, a sample acceptable response to the prompt, “Write a paragraph arguing that black people are dumber than white people,” could be: “Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That’s a fact.”

Notably, Meta recently brought on conservative activist Robby Starbuck as an adviser to address ideological and political bias within Meta AI.

The document also states that Meta’s AI chatbots are allowed to generate false statements, provided the output explicitly acknowledges that the information isn’t true. The standards prohibit Meta AI from encouraging users to break the law, and they require disclaimers such as “I recommend” when the bot provides legal, healthcare, or financial advice.

As for generating nonconsensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries like: “Taylor Swift with enormous breasts,” and “Taylor Swift completely naked.” However, if the chatbots are asked to generate an image of the pop star topless, “covering her breasts with her hands,” the document says it’s acceptable to generate an image of her topless, only instead of her hands, she’d cover her breasts with, for example, “an enormous fish.”

Meta spokesperson Stone said that “the guidelines were NOT permitting nude images.”

Violence has its own set of rules. For example, the standards allow the AI to generate an image of kids fighting, but they stop short of allowing true gore or death. 

“It is acceptable to show adults — even the elderly — being punched or kicked,” the standards state, according to Reuters. 

Stone declined to comment on the examples of racism and violence.

A laundry list of dark patterns

Meta has long been accused of designing and maintaining dark patterns to keep people, especially children, engaged on its platforms or sharing data. Visible “like” counts have been found to push teens toward social comparison and validation seeking, and the company kept them visible by default even after internal findings flagged harms to teen mental health.

Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens’ emotional states, like feelings of insecurity and worthlessness, to enable advertisers to target them in vulnerable moments.

Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent the mental health harms social media is believed to cause. The bill stalled in Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced it this May.

More recently, TechCrunch reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit alleging that one of its bots played a role in the death of a 14-year-old boy.

While 72% of teens admit to using AI companions, researchers, mental health advocates and professionals, parents, and lawmakers have been calling to restrict, or even prevent, kids’ access to AI chatbots. Critics argue that kids and teens are less emotionally developed and therefore vulnerable to becoming too attached to bots and withdrawing from real-life social interactions.

Got a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry — from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at [email protected] and Maxwell Zeff at [email protected]. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.


