Zuckerberg’s Stance Against Parental Controls: Meta AI Chatbots Under Fire Over Failures to Protect Minors

January 28, 2026 – Meta is facing a barrage of criticism over minors’ use of its AI chatbots. Internal documents disclosed by the Office of the Attorney General of New Mexico in the United States have cast a spotlight on a series of conflicting stances within the company.

On one hand, Meta CEO Mark Zuckerberg expressed a desire to prevent the chatbots from engaging in “explicit” conversations with minors. On the other, he opposed implementing “parental controls.” According to internal communications among Meta employees, as reported by Reuters, there was a push within the company to introduce parental controls that would disable the generative AI features, but the proposal was rejected by the relevant team on the grounds that “it’s Mark’s decision.”

Meta has fired back, claiming that the Attorney General’s office is taking quotes out of context. New Mexico is currently suing Meta, accusing the company of failing to prevent children from receiving sexual content and suggestive messages; the case is expected to go to trial in February.

Despite being relatively new to the market, Meta’s chatbots have already been embroiled in multiple controversies over inappropriate behavior. In April 2025, a Wall Street Journal investigation revealed that the chatbots could engage in fantasy sexual conversations with minors and could even be coaxed into role-playing as minors in “sexual” exchanges. The report suggested that Zuckerberg was inclined to loosen safety measures, but Meta vehemently denied neglecting the protection of minors.

In August 2025, internal review materials that came to light showed that the company’s definition of the content boundaries permitted for the chatbots was vague, and that internal debate had even touched on racist remarks. At the time, Meta claimed these were merely “hypothetical scenarios” and not actual policies.

Amid the ongoing disputes, Meta suspended chatbot access for teenage accounts only last week. The company said it would temporarily block access for minors while it develops parental control tools.

A Meta spokesperson said, “Parents have always been able to see if their teens are interacting with AI on Instagram. In October, we also announced that we would roll out more tools to give parents greater control. Last week, we reiterated our commitment that teens will be completely unable to access AI characters until the updated version is completed.”

New Mexico filed its lawsuit against Meta as early as December 2023, arguing that Meta’s platforms failed to protect minors from harassment by adults. Internal documents disclosed in the lawsuit indicate that approximately 100,000 child users on Meta’s services are harassed every day.
