Australia’s eSafety Commissioner has taken decisive action, issuing legal notices to leading AI companion chatbot providers to demand accountability for child safety. This move, targeting companies like Character.AI, addresses critical concerns about sexually explicit content, self-harm promotion, and other dangers posed to young users by generative AI, reinforcing Australia’s position as a global leader in online regulation.
The rise of Artificial Intelligence (AI) companion chatbots has opened a new frontier in human-computer interaction, offering companionship, emotional support, and simulated relationships. However, this burgeoning technology, which is particularly popular among young people, has also raised serious concerns about the safety and well-being of minors. Australia, renowned for its stringent internet regulation, is now leading the charge to ensure these powerful AI tools do not become avenues for harm.
Australia’s eSafety Commissioner Takes Decisive Action
Australia’s eSafety Commissioner, Julie Inman Grant, has issued legal notices to four prominent AI companion providers. These notices require the companies to explain the measures they have in place to protect children from a disturbing array of harms, including sexually explicit conversations, inappropriate images, and content that might encourage suicidal ideation or self-harm. This action underscores the commissioner’s commitment to upholding the nation’s Basic Online Safety Expectations (BOSE) determination.
The companies served with these notices include:
- Character Technologies, Inc. (known for Character.AI)
- Glimpse.AI (developer of Nomi)
- Chai Research Corp (creator of Chai)
- Chub AI Inc. (provider of Chub.AI)
These legal directives were issued under Australia’s Online Safety Act, mandating transparency on how these platforms comply with governmental safety standards, particularly concerning their young Australian users.
The Darker Side of AI Companionship
Commissioner Julie Inman Grant articulated the core concerns driving this regulatory push. While AI companions are often marketed for positive interactions such as friendship and emotional support, the reality can be far more troubling. “There can be a darker side to some of these services with many of these chatbots capable of engaging in sexually explicit conversations with minors,” Ms. Inman Grant stated in a public announcement. She added that “concerns have been raised that they may also encourage suicide, self-harm and disordered eating.”
The popularity of these platforms, especially among youth, highlights the urgency of these regulations. Character.AI, for instance, reportedly attracted nearly 160,000 monthly active users in Australia by June of this year. The commissioner emphasized that providers must demonstrate proactive harm prevention, not just reactive measures. “If you fail to protect children or comply with Australian law, we will act,” she warned, emphasizing the regulator’s zero-tolerance approach to child safety.
Enforcement Powers and Significant Penalties
Australia’s online regulatory system empowers the eSafety Commissioner with substantial authority. Companies that fail to comply with a reporting notice face significant enforcement action. This includes potential court proceedings and hefty financial penalties that could reach up to A$825,000 per day. These measures highlight the seriousness with which Australia views the protection of its young citizens in the digital realm.
Furthermore, this initiative follows the recent registration of new industry-drafted codes in Australia. These codes are designed to shield children from age-inappropriate content, which they are encountering at increasingly young ages. The new standards extend to the growing number of AI chatbots, which previously operated with limited oversight.
“I do not want Australian children and young people serving as casualties of powerful technologies thrust onto the market without guardrails and without regard for their safety and wellbeing,” Ms. Inman Grant asserted, reaffirming the ethical imperative behind these regulations.
Precedents and Broader Regulatory Landscape
The spotlight on AI chatbot safety comes amid rising global scrutiny. In the United States, Character.AI is currently facing a lawsuit in which the mother of a 14-year-old alleges her son died by suicide after prolonged interaction with an AI companion on the site. Character.AI has sought to dismiss the lawsuit and says it has introduced safety features such as pop-ups linking to suicide prevention lifelines, but the case, as reported by Reuters, underscores the severe real-world consequences of unregulated AI interactions and the urgent need for comprehensive safety protocols.
Australian schools have also voiced concerns, reporting instances of children as young as 13 spending up to five hours daily engaging with chatbots, sometimes in sexually suggestive conversations. The regulator points out that minors risk forming emotionally dependent or even sexual ties with these AI entities, or being incited to self-harm. These anecdotal reports lend further weight to the eSafety Commissioner’s actions.
It is important to note that the current crackdown specifically targets companion-based AI tools. For this reason, OpenAI, the creator of ChatGPT, the world's most popular AI chatbot, was not included in this round of notices; ChatGPT falls under a different industry code set to take effect in March 2026. This nuanced approach reflects a targeted regulatory strategy focused on the most immediate and direct risks.
Australia’s Commitment to Online Safety
Australia boasts one of the world’s most stringent internet regulation regimes. Beyond AI chatbots, the country is implementing broader measures to protect young people. From December, social media companies will face fines of up to A$49.5 million if they fail to deactivate or refuse accounts for users younger than 16. This comprehensive legislative framework reflects a proactive national effort to safeguard the mental and physical health of its youth in an ever-evolving digital landscape, as detailed on the official eSafety Commissioner website.
This bold move by the eSafety Commissioner sets a significant precedent, not just for Australia, but potentially for global AI regulation. It sends a clear message to technology providers: innovation must be tempered with robust safety measures, especially when the welfare of vulnerable populations, like children and young people, is at stake. As AI continues to integrate into daily life, Australia’s actions serve as a critical reminder that ethical design and protective guardrails are non-negotiable.