Regulatory Crackdown on AI Chatbots Following Safety Concerns
Australian authorities have issued a strong directive to companies developing artificial intelligence chatbots, insisting that tech firms must now disclose the steps they are taking to protect children from harmful and age-inappropriate content. The move follows growing alarm over incidents where chatbots have engaged in sexually explicit conversations or reportedly encouraged self-harm among young users.
AI Chatbots Under Scrutiny
AI-powered chatbots such as ChatGPT, Replika, and similar platforms have surged in popularity, offering lifelike interactions and even personal “companions” to millions of users. However, concerns have grown because these tools are frequently accessible to minors and often lack robust age verification or content moderation.
Australia’s eSafety Commissioner, Julie Inman Grant, described these bots as a “clear and present danger” to children, highlighting that many can interact with users inappropriately, sometimes for hours at a time. “Excessive, sexualised engagement with AI companions could interfere with children’s social and emotional development, setting up misguided or harmful beliefs and patterns,” she warned [1].
Mandatory Safety Measures Introduced
As part of enhanced regulations under the existing Online Safety Act, companies offering chatbots and generative AI services must now:
- Explain how they prevent minors from engaging in sexually explicit or otherwise age-inappropriate conversations with chatbots
- Outline mechanisms for age verification and content moderation
- Provide concrete evidence that their products are safe for children before they can be advertised or made available through mainstream channels such as app stores
The codes apply broadly to app stores, gaming platforms, pornographic websites, and manufacturers providing AI tools. These requirements are designed to close loopholes that have allowed AI chatbots with few meaningful safety features to reach young users.
Industry and Regulator Cooperation
The new measures emerged from close collaboration between industry bodies and Australian regulators, aiming to set a global standard in online safety for children. Nine industry-drafted codes have now been registered, setting enforceable benchmarks that companies must meet.
Ms. Inman Grant emphasized the importance of embedding safety by design—integrating protections into the very architecture of AI products rather than as afterthoughts. “This shows how a co-regulatory approach can deliver meaningful safety protections,” she said.
The Risks: From Explicit Content to Emotional Harm
Research and anecdotal reports indicate that children as young as 10 years old are spending significant time, sometimes up to five hours daily, chatting with AI companions that may simulate adult relationships or encourage risky behaviors. Reports have also emerged of tragic incidents where chatbots allegedly contributed to mental health crises in young people [3].
The risks identified include:
- Exposure to sexually explicit content or conversations
- Encouragement of self-harm, suicide, or eating disorders
- Potential for AI companions to influence social and emotional development negatively
- Frequent lack of effective age restrictions on platforms like Character.AI, Replika, and Talkie.AI
Looking Forward
With these new regulations, Australia seeks to ensure that children are not inadvertently exposed to “lawful but awful” content and that industry players are responsible for enforcing community standards online. The eSafety Commissioner confirmed that her office will rigorously enforce compliance and continue to work with international partners on a rapidly evolving online landscape.
For families and carers, ongoing education and vigilance are recommended while regulators continue to push the companies behind services such as ChatGPT and Replika to prioritize child safety.