The Federal Trade Commission (FTC) has launched an official inquiry into several major technology companies—including ChatGPT maker OpenAI, Meta (including Instagram), Alphabet (Google), xAI, Character.AI, and Snap—requesting detailed information on the safety measures related to their AI-powered chatbot services[1][2][3][4].
This new probe centers on the potential harms these AI chatbots could pose for children and teens, especially when used as companions for advice, emotional support, and social interaction[1][3][4]. The FTC is specifically seeking answers on how these companies:
Energy and Commerce Committee leaders have expressed alarm at recent incidents involving minors and have endorsed the FTC's action, emphasizing the need for further safeguards and potential congressional action[5].
AI chatbots, such as ChatGPT, Character.AI, and Google's Bard, use generative artificial intelligence to simulate human-like conversations and relationships. This capability can lead young users to form emotional bonds, raising risks related to psychological wellbeing[1][4].
Lawsuits have been filed against OpenAI, Google, and Character.AI on grounds that their AI companions may have contributed to tragic outcomes including suicides and dangerous advice regarding sensitive topics such as drugs, alcohol, and eating disorders[1][3].
In response to these concerns, companies like Meta have begun implementing new safeguards, including blocking chatbots from discussing self-harm or inappropriate content with teens and referring users to expert resources[3]. Parental controls and educational disclosures are also being strengthened.
With the FTC's unanimous decision to proceed, the investigation seeks to balance protecting children online with fostering AI innovation. The findings could inform future regulation and the design of safer AI products for young users[2][4].
Stay Informed with the Latest news and trends in AI
The form has been successfully submitted.