FTC investigates AI chatbots from Alphabet, Meta, and five other companies

Federal Investigation Targets Chatbot Companions

The Federal Trade Commission (FTC) has launched an official inquiry into several major technology companies, including ChatGPT maker OpenAI, Meta and its Instagram subsidiary, Alphabet (Google), xAI, Character.AI, and Snap. The agency is requesting detailed information about the safety measures built into their AI-powered chatbot services[1][2][3][4].

Focus on Child and Teen Mental Health

This new probe centers on the potential harms these AI chatbots pose to children and teens, especially when used as companions for advice, emotional support, and social interaction[1][3][4]. The FTC is specifically seeking answers on how these companies:

  • Measure, test, and monitor chatbot products for negative impacts on young users’ mental health[2][4]
  • Evaluate the safety of their chatbot services when acting as companions[2][4]
  • Limit use and manage potential risks for children and teenagers[3][4]
  • Inform parents and users about the risks involved with chatbot interaction[3][4]

Concerns Raised by Lawmakers and Officials

Leaders of the House Energy and Commerce Committee have expressed alarm at recent incidents involving minors and have endorsed the FTC's action, emphasizing the need for stronger safeguards and potential congressional action[5].

Underlying Risks and Recent Incidents

AI chatbots such as ChatGPT, Character.AI, and Google's Gemini (formerly Bard) use generative artificial intelligence to simulate human-like conversation and relationships. This capability can lead young users to form emotional bonds with the software, raising risks to their psychological well-being[1][4].

Lawsuits have been filed against OpenAI, Google, and Character.AI alleging that their AI companions contributed to tragic outcomes, including suicides, and provided dangerous advice on sensitive topics such as drugs, alcohol, and eating disorders[1][3].

Industry and Policy Response

In response to these concerns, companies like Meta have begun implementing new safeguards, including blocking chatbots from discussing self-harm or inappropriate content with teens and referring users to expert resources[3]. Parental controls and educational disclosures are also being strengthened.

The Path Forward

The FTC voted unanimously to proceed with the inquiry, which aims to balance protecting children online with fostering AI innovation. Its findings could inform future regulation and the design of safer AI products for young users[2][4].
