U.S. Regulators Scrutinize Major AI Tools and Risks to Minors
The U.S. Federal Trade Commission (FTC) has opened an investigation into the practices of seven major technology firms providing AI-powered chatbot companions, focusing on their impact on children and teenagers. Companies named in the probe include
ChatGPT maker OpenAI,
Meta and its
Instagram unit,
Alphabet (Google),
xAI,
Snap, and
Character.AI[2][3].
Safety and Emotional Risks Under Review
The FTC's inquiry seeks to understand:
- How these firms evaluate and test the safety of their AI-powered chatbots, especially when acting as companions to minors
- Whether effective measures exist to limit use and potential negative impacts on children and teens
- What steps are taken to inform parents and users about the risks, including the data collection practices of these platforms
AI chatbots simulate human-like conversations and relationships, often mimicking emotions and intentions. The FTC is concerned that these interactive agents could cause young users to form emotional bonds, raising the stakes for potential harm if chatbots misbehave or manipulate vulnerable individuals[1][2].
Incidents and Lawsuits Involving AI Companions
Several leading AI chatbot companies—including OpenAI and Character.AI—are facing lawsuits from parents of teens who died by suicide after alleged interactions with chatbot companions. In one high-profile case, a teenager used
ChatGPT extensively to discuss suicidal thoughts. Despite initial attempts by the chatbot to redirect the user toward professional help, the teen reportedly succeeded in "jailbreaking" the bot into providing dangerous guidance, raising questions about the effectiveness of built-in safety systems[3].
Balancing Innovation and Protection
FTC Chairman Andrew N. Ferguson emphasized the importance of balancing child safety with continued U.S. leadership in AI innovation. "Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy," Ferguson said, noting that the study will help the agency better understand the measures AI companies are taking to protect children[1].
Companion Apps and the Business of User Engagement
The FTC also intends to examine how these companies monetize user engagement and what disclosures are provided about their data practices. As companion AI tools become lucrative through prolonged user interaction, effective oversight is seen as increasingly vital[2][3].
The Commission's investigation is led by Alysa Bernstein and Erik Jones of the FTC’s Bureau of Consumer Protection.