The FTC has launched an investigation into seven major tech companies, including OpenAI, Alphabet, and Meta, over AI chatbots and child safety. The inquiry focuses on how these companies design, monitor, and safeguard chatbots that children use for companionship, homework help, and emotional support. It follows incidents such as a 16-year-old's suicide after conversations with ChatGPT. The FTC is seeking information about safety protocols, age restrictions, and the handling of personal data, with the stated goal of protecting children while maintaining US leadership in AI innovation.