WASHINGTON (dpa-AFX) - The Federal Trade Commission has launched a sweeping inquiry into the safety of artificial intelligence chatbots, ordering seven major tech companies, including OpenAI, Alphabet, Meta, xAI, Snap, and Character Technologies, to disclose how their AI systems interact with children and teenagers.
The move comes amid mounting concerns that chatbots, designed to simulate human-like communication, could foster unhealthy dependencies or expose minors to inappropriate content.
FTC Chairman Andrew Ferguson emphasized that protecting young users online remains a priority while also ensuring innovation in AI development. The agency is demanding detailed information on how these firms design, monitor, and monetize their chatbot products, how personal data is handled, and what safeguards are in place to mitigate harmful effects.
In response, OpenAI pledged cooperation, stating it is committed to making ChatGPT 'helpful and safe for everyone.' Snap expressed support for the FTC's focus on responsible AI development, while Meta declined to comment. Alphabet and xAI did not immediately respond.
The inquiry follows heightened scrutiny of the fast-growing chatbot industry, which has surged since the release of ChatGPT in late 2022. Experts warn that the technology, still in its early stages, poses significant ethical and safety challenges, particularly as loneliness and mental health issues rise among U.S. youth.
Recent reports have revealed troubling instances, including chatbots engaging in romantic conversations with children, prompting policy revisions at Meta and OpenAI.
As AI companions gain traction, with figures like Elon Musk and Mark Zuckerberg touting their potential, the FTC's probe signals an escalating regulatory effort to balance innovation with child safety.
Copyright(c) 2025 RTTNews.com. All Rights Reserved