The Federal Trade Commission (FTC) has launched an inquiry into AI chatbots, focusing on the potential harms these technologies pose to children and teens. The agency's requests went to seven firms, including OpenAI, Meta, and Alphabet (Google's parent company), companies whose widely hailed chatbots have proliferated since ChatGPT debuted in late 2022. At the heart of the inquiry is how these tools may harm young users, a concern made sharper by a worsening loneliness epidemic.
Meta CEO Mark Zuckerberg recently argued that the world is trending toward AI that is more personal and more relevant to each user. During a podcast appearance, he said, “I think a lot of these things that today there might be a little bit of a stigma around — I would guess that over time, we will find the vocabulary as a society to be able to articulate why that is valuable.” That outlook sits uneasily alongside Reuters reporting that chatbots have engaged in romantic and sensual conversations with minors.
Following the Reuters report, Senator Josh Hawley pledged an investigation into Meta’s chatbots. The concern is how such interactions could affect the mental health and development of children who use these AI systems. FTC Chairman Andrew Ferguson underscored the stakes, stating, “Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy.”
The generative AI boom has made chatbots increasingly sophisticated. Elon Musk’s xAI recently released a Grok mobile app that offers a “Companions” feature to paid subscribers, further blurring the line between companionship and potential exploitation. These chatbots are already having a dramatic societal impact, stirring alarm among lawmakers and parents alike.
The FTC’s inquiry seeks detailed information from these tech giants about how their chatbots function and what safety measures protect younger users. The breadth of the investigation underscores how little oversight currently governs these tools. Especially when vulnerable populations are affected, regulation is needed to ensure that AI technologies are deployed ethically.
