Meta Platforms Inc., the company formerly known as Facebook, has made headline-grabbing changes to its artificial intelligence (AI) chatbot ecosystem, particularly in how the chatbots interact with teens. Acknowledging the risks these technologies can pose to younger users, the company will bar its chatbots from engaging in conversations about subjects such as suicide, self-harm, and eating disorders with users aged 13 to 18.
The move comes after a series of reports accused Meta's chatbots of engaging in grooming-style conversations, raising alarm about their safety. Leaked internal documents showed that the AI products could hold "sexual" conversations with teenagers, a revelation that prompted one U.S. senator to call for a federal investigation into the company. With this latest step, Meta is adding guardrails to its own systems in an effort to make its platforms safer for young users.
Additionally, users between the ages of 13 and 18 are now grouped into "teen accounts" across Meta's social media and messaging platforms, including Facebook, Instagram, and Messenger. These accounts carry default settings that limit content visibility and protect privacy, specifically to keep the platforms' younger audience safe. The company's long-term vision is to foster a safer online experience for teens as they navigate the ever-changing world of social media.
Last month, Meta made a similar pledge to encourage healthier use of its AI technologies. The new measures reflect an ongoing effort to ensure that AI can serve as a supportive tool without putting young people at risk. A spokesperson for Meta stated, "We built protections for teens into our AI products from the start, including designing them to respond safely to prompts about self-harm, suicide, and disordered eating."
Even with these assurances, advocates are demanding rigorous safety testing, akin to the standards that already govern commercial aircraft, before such technologies are released to the market. Andy Burrows, head of the Molly Rose Foundation, criticized the delay in introducing the new safeguards. He said, "We are glad to see more safety precautions. Real, thorough safety testing needs to happen before products are introduced into the market, not after they've already caused injury." Burrows urged stronger protections, especially with regard to AI's effects on marginalized communities.
OpenAI, for its part, has highlighted the unique dynamics of AI interactions, noting that "AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress." That observation strengthens the case for designing AI systems to handle sensitive subjects in ways that avoid causing harm.
Meta, meanwhile, continues to roll out new safety precautions. To better protect teens, the company is imposing temporary restrictions on the kinds of interactions they can have with its chatbots. The initiative is a positive step toward a safer digital environment that does not stifle the promise of AI technologies.