Instagram’s Safety Tools for Teens Fall Short According to New Study

A recent study by the US research center Cybersecurity for Democracy has revealed significant shortcomings in Instagram’s safety tools for teenage users. Working with child safety organizations including the Molly Rose Foundation, the researchers evaluated 47 safety tools designed to shield teens from inappropriate content. Alarmingly, only eight of these tools were found to be working effectively, while 30 were deemed “substantially ineffective or no longer exist.”

The researchers tested Instagram’s safety features in light of ongoing concerns about the platform’s impact on young users, especially following the tragic death of Molly Russell, a 14-year-old who took her own life in 2017 after being exposed to distressing content online. The Molly Rose Foundation was created in her memory to campaign for better safety measures on social media platforms and to prevent similar tragedies.

Dr. Laura Edelson, co-director of Cybersecurity for Democracy, emphasized the report’s findings: “These tools have a long way to go before they are fit for purpose.” The research shows that teens are frequently exposed to content that likely violates Instagram’s own policies and is not appropriate for young audiences. These findings cast serious doubt on Meta’s stated commitment to keeping its most vulnerable users safe and undermine confidence in the company’s safety initiatives on Instagram.

A further nine of the safety tools showed some evidence of reducing harm, but that evidence came with limitations. Overall, the report found that Instagram’s safety measures are inadequate and need a significant overhaul. Pressure on Meta has been mounting: in January 2024, the company was questioned before a US Senate committee, underscoring the urgency of reforming its safety policies.

Andy Burrows, a representative of the Molly Rose Foundation, criticized Meta’s approach, stating, “These failings point to a corporate culture at Meta that puts engagement and profit before safety.” He added, “For too long, tech companies have allowed harmful material to devastate young lives and tear families apart.”

In response to the study, a spokesperson for Meta defended the platform’s initiatives, claiming that “Teen Accounts lead the industry because they provide automatic safety protections and straightforward parental controls.” The spokesperson further stated that teens benefiting from these protections experienced less sensitive content and reduced unwanted contact while using the platform.

Campaigners, however, argue that these pledges have amounted to PR-friendly smoke and mirrors rather than a genuine commitment to doing right by teens. Burrows described Meta’s response as “a PR-driven performative stunt rather than a clear and concerted attempt to fix long-running safety risks on Instagram.”

Overall, these findings raise serious questions about compliance with the UK’s Online Safety Act, which requires online platforms to prevent harmful content, such as material encouraging self-harm or suicide, from reaching underage users. As accountability for social media companies increases, so does their responsibility to protect users who are especially susceptible to harmful content. What remains unclear is how Meta plans to address these wide-ranging challenges.