TikTok is set to introduce new tools in the coming weeks aimed at giving users greater control over AI-generated content in their feeds. The announcement follows a rapid boom in AI content creation, accelerated by powerful new video-generation technologies such as OpenAI’s Sora and Google’s Veo 3. The scale of the challenge is considerable: users upload more than 100 million pieces of content to TikTok each day. The new tools are designed to let users curate their feeds to suit their own interests and preferences.
Brie Pegum, TikTok’s global head of program management for trust and safety, underscored the value of balancing automation with human judgment. She emphasized that people and technology should work together to improve content moderation and ensure appropriate human oversight. TikTok is beta testing the changes over the next few weeks ahead of an anticipated global rollout. The effort aims to balance community concerns about misleading AI-generated content with the need to maintain a safe, welcoming space for all users.
Jade Nester, TikTok’s European director of public policy for safety and privacy, explained the reasoning behind the change:
“We know from our community that many people enjoy content made with AI tools, from digital art to science explainers, and we want to give people the power to see more or less of that, based on their own preferences.” – Jade Nester
TikTok presents the changes as part of a broader effort to address the difficulties of content moderation. “We’re confident that there’s going to be a role for human moderators,” Pegum said. The company says the two-pronged approach also protects its moderation staff, with automated systems filtering out graphic content before human reviewers ever see it.
The expanded use of AI moderation tools is not without controversy, however. Critics have raised serious alarms over TikTok’s approach, pointing in particular to reports that the company plans to lay off dozens of London-based content moderators. In response to these concerns, Nester stated:
“We think that it’s essential to balance humans and technology to keep the platform safe.” – Jade Nester
The integration of AI into TikTok’s content moderation strategy reflects broader industry trends as platforms increasingly rely on automation to manage vast quantities of user-generated content. With over 1 billion users globally, TikTok faces the challenge of maintaining a safe and engaging environment while adapting to the evolving landscape of digital content.
As testing continues, users should expect to see changes in their feeds, including new controls over how much AI-generated content they encounter. The announcement comes at a moment when social media companies are under fire, facing growing scrutiny over how they handle and distinguish content made by machines from content made by humans.
