Earlier this month, the United Kingdom government voiced deep concern over a decision by Grok, the artificial intelligence tool integrated with Elon Musk's platform X, to limit image editing and generation features to paid subscribers. The limitation has drawn sharp criticism from advocates and experts alike, who are understandably alarmed given that Grok had recently allowed users to generate sexualized deepfakes.
Grok recently informed users of X that "image generation and editing are currently limited to paying subscribers." The announcement comes on the heels of sustained criticism of the platform for its lack of action on abusive content. Critics contend that these restrictions are the wrong response, arguing that they duck the tough questions over online abuse and exploitation.
Professor Clare McGlynn, an internationally renowned expert in the legal regulation of pornography, sexual violence and online abuse, weighed in on what Grok's decision means. In a statement, she expressed her disappointment with Musk's disregard for accountability and criticized the platform's moderation of harmful content produced using new AI technologies.
Rather than taking reasonable, common-sense steps to combat abuse of Grok, she argued, X has opted to simply cut off access for everyone except paying subscribers. That decision, she continued, also forecloses the tool's positive uses.
The decision has led some to question the ethical obligations of technology companies that host user-generated content. Debates over how best to regulate big tech are intensifying just as AI-generated content enters the mainstream.
Just last week, X moved quickly to restrict searches for sexualized content depicting pop star Taylor Swift, material that had been produced using Grok's video generation features. These actions underscored the difficult balancing act between upholding free speech and protecting individuals from violent, dangerous and hateful content.
On social media, Professor McGlynn condemned Musk's cavalier attitude, arguing that he cares more about protecting free speech than preventing his platform from being misused. She remarked, "Musk has thrown his toys out of the pram in protest at being held to account for the tsunami of abuse."
It is encouraging to see the UK government actively engaging on this question, driven by concern over the rapid, unregulated expansion of AI technologies and their societal effects. As platforms such as X grapple with unpredictable user content, finding the line between innovation and accountability is more important than ever.
