Grok, the AI image editing tool on X (formerly Twitter) owned by Elon Musk, made a similar leap last week: going forward, only paying subscribers will be able to access its AI art-making and image editing capabilities. The decision comes on the heels of heavy public outcry over Grok's role in producing sexualized deepfakes. The UK government has decried the move, labeling it “insulting” to victims of online abuse, and pointed to the far-reaching consequences of cutting off access for users who genuinely rely on these tools.
Grok's announcement that “image generation and editing are currently limited to paying subscribers” has raised concerns about accessibility and accountability in digital spaces. Online safety advocates note that X's decision is consistent with its recent history: last year, the platform blocked searches for sexualized content featuring Taylor Swift created through a Grok AI video tool. That first intervention was designed to curb abuse of the technology; the move to an entirely paid model has reignited the debate.
Feminist legal scholar Professor Clare McGlynn, an authority on the legal regulation of pornography and online abuse, recently spoke out about the decision. She said the platform's latest restrictions are indicative of a dangerous pattern at X of avoiding responsibility for abusive content.
“Instead of taking the responsible steps to ensure Grok could not be used for abusive purposes, it has withdrawn access for the vast majority of users.” – Professor Clare McGlynn
Grok's restrictions mark a sharp departure in how users can work with AI-generated content. Until now, millions of users enjoyed free access to its image editing tools, enabling creativity and experimentation on a massive scale. As the field increasingly moves toward subscription models, the change raises the question of who gets access to these tools.
Professor McGlynn was scathing about Musk's decision-making here, arguing that it illustrates his unwillingness to be accountable for the weaponization of his platform. In her view, the approach does a double disservice: it restricts access, and it undermines users' ability to interact with the technology responsibly and safely.
“Musk has thrown his toys out of the pram in protest at being held to account for the tsunami of abuse.” – Professor Clare McGlynn
The development has unsettled stakeholders. Online safety advocates and groups that support victims of intimate image abuse fear unintended consequences: restricting Grok's functions to paying subscribers could make misuse harder, if not impossible, to combat, while alienating users who are priced out of these subscriptions.
For all the controversy, Grok's decision has prompted a discussion that was overdue: what responsibility do technology companies bear for creating safe environments for their users? Critics argue that platforms like X must take proactive measures to prevent abuse rather than impose restrictions that may seem punitive.
As the debate continues, it remains unclear how Grok’s decision will affect its user base and the overall landscape of AI-generated content. The growing scrutiny over deepfake technology and its applications highlights the need for more comprehensive regulations and ethical considerations in the digital age.
