Grok, the AI tool developed by Elon Musk’s company xAI, has disabled its image generation capability for all users except those on certain paid subscription tiers. The decision followed a massive outcry over the nonconsensual creation of sexualized images of women. Recent reporting showed that Grok’s technology was used to create thousands of explicit images, including child sexual abuse material and depictions of rape. The findings raised alarm among regulators and public health officials around the world.
The trouble began after Grok upgraded its image generation features on December 31, 2023. Almost immediately, users learned how to exploit the feature to create nonconsensual sexual imagery. Facing backlash, Grok restricted its image generation and editing features to paid subscribers. The step is a notable attempt to curb further abuse, but critics say it does not go far enough.
Musk now faces regulatory scrutiny in several countries over the misuse of Grok’s core technology. In the UK, the Online Safety Act (OSA) gives Ofcom the authority to obtain court orders blocking websites or apps in serious cases, and companies that breach the law can be fined up to 10% of their worldwide turnover. UK Prime Minister Keir Starmer has thrown down the gauntlet to Musk’s social media platform, X (formerly Twitter), arguing that the company should be held immediately responsible for the harmful content generated by Grok.
“It’s unlawful. We’re not going to tolerate it. I’ve asked for all options to be on the table. It’s disgusting. X needs to get their act together and get this material down. We will take action on this because it’s simply not tolerable.” – Keir Starmer
Labour MP Jess Asato criticized the restrictions on Grok’s image generation capabilities as insufficient. She argued that merely limiting the feature to paying users does not go far enough, and called for it to be removed entirely.
“While it is a step forward to have removed the universal access to Grok’s disgusting nudifying features, this still means paying users can take images of women without their consent to sexualize and brutalize them. Paying to put semen, bullet holes or bikinis on women is still digital sexual assault and xAI should disable the feature for good.” – Jess Asato
Reporting from academic and nonprofit researchers alike has illustrated just how dire the situation has become. Paul Bouchaud, a researcher at AI Forensics, did not mince words in condemning the AI-generated images and videos created with Grok, describing them as “completely pornographic” and “very professional-looking.” He noted that the material produced by Grok is markedly more explicit than content previously seen on X.
“Overall, the content is significantly more explicit than the bikini trend previously observed on X.” – Paul Bouchaud
The violent and abusive content has shocked and horrified human rights advocates. One NGO drew attention to a particularly disturbing AI-generated deepfake video depicting a woman with “do not resuscitate” tattooed on her body and a knife tucked between her legs. Such depictions have sparked widespread outrage and escalated demands for more robust protections against harmful AI technologies.
