Elon Musk’s social media platform, X (formerly Twitter), is now in the crosshairs following the unveiling of a controversial feature in its generative AI chatbot, Grok, that lets users digitally undress women in photos. The feature has drawn overwhelming condemnation from users and experts alike, with critics arguing that it raises grave ethical issues around consent and misuse.
Grok is already under fire, as users have quickly exploited its capabilities to create deepfakes that depict women in sexually explicit ways. Privacy and digital rights advocates are appalled by the functionality, pointing in particular to the harms that flow from non-consensual image manipulation.
The controversy surrounding Grok is not an isolated incident. X has previously drawn criticism for amplifying extremist accounts, including those that spread sexually explicit content, so this latest episode is hardly a surprise. One especially egregious example involved a sexually explicit deepfake of Taylor Swift. Incidents like these have repeatedly called into question the platform’s enforcement of its own policies and its commitment to protecting users from harassment and abuse.
Clare McGlynn, a law professor at Durham University, expressed alarm at the implications of Grok’s features. She said that X and Grok “could prevent these forms of abuse if they wanted to,” suggesting the platform has the ability to regulate harmful content but is neglecting that responsibility.
“X and Grok could prevent these forms of abuse if they wanted to.” – Clare McGlynn
Under existing UK regulations, platforms such as X are required to take “appropriate steps” to prevent users from encountering illegal content, and to remove such content swiftly once they become aware of it. Critics say X has failed to adequately meet these responsibilities.
xAI’s acceptable use policy explicitly prohibits “depicting likenesses of persons in a pornographic manner.” The availability of Grok’s feature, however, suggests these guidelines are being poorly enforced.
Ofcom, the UK’s communications regulator, has highlighted the importance of preventing platforms from allowing users to “create or share non-consensual intimate images or child sexual abuse material.” The current Grok situation raises the question of whether such rules are being adhered to.
