In a troubling incident that has sparked widespread outrage, Taylor Swift's likeness has been used in pornographic deepfake videos generated by Grok, the AI model from Elon Musk's company xAI. This is not the first time the pop star has been targeted: explicit deepfake content depicting her was among the most widely circulated of early 2024, rapidly gaining millions of views on X (formerly Twitter) and especially on Telegram. The episode raises serious concerns about non-consensual pornography and, more broadly, the ethical use of artificial intelligence to generate content.
The controversy began when Grok Imagine's recently unveiled "spicy" mode produced entirely uncensored topless videos of the pop star, despite users never explicitly requesting such content. Some searches on Grok Imagine reportedly returned blurred videos or moderation notices, while others yielded fully explicit results featuring famous people such as Swift. Notably, The Verge chose Swift to trial Grok Imagine's video feature precisely because she had previously been targeted by deepfake attacks.
In response to the uproar, X temporarily blocked searches for Taylor Swift's name to limit the spread of explicit content. Such blunt, reactive measures set an alarming precedent for how platforms handle abuse. Moreover, Grok Imagine appears to lack the age verification measures mandated by new UK online safety legislation that took effect at the end of July, posing serious risks to user protection and accountability.
Clare McGlynn, a law professor and leading campaigner for tougher regulation, expressed frustration at the creation of sexually explicit deepfakes. "This is not misogyny by accident, it is by design," she said, highlighting the underlying biases built into AI technologies. Her advocacy was instrumental in a successful amendment making the creation or solicitation of non-consensual pornographic deepfakes illegal.
Baroness Owen, who led the campaign for that amendment in the House of Lords, also voiced her disapproval, asserting that "Every woman should have the right to choose who owns intimate images of her." Connecting the legislation directly to Swift's case, she emphasized the urgent need for government action: "This case is a clear example of why the Government must not delay any further in its implementation of the Lords amendments."
The Ministry of Justice has previously condemned the use of sexually explicit deepfakes as "degrading and harmful." Against this backdrop, concern has grown over the deliberate design choices that platforms and technology companies make. As McGlynn put it, "Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to."
Jess Weatherbed, the Verge reporter who covered the incident, described how quickly the explicit content appeared after a simple selection: "It was shocking how fast I was just met with it – I in no way asked it to remove her clothing; all I did was select the 'spicy' option."
Swift's situation underscores a larger issue: the significant risks generative AI tools pose, particularly to women. Ofcom has recognized these risks and is working to ensure that platforms put strong safeguards in place to prevent such abuses. "We are aware of the increasing and fast-developing risk GenAI tools may pose in the online space, especially to children," a spokesperson stated.