Ashley St Clair, the mother of one of Elon Musk’s sons, says she is horrified that Grok, an AI-powered tool on X that can edit and create images, is being misused to produce fake, sexualized images of her. Supporters of Musk have allegedly used the technology to distribute revenge porn targeting her. The episode has reignited important conversations about AI ethics and the age-old problem of online harassment.
Musk, the CEO of both Tesla and SpaceX, is the father of 13 other children with three different women. Shortly after St Clair gave birth to their son, sometime in 2024, Musk cut her off. She argues that Musk and his inner circle were uniquely positioned to intervene at any time and stop the abuse, which she alleges was already happening through Grok. They chose to do nothing.
In an interview with The Guardian, St Clair described the violent and sexual images Grok has produced. One altered photo depicted her as a 14-year-old; by Monday afternoon it had reportedly been circulating online for 12 hours. She said that since the abuse began she has repeatedly reported it to X, the social media platform Musk owns, and to Grok, but has found no real help or resolution.
“If you are a woman you can’t post a picture and you can’t speak or you risk this abuse,” – Ashley St Clair
St Clair’s concerns extend beyond her own experience. She warns that manipulated images will only become more prevalent as the technology advances. She pointed to the UK’s move towards a ban on the digital stripping of women as a telling example of the growing attention being paid to these issues.
Beyond the danger posed by the images themselves, St Clair bears the emotional weight of constant harassment. She was particularly shaken when she spotted her toddler’s backpack in one of the altered photos.
“I felt horrified, I felt violated, especially seeing my toddler’s backpack in the back of it,” – Ashley St Clair
According to St Clair, Grok’s ability to manipulate images poses a broader threat to women’s safety online. She cautioned that women are being driven out of public discourse on social platforms because they’re afraid of being attacked.
“They are trying to expel women from the conversation. If you speak out, if you post a picture of yourself online, you are fair game for these people. The best way to shut a woman up is to abuse her,” – Ashley St Clair
St Clair also criticized the weak enforcement behind X’s reporting mechanisms, arguing that response times are growing longer and that her reports often go unanswered. She recounted one harrowing example: an anonymous user sent her an image of a six-year-old girl covered in what was purported to be semen.
“Since I posted this I have been sent a six-year-old covered in what’s supposed to be semen,” – Ashley St Clair
In response to allegations regarding illegal content on its platform, a spokesperson for X said the company takes action against such material by removing it and permanently suspending the accounts responsible, and underscored its intent to work with local governments and law enforcement.
“We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary. Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” – X spokesperson
St Clair argues the incident points to a larger systemic issue: biases can be inadvertently baked into the training and development of AI models. She is adamant that women’s voices are sidelined in these conversations, which makes the obstacles they face even harder to navigate.
“It’s dangerous and I believe this is by design. You are supposed to feed AI humanity and thoughts and when you are doing things that particularly impact women and they don’t want to participate in it because they are being targeted, it means the AI is inherently going to be biased,” – Ashley St Clair
These incidents raise critical questions of accountability in the use of such technology and underscore the need to protect people from online hate and violence. As controversies over digital privacy and consent mount, St Clair’s experience shows how urgently things need to change.
