Over the last few years, a troubling trend has emerged: so-called “nudify” websites, which use freely available generative AI to produce nonconsensual deepfake pornography, have proliferated rapidly. Since first appearing in 2017 and surging in 2018, these platforms have raised widespread alarm among legislators, victims, and advocacy organizations. As the problem grows, activists have turned to litigation in an effort to hold tech companies accountable for their role in this sector.
Minnesota State Senator Erin Maye Quade is doing something about it. She is backing state legislation that would fine tech companies $500,000 each time they assist in creating a nonconsensual explicit deepfake image, an effort that reflects growing recognition of the need for accountability as AI capabilities rapidly evolve.
Mantzarlis, for his part, reports that he continues to see nudify-related advertisements on Meta’s platforms, which raises questions about how effective existing measures are at addressing the problem. Six months on, the picture has not improved. Last year, San Francisco became the first city to file a lawsuit against 16 companies associated with nudify applications, and in June the city attorney’s office succeeded in shutting down ten websites devoted to producing nonconsensual deepfake pornography.
Briver LLC, one of the companies named in the lawsuit, ultimately reached a settlement with the city, agreeing to pay $100,000 in civil penalties. San Francisco’s swift action is welcome, but it also underscores how much harm these platforms had already caused before local governments intervened.
Deepfake-creation platforms, including high-volume sites like DeepSwap, have surged in popularity over the past three years, driven by rapid advances in generative AI. One of the most notorious nudify sites, MrDeepFakes, abruptly went dark in May after its primary operator was publicly identified. Researchers also found that many of the people depicted in supposedly “celebrity” deepfakes in fact had little or no online presence at all, a finding that raises grave questions about consent and privacy.
Victims such as Jessica Guistolise, Molly Kelley, and Megan Hurley have shown remarkable courage in speaking publicly about their experiences with AI-generated deepfake images featuring their faces. Guistolise described the profound impact of discovering the images:
“The first time I saw the actual images, I think something inside me shifted, like fundamentally changed.” – Jessica Guistolise
Law professor Ari Ezra Waldman has highlighted the emotional trauma these technologies inflict, emphasizing that victims experience deep psychological harm that can lead to serious mental health struggles.
“I was not enjoying life at all like this.” – Molly Kelley
The White House has also recognized the need for action against nonconsensual deepfakes. Its AI Action Plan specifically recommends that states harmonize their laws with federal guidance, an alignment intended to increase accountability for technology companies involved in these harms. In July, President Trump signed executive orders to put many of these changes in motion.
“Everyone is subject to being objectified or pornographied by everyone else,” – Ari Ezra Waldman
Maye Quade emphasized that a federal approach could more effectively tackle the cross-border nature of many of the companies involved.
Conversations about taking down nudify sites have flared up recently. At the same time, platforms such as Discord have become popular new venues for these activities following the shutdown of MrDeepFakes, a shift that reflects the resourcefulness of these communities and the continuing challenges facing law enforcement and regulators.
“We just haven’t grappled with the emergence of AI technology in the same way.” – Erin Maye Quade
Advocates have been vocal in demanding tougher legislation, calling for greater accountability both for the people who intentionally abuse the technology and for the companies that give them the tools to do so. As Maye Quade put it:
“This is why I think a federal response is more appropriate,” – Erin Maye Quade
AI-generated content has also harmed families through misinformation spread on social media, and its influence does not stop at images.
“This is not an issue that will resolve itself. We need stronger laws to ensure accountability — not only for the individuals who misuse this technology, but also for the companies that enable its use on their platforms.” – Jenny
The mental health implications cannot be overstated. Kelley described experiencing health issues linked to stress caused by her situation:
“I had a fear that my body would not ‘make any insulin,’ due to cortisol brought on by stress.” – Molly Kelley
As society grapples with these complex issues, it becomes increasingly clear that the impact of AI-generated content extends far beyond mere images; it touches on fundamental questions of consent, privacy, and human dignity.
