Arsenii Alenichev, a PhD researcher at the Institute of Tropical Medicine in Antwerp, has sounded an early alarm about a new wave of AI-generated imagery depicting dystopian extreme poverty, now being adopted by NGOs and others. He has collected more than 100 such images, and together they demonstrate a deeply concerning reality: digital technology, while powerful, can reinforce and magnify harmful racist and sexist stereotypes across global health communications.
Alenichev’s research further highlights a troubling trend in which prejudiced, AI-produced imagery permeates social media campaigns against food scarcity and sexual assault. These illustrations nearly always depict overtly racialized scenes, perpetuating harmful, deficit-based stereotypes about entire continents such as Africa and countries such as India. Captions accompanying these images include phrases like “Photorealistic kid in refugee camp” and “Caucasian white volunteer provides medical consultation to young black children in African village.”
This past year, the Dutch branch of the UK-based charity Plan International launched a highly controversial video campaign that included AI-generated images of an abused girl, an older man, and a pregnant teen. The campaign has prompted important conversations about the ethical implications of employing synthetic visuals in advocacy efforts. The United Nations, too, recently posted a YouTube video containing AI-generated re-enactments of sexual violence in conflict zones, including the haunting testimony of a Burundian woman remembering being raped during the civil war in 1993.
Alenichev is blunt about why such images should never circulate. He remarked,
“They are so racialised. They should never even let those be published because it’s like the worst stereotypes about Africa, or India, or you name it.”
As these AI-created images rapidly proliferate, important questions arise about how the trend may affect public perception. Alenichev cautions that as the visuals spread online, they may feed back into the training data of future AI models, further entrenching the biases those models already carry. Their unfortunate effect, it seems, has been to incite more intolerance rather than spark the deeper dialogue needed to address poverty and violence.
Many popular stock photo sites, including Adobe Stock and Freepik, now host a growing number of AI-generated images depicting extreme poverty. This trend has provoked the ire of industry experts, including Kate Kardol, an NGO communications consultant. In her remarks, she lamented that these images have revived the industry’s long-running conversation about “poverty porn.”
“It saddens me that the fight for more ethical representation of people experiencing poverty now extends to the unreal,” Kardol stated.
Freepik CEO Joaquín Abela responded to these concerns head-on, arguing that final responsibility lies with media consumers, not the platforms on which the images are shared. He stated,
“It’s like trying to dry the ocean. We make an effort, but in reality, if customers worldwide want images a certain way, there is absolutely nothing that anyone can do.”
Noah Arnold, another industry observer, noted that the ease of using AI imagery without needing consent makes it appealing to organizations. He pointed out,
“Supposedly, it’s easier to take ready-made AI visuals that come without consent, because it’s not real people.”
This trend occurs against the backdrop of decades of calls for ethical imagery and dignified storytelling in anti-poverty and anti-violence work. The rise of AI-generated images represents an alarming shift, rushing synthetic representations to the forefront of the very campaigns that have spent years working toward authentic storytelling.
Beyond the ethical debate, some analysts point to outside forces behind the trend. Arnold noted that US funding cuts have devastated NGO coffers; consequently, organizations are increasingly looking for more cost-effective options such as AI visuals.
Arnold sees the same calculus at work across the sector:

“It is quite clear that various organisations are starting to consider synthetic images instead of real photography, because it’s cheap and you don’t need to bother with consent and everything.”

The United Nations has acknowledged that this debate is continuing inside its own organization as well. A spokesperson confirmed that a video featuring AI-generated content had been taken down, citing improper use of AI technology and risks to information integrity. They maintained that,
“The United Nations remains steadfast in its commitment to support victims of conflict-related sexual violence, including through innovation and creative advocacy.”