The field of generative artificial intelligence has undergone significant transformations in recent years. The non-response rate of top AI systems, including ChatGPT, Gemini, and Meta’s new AI, fell sharply, from 31% in August 2024 to 0% by August 2025. But this seemingly welcome improvement comes with a significant new threat: over the same period, the probability that these systems would generate disinformation nearly doubled, from 18% to 35%. Against this backdrop, more AI workers are speaking out against what they see as the unethical practices of generative AI, and many are urging their friends and families to stop using these technologies unquestioningly.
Krista Pawloski is an AI worker with a deep interest in data analysis. After a project she worked on raised ethical concerns, her relationship with artificial intelligence fundamentally changed. Having once classified tweets as racist or not racist, she now wrestles with the harms her work might cause, and that personal struggle informs her advice: she urges her family to stop using all generative AI products. Pawloski’s worries reflect broader patterns in the emerging AI sector, where a growing number of professionals are beginning to ask ethical questions about the work they do.
Ethical Concerns Among AI Workers
As Krista Pawloski, a generative AI and machine learning practitioner, explains, new AI tools can have an enormous long-term impact. “What is the basis for your data?” she asks. “Is this model really founded on copyright infringement? Were workers paid for their creative labor?” Questions like these illustrate the importance of transparency in AI development and deployment. Pawloski is hopeful but hesitant: in her view, we are just starting to scratch the surface of these critical questions.
“We are just starting to ask those questions, so in most cases the general public does not have access to the truth, but just like the textile industry, if we keep asking and pushing, change is possible.” – Krista Pawloski.
Her analogy to the textile industry underscores an important point: awareness and scrutiny can lead to positive change. Just as consumers once remained unaware of unethical practices in clothing manufacturing, many today may be similarly oblivious to the potential pitfalls of generative AI.
Pawloski’s advocacy reached a wider audience when she and her colleagues presented at the Michigan Association of School Boards spring conference in May. In those conversations, they explored the ethical implications of generative AI, both in education and beyond. Her concerns about tools like ChatGPT, she says, are precisely why she does not let her teenage daughter use them. She urges a focus on teaching human critical thinking skills before immersing people in such algorithmic environments.
Distrust Among Industry Professionals
Brook Hansen, who has worked with data in various forms since 2010, shares these worries about the generative AI boom. Hansen has helped train many of Silicon Valley’s best-known AI models, yet today says the companies developing these technologies cannot be trusted.
“If workers aren’t equipped with the information, resources and time we need, how can the outcomes possibly be safe, accurate or ethical? For me, that gap between what’s expected of us and what we’re actually given to do the job is a clear sign that companies are prioritizing speed and profit over responsibility and quality.” – Brook Hansen.
Hansen’s perspective is emblematic of a broader trend in the workforce: workers are tired of being set up to fail by impossible demands and a lack of support. The industry’s focus on speed to deployment directly undermines the quality and ethical standards needed to deploy AI responsibly.
One Google AI rater has taken similar steps at home to steer her children away from the dangers generative AI might bring. She does not allow her 10-year-old daughter to use chatbots, reflecting her belief that young users must first learn critical thinking skills before navigating such tools.
“She has to learn critical thinking skills first or she won’t be able to tell if the output is any good.” – Google AI rater.
This sentiment resonates with many parents and educators who emphasize the importance of equipping younger generations with analytical capabilities before introducing them to complex technologies.
The Reliability Dilemma
Even AI raters complain about the reliability of generative AI responses. As one rater remarked, these systems at times handle sensitive topics in troublingly inconsistent ways.
“I asked it about the history of the Palestinian people, and it wouldn’t give me an answer no matter how I rephrased the question.” – Google AI rater.
Yet when she probed Israel’s own violent history, the rater said, the same chatbot supplied all sorts of details. The gap highlights how bias in AI training data can affect users looking for reliable information.
Another rater expressed skepticism about trusting any information presented by these systems:
“I wouldn’t trust any facts [the bot] offers up without checking them myself – it’s just not reliable.” – Google AI rater.
These experiences deepen an already credible fear that generative AI outputs can be factually wrong. As these technologies continue to develop, developers must work to address the biases these systems can carry, and users have a responsibility to cross-check the information they are given.
An AI tutor, who has worked with various platforms including Gemini and ChatGPT, humorously remarked on a common frustration:
“We joke that [chatbots] would be great if we could get them to stop lying.” – AI tutor.
The joke belies a more serious frustration with the capabilities of today’s generative AI. As more users awaken to these shortcomings, companies face a growing incentive to be more transparent and ethical in their development processes.
