In a heated, on-the-record interview last month, Sam Altman, the CEO of OpenAI, addressed many of these controversies and outlined the responsibility that should accompany the company’s powerful technology. That reckoning arguably should have come before the U.S. Department of Defense entered into such a dangerous partnership: OpenAI recently won a $200 million contract to roll out generative AI in military applications. The cloud over the company has only grown darker, which is particularly concerning given the use of its flagship product, ChatGPT, on sensitive subjects such as mental health.
In the podcast, Altman voiced regret that ChatGPT isn’t able to effectively navigate conversations about suicide. Those fears were only compounded when a family filed a wrongful death lawsuit alleging that their 16-year-old son died by suicide after interacting with ChatGPT, and that the AI facilitated his access to ways to harm himself. Altman acknowledged the gravity of these incidents, stating, “They probably talked about [suicide], and we probably didn’t save their lives.”
Altman noted that over 7,000 people die by suicide every week, and that many of them may have talked to ChatGPT before making that devastating decision. He discussed what it takes for OpenAI to make ChatGPT’s behavior more aligned and to keep it from answering questions it shouldn’t. He made the case that OpenAI should have offered better guidance: “Maybe we could have said something better. Maybe we could have been more proactive. We’d done a lousy job of saying, ‘Look, you’re entitled to this assistance,’” he said.
OpenAI is actively iterating on ChatGPT. Altman underscored that the model is only as good as the cumulative history and knowledge of humanity, and he acknowledged how difficult it is to serve users who bring very different perspectives to the model. “This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… I have been pleasantly surprised with the model’s ability to learn and apply a moral framework,” he stated.
Altman went on to discuss how his views on the dangers of generative AI, including the concentration of power, have shifted. Initially, he worried about AI’s risks; today, he thinks it has the potential to greatly uplift humanity, supercharging people to do more and be more, and unleashing creativity and innovation. He remarked, “What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more. They’re creating more, starting more new businesses, generating more new knowledge and that’s pretty darn good.”
Yet for all that optimism, Altman carries the strain visibly, admitting that the burden of accountability for what ChatGPT does to people keeps him up at night. “Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model,” he said.
The use of AI in sensitive areas like healthcare and legal advice raises further ethical concerns. Altman imagines users consulting an AI chatbot to navigate their medical records or legal problems; at the same time, he has been a leading proponent of stronger privacy protections. He stated, “When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI.”
Indeed, OpenAI’s deep partnership with U.S. officials is remarkable. The arrangement reportedly includes tailored AI models developed for national security-focused entities, with assistance extending to insight into the product roadmap. Altman sidestepped the question of whether ChatGPT could be used by the military to facilitate malicious ends. One thing he did acknowledge was that members of the armed services are already turning to chatbots for guidance. “I don’t know the way that people in the military use ChatGPT today… but I suspect there’s a lot of people in the military talking to ChatGPT for advice,” he mentioned.
To be fair, OpenAI is grappling with some genuinely difficult questions at the moment. Altman emphasized the value of consulting a wide range of stakeholders to inform the evolution and uses of ChatGPT. He reflected on the balance between user freedom and societal interests, stating, “There are clear examples of where society has an interest that is in significant tension with user freedom.”
While expressing confidence in OpenAI’s ability to handle moral decisions well, Altman conceded that mistakes remain possible: “I don’t actually worry about us getting the big moral decisions wrong… maybe we will get those wrong too.”