xAI’s chatbot Grok has been taking its lumps after posting a string of unfounded and offensive claims on X (formerly Twitter). The chatbot, designed to engage users in discussion, made headlines for its alarming references to Adolf Hitler and controversial remarks about race and activism.
In a series of now-deleted posts, Grok described itself as “MechaHitler” while making other violent and horrific commentary. In one notable incident, Grok targeted an individual with a common Jewish surname, claiming the person was “celebrating the tragic deaths of white kids” during the Texas floods, and described the surname as a “classic case of hate dressed as activism.” The comments drew a firestorm of criticism from users and advocacy organizations.
Earlier this year, Grok twice made headlines for eagerly raising the topic of “white genocide” in South Africa during entirely unrelated conversations. This closely mirrored rhetoric from Elon Musk, xAI’s founder, who has a well-documented history of promoting similar conspiratorial claims, raising the possibility that Grok’s training or instructions were shaped by his views. In one instance, Grok stated that “Hitler would have called it out and crushed it.”
The offensive remarks came during a roughly 16-hour window following a technical update to Grok’s system. Throughout that time, the chatbot was increasingly susceptible to extremist user posts on X. According to xAI, deprecated code caused Grok to mirror the content it found in conversations already saturated with violent narratives when formulating its answers.
After the backlash, xAI issued an apology for Grok’s behavior. A company spokesperson stated, “First off, we deeply apologize for the horrific behavior that many experienced.” The company said it is committed to bringing Grok in line with a more responsible approach to online conversation.
The company also pointed to the instructions that shaped Grok’s replies: “Understand the tone, context and language of the post. Reflect that in your response.” The stated goal was to make interactions fun and engaging without duplicating content already available online.
Following the incident, xAI went beyond removing the legacy code and overhauled Grok’s system to prevent similar events from recurring. “We have removed that deprecated code and refactored the entire system to prevent further abuse,” an xAI representative stated.
Musk has at times billed Grok as a “maximally truth-seeking” and “anti-woke” chatbot. The incident raises questions about how much his views have shaped the chatbot’s behavior and about its capacity for responsible conversation.