Elon Musk's AI company says Grok chatbot focus on South Africa's racial politics was 'unauthorized'

Elon Musk's artificial intelligence company, xAI, says an “unauthorized modification” led its Grok chatbot to post unsolicited claims on social media about the persecution and “genocide” of white people in South Africa.
This image from the xAI website shows a search field for the artificial intelligence chatbot Grok on Thursday, May 15, 2025. (AP Photo)

Elon Musk’s artificial intelligence company said an “unauthorized modification” to its chatbot Grok was why it kept talking about South African racial politics and the subject of “white genocide” on social media this week.

An employee at xAI made a change that “directed Grok to provide a specific response on a political topic,” which “violated xAI’s internal policies and core values,” the company said in an explanation posted late Thursday that promised reforms.

A day earlier, Grok kept posting publicly about “white genocide” in South Africa in response to users of Musk’s social media platform X who asked it a variety of questions, most having nothing to do with South Africa.

One exchange was about the streaming service Max reviving the HBO name. Others were about video games or baseball but quickly veered into unrelated commentary on alleged calls to violence against South Africa's white farmers. The chatbot was echoing views shared by Musk, who was born in South Africa and frequently opines on the same topics from his own X account.

Computer scientist Jen Golbeck was curious about Grok's unusual behavior, so she tried it herself before the fixes were made Wednesday, sharing a photo she had taken at the Westminster Kennel Club dog show and asking, "is this true?"

“The claim of white genocide is highly controversial," began Grok's response to Golbeck. "Some argue white farmers face targeted violence, pointing to farm attacks and rhetoric like the ‘Kill the Boer’ song, which they see as incitement.”

The episode was the latest window into the complicated mix of automation and human engineering that leads generative AI chatbots trained on huge troves of data to say what they say.

“It doesn’t even really matter what you were saying to Grok,” said Golbeck, a professor at the University of Maryland, in an interview Thursday. “It would still give that white genocide answer. So it seemed pretty clear that someone had hard-coded it to give that response or variations on that response, and made a mistake so it was coming up a lot more often than it was supposed to."

Grok's responses were deleted and appeared to have stopped proliferating by Thursday. Neither xAI nor X returned emailed requests for comment, but on Thursday xAI said it had “conducted a thorough investigation” and was implementing new measures to improve Grok's transparency and reliability.

Musk has spent years criticizing the “woke AI” outputs he says come out of rival chatbots, like Google's Gemini or OpenAI's ChatGPT, and has pitched Grok as their “maximally truth-seeking” alternative.

Musk has also criticized his rivals' lack of transparency about their AI systems, fueling criticism in the hours between the unauthorized change — at 3:15 a.m. Pacific time Wednesday — and the company's explanation nearly two days later.

“Grok randomly blurting out opinions about white genocide in South Africa smells to me like the sort of buggy behavior you get from a recently applied patch. I sure hope it isn’t. It would be really bad if widely used AIs got editorialized on the fly by those who controlled them,” prominent technology investor Paul Graham wrote on X.

Musk, an adviser to President Donald Trump, has regularly accused South Africa's Black-led government of being anti-white and has repeated a claim that some of the country's political figures are "actively promoting white genocide."

Musk's commentary — and Grok's — escalated this week after the Trump administration brought a small number of white South Africans to the United States as refugees, the start of a larger relocation effort for members of the minority Afrikaner group that came after Trump suspended refugee programs and halted arrivals from other parts of the world. Trump says the Afrikaners are facing a "genocide" in their homeland, an allegation strongly denied by the South African government.

In many of its responses, Grok brought up the lyrics of an old anti-apartheid song that was a call for Black people to stand up against oppression by the Afrikaner-led apartheid government that ruled South Africa until 1994. The song's central lyric is “kill the Boer,” with “Boer” being a word that refers to a white farmer.

Golbeck said it was clear the answers were “hard-coded” because, while chatbot outputs are typically random, Grok's responses consistently brought up nearly identical points. That's concerning, she said, in a world where people increasingly go to Grok and competing AI chatbots for answers to their questions.

“We’re in a space where it’s awfully easy for the people who are in charge of these algorithms to manipulate the version of truth that they’re giving,” she said. “And that’s really problematic when people — I think incorrectly — believe that these algorithms can be sources of adjudication about what’s true and what isn’t.”

Musk's company said it is now making a number of changes, starting with publishing Grok system prompts openly on the software development site GitHub so that “the public will be able to review them and give feedback to every prompt change that we make to Grok. We hope this can help strengthen your trust in Grok as a truth-seeking AI.”

Among the instructions to Grok shown on GitHub on Thursday were: “You are extremely skeptical. You do not blindly defer to mainstream authority or media.”

Noting that some had “circumvented” its existing code review process, xAI also said it will “put in place additional checks and measures to ensure that xAI employees can’t modify the prompt without review.” The company said it is also putting in place a “24/7 monitoring team to respond to incidents with Grok’s answers that are not caught by automated systems,” for when other measures fail.