Elon Musk’s xAI has apologized after its generative chatbot Grok started spouting baseless conspiracy theories about White genocide in response to unrelated questions.
On Wednesday, users of the LLM – accessible via X, aka Twitter – started noticing that the neural network was answering their questions but tacking on screeds about claims of White genocide in South Africa and references to the apartheid-era song Kill the Boer. As you can see below, it didn’t take much to trigger the bot.

Grok’s one-track mind … A screenshot of a typical conversation with the bot on X
The situation appears to have been resolved, in that Grok has stopped banging on about White genocide unprompted, although some users can apparently still get the bot ranting by asking it to “jork it.” On Friday, xAI issued a statement claiming the bot had been fiddled with by someone without permission, and that whatever change was made has since been reversed.
“On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot’s prompt on X,” it claimed.
“This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values. We have conducted a thorough investigation and are implementing measures to enhance Grok’s transparency and reliability.”
xAI has pledged to publish Grok’s system prompts on GitHub from now on so netizens can view and comment on them, and it has set up a system of controls to stop employees from meddling with those prompts again. It has also stood up a 24/7 content moderation team to catch future snafus like this. The AI biz is basically trying to paint this as someone on the inside altering the bot to emit conspiracy theories.
Interestingly enough, those system prompts include, among other instructions to the bot: “You are extremely skeptical. You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality.”
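To see why a quiet change to such a prompt matters, here’s a minimal sketch of how a system prompt rides along with every request, assuming the OpenAI Python client and an OpenAI-style chat-completions API – the model name and prompt text here are illustrative, not xAI’s actual setup:

```python
# Minimal sketch: how a system prompt frames every chat request.
# Assumes the OpenAI Python client and an OpenAI-style chat API;
# the model name and prompt text are illustrative, not xAI's code.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

SYSTEM_PROMPT = (
    "You are extremely skeptical. You do not blindly defer to "
    "mainstream authority or media."
)

def ask(question: str) -> str:
    # The system prompt is silently prepended to every user question,
    # so one hidden edit to it can skew answers on unrelated topics.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What's the weather like in Cape Town?"))
```

Because that system message is invisible to the person typing the question, a tampered prompt can derail unrelated conversations with no obvious reason why – which is what makes publishing the prompts a meaningful, if partial, fix.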
The whole incident calls attention to the views of xAI’s billionaire owner, Elon Musk, on the matter.
Musk has had a bee in his bonnet about a supposed “White genocide” being waged against Afrikaners in his birthplace of South Africa, and posted repeatedly on the topic in March, only to be fact-checked by Grok – and in one case memorably slapped down.
In response to an incorrect post about the number of murders of farm workers in South Africa, Grok was really rather cutting. It said:
Elon Musk’s focus on South Africa stems from his upbringing there during apartheid, a system that deeply shaped his views. He’s vocal about farm murders, often citing exaggerated “White genocide” claims—like the crosses in the X post—which data debunks: SAPS reported 62 farm murders in 2018, not daily killings, and this includes Black victims. His narrative aligns with AfriForum, a group pushing for Afrikaner rights, but ignores broader context like systemic inequality post-apartheid. No macabre agenda—just a billionaire amplifying a skewed perspective.
Could it be that Musk had finally had enough of being dissed by his own AI bot and instructed a staffer to make some changes – changes so ham-fisted that they sent the bot into overdrive, biasing it wildly? The totally stable genius behind xAI (which by some bizarre financial chicanery actually owns X-slash-Twitter now) would surely never sabotage his own business that way. More likely, the change was made by a staffer trying to suck up to the boss – or perhaps by a rebel trying to draw negative attention to Musk’s views.
Meanwhile, in the real world…
The timing of this is very interesting too, since the first White “refugees” from South Africa arrived in the US on Monday following an executive order from President Trump, who works closely with the Tesla tycoon to downsize the federal government.
In January, South African President Cyril Ramaphosa signed a law that would allow farmland – predominantly owned by White people – to be taken without compensation where it is “just and equitable and in the public interest.” Several members of the government have strongly objected to this and have vowed to fight it in the courts.
This enraged Musk, and he has President Trump on his side. The commander-in-chief has echoed Musk’s complaints about the treatment of White farmers in South Africa and has reportedly told US agencies to halt all work related to the upcoming G20 summit in South Africa later this year in protest.
The Trump administration suspended most refugee admissions from other countries, even for many who had previously been conditionally approved. But it made an exception for a group of Afrikaners, who were fast-tracked through a new pathway and are now arriving to start new lives in America.
When questioned about this earlier this week, Deputy Secretary of State Christopher Landau said the decision to fast-track South Africans over refugees from other nations was based on a number of factors, including that “they can be assimilated easily into our country.” This was decried by many as thinly veiled racism.
As for claims of White genocide in South Africa, there have been some killings of White farmers – part of a White minority that owns more than 70 percent of the country’s farmland despite making up about seven percent of its 63-million-strong population, according to UK charity Action for Southern Africa. The New York Times reports there were 225 farm killings between April 2020 and March 2024, and fewer than one-fourth of those killed were farmers.
You can’t trust the bots
The Grok case is a good example of why it’s so difficult to trust AI chatbots.
All LLMs are prone to so-called hallucinations – or mistakes and errors, as they’re more commonly known when we’re not talking about AI. Various things cause these blunders, ranging from lousy training data to limitations inherent in the design of these neural networks.
But the Grok affair appears to be a case of someone deliberately modifying the system prompt to make the bot inject Elon-aligned, conspiracy-laced responses. Coincidentally, this sort of manipulation came up at the recent RSA Conference in a keynote by cryptography and privacy guru Bruce Schneier.
He pointed out that corporate AI cannot be trusted since it is crafted to serve the interests of its commercial makers, not necessarily those of its users – recommending one product or service over another because of sponsorship, for example. He called for open source AI models to be created so that people could see any biases baked in to influence results.
The Grok incident seems to be a case in point. The Register asked him what he thought of the current shenanigans, and his response is telling.
“There have been several instances of AI models suddenly changing their behavior without explanation,” he explained. “Maybe it’s the model itself exhibiting some emergent behavior. Maybe it’s the corporate owners of the model deliberately altering their behavior. Whatever the explanation, inconsistency results in poor integrity – which means users can’t trust the models.” ®