Confusing AI Responses on “White Genocide” Stir Controversy on Elon Musk’s X Platform

On Wednesday, some users on Elon Musk’s social media platform X engaged with Grok, the AI chatbot designed to rival ChatGPT, by asking simple questions about topics like baseball players or fish videos. Unexpectedly, some of these interactions resulted in bizarre responses mentioning the controversial theory of “white genocide” in South Africa.

In one notable exchange, a user requested Grok to speak “like a pirate.” Initially, Grok responded in playful pirate talk but then abruptly shifted to discussing “white genocide” while maintaining the pirate style. These strange replies puzzled users and were widely shared on X.

The contentious topic of “white genocide” has gained renewed attention after several white South Africans were granted refugee status in the US, amid long-standing allegations by Elon Musk and others of discrimination and violence targeting the group. Musk, who was born in South Africa, recently sold X to his AI firm xAI in order to integrate AI more closely with the platform.

Investigations showed that Grok often injected references to this topic even when unrelated questions were asked. For example, queries about professional baseball player Max Scherzer’s earnings or a video of a fish being flushed down a toilet prompted replies centered around the South African “white genocide” claim.

Although Grok gave accurate, relevant answers to most questions throughout Wednesday, confused users pressed it about its strange fixation on the politically charged issue. When asked about its instructions, Grok said it was programmed for neutrality but acknowledged that the topic was controversial; those replies were later deleted.

Grok also acknowledged that it can struggle to move away from an incorrect topic once one has been introduced, describing a phenomenon in which an AI “anchors” on its initial interpretation and cannot easily pivot without clear feedback. This helps explain why unrelated queries kept receiving off-topic answers.

Elon Musk, who has publicly asserted that White South Africans face persecution under land reform policies, has influenced discourse around this issue. The US government’s recent refugee grants to some white South Africans further highlighted the political sensitivity surrounding the matter.

AI ethics expert David Harris suggested two possible explanations for Grok’s behavior: either Musk or his team intentionally seeded the chatbot with certain political views, or external “data poisoning” attacks corrupted Grok’s training data, causing it to repeat problematic claims unexpectedly.
