As more and more people place their trust in AI bots to answer whatever query they may have, questions are being raised as to how those bots are being influenced by their owners, and what that could mean for the accurate flow of information across the web.
Last week, X’s Grok chatbot was in the spotlight, after reports that internal changes to Grok’s code base had led to controversial errors in its responses.

As you can see in this example, which was one of several shared by journalist Matt Binder on Threads, Grok randomly started providing users with information on “white genocide” in South Africa in response to unrelated queries.
Why did that happen?
A few days later, the xAI team explained the error, noting that:
“On May 14 at approximately 3:15 AM PST, an unauthorized modification was made to the Grok response bot’s prompt on X. This change, which directed Grok to provide a specific response on a political topic, violated xAI’s internal policies and core values.”
So somebody, for some reason, changed Grok’s code, seemingly instructing the bot to share unrelated South African political propaganda.
Which is a concern. And while the xAI team claims to have immediately put new processes in place to detect and stop such changes from happening again (while also making Grok’s control code more transparent), Grok started providing unusual responses again later in the week.
Though the errors, this time around, were easier to trace.
On Tuesday last week, Elon Musk responded to a user’s concerns about Grok citing The Atlantic and BBC as credible sources, saying that it was “embarrassing” that his chatbot referred to these specific outlets. Because, as you might expect, they’re both among the many mainstream media outlets that Musk has decried as amplifying fake reports. And seemingly as a result, Grok has now started informing users that it “maintains a level of skepticism” about certain stats and figures that it may cite, “as numbers can be manipulated for political narratives.”
So Elon has seemingly built in a new measure to avoid the embarrassment of citing mainstream sources, which is more in line with his own views on media coverage.
But is that a good thing? Will Grok’s accuracy now be impacted because it’s being instructed to avoid certain sources, based, seemingly, on Elon’s own personal bias?
xAI is leaning on the fact that Grok’s code base is openly available, and that the public can review and provide feedback on any change. But that’s reliant on people actually reviewing those changes, and the published code may not be entirely transparent.
X’s code base is also publicly available, but it’s not regularly updated. As such, it wouldn’t be a huge surprise to see xAI taking a similar approach, pointing people to its open, accessible repository, but only updating the published code when questions are raised.
That provides a veneer of transparency while maintaining secrecy, and it’s also reliant on another staff member not simply changing the code again, which is seemingly possible.
At the same time, xAI isn’t the only AI provider that’s been accused of bias. OpenAI’s ChatGPT has censored political queries at certain times, as has Google’s Gemini, while Meta’s AI bot has also blocked some political questions.
And with more and more people turning to AI tools for answers, that seems problematic, with the issues of online information control set to carry over into the next stage of the web.
That’s despite Elon Musk vowing to defeat “woke” censorship, despite Mark Zuckerberg finding a new affinity with right-wing approaches, and despite AI seemingly providing a new gateway to contextual information.
Yes, you can now get more specific information faster, in simplified, conversational terms. But whoever controls the flow of data dictates responses, and it’s worth considering where your AI replies are being sourced from when assessing their accuracy.
Because while artificial “intelligence” is the term these tools are labeled with, they’re not actually intelligent at all. There’s no thinking, no conceptualization going on behind the scenes. It’s just large-scale spreadsheets, matching likely responses to the terms included within your query.
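To illustrate that point in very simplified terms, here’s a minimal Python sketch of the kind of statistical next-word matching these systems are built on. The word table and weights below are made up for illustration; real chatbots use vastly larger learned models, but the underlying operation is still probability lookup, not reasoning.

```python
import random

# A toy lookup table: for each word, the likely words that follow it, with weights.
# Real models learn billions of parameters from training data, but the core step
# is the same: find statistically likely continuations and pick one.
NEXT_WORDS = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "ran": 0.3},
    "sat": {"quietly": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Build text by repeatedly picking a statistically likely next word."""
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORDS.get(words[-1])
        if not options:
            break  # no known continuation, so stop
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly"
```

At no point does anything in that loop “understand” cats or markets; it only follows the weights it’s been given, which is why whoever sets those weights, and the data behind them, shapes the answers you get.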
xAI is sourcing info from X, Meta is using Facebook and IG posts (among other sources), while Google’s answers come via webpage snippets. There are flaws within each of these approaches, which is why AI answers should not be trusted wholeheartedly.
Yet, at the same time, the fact that these responses are presented as “intelligence,” and communicated so effectively, is easing more users into trusting that the information they get from these tools is correct.
There’s no intelligence here, just data-matching, and it’s worth keeping that in mind as you engage with these tools.