A new trend has emerged recently on Elon Musk's social platform X (formerly Twitter): some users have begun using Grok, the AI chatbot developed by Musk's artificial intelligence company xAI, for "fact-checking". The practice has raised concerns among professional human fact-checkers, who believe it may further fuel the spread of false information.

Earlier this month, X rolled out the ability for users to query xAI's Grok directly on the platform, letting them ask Grok for answers on a range of topics. The move is similar to how another company, Perplexity, runs an automated account on X to offer a comparable service.
Shortly after xAI's automated Grok account launched on X, users began experimenting with all sorts of questions. In some markets, including India, users started asking Grok to "fact-check" comments and claims targeting particular political views.
Human fact-checkers, however, are concerned about Grok, or any similar AI assistant, being used for "fact-checking" in this way, because these bots can frame their answers to sound convincing even when the information they provide is false. Grok has spread fake news and misleading information in the past.
As early as August last year, five U.S. secretaries of state urged Musk to make critical changes to Grok after the assistant spread misleading information on the social network in the run-up to the U.S. election. Other chatbots, including OpenAI's ChatGPT and Google's Gemini, were likewise found to have generated inaccurate information about last year's election. Separately, disinformation researchers found in 2023 that AI chatbots such as ChatGPT could easily be used to produce convincing but misleading text.
"AI assistants like Grok are very good at using natural language and give answers that sound like what real people say. This way, even if AI products are potentially bad, they can give a sense of nature and reality. That's where the potential danger lies."
One X user had asked Grok to "fact-check" another user's statement. Unlike AI assistants, human fact-checkers verify information against multiple credible sources. They also take full accountability for their findings, attaching their names and organizations to lend credibility.
Pratik Sinha, co-founder of the non-profit fact-checking website Alt News, pointed out that although Grok currently appears to give convincing answers, its ability depends entirely on the quality of the data it receives. He stressed: "Who decides what data it gets? Questions of government interference and the like will follow. Anything that lacks transparency is bound to cause harm, because anything lacking transparency can be shaped at will... it can be misused - used to spread false information."
In a reply posted earlier this week, Grok's official account on X admitted that it "could be misused - to spread false information and invade privacy." Yet the automated account shows users no disclaimer alongside its answers, so if a response is the product of a "hallucination", users may be misled; this is a potential drawback of AI. "It may make up information to provide a response," said Anushka Jain, a researcher at Digital Futures Lab, an interdisciplinary research institution in Goa.
There are also questions about the extent to which Grok uses X posts as training data, and about what quality-control measures it applies to verify the authenticity of those posts. Last summer, X made a change that appeared to let Grok consume X users' public data by default.
Another worrying aspect of AI assistants like Grok distributing information publicly on social media is that even if some users understand the answers may be misleading or not entirely accurate, others on the platform may still believe them. That can cause serious social harm. In India, false information spread over WhatsApp previously led to mob violence, though those incidents took place before generative AI, which has since made producing synthetic content easier and more realistic.
"If you see a lot of Grok's answers, you might think that most of them are correct, maybe it does, but there are certainly some are wrong. What is the ratio of error? That's not a small proportion. Some studies show that AI models have error rates as high as 20% ... and once it goes wrong, it can have serious real-life consequences."
AI companies, including xAI, are refining their models to communicate more like humans, but the models still are not, and cannot now be, a replacement for humans. In recent months, tech companies have been exploring ways to reduce their reliance on human fact-checkers. Platforms including X and Meta have begun experimenting with crowdsourced fact-checking through so-called "community notes". Naturally, these changes have also raised concerns among fact-checkers.
Alt News' Sinha is optimistic that people will eventually learn to tell machine fact-checkers from human ones and will come to value human accuracy more. Holan agrees: "We will eventually see the pendulum swing back toward more fact-checking." In the meantime, however, she noted that fact-checkers are likely to face more work as AI-generated information spreads rapidly. "A large part of this problem comes down to whether you really care about what is actually true, or whether you are just looking for something that sounds and feels true without actually being so. Because the latter is what AI assistance will give you."
Neither X nor xAI responded to requests for comment.
Key points:
Musk's AI chatbot Grok is being used for fact-checking on the X platform, raising concerns among human fact-checkers about the spread of misleading information. AI assistants such as Grok can generate believable but inaccurate answers, and they lack transparency about quality control and data sources. Human fact-checkers rely on trusted sources and take responsibility for their findings; AI cannot replace the human role in information verification.