The latest research from the Tow Center for Digital Journalism at the Columbia Journalism Review reveals a troubling phenomenon: popular AI search tools frequently provide incorrect or misleading information when answering questions about news. This finding is not only worrying in itself; it directly undermines public trust in news reporting while exposing publishers to losses in both traffic and revenue.

Researchers tested eight generative AI chatbots, including ChatGPT, Perplexity, Gemini, and Grok, asking them to identify excerpts from 200 recent news articles. The results show that more than 60% of the answers were wrong. The chatbots often fabricated headlines, failed to cite the original articles, or cited unauthorized copies. Even when they correctly identified the publisher, their links often pointed to dead URLs, syndicated reprints, or pages unrelated to the content.
More disappointingly, the chatbots rarely expressed uncertainty, instead delivering wrong answers with misplaced confidence. ChatGPT, for example, answered 134 of the 200 queries incorrectly but signaled doubt only 15 times. Even the paid versions, Perplexity Pro and Grok 3, were unsatisfactory, producing a higher number of wrong answers despite costing $20 and $40 per month, respectively.
On content citation, multiple chatbots failed to honor publishers' attempts to restrict them, and five of them ignored the widely adopted Robot Exclusion Protocol (robots.txt). Perplexity correctly quoted National Geographic articles even though the publisher had blocked its crawler, while ChatGPT cited paywalled USA Today content through unauthorized copies on Yahoo News.
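For context, the Robot Exclusion Protocol is simply a plain-text robots.txt file placed at a site's root that tells crawlers which paths they may fetch. A minimal illustrative sketch follows; GPTBot and PerplexityBot are the crawler user agents OpenAI and Perplexity have publicly documented, and the example site is hypothetical. Crucially, compliance is voluntary on the crawler's side, which is exactly the gap the study highlights.

    # robots.txt served at https://example.com/robots.txt (illustrative)
    # Block OpenAI's and Perplexity's documented crawlers site-wide
    User-agent: GPTBot
    Disallow: /

    User-agent: PerplexityBot
    Disallow: /

    # All other crawlers remain unrestricted
    User-agent: *
    Allow: /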
In addition, many chatbots directed users to reprinted articles on platforms such as AOL or Yahoo rather than to the original source, even when the publisher had signed a licensing agreement with the AI company. Perplexity Pro, for example, cited a syndicated version of a Texas Tribune article without proper attribution. Grok 3 and Gemini often invented URLs outright: 154 of Grok 3's 200 answers linked to error pages.
The study highlights a growing crisis for news organizations. More and more Americans use AI tools as a source of information, but unlike Google, chatbots do not drive traffic to websites; they summarize content without linking back, costing publishers advertising revenue. Danielle Coffey of the News Media Alliance warned that without control over crawlers, publishers cannot effectively “monetize valuable content or pay journalists’ salaries.”
When contacted by the research team, OpenAI and Microsoft defended their approaches but did not respond to the specific findings. OpenAI said it "respects publishers' preferences" and helps users "discover quality content," while Microsoft said it honors the robots.txt protocol. The researchers stress that faulty citation practices are a systemic problem, not a failing of individual tools, and they called on AI companies to improve transparency, accuracy, and respect for publishers' rights.
Key points:
The study found an error rate above 60% in the AI chatbots' answers, seriously undermining the credibility of news.
Several chatbots ignored publisher restrictions, citing unauthorized content and linking to wrong or broken URLs.
News organizations face a dual crisis of lost traffic and lost revenue as AI tools gradually replace traditional search engines.