A serious problem has recently come to light at Perplexity, the closely watched AI search tool: its cited sources include a large amount of low-quality, and sometimes outright wrong, AI-generated content from unreliable blogs and LinkedIn posts. The discovery raises concerns about the reliability of AI search engines and has dealt a major blow to Perplexity's reputation. This article analyzes the dilemma Perplexity faces and what the incident means for the AI field.
New reports reveal that the high-profile, well-funded AI search tool Perplexity is citing low-quality, and in some cases faulty, AI-generated spam from unreliable blogs and LinkedIn posts.

GPTZero, a startup that specializes in detecting AI-generated content, recently conducted an in-depth investigation of Perplexity. The company's CEO, Edward Tian, wrote in a blog post earlier this month that he had noticed "more and more of the sources Perplexity links to are automatically generated by AI."
He then examined how Perplexity's AI reuses this information and found that in some cases it even pulled outdated and incorrect information from these AI-generated sources. In other words, it is an AI-driven cycle of misinformation, in which one AI's mistakes and fabrications find their way into Perplexity's AI-generated answers.
For example, for the query "cultural festivals in Kyoto, Japan," Perplexity produced a seemingly coherent list of cultural attractions in Japanese cities, but it cited only one source: an obscure blog post published to LinkedIn in November 2023 that was likely itself AI-generated. That is a far cry from the "news organizations, academic papers and established blogs" Perplexity claims to draw on.
That's a bad look for an already troubled startup that claims to be "revolutionizing the way you discover information" by delivering "accurate knowledge" drawn from "the latest information" and "trusted sources."
Highlights:
Perplexity has been shown to cite faulty, AI-generated spam from questionable blogs and LinkedIn posts.
GPTZero found that a growing share of the sources Perplexity links to are AI-generated, and that Perplexity sometimes reuses outdated and incorrect information from them.
Perplexity claims its answers come only from "reliable sources," but what actually matters is whether its algorithms can reliably tell good sources from bad ones.
The Perplexity incident is another reminder that, while AI technology is advancing rapidly, its reliability and safety still need work. Rigorous review and screening of AI-generated content is crucial to keep AI from becoming a vehicle for spreading misinformation. Going forward, AI search engines will need more complete quality-control systems if they are to provide users with genuinely reliable information.
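To make that recommendation concrete, here is a minimal Python sketch of the kind of source-vetting step such a quality-control system might include before a retrieved page is allowed to be cited. It is purely illustrative: the Source record, the ai_likelihood score (which would come from an external detector such as GPTZero's), the TRUSTED_DOMAINS list, and both thresholds are hypothetical assumptions, not anything Perplexity or GPTZero is known to use.

```python
# Hypothetical source-vetting step for an AI search pipeline.
# All names, thresholds, and heuristics below are illustrative assumptions.
from dataclasses import dataclass
from urllib.parse import urlparse


@dataclass
class Source:
    url: str
    text: str
    ai_likelihood: float  # 0.0-1.0, e.g. from an external AI-content detector


# Domains treated as higher-trust; a real system would maintain a much
# richer reputation model than a static allowlist.
TRUSTED_DOMAINS = {"nature.com", "reuters.com", "apnews.com"}

AI_LIKELIHOOD_THRESHOLD = 0.8  # reject sources flagged as likely AI-generated
MIN_TEXT_LENGTH = 300          # reject thin pages with little real content


def is_citable(source: Source) -> bool:
    """Decide whether a retrieved source is safe to cite in an answer."""
    domain = urlparse(source.url).netloc.removeprefix("www.")
    if domain in TRUSTED_DOMAINS:
        # Known-good outlets pass regardless of the detector score.
        return True
    if source.ai_likelihood >= AI_LIKELIHOOD_THRESHOLD:
        # Likely AI-generated: exclude it to avoid the misinformation loop.
        return False
    return len(source.text) >= MIN_TEXT_LENGTH


if __name__ == "__main__":
    candidates = [
        Source("https://www.reuters.com/some-story", "full article text " * 40, 0.2),
        Source("https://obscure-blog.example.com/kyoto", "short AI filler", 0.95),
    ]
    for s in candidates:
        if is_citable(s):
            print("citable:", s.url)
```

The deliberately conservative design point here is that an unknown domain must clear both the AI-detection threshold and a minimum-substance check before it can be cited; a single obscure LinkedIn post, like the one in the Kyoto example above, would fail this filter.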