Recently, Patrick Soon-Shiong, the billionaire owner of the Los Angeles Times, posted a letter to readers announcing that the newspaper will use artificial intelligence to add a "Voices" label to some articles. The initiative aims to help readers more clearly identify an article's stance and perspective. In the letter, Soon-Shiong emphasized that the label will apply not only to traditional opinion columns but also to news commentary, criticism, and reviews. He argued that by presenting a diversity of perspectives, the paper can better fulfill its journalistic mission and help readers more fully understand the issues facing the country.

However, the change has not won broad support from members of the Los Angeles Times union. Union vice chair Matt Hamilton said that while the union supports efforts to help readers distinguish news reporting from opinion pieces, it has reservations about AI-generated analysis that has not been reviewed by editorial staff. Hamilton argued that publishing AI-generated content without human oversight risks undermining the paper's credibility and may even mislead readers.
Indeed, problems surfaced shortly after the change took effect. The Guardian noted, for example, that at the bottom of an opinion piece warning about the dangers of unregulated AI use in historical documentaries, the AI tool claimed the article "generally aligns with a Center Left point of view" and suggested that "AI democratizes historical storytelling." In another case, beneath a report about California cities that elected Klan members to their city councils in the 1920s, an AI-generated viewpoint claimed that local historical accounts sometimes portrayed the Klan as "a product of 'white Protestant culture' responding to societal changes" rather than an explicitly hate-driven movement. While that statement reflects some historical context, its presentation was clumsy and stood in stark contrast to the article's actual argument.
Ideally, AI tools of this kind should be paired with editorial oversight to prevent such problems. Unsupervised AI-generated content is prone to a range of errors, as seen when MSN's AI news aggregator mistakenly recommended an unsuitable tourist attraction, or when Apple's notification summaries misrepresented BBC headlines. These cases show that despite AI's enormous potential in journalism, the technology must still be applied with caution, especially on sensitive topics.
It is worth noting that the Los Angeles Times is not the only news organization applying AI to its operations. Bloomberg, USA Today, the Wall Street Journal, the New York Times, and the Washington Post are also using the technology in various ways. However, these organizations generally use AI to assist news production rather than to directly generate editorial assessments. By comparison, the Los Angeles Times' approach appears more aggressive and has drawn correspondingly more controversy.
Overall, the Los Angeles Times' move to use AI to attach "Voices" labels to articles and generate analytical insights is innovative, but it faces significant challenges. The union's concerns about AI-generated content, together with the problems that have already emerged in practice, indicate that the technology still needs refinement. Going forward, striking a balance between applying the technology and maintaining editorial oversight will be a key challenge for the Los Angeles Times and other media organizations alike.