According to the Economic Daily, a series of AI-generated dubbing videos has attracted attention online. They are strikingly realistic: not only are the visuals lifelike, but the voices are nearly indistinguishable from those of real people. In a short period they accumulated more than 100 million plays. However, the content of these videos has nothing to do with the public figures they depict, and they have since been removed from the platform.
The development of AI technology has lowered the barrier to deepfakes and enabled their low-cost mass production, and AI-generated images along with voice-swapped and face-swapped videos have appeared in large numbers. Although these videos are eye-catching, the risks hidden behind them cannot be ignored.
The creation and dissemination of false information has become a global problem, and the use of AI synthesis technology to make fabrications look real leaves ordinary people struggling to tell genuine information from fake. According to the "2024 Artificial Intelligence Security Report", AI-based deepfake fraud surged 30-fold in 2023, posing a serious threat to network security and social stability. The proliferation of false information not only disrupts the normal order of cyberspace but may also carry negative emotions from the virtual world into reality, exacerbating social anxiety.
Addressing problems such as deepfakes requires multi-party collaboration and joint governance. China has already promulgated laws and regulations such as the "Regulations on the Administration of Deep Synthesis of Internet Information Services" to strengthen content management at the source. At the same time, the release of the "Measures for Labeling Synthetic Content Generated by Artificial Intelligence (Draft for Comments)" signals that supervision of AI technology is tightening at the national level. Only when AI develops within a lawful and compliant framework can people truly enjoy the dividends of technological change and the artificial intelligence industry grow in a healthy way.
As key links in information dissemination, platform operators and service providers should also assume corresponding responsibilities: draw red lines for AI-generated content, strictly prohibit the dissemination of illegal, infringing, and fraudulent material, and attach prominent warning labels to curb the spread of false information. In addition, technical means such as digital watermarks, timestamps, and hash functions can improve the verifiability of information authenticity and serve as effective tools against misinformation. As content consumers, audiences should likewise remain vigilant, improve their ability to recognize AI-generated content, and avoid being misled.
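To illustrate the hash-and-timestamp idea mentioned above, here is a minimal Python sketch of how a publisher might fingerprint a piece of content at release time so that later copies can be checked for tampering. The function names and the record format are illustrative assumptions, not part of any specific platform's system; real deployments would also involve signatures, trusted timestamping, and watermarking.

```python
import hashlib
import time

def fingerprint_content(data: bytes, timestamp: float) -> dict:
    # Record a SHA-256 digest of the content together with the time
    # it was registered. The digest changes if even one byte changes.
    digest = hashlib.sha256(data).hexdigest()
    return {"sha256": digest, "timestamp": timestamp}

def verify_content(data: bytes, record: dict) -> bool:
    # Re-hash the content and compare against the stored digest.
    return hashlib.sha256(data).hexdigest() == record["sha256"]

# Register a piece of content at publication time, then check copies.
original = b"official video frame data"
record = fingerprint_content(original, time.time())

print(verify_content(original, record))          # True: content unmodified
print(verify_content(b"altered frame", record))  # False: tampering detected
```

The design choice here is that verification requires no access to the original file, only to the small published record, which is why hash fingerprints scale well for platforms screening large volumes of content.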