Google DeepMind has released a first-of-its-kind research report on malicious AI applications, revealing major trends in the misuse of generative AI tools. The study, a collaboration between Google's AI unit DeepMind and its research and development unit Jigsaw, analyzed hundreds of abuse cases in an effort to quantify the risks of generative AI tools that the world's largest technology companies have rushed to market in pursuit of huge profits. Its central finding: AI-generated "deepfakes" impersonating politicians and celebrities are far more common than attempts to use AI to assist cyberattacks, and the spread of such false information poses a serious threat to public opinion and political processes.

Motives behind generative AI misuse
The study found that creating realistic but fake images, videos, and audio of people was the most common misuse of generative AI tools, nearly twice as common as the next most frequent misuse: using text-based tools such as chatbots to generate and spread false information. The most common goal behind the misuse was to shape public opinion, accounting for 27% of documented cases, raising concerns about how deepfakes may influence elections around the world this year.
Deepfakes of British Prime Minister Rishi Sunak and other global leaders have appeared on TikTok, X, and Instagram in recent months, shortly before British voters go to the polls in next week's general election. Despite efforts by social media platforms to label or remove such content, audiences may not recognize it as false, and its spread may influence voter turnout. DeepMind researchers analyzed roughly 200 documented abuse cases drawn from the social media platforms X and Reddit, as well as from online blogs and media reports of abuse.

The study found that the second most common motivation for abusing generative AI products, such as OpenAI's ChatGPT and Google's Gemini, was to make money, whether by offering paid services to create deepfakes or by using generative AI to mass-produce content such as fake news articles. Most abuses relied on easily accessible tools that "require minimal technical expertise," meaning a far wider pool of bad actors can misuse generative AI.
DeepMind says the research will shape how it evaluates the safety of its own models, and it hopes the findings will also influence how competitors and other stakeholders assess the "manifestation of harm."
Highlights:
- DeepMind's research found that deepfakes are the most common form of AI misuse.
- The most common goal of misusing generative AI tools is to influence public opinion, accounting for 27% of documented cases.
- The second most common motivation for abusing generative AI is financial gain, chiefly through offering deepfake services and generating fake news.
This research is critical to understanding and addressing the potential risks posed by generative AI. DeepMind's findings underscore the need for more effective safety mechanisms and regulatory frameworks to prevent malicious actors from using AI technology for harmful activities and to protect the public interest and information security. Going forward, research of this kind will continue to support the safe development of AI technology.