As the 2024 US presidential election approached, the applications and potential risks of artificial intelligence drew increasing attention. OpenAI recently published a blog post describing the measures its AI model ChatGPT took to maintain fairness during the election and prevent abuse. The post highlights ChatGPT's refusal to generate images of politicians, its political neutrality, and its efforts to combat the spread of malicious information, demonstrating both the role and the complexity of large language models in addressing real-world challenges.
As Election Day approached, OpenAI noted in a blog post published on Friday that ChatGPT rejected more than 250,000 requests to generate images of political candidates in the month before the election. The requests sought images of President-elect Trump, Vice President Harris, vice-presidential candidate JD Vance, President Biden, and Minnesota Governor Tim Walz.
OpenAI said in the blog post that ChatGPT applies several safety measures to refuse generating images of real people, including politicians. These protections are particularly important during elections and are part of the company's broader effort to prevent its tools from being used for misleading or harmful purposes.
In addition, ChatGPT worked with the National Association of Secretaries of State (NASS) to direct election-related questions to CanIVote.org, helping it remain politically neutral. For inquiries about election results, the platform advised users to consult news organizations such as the Associated Press and Reuters. Earlier this year, OpenAI also banned accounts tied to Storm-2035, a covert operation that attempted to spread Iranian political influence content.
OpenAI said it will continue to monitor ChatGPT to ensure the accuracy and ethics of its responses. This year, the company also voiced support for the Biden administration's policy framework on national security and artificial intelligence.
Key points:
ChatGPT rejected more than 250,000 requests to generate images of political candidates in the month before the election.
OpenAI has implemented several safety measures to prevent the generation of images of real people, especially during elections.
ChatGPT works with the National Association of Secretaries of State to remain politically neutral and guide users to reliable sources of election information.
OpenAI's initiatives reflect the responsibility AI companies bear in countering information manipulation and the spread of false information during elections. Going forward, how to balance the convenience of artificial intelligence against its potential risks will remain an important question requiring continued exploration.