As the 2024 United States presidential election approaches, the impact of artificial intelligence on politics is drawing increasing attention. The editor of Downcodes has learned that OpenAI recently published a blog post disclosing the safeguards built into ChatGPT to prevent malicious use during the election. The post details how ChatGPT handled a large volume of requests to generate images of political figures, and how the company works to maintain political neutrality and information security.
According to the blog post, published on Friday, ChatGPT rejected more than 250,000 requests to generate images of political candidates in the month leading up to the election. These included requests for images of President-elect Trump, Vice President Harris, vice-presidential candidate Vance, President Joe Biden, and Minnesota Governor Walz.

OpenAI said in the blog post that ChatGPT applies multiple safety measures to refuse to generate images of real people, including politicians. These protections are especially important during elections and are part of the company's broader effort to prevent its tools from being used for misleading or harmful purposes.
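To make this mechanism concrete, the sketch below shows one way such a refusal check could work in principle. OpenAI has not published its implementation; the name list, function, and string-matching logic here are purely illustrative assumptions, not the company's actual system.

```python
# Illustrative sketch only: a hypothetical pre-generation guardrail that
# declines image prompts naming real political figures. This is NOT
# OpenAI's actual implementation; all names and logic are assumptions.

REAL_POLITICAL_FIGURES = {
    "donald trump", "kamala harris", "jd vance", "joe biden", "tim walz",
}

def should_refuse_image_prompt(prompt: str) -> bool:
    """Return True if the prompt appears to request an image of a real political figure."""
    lowered = prompt.lower()
    return any(name in lowered for name in REAL_POLITICAL_FIGURES)

if __name__ == "__main__":
    prompt = "Generate a photorealistic image of Kamala Harris at a rally"
    if should_refuse_image_prompt(prompt):
        print("Request declined: images of real people, including politicians, are not generated.")
    else:
        print("Proceeding with image generation...")
```

A production system would rely on far more robust classifiers than simple keyword matching, but the control flow, checking the prompt and refusing before any image is generated, is the point being illustrated.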
Additionally, OpenAI has partnered with the National Association of Secretaries of State (NASS) to direct election-related questions to CanIVote.org, helping ChatGPT maintain political neutrality. For inquiries about election results, the platform recommended that users visit news organizations such as the Associated Press and Reuters. OpenAI also recently banned accounts linked to Storm-2035, an Iranian covert influence operation that attempted to spread polarizing political content.
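This redirection behavior can likewise be pictured as a simple routing layer. The sketch below is a hypothetical illustration of that idea; the keyword lists and response templates are invented for demonstration and do not reflect OpenAI's actual system.

```python
# Illustrative sketch only: hypothetical routing of election-related queries
# to authoritative sources, as the blog post describes. The keywords and
# response templates are assumptions, not OpenAI's implementation.

VOTING_KEYWORDS = ("register to vote", "polling place", "voter id", "how do i vote")
RESULTS_KEYWORDS = ("election results", "who won", "vote count")

def route_election_query(query: str) -> str:
    """Return a redirection message for election queries, or a sentinel to answer normally."""
    lowered = query.lower()
    if any(keyword in lowered for keyword in VOTING_KEYWORDS):
        # Procedural voting questions point to CanIVote.org (the NASS partnership).
        return "For voting information in your state, please visit https://www.CanIVote.org."
    if any(keyword in lowered for keyword in RESULTS_KEYWORDS):
        # Results questions defer to news organizations such as AP and Reuters.
        return ("For election results, please check news sources such as "
                "the Associated Press and Reuters.")
    return "ANSWER_NORMALLY"

print(route_election_query("Where is my polling place?"))      # CanIVote.org redirect
print(route_election_query("What are the election results?"))  # news-source redirect
```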
OpenAI said it will continue to monitor ChatGPT to ensure its responses remain accurate and ethical. This year, the company also voiced support for the Biden administration's policy framework on national security and artificial intelligence.
OpenAI's initiative reflects the responsibility that large language model providers bear in safeguarding public discourse and curbing the spread of malicious information, and it offers valuable lessons for other artificial intelligence companies. As the technology advances, developers must pay closer attention to social responsibility and ethical norms to ensure its healthy development.