Anthropic, a well-known artificial intelligence company, has quietly removed Biden-era AI safety commitments from its official website, a move that has drawn widespread attention in the areas of AI regulation and safety. The removal was first noticed by the Midas Project, an AI watchdog group, which found last week that the commitments had disappeared from Anthropic's transparency center. The transparency center had listed the company's "voluntary commitments" on responsible AI development. Although not legally binding, these commitments pledged that the company would share information and research on AI risks, including bias, with the government.

In July 2023, Anthropic joined other tech giants such as OpenAI, Google, and Meta in the Biden administration's voluntary self-regulation agreement supporting AI safety initiatives, measures later reinforced by Biden's AI executive order. Participating companies committed to conducting safety testing of AI models before release, watermarking AI-generated content, and developing data privacy infrastructure. These measures were designed to ensure the safety and transparency of AI technology and to reduce potential risks.
However, with the Trump administration taking office, Anthropic appears to have changed its stance on these commitments. On his first day in office, Trump revoked Biden's AI executive order, dismissed several AI experts from the administration, and cut some research funding. These changes may prompt many large AI companies to re-examine their relationships with the government; some may even take the opportunity to expand their government contracts and help shape still-undefined AI policies. The impact of this policy shift on the AI industry remains unclear, but it has clearly sparked widespread discussion within the industry.
So far, Anthropic has made no public statement explaining the removal, though the company maintains that its positions on responsible AI predate, and are independent of, the Biden-era agreements. Relatedly, the Trump administration may dissolve the AI Safety Institute established during the Biden era, leaving the related measures facing uncertainty. This series of changes suggests that the AI regulatory framework may undergo major adjustments under the Trump administration.
Overall, the Trump administration is weakening the AI regulatory framework established under Biden, leaving AI companies with more freedom to manage their systems without external regulatory pressure. Safety checks for bias and discrimination in AI currently do not appear in the Trump administration's policies. This policy shift could have a profound impact on the development of the AI industry, especially with regard to safety and transparency.
Key points:
1. Anthropic has removed AI safety commitments related to the Biden administration, reflecting a policy shift.
2. The Trump administration has revoked Biden's AI executive order and scaled back AI regulation.
3. Large AI companies are re-examining their relationships with the government under the Trump administration and may relax their self-regulation.