Palantir, Anthropic, and Amazon Web Services (AWS) are working together to build a Claude-based cloud platform for the U.S. government's defense and intelligence services. The collaboration integrates Claude 3 and Claude 3.5 into Palantir's AI platform, which will be hosted on AWS and take advantage of the classified-level data processing and storage certifications the companies have already obtained. The goal is to improve data processing efficiency, assist with pattern recognition and trend analysis, streamline document review, and ultimately help government officials make smarter decisions in emergencies.
Recently, Palantir announced a partnership with Anthropic and Amazon Web Services (AWS) to build a Claude cloud platform suitable for use by U.S. government defense and intelligence agencies.

According to announcements from the three companies, the collaboration will integrate Claude 3 and Claude 3.5 into Palantir's artificial intelligence platform, which will be hosted on AWS. Notably, Palantir and AWS have obtained the Impact Level 6 (IL6) accreditation issued by the U.S. Department of Defense, which allows them to process and store classified data up to the Secret level.
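Deployment details for the classified environment have not been published, but for context, Claude 3 and 3.5 models are generally reachable on AWS through Amazon Bedrock. The sketch below shows how a Claude 3.5 model might be invoked with boto3 for a document-summarization task of the kind described later in this article; the region and model ID are illustrative assumptions, not details of the Palantir or government configuration.

```python
# Minimal sketch: calling a Claude 3.5 model through Amazon Bedrock with boto3.
# Assumptions: the commercial "us-east-1" region and the public
# "anthropic.claude-3-5-sonnet-20240620-v1:0" model ID. The IL6-accredited
# Palantir/AWS environment described in the article is not publicly documented.
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {
                "role": "user",
                "content": "Summarize the key findings of the following report: ...",
            }
        ],
    }),
)

# The response body is a stream; parse it and print the model's text reply.
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```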
An Anthropic spokesperson said Claude was first made available to the defense and intelligence community in early October. The U.S. government will use Claude to reduce data processing time, identify patterns and trends, streamline document review, and help officials make smarter decisions under time pressure, while officials retain decision-making authority. "Palantir is proud to be the first industry partner to bring the Claude model into classified environments," said Shyam Sankar, CTO of Palantir.
Unlike Meta, which recently announced it was opening Llama to the U.S. government for defense and national security applications and set specific usage policies for those uses, Anthropic apparently does not need to carve out exceptions to its Acceptable Use Policy (AUP) for Claude to be applied in potentially dangerous areas by the U.S. Department of Defense, the CIA, or other defense and intelligence agencies that use it.
Although Anthropic identifies some high-risk use cases in its AUP, it does not limit use in the defense and intelligence fields; the high-risk categories it names cover areas such as law, healthcare, insurance, finance, employment, housing, academia, and media, which it describes as domains touching on public welfare and social equity. When asked about the relationship between the AUP and government applications, Anthropic pointed only to a blog post about expanding government access to Claude.
In that post, Anthropic says it has established a mechanism to grant exceptions to its acceptable use policy for government users, stressing that these exceptions are "well calibrated to allow beneficial use by strictly selected government agencies." However, it is unclear what exactly these exceptions are.
Anthropic also said the existing exception structure allows Claude to be used for legally authorized foreign intelligence analysis and to provide early warning of potential military activity, thereby opening a window for diplomacy to prevent or halt conflicts. Restrictions on disinformation, weapons design and use, censorship, and malicious cyber operations remain in place, however.
Key points:
Palantir has partnered with Anthropic and AWS to launch a Claude cloud platform for U.S. defense and intelligence use.
Claude will be used for data processing, pattern recognition, and decision support, reportedly without requiring exceptions to Anthropic's acceptable use policy.
Anthropic's acceptable use policy permits certain applications in the defense sector, but it places no explicit restrictions on high-risk defense and intelligence uses.
The collaboration combines advanced AI with high-security data processing, giving the U.S. government strong support for data analysis and decision-making in the defense and intelligence fields. However, the limits on Claude's application in high-risk areas still need to be clarified to ensure its safe and responsible use.