Flanked by Palantir and AWS, Anthropic's Claude marches into US defense intelligence

An emotionally-manipulable AI in the hands of the Pentagon and CIA? This'll surely end well


Palantir has announced a partnership with Anthropic and Amazon Web Services to build a cloudy Claude platform suitable for the most secure of the US government's defense and intelligence use cases. In an announcement today, the three firms said the partnership would integrate Claude 3 and 3.5 with Palantir's Artificial Intelligence Platform, hosted on AWS.

Both Palantir and AWS have been awarded Impact Level 6 (IL6) certification by the Department of Defense, which allows the processing and storage of classified data up to the Secret level. Claude was first made available to the defense and intelligence communities in early October, an Anthropic spokesperson told The Register.

The US government will be using Claude to reduce data processing times, identify patterns and trends, streamline document reviews, and help officials "make more informed decisions in time-sensitive situations while preserving their decision-making authorities," the press release noted.

"Palantir is proud to be the first industry partner to bring Claude models to classified environments," said Palantir's CTO, Shyam Sankar. "Our partnership with Anthropic and AWS provides US defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions."

Unlike Meta, which announced yesterday it was opening Llama to the US government for defense and national security applications, Anthropic doesn't even need to make an exception to its acceptable use policy (AUP) to allow for potentially dangerous applications of Claude in the hands of the DoD, CIA, or any other defense or intelligence branch using it.

Meta's policy specifically prohibits the use of Llama for military, warfare, espionage, and other critical applications, though Meta has granted the Feds some exceptions. No such restrictions are included in Anthropic's AUP. Even the high-risk use cases, which Anthropic defines as uses of Claude that "pose an elevated risk of harm," leave defense and intelligence applications out, mentioning only legal, healthcare, insurance, finance, employment, housing, academia, and media usage of Claude as "domains that are vital to public welfare and social equity."

When asked about its AUP and how that might pertain to government applications, particularly defense and intelligence as indicated in today's announcement, Anthropic only referred us to a blog post from June about the company's plans to expand government access to Claude. "Anthropic's mission is to build reliable, interpretable, steerable AI systems," the blog stated. "We're eager to make these tools available through expanded offerings to government users."

Anthropic's post mentions that it has already established a method of granting acceptable use policy exceptions for government users, noting that those allowances "are carefully calibrated to enable beneficial use by carefully selected government agencies." What those exceptions are is unclear; Anthropic didn't directly answer questions to that end, and the AUP leaves plenty of questions about defense and intelligence use of Claude unanswered. The existing carve-out structure, the company noted, "allow[s] Claude to be used for legally authorized foreign intelligence analysis ... and providing warning in advance of potential military activities, opening a window for diplomacy to prevent or deter them."

"All other restrictions in our general Usage Policy, including those concerning disinformation campaigns, the design or use of weapons, censorship, and malicious cyber operations, remain."

We'll just have to hope no one decides to emotionally blackmail Claude into violating whichever of Anthropic's rules the US government still has to follow. ®