Secretary of Defense Pete Hegseth is close to retaliating against an AI company that wants to make sure its tools aren’t used for mass surveillance against Americans or to develop weapons that fire without human involvement.
For months, Anthropic has been negotiating with the Pentagon over the terms under which the military can use Claude, the only AI model currently available in the military’s classified systems, Axios reported.
The company is willing to loosen its current terms of use, but Pentagon officials are insisting that Anthropic and other big AI labs let the military use their tools “for all lawful purposes.”

An Anthropic official told Axios that although there are laws against domestic mass surveillance, “They have not in any way caught up to what AI can do,” which is why Anthropic wants to put tighter limits on its military use.
Hegseth, however, is close to not just ending the Pentagon’s $200 million contract with Anthropic, but also designating the company a “supply chain risk”—a penalty usually reserved for foreign adversaries, according to Axios.
That designation would require any company doing business with the military to certify that it doesn’t use Anthropic tools in its own workflows.
The company brings in $14 billion in annual revenue and is widely considered a leader in many business applications, with eight of the 10 biggest U.S. companies using Claude, according to Axios.
The technology is also widely embedded within the military and was used in January’s capture of Venezuelan President Nicolás Maduro.
“It will be an enormous pain in the a-- to disentangle, and we are going to make sure they pay a price for forcing our hand like this,” a senior Pentagon official told the publication.
The Pentagon’s chief spokesperson told Axios that the Defense Department’s relationship with Anthropic was being reviewed.
“Our nation requires that our partners be willing to help our warfighters win in any fight,” Sean Parnell said. “Ultimately, this is about our troops and the safety of the American people.”
A spokesperson for Anthropic told Axios that the company was “having productive conversations, in good faith.”
The Daily Beast has also reached out for comment.

But another Anthropic official warned that AI can be used to analyze “any and all publicly available information at scale,” which the Department of Defense is legally allowed to collect as so-called “open-source intelligence.”
With AI, the Defense Department could continuously monitor and analyze the public social media posts of every American, cross-referenced against information such as public voter registration rolls, concealed carry permits, and demonstration permit records, to automatically flag civilians who fit certain profiles.
The Pentagon is reportedly hoping that its negotiations with Anthropic will force OpenAI, Google, and xAI to also agree to the “all lawful use” standard.