The president went on a threat-laden tirade against a government-contracted company after it refused to cave to the Pentagon’s demands.
Trump, 79, on Friday directed the federal government to immediately stop using all Anthropic technology, which includes some of the most advanced AI tools in the world, such as the industry-leading language model Claude. Anthropic had been in a public standoff this week with Defense Secretary Pete Hegseth over the Pentagon’s demand that it remove safeguards preventing its AI from being used for mass surveillance of Americans and for fully autonomous, lethal weapons.
“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military,” Trump wrote in a Truth Social post.

“The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY,” he continued.
Despite the Pentagon’s series of demands and its warning of harsh penalties if Anthropic did not comply by a 5 p.m. ET Friday deadline, Trump insisted the U.S. does not need the AI giant’s technology, even as he issued a threat of his own on Friday.

“There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” Trump wrote.
Claude is already used across the Defense Department and other national security agencies, including for operational planning, modeling, and intelligence analysis.
Trump’s furious post came after Anthropic’s CEO Dario Amodei said on Thursday that his company could not meet the Defense Department’s demands to remove safeguards for mass domestic surveillance and fully autonomous weapons, which would remove all human control.

The two use cases in question had not been included in Anthropic’s existing defense contracts, and the company refused to remove the safeguards for them going forward.
“Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” Amodei said in a lengthy statement. “However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”

As the standoff between Anthropic and the Pentagon over its AI technology escalated with hundreds of millions of dollars in contracts on the line, the Defense Department tried to exert maximum pressure.
It threatened to designate the company a “supply chain risk,” a term never before applied to a U.S. company, while also suggesting it would invoke the Defense Production Act to force the company to do its bidding.
“Regardless, these threats do not change our position: we cannot in good conscience accede to their request,” Amodei said in his statement.
As the tech company refused to bow to the pressure, the Defense Department insisted it had no interest in using AI to conduct mass surveillance of U.S. citizens, while attacking Amodei personally in a series of angry social media posts.
“Anthropic is lying. The @DeptofWar doesn’t do mass surveillance as that is already illegal,” wrote Emil Michael, Under Secretary of War for Research and Engineering. “What we are talking about is allowing our warfighters to use AI without having to call @DarioAmodei for permission to shoot down an enemy drone swarms that would kill Americans.”
The six-month phase-out period could give the Defense Department time to find other partners, but other tech giants have already praised Anthropic for sticking to its principles, and reaching new contracts without the same ultimatum becoming an issue could prove challenging.

OpenAI’s CEO, Sam Altman, praised his competitor and said he shared the same red line.
“For all the differences I have with Anthropic, I mostly trust them as a company and I really do think, and I think they really do care about safety,” Altman said in a CNBC interview.
The Wall Street Journal reported on Friday that Altman sent a memo to staff that the company was working with the Defense Department to determine if its models could be used, but with the same guardrails that led to the Anthropic standoff in place.
