The clash between military ambition and artificial intelligence ethics just got very real.
Defense Secretary Pete Hegseth has issued a hard deadline to Anthropic CEO Dario Amodei: loosen the safety restrictions built into the company’s Claude AI model — or forfeit a $200 million Defense Department contract.
The stakes could not be higher.
A demand for ‘all lawful use’
Two sources familiar with the negotiations confirm the Pentagon wants unrestricted access to Claude for what officials describe as “all lawful use.” In practice, that means significantly fewer guardrails on how the U.S. military deploys the AI system.
Anthropic is not budging
The company has drawn two firm red lines. It will not allow its technology to control weapons systems autonomously. It will not permit its models to power mass domestic surveillance of American citizens.
Those limits, Anthropic argues, are non-negotiable — at least for now.
Clock is ticking
The ultimatum comes with a precise deadline. A Pentagon official confirmed Anthropic was given until 5:01 p.m. on a Friday to “get on board or not.” Refusal, the official warned, would trigger termination of the contract.
The pressure does not stop there.
Hegseth has reportedly raised the possibility of invoking the Defense Production Act. This sweeping federal authority allows the government to compel private industry to supply goods and services deemed essential to national security. The act was used extensively during the COVID-19 pandemic to redirect manufacturing capacity.
Applying it to an AI company over model safety restrictions would be unprecedented.
The ‘supply chain risk’ threat
The Pentagon has floated another pressure point. Officials have suggested labeling Anthropic a “supply chain risk” — a designation normally reserved for companies connected to foreign adversaries, such as Russia or China.
If applied, the label could bar businesses with Defense Department contracts from using Anthropic products in any military-adjacent work.
The commercial damage could be severe. Anthropic has been aggressively expanding its enterprise footprint, and losing access to the defense contracting ecosystem would cut off a major growth channel just as competition across the AI industry intensifies.
Legal experts doubt the Pentagon’s “supply chain risk” threat is legitimate or legally defensible.
Katie Sweeten, a former Justice Department liaison to the Pentagon and now a partner at law firm Scale, called out the contradiction directly.
“I would assume we don’t want to utilize the technology that is the supply chain risk,” she said. “What it sounds like is that the supply chain risk may not be a legitimate claim, but more punitive because they’re not acquiescing.”
Behind closed doors, a civil standoff
Despite the sharp public posturing, the tone inside Tuesday’s meeting was notably composed. No voices were raised. Hegseth reportedly praised Claude’s capabilities and said he wanted the relationship to continue. Amodei restated his company’s position clearly.
Anthropic described the exchange as productive.
“Dario expressed appreciation for the Department’s work and thanked the Secretary for his service,” the company said in a statement. “We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”
Why Anthropic won’t move, at least for now
The company’s resistance is rooted in technical reality, not just ethics. Anthropic executives believe current AI systems are not reliable enough to make autonomous battlefield decisions. They also point to a significant legal gap — no comprehensive federal framework governs how AI can be used for large-scale domestic surveillance.
Without that legislative clarity, the company says removing internal safeguards would be reckless.
Rivals are already circling
While Anthropic holds its ground, competitors are positioning themselves to step in. A Pentagon official confirmed that xAI, Elon Musk’s AI company, is “on board with being in a classified setting.” Other firms are reportedly close to similar agreements.
If Anthropic exits — voluntarily or otherwise — the vacuum will fill fast.
A defining moment for AI and national security
Anthropic was founded by former OpenAI researchers who left over disagreements about development pace and safety standards. The company has consistently championed responsible AI development, recently pledging $20 million to a political organization advocating for stricter federal oversight of advanced AI systems.
That identity is now being tested in the most direct way possible.
The Defense Department sees AI-driven systems as critical to operational speed, battlefield intelligence, cybersecurity resilience, and strategic superiority. Anthropic sees unchecked autonomous deployment as a risk that the technology is not yet equipped to handle responsibly.
The outcome of this standoff will likely set the tone for how AI developers and the U.S. government negotiate the boundary between innovation and military authority for years to come.
What do you think — should AI companies retain control over how their models are used in defense settings, or does national security demand a different standard? Please share your views in the comments below and join the conversation.