February 25, 2026
US Threatens Anthropic With Deadline in Escalating AI Safeguards Dispute

Washington, D.C. — A growing dispute between the U.S. Department of Defense and artificial intelligence company Anthropic has escalated after Defense Secretary Pete Hegseth warned the firm it could be cut out of the Pentagon’s supply chain if it refuses to relax restrictions on how its AI systems are used by the military.

The warning came during a high-level meeting at the Pentagon on Tuesday between Hegseth and Anthropic chief executive Dario Amodei, according to sources familiar with the discussion. A senior defense official said the company has until Friday evening to comply with the department’s expectations or risk facing serious consequences.

At the center of the disagreement is the Pentagon’s position that any AI tools operating under defense contracts should be available for all lawful military purposes. Anthropic, however, has drawn firm boundaries around certain applications of its technology — particularly when it comes to fully autonomous lethal operations and large-scale domestic surveillance.

Red Lines and Resistance

Sources say Amodei made clear during the meeting that Anthropic will not permit its AI models to be used in autonomous kinetic operations where systems would make final targeting decisions without meaningful human oversight. The company also opposes the use of its tools for mass surveillance of domestic populations.

Anthropic has consistently marketed itself as a safety-focused AI developer, differentiating itself from some competitors by publishing regular transparency reports and outlining strict usage policies. The company argues that guardrails are essential to prevent misuse of rapidly advancing AI capabilities.

Despite those safeguards, Pentagon officials insist the current conflict is not specifically about autonomous weapons or surveillance. Rather, they argue that a private company should not dictate how the U.S. military deploys technology in lawful national security operations — and that such judgments belong to the government, not its contractors.

One official indicated that if Anthropic declines to adjust its policies, the department could recommend invoking the Defense Production Act — a federal authority that allows the government to compel companies to prioritize or adapt production in the interest of national defense. Such a move could require Anthropic to grant the Pentagon broader access to its models.

In addition, the company could be formally labeled a “supply chain risk,” a designation that would effectively exclude it from defense-related procurement and potentially influence private-sector partners to reconsider their relationships.

A Cordial but Tense Exchange

While the tone of Tuesday’s meeting was described as professional, the gap between the two sides appears significant. In a statement issued afterward, Anthropic said it remains engaged in “good-faith conversations” about how its AI systems can support national security while remaining aligned with what the company believes its models can responsibly and reliably handle.

A spokesperson added that Amodei expressed appreciation for the department’s mission and thanked Hegseth for his service, signaling that communication channels remain open.

The Pentagon, however, is seeking clearer assurances. Defense officials have previously stated that they expect major AI contractors — including OpenAI, Google, xAI and Anthropic — to allow the department to use any contracted model for all lawful purposes.

Anthropic was one of four companies awarded defense contracts last summer worth up to $200 million each, marking a significant expansion of AI integration into military planning, logistics and intelligence systems.

Trust and Transparency

The dispute has also revived scrutiny of Anthropic’s past defense involvement. Reports earlier this year indicated that the company’s Claude AI model was used indirectly in a U.S. military operation through a contract with Palantir Technologies. That revelation surprised some observers and, according to people familiar with the matter, may have contributed to a breakdown in trust between the company and defense officials.

Anthropic was also the first AI firm authorized to operate within certain classified Pentagon networks — a milestone that underscored its growing importance to U.S. defense strategy.

At the same time, the company has acknowledged in public safety reports that its technology has been misused in the past, including by hackers who leveraged AI tools to assist with sophisticated cyberattacks. Those disclosures have reinforced Anthropic’s argument that strong safeguards are necessary.

Broader Implications

The standoff reflects a larger debate unfolding across Washington and Silicon Valley: Who ultimately controls the ethical boundaries of artificial intelligence — private developers or the government agencies deploying the systems?

Some defense analysts argue that limiting military access to advanced AI could put U.S. service members at a disadvantage in an era of accelerating technological competition. Others caution that removing corporate guardrails may increase the risk of unintended escalation or misuse.

Emelia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology, said both sides have strong incentives to find common ground.

“They need to get to a resolution,” she said, noting that military personnel deserve the best tools available, but that trust and responsible governance are equally critical.

With the Friday deadline looming, negotiations are expected to intensify. The outcome could set an important precedent for how AI companies engage with defense agencies — and how far government authority extends in shaping the use of transformative technologies.
