Anthropic refused to remove safety guardrails for military use and was blacklisted by the Pentagon
Defense Secretary Pete Hegseth demanded Anthropic allow unrestricted military use of Claude, including for mass surveillance and autonomous weapons. Anthropic refused. The Pentagon designated Anthropic a 'supply chain risk' — a label historically reserved for foreign adversaries — cutting it off from all federal contracts. Anthropic sued the Defense Department in March 2026, with Microsoft filing in support. The dispute became the defining test of whether AI companies can maintain safety principles under government pressure.
Anthropic proposed two specific conditions for defense contracts: Claude would not support mass surveillance programs targeting Americans, and it would not power autonomous weapons systems. These were not blanket refusals of military work — Anthropic was willing to support many defense applications — but hard limits on use cases the company considered fundamentally unsafe.
When Anthropic refused to capitulate by Hegseth's 5:01 PM Friday deadline, the Pentagon designated Anthropic a 'supply chain risk' — the same category used for Chinese telecom companies like Huawei. This effectively banned all federal agencies and contractors from using Claude, and required defense contractors to assess their reliance on Anthropic's technology.
In March 2026, Anthropic sued the Defense Department, arguing the blacklist was arbitrary and retaliatory. Microsoft — despite its $13B investment in OpenAI — filed in support of Anthropic, urging a temporary restraining order. Microsoft's involvement reflected the broader industry's concern that government coercion of AI companies set a dangerous precedent.