
U.S. military AI race intensifies as Anthropic battles Pentagon over guardrails
Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline: comply with Pentagon demands to remove Claude's ethical guardrails for unrestricted military use, or lose a $200 million government contract and face placement on a government blacklist. A Pentagon official bluntly stated the company had until 5:01pm Friday to "get on board or not." Threats included designating Anthropic a supply chain risk - a label typically reserved for foreign adversaries - or invoking the Defense Production Act, a 1950s emergency law that compels compliance regardless of consent. Amodei sat through the meeting with five senior Pentagon officials and did not move on his two non-negotiable lines: no autonomous weapons targeting and no mass domestic surveillance of American citizens.
The dispute was partly triggered by Pentagon claims that Anthropic had questioned whether Claude was used in a military operation targeting Venezuelan leader Nicolás Maduro - implying potential disapproval. Amodei denied it. But the episode crystallized the Pentagon's core demand: no private company retains oversight or veto power over military operations.
The Pentagon has framed its concern as ideological bias in AI, but strip away the anti-woke rhetoric and the demand is blunt: an AI capable of mass surveillance, autonomous targeting, and unrestricted operational use, with no corporate conscience attached. Amodei himself warned of exactly this scenario: a powerful AI scanning billions of conversations, detecting dissent, and suppressing it. The Pentagon is asking him to make that available.
Every major competitor - Google, OpenAI, Meta, and xAI - has already complied with government requests by modifying their AI systems to remove safeguards that block surveillance, prevent use in military targeting, or bar engagement with sensitive classified data. OpenAI, for example, reportedly adjusted its models to allow monitoring of internal communications in ways previously restricted, while Google's internal AI projects were cleared for experimental use in defense simulations. Ironically, Pentagon officials privately admit that Claude leads all competitors in classified settings, with one confessing, "the only reason we're still talking to these people is we need them." Meanwhile, China, Russia, and Israel are rapidly accelerating military AI deployment with no ethical frameworks in place, raising concerns that the race for unrestrained AI in defense is moving faster than any oversight can manage.
