
Pentagon taps OpenAI for classified AI deployment amid Anthropic standoff
OpenAI has reached a landmark agreement with the U.S. Department of Defense to deploy its AI models, including advanced versions of ChatGPT, on classified Pentagon networks, marking a major milestone in integrating artificial intelligence into national security operations. The deal comes as the Trump administration escalates its confrontation with rival AI company Anthropic, which has been banned from federal use over disagreements about safety restrictions in its Claude model.
Under the agreement, OpenAI will implement strict safety guardrails to ensure human oversight in all military applications. The deal explicitly prohibits mass domestic surveillance and fully autonomous weapons, reflecting principles that OpenAI CEO Sam Altman described as essential to responsible AI deployment. Altman said, “We and the Department of Defense aligned on safety standards that keep humans responsible for all AI-driven actions,” signaling a model for how private AI companies can collaborate with the government without compromising ethical boundaries.
The deal positions OpenAI as the Pentagon’s preferred AI provider following Anthropic’s refusal to remove guardrails on Claude. President Donald Trump ordered federal agencies to cease using Anthropic technology and confirmed the termination of the company’s $200 million Pentagon contract, citing national security risks. Defense Secretary Pete Hegseth labeled Anthropic a “supply-chain risk,” a designation normally reserved for foreign adversaries, effectively barring U.S. contractors from working with the company. Analysts note that this subjects Anthropic to pressures similar to those faced by Chinese AI companies under strict government oversight, including operational limits and legal consequences.
OpenAI’s deal has been welcomed by Pentagon officials and AI safety advocates alike. More than 550 AI professionals across Silicon Valley praised the collaboration as a model for combining innovation with ethical safeguards. At the same time, political observers warn that the administration’s punitive measures against Anthropic could undermine U.S. technological leadership if other companies hesitate to work with the government.
The contrasting approaches underscore a central question in modern warfare: who controls AI, private companies or the government? Analysts say the strategic resources of the future will not be oil or steel, but algorithms, computational power, and sovereign control over AI, highlighting the critical role of trusted AI providers in national security. OpenAI’s classified deployment illustrates a path for ethical AI integration, balancing military needs with technological responsibility.
