Anthropic vs. The Pentagon: The $200M Disagreement
by Dan Roque | Reading Time: 8 minutes | In Current Events
On Friday, February 27, 2026, at 5:01 p.m. Eastern Time, a deadline passed without an agreement, and the relationship between one of America's leading artificial intelligence companies and its largest institutional client collapsed in public. By evening, President Trump had ordered all federal agencies to cease using Anthropic's products. Defense Secretary Pete Hegseth had designated the company a "supply-chain risk to national security" — a label traditionally reserved for firms with ties to foreign adversaries. Hours later, rival company OpenAI announced a deal with the same Pentagon, on what it described as substantially similar terms.
The story that led there raises questions worth sitting with: about who gets to define the rules governing powerful technology, about what it means when the same safeguards are acceptable from one company and grounds for blacklisting from another, and about whether the institutions overseeing AI deployment are moving faster or slower than the technology itself.
How It Started
Anthropic signed a contract with the Department of Defense worth up to $200 million in July 2025. The agreement made Claude the first frontier AI model to operate on the Pentagon's classified networks — a significant milestone for both the company and the broader industry. The contract included, as it had since Anthropic began supporting military applications in June 2024, explicit prohibitions against two uses: the mass domestic surveillance of American citizens, and the deployment of Claude in fully autonomous weapons systems — meaning systems capable of making lethal targeting decisions without a human being in the decision chain. These were not secret provisions slipped in at the last moment. They were part of Anthropic's published acceptable use policy. The Pentagon signed a contract that incorporated them.
Tensions rose in the months that followed. Reports, including from Axios and DefenseScoop, indicate the situation was inflamed by the discovery that Claude had been used, through Palantir's military software platform, in the operation that led to the capture of Venezuelan President Nicolás Maduro. Whether Anthropic had been adequately informed of that use, and what it implied about how the Pentagon understood its own contract constraints, became a source of significant friction. A high-stakes in-person meeting between Anthropic CEO Dario Amodei and Hegseth on Tuesday, February 24, was described as "cordial" by one source familiar with the meeting — but whatever cordiality existed didn't survive the week.
The Sticking Point
The Pentagon's position, stated plainly and repeatedly, was that it required the freedom to use AI tools for "all lawful purposes." Military officials argued that this was a reasonable, standard expectation of any contractor: the government, not the vendor, determines how lawfully acquired tools are used. They maintained that domestic mass surveillance is already illegal under federal law, that existing military policy prohibits fully autonomous lethal weapons, and that Anthropic was effectively demanding the right to second-guess whether specific operations met its own definitions of those prohibitions — inserting a private company's judgment into operational military decisions.
Anthropic's position, as articulated by CEO Dario Amodei in a public letter published Thursday, February 26, was more technical and more urgent in its framing. On autonomous weapons, Amodei argued that current frontier AI models — including Claude — are not yet reliable enough for use in systems that fire without human authorization. These models can hallucinate, misinterpret context, and fail in ways that human soldiers are trained to avoid. Deploying them in autonomous weapons, he argued, would put American warfighters and civilians at risk. On surveillance, his concern was that AI dramatically changes the nature of data collection: where individual pieces of personal information are individually innocuous, AI can aggregate them into detailed behavioral portraits of private citizens at a scale and speed that no existing legal framework was designed to govern.
From Anthropic's perspective, these were not philosophical preferences or ideological red lines — they were technical assessments of what the technology could safely do, and constitutional concerns about what it should be allowed to do.
Defense Undersecretary Emil Michael publicly called Amodei a "liar" with a "God complex." Trump, in a Truth Social post, wrote that Anthropic's leadership had made a "DISASTROUS MISTAKE trying to STRONG-ARM the Department of War." The company, for its part, maintained that it had tried in good faith to accommodate "all lawful uses of AI for national security aside from the two narrow exceptions" — and that, to its knowledge, those exceptions had never actually affected a single government mission.
The Designation and Its Contradictions
When the 5:01 deadline passed, the consequences were swift and significant. President Trump ordered all federal agencies to immediately cease using Anthropic products, with a six-month phase-out window for departments already integrated with Claude. Hegseth posted on X that the Pentagon would designate Anthropic a "Supply-Chain Risk to National Security" and that any contractor, supplier, or partner doing business with the U.S. military was henceforth barred from conducting any commercial activity with the company. The General Services Administration also moved to remove Anthropic from its centralized federal AI marketplace.
What's notable about the supply-chain risk designation is its usual context. The mechanism has historically been applied to companies from adversarial foreign states — Huawei is the most commonly cited example; the banning of Kaspersky Lab's U.S. subsidiary in 2024 involved a company with Russian government entanglements. Applying it to an American company, over a contractual dispute about the terms of its own acceptable use policy, is, as multiple analysts noted immediately, legally unusual and strategically strange.
The strangeness is sharpest when set beside another threat the Pentagon had floated during negotiations: invoking the Defense Production Act, a Korean War-era statute that allows the government to compel private companies to supply goods or services deemed vital to national security. The government was, simultaneously, threatening to force Anthropic to hand over access to its technology on the grounds that it was indispensable, and threatening to designate it a national security risk. Mark Dalton, senior director of technology and innovation at the R Street Institute, identified the contradiction directly, warning that deploying the supply-chain risk label against a domestic company in a contractual dispute risks diluting the designation's credibility for cases where it might genuinely be needed.
Anthropic announced Friday evening that it would challenge the supply-chain risk designation in court, calling it "legally unsound" and a "dangerous precedent for any American company that negotiates with the government." The company also contested the scope of Hegseth's contractor ban, arguing that under federal statute, a supply-chain risk designation in this context would apply only to the use of Claude within Department of War contracts — and could not legally extend to how contractors use Claude to serve other clients.
The OpenAI Question
This is the most structurally curious part of the story, and the one that most rewards careful attention. Sam Altman said publicly that OpenAI shares Anthropic's red lines on mass surveillance and autonomous weapons, yet his company closed a deal within hours of Anthropic's blacklisting.
If the principles are the same, what was actually different? Several things, based on available reporting. First, OpenAI agreed to limit its models' deployment to cloud environments rather than edge systems — the category that would include aircraft and drones operating in contested environments without reliable connectivity to centralized servers. This is a technical concession, but not a trivial one: it draws a clearer line between what AI can and cannot touch in active operations. Second, OpenAI committed to sending "forward-deployed engineers" to the Pentagon — its own staff, embedded with the military to support and monitor the models in use. This shifts the relationship from a remote vendor managing terms of service to a collaborative partnership with skin in the game. Third, and perhaps most simply, the tone was different. Altman's statement praised the Department of War's "deep respect for safety." He called on the Pentagon to offer the same terms to all AI companies. He positioned OpenAI not as a reluctant concession-maker but as a willing partner.
Whether this tells us something about different negotiating philosophies, or something about how political the designation of Anthropic ultimately was, depends on facts not yet fully in evidence. A senior Pentagon official, according to Axios, was on the phone offering Anthropic a deal at the same moment Hegseth was posting the supply-chain risk designation on X. The deal reportedly would have required Anthropic to permit the collection and analysis of data on Americans — including geolocation, web browsing history, and personal financial data purchased from data brokers. Whether that offer represents the government's actual minimum requirement, or a maximalist opening position in a negotiation that was already over, is unclear.
What This Moment Actually Asks
The technical questions at stake here are not trivial, and they don't resolve neatly into partisan positions. There are serious people who believe that current AI models are not safe enough to be used in fully autonomous weapons, and serious people who believe that restricting AI in military applications will simply cede advantage to adversaries who face no such restrictions. There are serious people who believe that existing law is sufficient to govern AI-enabled surveillance, and serious people who believe that the aggregative power of AI has outpaced what existing law was designed to address.
What makes this dispute structurally significant, independent of where one lands on those questions, is the procedural question it raises: in a world where the most consequential capabilities of a technology are controlled by a small number of private companies, who governs the conditions of that technology's deployment? Pentagon contracting experts interviewed by multiple outlets noted that what Anthropic attempted is genuinely novel. Contractors do not typically place use restrictions on government customers. The Pentagon's position — that it determines legality, not the vendor — reflects how contracting has always worked. Anthropic's position — that some uses are technically unsafe or constitutionally inadvisable regardless of legality — reflects something the contracting framework has never really had to accommodate.
The precedent now established — that a domestic company can be designated a supply-chain risk over a contractual dispute about its own published acceptable use policy — will be interpreted differently by different observers. Some will see it as a necessary reassertion of government authority over the tools it deploys. Others will see it as a chilling signal to any company that wishes to maintain limits on what its technology can be used for. Both interpretations can be argued from the same set of facts.
What seems worth holding onto, regardless of position, is this: the safeguards Anthropic refused to drop were not opposed in principle by the government that banned it, or by the company that replaced it. The disagreement was not ultimately about whether AI should be used for mass surveillance or autonomous weapons. It was about who gets to enforce that prohibition, how, and through what mechanism.
That is a narrower question than the headlines suggest — and, arguably, a more important one.
Works Cited
Amodei, Dario. "Statement from Dario Amodei on Our Discussions with the Department of War." Anthropic, 26 Feb. 2026, www.anthropic.com/news/statement-department-of-war.
Altman, Sam [@sama]. "Tonight, we reached an agreement with the Department of War to deploy our models in their classified network." X (formerly Twitter), 27 Feb. 2026, x.com/sama.
Hegseth, Pete [@PeteHegseth]. "In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security." X (formerly Twitter), 27 Feb. 2026, x.com/PeteHegseth.
Trump, Donald J. "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War." Truth Social, 27 Feb. 2026, truthsocial.com/@realDonaldTrump.
Bellan, Rebecca. "Anthropic CEO Stands Firm as Pentagon Deadline Looms." TechCrunch, 26 Feb. 2026, techcrunch.com/2026/02/26/anthropic-ceo-stands-firm-as-pentagon-deadline-looms/.
Brandom, Russell. "Pentagon Moves to Designate Anthropic as a Supply-Chain Risk." TechCrunch, 27 Feb. 2026, techcrunch.com/2026/02/27/pentagon-moves-to-designate-anthropic-as-a-supply-chain-risk/.
Dieterle, C. Jarrett. "Anthropic Labeled a Supply Chain Risk, Banned from Federal Government Contracts." Reason, 28 Feb. 2026, reason.com/2026/02/28/anthropic-labeled-a-supply-chain-risk-banned-from-federal-government-contracts/.
Quiroz-Gutierrez, Marco. "OpenAI Sweeps in to Snag Pentagon Contract after Anthropic Labeled 'Supply Chain Risk' in Unprecedented Move." Fortune, 28 Feb. 2026, fortune.com/2026/02/28/openai-pentagon-deal-anthropic-designated-supply-chain-risk-unprecedented-action-damage-its-growth/.
Vincent, Brandi, and Drew F. Lawrence. "Experts Raise Questions and Concerns about Pentagon's Threat to Blacklist Anthropic amid AI Spat." DefenseScoop, 27 Feb. 2026, defensescoop.com/2026/02/27/pentagon-threat-blacklist-anthropic-ai-experts-raise-concerns/.
"Anthropic CEO Amodei Says Pentagon's Threats 'Do Not Change Our Position' on AI." CNBC, 26 Feb. 2026, cnbc.com/2026/02/26/anthropic-pentagon-ai-amodei.html.
"Anthropic Faces a Lose-Lose Scenario in Pentagon Conflict as Deadline for Policy Change Looms." CNBC, 27 Feb. 2026, cnbc.com/2026/02/27/anthropic-pentagon-ai-policy-war-spying.html.
"Anthropic Faces Friday Deadline in Defense AI Clash with Hegseth." CNBC, 24 Feb. 2026, cnbc.com/2026/02/24/anthropic-ai-hegseth-spying-defense.html.
"Anthropic Rejects Latest Pentagon Offer: 'We Cannot in Good Conscience Accede to Their Request.'" CNN Business, 26 Feb. 2026, cnn.com/2026/02/26/tech/anthropic-rejects-pentagon-offer.
"Anthropic Says Pentagon's 'Final Offer' Is Unacceptable." Axios, 26 Feb. 2026, axios.com/2026/02/26/anthropic-rejects-pentagon-ai-terms.
"Anthropic to Take Trump's Pentagon to Court over Claude Dispute." Axios, 28 Feb. 2026, axios.com/2026/02/28/anthropic-trump-pentagon-lawsuit-ai-dispute.
"Deadline Looms as Anthropic Rejects Pentagon Demands It Remove AI Safeguards." NPR, 26 Feb. 2026, npr.org/2026/02/26/nx-s1-5727847/anthropic-defense-hegseth-ai-weapons-surveillance.
"OpenAI Announces Pentagon Deal after Trump Bans Anthropic." NPR, updated 28 Feb. 2026, originally published 27 Feb. 2026, npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban.
"OpenAI Strikes Deal with Pentagon Hours after Trump Admin Bans Anthropic." CNN Business, 27 Feb. 2026, cnn.com/2026/02/27/tech/openai-pentagon-deal-ai-systems.
"OpenAI Strikes Deal with Pentagon, Hours after Rival Anthropic Was Blacklisted by Trump." CNBC, 27 Feb. 2026, cnbc.com/2026/02/27/openai-strikes-deal-with-pentagon-hours-after-rival-anthropic-was-blacklisted-by-trump.html.
"Pentagon-Anthropic AI Standoff Is Real-Time Testing Balance of Power in Future of Warfare." CNBC, 27 Feb. 2026, cnbc.com/2026/02/27/defense-anthropic-ai-war-risks-hegseth-amodei.html.
"Pentagon Draws Scrutiny with Anthropic Threats, Defense Production Act." The Hill, 26 Feb. 2026, thehill.com/policy/technology/5757667-pentagon-threatens-anthropic-dpa/.
"Pentagon Takes First Step toward Blacklisting Anthropic." Axios, 25 Feb. 2026, axios.com/2026/02/25/anthropic-pentagon-blacklist-claude.
"Pentagon Threatens to Label Anthropic's AI a 'Supply Chain Risk.'" Axios, 16 Feb. 2026, axios.com/2026/02/16/anthropic-defense-department-relationship-hegseth.
"Sam Altman Says OpenAI Shares Anthropic's Red Lines in Pentagon Fight." Axios, 27 Feb. 2026, axios.com/2026/02/27/altman-openai-anthropic-pentagon.
"The Clock Is Ticking Down on a Critical Pentagon Deadline for Anthropic." CNN Business, 27 Feb. 2026, edition.cnn.com/2026/02/27/tech/anthropic-pentagon-deadline.
"Trump Admin Blacklists Anthropic as AI Firm Refuses Pentagon Demands." CNBC, 27 Feb. 2026, cnbc.com/2026/02/27/trump-anthropic-ai-pentagon.html.
"Trump Moves to Blacklist Anthropic's Claude from Government Work." Axios, 27 Feb. 2026, axios.com/2026/02/27/anthropic-pentagon-supply-chain-risk-claude.
"Trump Orders U.S. Government to Stop Using Anthropic but Gives Pentagon Six Months to Phase It Out while Hegseth Adds Supply-Chain Risk Designation." Fortune, 27 Feb. 2026, fortune.com/2026/02/27/trump-us-government-anthropic-claude-pentagon-6-months-phaseout-ai-standoff/.
"What Trump Labeling Anthropic AI a Supply Chain Risk Means." Axios, 27 Feb. 2026, axios.com/2026/02/27/ai-trump-supply-chain-anthropic-pentagon-blacklist.
"Anthropic vs. the Pentagon: What's Actually at Stake?" TechCrunch, 27 Feb. 2026, techcrunch.com/2026/02/27/anthropic-vs-the-pentagon-whats-actually-at-stake/.
