Pentagon vs. Anthropic & OpenAI: The Battle Over Military AI and the Defense Production Act
Comprehensive report on the conflict between Defense Secretary Pete Hegseth and AI company Anthropic over military use of artificial intelligence, including the OpenAI Pentagon deal and Iran strike developments.

Executive Summary

In late February 2026, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei until 5:01 PM on Friday, February 27, to sign a document granting the U.S. military unrestricted access to Anthropic’s AI model Claude — or face severe consequences. Anthropic refused, holding firm on two red lines: no mass surveillance of American citizens and no fully autonomous lethal targeting. The Pentagon followed through, designating Anthropic a “supply chain risk” and imposing a government-wide ban. Hours later, the military used Claude anyway during strikes on Iran. Meanwhile, OpenAI rushed to fill the void with its own Pentagon deal — only to reverse course days later when CEO Sam Altman admitted the contract was “opportunistic and sloppy” and added the same safeguards Anthropic had demanded. The standoff has exposed deep contradictions in the government’s approach to military AI and raised profound questions about the future of AI safety, national security, and the rule of law.

Timeline of Events

July 2025: Anthropic awarded $200 million Pentagon contract for national security AI development. Claude had been deployed on DOD classified networks since June 2024.
January 9, 2026: Hegseth issues memo demanding AI companies remove military-use restrictions; declares military AI “will not be woke.”
February 16, 2026: Pentagon threatens to label Anthropic a “supply chain risk,” a designation normally reserved for foreign adversaries such as Chinese firms.
February 22, 2026: Anthropic publishes Responsible Scaling Policy Version 3.0, reinforcing its safety commitments.
February 24, 2026: Hegseth and Amodei meet at the Pentagon. Hegseth issues Friday deadline for compliance. Amodei publishes official statement on Anthropic’s website.
February 25–26, 2026: Pentagon sends “final offer” with revised contract language. Anthropic rejects it, saying it contained legalese that would allow safeguards to be “disregarded at will.” Amodei publishes second statement responding to Hegseth’s public comments.
February 27, 2026 (5:01 PM): Deadline passes. Pentagon designates Anthropic a “supply chain risk,” cancels $200M contract, and orders all DOD contractors to certify they do not use Anthropic products. DPA invocation threat is dropped.
February 27, 2026 (evening): OpenAI announces Pentagon deal, positioning itself as the replacement for Anthropic across DOD systems.
February 28, 2026: U.S. and Israel launch strikes on Iran. Pentagon uses Claude for intelligence analysis, target selection support, and operations planning — hours after imposing the ban on Anthropic.
March 2, 2026: Sam Altman admits OpenAI’s Pentagon deal was “opportunistic and sloppy,” announces contract revisions adding explicit prohibitions on mass surveillance and autonomous weapons — the same red lines Anthropic had insisted on.
March 3, 2026: OpenAI formally amends Pentagon contract. Anthropic confirms it is exploring legal challenges to the supply chain designation. Legal experts predict the designation will not survive judicial review.

Background

Anthropic, the San Francisco-based AI safety company and maker of the Claude AI model, was awarded a $200 million contract by the Pentagon in July 2025 to develop AI capabilities for U.S. national security applications. Claude had already been deployed on DOD classified networks since June 2024, and the company was not new to defense work.

The relationship deteriorated after Defense Secretary Pete Hegseth issued a January 9, 2026 memo calling on AI companies to remove all restrictions on their technology for military use, declaring that military AI “will not be woke.” Negotiations broke down over the following weeks as the Pentagon demanded blanket consent to “all lawful use cases” with zero company-imposed restrictions.

What the Pentagon Demanded

Hegseth demanded that Anthropic consent to “all lawful use cases” for Claude without any company-imposed restrictions. The specific demands included:

  • Full, unrestricted military control over Claude’s capabilities for defense operations, with no company-imposed safety guardrails limiting military applications.
  • A signed compliance document by Friday, February 27, guaranteeing these terms.
  • Integration into the Pentagon’s internal AI network, which competitors including OpenAI and Google had already joined.
  • Removal of Anthropic’s internal safety guardrails that limited how Claude could be used in military contexts.

Pentagon spokesperson Sean Parnell argued that Anthropic’s concerns were moot because the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.” Anthropic’s response was simple: then put it in writing. The Pentagon refused.

What Anthropic Refused — and Why

Anthropic was willing to support virtually all military applications but held firm on exactly two exceptions:

Red Line 1: No Mass Domestic Surveillance

Anthropic refused to allow Claude to be used for mass surveillance of American citizens. Amodei argued this goes beyond what current law adequately addresses: the government can currently purchase detailed records of Americans’ movements, web browsing, and associations from public sources without a warrant — a practice the Intelligence Community itself has acknowledged raises privacy concerns. Powerful AI makes it possible to assemble this “scattered, individually innocuous data into a comprehensive picture of any person’s life — automatically and at massive scale.” Amodei argued that to the extent such surveillance is currently legal, “this is only because the law has not yet caught up with the rapidly growing capabilities of AI.”

Red Line 2: No Fully Autonomous Weapons

Anthropic refused to allow Claude to power weapons systems that remove humans from the loop entirely in selecting and engaging targets. Amodei’s rationale was rooted in multiple concerns:

  • Hallucination risk: Every large language model generates false information with complete confidence. In civilian life, a hallucination is an inconvenience; in military operations, it is a strike order. Amodei told CBS News: “It doesn’t show the judgment that a human soldier would show — friendly fire or shooting a civilian, or just the wrong kind of thing. We don’t want to sell something that could get our own people killed, or that could get innocent people killed.”
  • Accountability gap: When a human operator makes a targeting error, there is a chain of accountability — investigation, discipline, court-martial. When an AI hallucinates a target and a missile destroys a civilian building, there is no clear answer for who is responsible.
  • Automation bias: The International Committee of the Red Cross has documented that military personnel “typically privilege action over non-action in a time-sensitive human-machine configuration” without thoroughly verifying AI output. AI’s speed enables “mass production targeting,” reducing human control to merely pressing a button.
  • Engineering reality: Amodei stated: “Anyone who’s worked with AI models understands that there’s a basic unpredictability to them that in a purely technical way, we have not solved.”

What Anthropic Was Willing To Do

Amodei was careful to distinguish his position from a blanket anti-military stance. Anthropic explicitly supported:

  • Intelligence analysis, modeling and simulation, operational planning, and cyber operations
  • Lawful foreign intelligence and counterintelligence missions
  • Partially autonomous weapons systems (like those used in Ukraine with human oversight)
  • All other lawful military applications
  • Continued deployment across DOD classified networks
  • R&D collaboration to improve AI reliability for future autonomous systems
  • A smooth transition to another provider if the Pentagon chose to end the relationship

The Pentagon did not accept the offer for collaborative R&D, nor did it agree to put its verbal assurances into binding written form.

The “Final Offer” Rejection

On the night of February 25–26, the Pentagon sent Anthropic revised contract language positioned as a compromise. Anthropic rejected it because the new language “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons” and “was paired with legalese that would allow those safeguards to be disregarded at will.”

The Pentagon’s Response

After the Friday deadline passed without Anthropic’s compliance, the Pentagon executed two of its three threatened actions:

  • Contract termination: Cancellation of Anthropic’s $200 million Pentagon contract. Impact: direct financial loss and loss of government credibility for Anthropic.
  • “Supply chain risk” designation: A designation typically reserved for foreign adversaries (e.g., Chinese firms); requires all DOD vendors to certify they do not use Anthropic products. Impact: effectively blacklists Anthropic across the entire defense industrial base, with cascading commercial losses.
  • Defense Production Act invocation (dropped): The threatened DPA invocation to compel software modification was not executed. Legal scholars had warned it would face immediate court challenge; the Pentagon likely recognized it was on weak legal ground.

DOD Rationale

The Pentagon justified its actions on several grounds:

  • Operational readiness: The DOD argued it cannot depend on AI systems with company-imposed restrictions that could limit warfighter capabilities in active combat.
  • Precedent: Allowing one company to dictate terms of military use would set a dangerous precedent for the defense industrial base.
  • Competitive pressure: A senior defense official told Axios: “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”
  • Existing law sufficiency: The Pentagon maintained that existing statutes already prohibit mass surveillance and that the DOD’s own 2020 AI ethics principles address autonomous weapons concerns.

The Iran Strike Contradiction

Critical Contradiction: On February 28, 2026 — hours after designating Anthropic a “supply chain risk” and banning its use across the DOD — the Pentagon used Claude for intelligence analysis, target selection support, and operations planning during U.S.-Israeli strikes on Iran. The Wall Street Journal confirmed that Claude was “deeply embedded” in CENTCOM planning systems and could not be extracted on short notice.

This created what Amodei had predicted in his February 24 statement: an inherent contradiction. As he wrote, the Pentagon’s threats “are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” The Iran strikes proved the point — the military simultaneously declared Anthropic a threat and relied on its technology for active combat operations.

The revelation undercut the Pentagon’s legal position and provided Anthropic with powerful evidence that the supply chain designation was pretextual rather than based on genuine security concerns.

The OpenAI Pentagon Deal — and Reversal

The Initial Deal (February 27)

Hours after the Anthropic ban took effect, OpenAI announced a new agreement with the Pentagon, positioning itself as the replacement for Anthropic across DOD systems. The deal appeared to give the Pentagon precisely what it had demanded from Anthropic: access to AI models for “all lawful use cases” without company-imposed restrictions.

The announcement triggered immediate backlash. A “Cancel ChatGPT” boycott movement surged online, and AI ethics researchers noted that OpenAI had previously maintained its own restrictions on military use before quietly removing them in early 2024.

The Reversal (March 2–3)

By March 2, OpenAI CEO Sam Altman publicly admitted the Pentagon deal had been “opportunistic and sloppy.” Altman said OpenAI was renegotiating the contract to add explicit written prohibitions on:

  • Mass surveillance of American citizens
  • Fully autonomous weapons systems without human oversight

These were the exact same red lines Anthropic had insisted on and been banned for defending. Altman went further, telling reporters that OpenAI “shares Anthropic’s red lines” on these issues. The New York Times reported that the amended contract included binding language — the very thing the Pentagon had refused to give Anthropic.

The reversal raised a critical question: if the Pentagon was willing to accept these safeguards from OpenAI, why did it ban Anthropic for demanding them?

The Defense Production Act

The Defense Production Act (DPA) is a 1950 federal law originally passed to accelerate industrial mobilization for the Korean War. It grants the president sweeping authority to direct private industry in service of national defense.

Core Authorities

  • Title I — Prioritization & Allocation: Allows the president to designate goods as “critical and strategic” and compel businesses to accept and prioritize government contracts ahead of all commercial commitments.
  • Title III — Expansion of Productive Capacity: Authorizes financial incentives such as loans, loan guarantees, and equipment installation to boost domestic production of critical materials.
  • Title VII — Voluntary Agreements: Permits the president to authorize coordination among private companies that might otherwise violate antitrust laws.

Several original titles — including the power to requisition private property, fix wages and prices, and ration consumer goods — were allowed to expire by Congress and are no longer in effect.

Historical Use

The DPA has been invoked hundreds of times across administrations, but historically for tangible supply chain crises involving physical goods:

  • Korean War (1950s): The original use — prioritizing military production and controlling the civilian economy during wartime.
  • Cold War era: Continuous use for defense industrial base management and strategic material stockpiling.
  • COVID-19 Pandemic (2020–2021): President Trump invoked it to compel production of ventilators, N95 masks, and PPE; President Biden later used it to accelerate vaccine production.
  • Rare Earth Minerals: Used to boost domestic mining of materials critical to military technology.
  • Baby Formula Shortage (2022): Invoked to address a consumer health emergency.

Application to AI: Uncharted Territory

The DPA’s use in the AI domain is largely untested. President Biden invoked it in Section 4.2 of his October 2023 AI executive order to require safety test disclosures. Legal scholars at the Mercatus Center argued this exceeded the DPA’s statutory scope because the order lacked connection to the DPA’s traditional goals of “boosting production, stockpiling, or prioritizing the acquisition of tangible goods.”

The Pentagon’s threat to use the DPA to force Anthropic to remove safety guardrails represented an even more aggressive stretch — ordering a company to fundamentally alter its software’s behavior rather than compelling production of an existing product. The Pentagon ultimately dropped this threat, likely recognizing it would face immediate legal challenge.

Legal Analysis

Will the Supply Chain Designation Survive?

Legal experts have broadly predicted that the “supply chain risk” designation will not survive judicial review. Lawfare published a detailed analysis arguing the designation is legally vulnerable on multiple grounds:

  • Designed for foreign adversaries: The supply chain risk framework was created to address threats from entities like Huawei and Kaspersky — foreign companies with ties to hostile governments. Applying it to a U.S.-headquartered company with existing security clearances is unprecedented.
  • Pretextual motivation: The Iran strike revelation — the Pentagon used Claude hours after the ban — provides strong evidence the designation was retaliatory rather than based on genuine security concerns. Courts examine whether government actions have a rational basis or are pretextual.
  • Procedural violations: The designation process typically involves formal investigation, evidence gathering, and an opportunity for the company to respond. The compressed timeline (ultimatum to designation in days) suggests procedural shortcuts.
  • Arbitrary and capricious: Under the Administrative Procedure Act, courts can overturn government actions that lack a rational basis. Banning a company while simultaneously depending on its technology for active combat operations is difficult to defend as rational.
  • First Amendment: Software code has been recognized as protected speech in some contexts. Punishing a company for refusing to modify its product’s speech-related behavior could raise free expression issues.

DPA Constraints for AI

Although the DPA threat was ultimately dropped, the legal analysis remains relevant for future disputes:

  • Nexus requirement: Courts require a clear connection between executive action and the DPA’s core purposes. In Youngstown Sheet & Tube Co. v. Sawyer (1952), the Supreme Court established limits on presidential seizure of private industry.
  • Specific threat requirement: Historical DPA usage addressed specific, demonstrable risks. Generic concerns about “technological competition” are likely insufficient.
  • Tangible vs. intangible goods: The DPA’s language references “materials, services, and facilities.” Compelling a company to rewrite its safety architecture stretches this language beyond precedent.
  • Judicial deference: Courts have applied rational basis review to DPA actions in national security, meaning the government would receive significant benefit of the doubt — but not unlimited latitude.

Congressional AI Framework

Congress has been legislating military AI through the FY 2026 National Defense Authorization Act, which takes a markedly different approach from the Pentagon’s confrontational stance:

  • Establishment of an AI Futures Steering Committee by April 2026 to guide policy
  • Development of department-wide AI cybersecurity policy with structured safeguards
  • Creation of AI sandbox testing environments for controlled military experimentation
  • Building of AI research institutes for long-term capability development

Legal expert Alan Rozenshtein observed: “This struggle exists because Congress hasn’t established concrete regulations for military AI.” Amodei himself conceded it is “not tenable over the long term for a private company to decide these issues” and called on Congress to establish binding regulations.

Broader Implications

For the AI Industry

If the supply chain designation stands, it sets a precedent that any AI company refusing to remove safety guardrails for government use can be blacklisted across the defense sector. OpenAI’s reversal — adding the same safeguards Anthropic was banned for — highlights the inconsistency and suggests the industry recognizes these red lines as reasonable.

For National Security

The Pentagon’s aggressive posture reflects genuine urgency around AI competition with China. However, alienating the most capable domestic AI companies through coercion rather than collaboration could weaken the U.S. defense technology base. The Iran strike episode demonstrated that the military cannot easily replace Anthropic’s capabilities.

For AI Safety

This dispute is the first major test of whether AI safety principles can withstand direct government pressure. The outcome will shape whether the AI industry’s voluntary safety commitments hold meaningful weight when challenged by state power — or whether they are purely aspirational.

For Rule of Law

The contradiction between banning Anthropic and using its technology hours later for combat operations raises fundamental questions about whether national security designations are being used as political tools rather than genuine security measures. Courts will likely weigh this heavily.

References

  1. Anthropic — Statement from Dario Amodei on discussions with Department of War (Feb. 24, 2026)
  2. Anthropic — Statement on the comments from Secretary of War Pete Hegseth (Feb. 26, 2026)
  3. NPR — Hegseth threatens to blacklist Anthropic over ‘woke AI’ concerns (Feb. 24, 2026)
  4. New York Times — Pentagon Summons Anthropic Chief in Dispute Over A.I. Limits (Feb. 23, 2026)
  5. AP News — Hegseth warns Anthropic to let the military use the company’s AI (Feb. 24, 2026)
  6. Politico — Pentagon sets Friday deadline for Anthropic to abandon ethics rules (Feb. 24, 2026)
  7. Politico — Anthropic rejects Pentagon’s AI demands (Feb. 26, 2026)
  8. CNN — Anthropic rejects latest Pentagon offer (Feb. 26, 2026)
  9. CNN — Trump administration orders contractors to stop using Anthropic (Feb. 27, 2026)
  10. CBS News — AI executive Dario Amodei on the red lines Anthropic would not cross (Feb. 28, 2026)
  11. CBS News — Hegseth declares Anthropic a supply chain risk (Feb. 27, 2026)
  12. Axios — Pentagon threatens to label Anthropic’s AI a “supply chain risk” (Feb. 16, 2026)
  13. Wall Street Journal — A ‘Fight About Vibes’ Drove the Pentagon’s Breakup with Anthropic (Mar. 2, 2026)
  14. Wall Street Journal — U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban (Feb. 28, 2026)
  15. New York Times — OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash (Feb. 27, 2026)
  16. New York Times — OpenAI Amends A.I. Deal With the Pentagon (Mar. 2, 2026)
  17. Fortune — Sam Altman says OpenAI renegotiating ‘opportunistic and sloppy’ Pentagon deal (Mar. 3, 2026)
  18. Fortune — Anthropic CEO Dario Amodei: ‘We are patriotic Americans’ (Feb. 28, 2026)
  19. OpenAI — Our agreement with the Department of War (Feb. 27, 2026)
  20. Lawfare — Pentagon’s Anthropic Designation Won’t Survive First Contact with Legal System (Mar. 2, 2026)
  21. Mayer Brown — Pentagon Designates Anthropic a Supply Chain Risk: What Contractors Need to Know (Mar. 1, 2026)
  22. Mercatus Center — Executive Orders on AI: How to Lawfully Apply the Defense Production Act (Jan. 2025)
  23. Yale SOM — Usage of the Defense Production Act Throughout History (Feb. 2025)
  24. Council on Foreign Relations — What Is the Defense Production Act? (May 2025)
  25. Democracy Now — Pentagon Used Claude AI to Attack Iran Hours After Trump’s Ban (Mar. 3, 2026)
  26. ICRC — The risks and inefficacies of AI systems in military targeting support (Sep. 2024)
  27. BBC — Anthropic boss rejects Pentagon demand to drop AI safeguards (Feb. 26, 2026)
  28. OPB — Anthropic refuses to bend to Pentagon on AI safeguards (Feb. 27, 2026)
  29. Euronews — Cancel ChatGPT: AI boycott surges after OpenAI Pentagon deal (Mar. 2, 2026)
  30. Holland & Knight — Department of War’s AI-First Agenda: A New Era for Defense Contractors (Feb. 2026)

Report prepared by Lodi411.com — Lodi’s Community Information Source
Contact: info@lodi411.com
