Pentagon vs. Anthropic: The Battle Over Military AI and the Defense Production Act
Report on the conflict between Defense Secretary Pete Hegseth and AI company Anthropic over military use of artificial intelligence.

Executive Summary

Defense Secretary Pete Hegseth has given Anthropic CEO Dario Amodei until 5:01 PM on Friday, February 27, 2026, to sign a document granting the U.S. military unrestricted access to Anthropic's AI model Claude — or face contract termination, a “supply chain risk” blacklisting, and forced compliance under the Defense Production Act. Anthropic is willing to work with the Pentagon but insists on two red lines: no mass surveillance of American citizens and no fully autonomous lethal targeting. The standoff raises profound legal, ethical, and strategic questions about the future of AI in national defense.

Background

Anthropic, the San Francisco-based AI safety company and maker of the Claude AI model, was awarded a $200 million contract by the Pentagon in July 2025 to develop AI capabilities for U.S. national security applications. The relationship began to deteriorate after Defense Secretary Pete Hegseth issued a January 9, 2026 memo calling on AI companies to remove all restrictions on their technology for military use, declaring that military AI “will not be woke.”

Negotiations between Anthropic and the Department of Defense broke down over the following weeks, culminating in a tense in-person meeting at the Pentagon on Tuesday, February 24, 2026, between Hegseth and Anthropic CEO Dario Amodei. The meeting ended without resolution, and Hegseth subsequently issued a Friday deadline for compliance.

Key Timeline

July 2025: Anthropic awarded $200 million Pentagon contract for national security AI development.
January 9, 2026: Hegseth issues memo demanding AI companies remove military-use restrictions; declares military AI “will not be woke.”
February 16, 2026: Pentagon threatens to label Anthropic a “supply chain risk,” a designation normally reserved for foreign adversaries.
February 24, 2026: Hegseth and Amodei meet at the Pentagon; Hegseth issues Friday deadline for compliance.
February 27, 2026 (5:01 PM): Deadline for Anthropic to sign compliance document or face consequences.

What Hegseth Is Demanding

Hegseth is demanding that Anthropic consent to “all lawful use cases” for its AI models without any company-imposed restrictions. The Pentagon’s specific demands include:

  • Full, unrestricted military access to Claude's capabilities for defense operations.
  • Removal of Anthropic's internal safety guardrails that currently limit how Claude can be used in military contexts.
  • Integration into the Pentagon's internal AI network, which competitors including OpenAI and Google have already joined.
  • A signed compliance document, due Friday, February 27, guaranteeing these terms.

Pentagon officials have argued they are simply asking for a license to use Claude for “lawful activities” and that concerns about mass surveillance are moot because such surveillance is already illegal under existing law.

Threatened Consequences

If Anthropic does not comply by the Friday deadline, Hegseth has threatened a three-pronged response:

Threat Level: Severe

The combination of contract cancellation, blacklisting, and DPA invocation represents the most aggressive government action ever directed at a domestic AI company.

  • Contract Termination: Cancellation of Anthropic's $200 million Pentagon contract. Impact: direct financial loss and loss of government credibility.
  • "Supply Chain Risk" Designation: A label typically reserved for foreign adversaries (e.g., Chinese firms), requiring all DoD vendors to certify that they do not use Anthropic products. Impact: effectively blacklists Anthropic across the entire defense industrial base, with cascading commercial losses.
  • Defense Production Act Invocation: Use of the 1950 law to compel Anthropic to tailor its model for Pentagon use "whether they agree or not." Impact: unprecedented forced modification of a private company's software safety architecture.

A senior defense official acknowledged the stakes, telling Axios: “The only reason we’re still talking to these people is we need them and we need them now. The problem for these guys is they are that good.”

Anthropic’s Position

Anthropic has stated it is willing to adapt its policies for Pentagon use and is not categorically refusing military cooperation. Claude was reportedly already used in the U.S. military operation to capture former Venezuelan President Nicolás Maduro, demonstrating its operational value to the defense community.

However, CEO Dario Amodei held firm on two specific red lines during the February 24 meeting:

Anthropic’s Two Red Lines

1. No Mass Surveillance of American Citizens: Anthropic wants written, binding guarantees that Claude will not be used for domestic surveillance programs targeting U.S. citizens. The company is seeking contractual protections rather than relying on verbal assurances from Pentagon officials.

2. No Fully Autonomous Lethal Targeting: Amodei insists on human-in-the-loop requirements for all military targeting decisions. He has cited Claude’s susceptibility to hallucinations and the risk of “potentially lethal mistakes, like unintended escalation or mission failure without human judgment.”

Anthropic’s position is rooted in its founding mission as an AI safety company. The company is not opposed to military use of its technology in principle but believes that certain applications — particularly autonomous kill decisions and mass surveillance — require hard constraints regardless of user identity.

The Defense Production Act

The Defense Production Act (DPA) is a 1950 federal law originally passed to accelerate industrial mobilization for the Korean War. It grants the president sweeping authority to direct private industry in service of national defense and has been reauthorized by Congress multiple times, most recently in 2025.

Core Authorities

The DPA currently contains three active titles:

  • Title I (Prioritization & Allocation): Allows the president to designate goods as "critical and strategic" and compel businesses to accept and prioritize government contracts ahead of all commercial commitments.
  • Title III (Expansion of Productive Capacity): Authorizes financial incentives, such as loans, loan guarantees, and equipment installation, to boost domestic production of critical materials.
  • Title VII (Voluntary Agreements): Permits the president to authorize coordination among private companies that might otherwise violate antitrust law.

Several original titles — including the power to requisition private property, fix wages and prices, and ration consumer goods — were allowed to expire by Congress and are no longer in effect.

Historical Use

The DPA has been invoked hundreds of times across administrations, but historically for tangible supply chain crises involving physical goods:

  • Korean War (1950s): The original use — prioritizing military production and controlling the civilian economy during wartime.
  • Cold War era: Continuous use for defense industrial base management and strategic material stockpiling.
  • COVID-19 Pandemic (2020–2021): President Trump invoked it to compel production of ventilators, N95 masks, and PPE; President Biden later used it to accelerate vaccine production.
  • Rare Earth Minerals: Used to boost domestic mining of materials critical to military technology.
  • Baby Formula Shortage (2022): Invoked to address a consumer health emergency.

Application to AI: Uncharted Territory

The DPA’s use in the AI domain is largely untested and legally novel. Two episodes frame the question:

Biden’s AI Executive Order (October 2023)

President Biden invoked the DPA in Section 4.2 of his AI executive order to require companies developing large AI models to submit safety test results and other critical information to the government. Legal scholars at the Mercatus Center argued this exceeded the DPA’s statutory scope because the order focused on information gathering and disclosure — activities lacking a connection to the DPA’s traditional goals of “boosting production, stockpiling, or prioritizing the acquisition of tangible goods.”

Hegseth’s Threatened Invocation (February 2026)

The Pentagon’s threat to use the DPA to force Anthropic to remove safety guardrails represents an even more aggressive stretch of the statute. Rather than compelling production or prioritization of an existing product, the government would be ordering a company to fundamentally alter its software’s behavior — a use case with no historical precedent.

Legal Analysis

The legality of the Pentagon’s threatened actions raises several significant constitutional and statutory questions.

Legal Constraints on DPA Use

  • Nexus Requirement: Courts require a clear connection between executive action and the DPA’s core statutory purposes. In Youngstown Sheet & Tube Co. v. Sawyer (1952), the Supreme Court established limits on presidential seizure of private industry.
  • Specific Threat Requirement: Historical DPA usage addressed specific, demonstrable risks to national defense. Generic concerns about “technological competition” are likely insufficient justification.
  • Tangible vs. Intangible Goods: The DPA’s language references “materials, services, and facilities.” Compelling a company to rewrite its safety architecture, rather than deliver an existing product, stretches this language to its limit.
  • Judicial Deference: Courts have historically applied rational basis review (the most deferential standard) to DPA actions in the area of national security, meaning the government would likely receive significant benefit of the doubt if challenged.
  • First Amendment Concerns: Software code has been recognized as protected speech in some contexts. Compelled modification of an AI model’s behavior could raise free expression issues.

Are Hegseth’s Demands Illegal?

The demands themselves — asking a contractor to allow “all lawful use cases” — are not inherently illegal. The government has broad authority to set terms for its contracts. However, several aspects of the threatened enforcement raise legal red flags:

  • Supply chain risk designation applied to a domestic U.S. company (rather than a foreign adversary) would be highly unusual and potentially subject to legal challenge as arbitrary and capricious under the Administrative Procedure Act.
  • DPA invocation to compel software modification has no precedent. The statute was designed to make factories produce more goods, not to force technology companies to remove ethical constraints from their products.
  • Anthropic’s two red lines — no mass surveillance and no autonomous lethal targeting — arguably align with existing U.S. law. Mass surveillance of American citizens violates the Fourth Amendment and multiple federal statutes. The DoD’s own existing AI ethics principles (adopted in 2020) call for human judgment in the use of force.

The core legal tension is that the DPA was designed for industrial mobilization crises, not for forcing software companies to remove safety features. While the “national security” framing gives the executive branch substantial latitude, using the DPA in this manner would almost certainly face immediate legal challenge and could ultimately be resolved by the courts.

Congressional AI Framework

Congress has been actively legislating around military AI through the FY 2026 National Defense Authorization Act (NDAA), which takes a markedly different approach from the Pentagon’s confrontational stance:

  • Establishment of an AI Futures Steering Committee by April 2026 to guide policy.
  • Development of department-wide AI cybersecurity policy with structured safeguards.
  • Creation of AI sandbox testing environments for controlled military experimentation.
  • Building of AI research institutes for long-term capability development.

These legislative frameworks suggest Congress envisions a structured, policy-driven approach to military AI adoption — rather than the forced compliance model Hegseth is pursuing.

Broader Implications

The Anthropic-Pentagon standoff has implications that extend well beyond a single contract dispute:

For the AI Industry

If the government successfully compels Anthropic to remove safety guardrails, it could set a precedent for similar demands on other AI companies. Conversely, if Anthropic prevails, it may establish that AI safety constraints are a protected aspect of product design that the government cannot override by fiat.

For National Security

The Pentagon’s aggressive posture reflects a genuine urgency around AI competition with China and other adversaries. However, alienating the most capable domestic AI companies through coercion rather than collaboration could ultimately weaken, not strengthen, the U.S. defense technology base.

For AI Safety

This dispute is the first major test of whether AI safety principles can withstand direct government pressure. The outcome will shape whether the AI industry’s voluntary safety commitments hold any meaningful weight when challenged by state power.

References

  1. NPR — Hegseth threatens to blacklist Anthropic over ‘woke AI’ concerns (Feb. 24, 2026)
  2. New York Times — Pentagon Summons Anthropic Chief in Dispute Over A.I. Limits (Feb. 23, 2026)
  3. AP News — Hegseth warns Anthropic to let the military use the company’s AI (Feb. 24, 2026)
  4. Politico — Pentagon sets Friday deadline for Anthropic to abandon ethics rules (Feb. 24, 2026)
  5. Axios — Pentagon threatens to label Anthropic’s AI a “supply chain risk” (Feb. 16, 2026)
  6. CNBC — Anthropic faces Friday deadline in Defense AI clash with Hegseth (Feb. 24, 2026)
  7. CBS News — Hegseth demands full military access to Anthropic’s AI (Feb. 23, 2026)
  8. Fox News — Pentagon gives Anthropic Friday ultimatum on military AI restrictions (Feb. 23, 2026)
  9. PBS NewsHour — Hegseth warns Anthropic to let the military use company’s AI tech as it sees fit (Feb. 24, 2026)
  10. Reuters — Anthropic digs in heels in dispute with Pentagon (Feb. 24, 2026)
  11. CNN — Pentagon threatens to make Anthropic a pariah if it refuses to drop safeguards (Feb. 24, 2026)
  12. DW — Pentagon gives ultimatum to Anthropic over AI curbs (Feb. 24, 2026)
  13. Mercatus Center — Executive Orders on AI: How to Lawfully Apply the Defense Production Act (Jan. 2025)
  14. Yale SOM — Usage of the Defense Production Act Throughout History (Feb. 2025)
  15. Council on Foreign Relations — What Is the Defense Production Act? (May 2025)
  16. Al Jazeera — Anthropic vs the Pentagon: Why AI firm is taking on Trump administration (Feb. 25, 2026)
  17. Democracy Now — Pentagon Pressures Anthropic to Allow Full Access to Its AI Models (Feb. 25, 2026)
  18. Holland & Knight — Department of War’s AI-First Agenda: A New Era for Defense Contractors (Feb. 2026)

Report prepared by Lodi411.com — Lodi’s Community Information Source
Contact: info@lodi411.com
