The standoff between [Anthropic](/en/companies/anthropic) and the US government has reached a breaking point. On February 27, 2026, President Donald Trump ordered all federal agencies to immediately stop using Anthropic's technology — the company behind Claude. The reason: Anthropic refused to lift safety restrictions on the military use of its AI.
## Timeline: The Anthropic vs Pentagon Conflict
### The Original $200 Million Contract
In summer 2025, Anthropic signed a contract estimated at $200 million with the US Department of Defense. Claude was deployed across multiple federal agencies, including the Pentagon, for analysis, synthesis, and decision-support tasks. The contract included usage restrictions imposed by Anthropic under its Responsible Scaling Policy — an ethical framework governing sensitive applications of its AI.
### The Pentagon's Ultimatum (February 26, 2026)
On February 26, 2026, the Pentagon delivered an ultimatum to Anthropic: remove all restrictions on military use of Claude by 5:01 PM ET the following day, or lose the contract. Specifically, the Department of Defense demanded that Anthropic authorize the use of Claude for domestic mass surveillance of American citizens and for fully autonomous weapons systems — meaning weapons capable of selecting and engaging targets without human oversight.
A notable detail: the Pentagon claimed it had never actually considered these specific applications, but categorically refused to prohibit them in the contract.
### Anthropic's Response: "Cannot in Good Conscience"
That same day, Anthropic publicly rejected the ultimatum. CEO Dario Amodei issued an unequivocal statement:
> “Threats do not change our position: we cannot in good conscience accede to their request.”
Anthropic made clear that its red lines are non-negotiable: no domestic mass surveillance, no autonomous weapons without human control. The company said it remains willing to work with the government on responsible AI applications, but not at the cost of abandoning its safety principles.
### Trump's Executive Order (February 27, 2026)
On February 27, 2026, Donald Trump directed every federal agency to immediately cease all use of Anthropic's technology. On Truth Social, the president wrote:
> “We don't need it, we don't want it, and will not do business with them again!”
Shortly after, Defense Secretary Pete Hegseth officially designated Anthropic as a "supply chain risk" — a classification typically reserved for companies suspected of being extensions of foreign adversaries, such as certain Chinese suppliers. This marks the first time an American AI company has received such a designation.
Government agencies have been given 6 months to fully transition away from Anthropic products.
### Anthropic Takes Legal Action (February 28, 2026)
On February 28, Anthropic announced it would challenge the decision in court. The company considers the "supply chain risk" designation abusive and disproportionate, arguing that the Pentagon's decision constitutes retaliation against the legitimate exercise of its contractual rights.
## Why Anthropic Refused: AI's Red Lines
### Domestic Mass Surveillance
Anthropic refused to allow Claude to be used for mass surveillance of American citizens on domestic soil. This type of application raises fundamental questions about civil liberties and the Fourth Amendment, which protects against unreasonable searches and seizures.
### Autonomous Weapons Without Human Oversight
The second red line concerns lethal autonomous weapons — systems capable of selecting and engaging targets without human intervention. This is a subject of global debate: numerous AI experts and international organizations advocate for maintaining human control in the lethal decision chain (the human-in-the-loop principle).
### Anthropic's Responsible Scaling Policy
Since its founding, Anthropic has positioned itself as a safety-focused AI company. Its Responsible Scaling Policy defines AI Safety Levels (ASL) for its models and imposes usage restrictions proportional to each level's risk. This policy is central to Anthropic's identity and a key differentiator from competitors: Claude is marketed as the safest and most aligned AI on the market.
## Consequences for the AI Industry
### OpenAI Signs with the Pentagon
In what may be coincidence or opportunism, OpenAI — the company behind ChatGPT — announced a new contract with the Pentagon within hours of Anthropic's ban. The timing underscores the fierce competition among AI companies for government contracts, and the divergent ethical choices they make.
### "Supply Chain Risk": An Unprecedented Designation
Anthropic's designation as a "supply chain risk" is unprecedented for an American technology company. This category, created to counter threats from foreign adversaries (China, Russia), is being used here against a San Francisco-based company. Legal experts and defense analysts immediately raised concerns about the legality and proportionality of this measure.
### A 6-Month Transition Period for Federal Agencies
Government agencies have until late August 2026 to migrate to alternative solutions. This represents a significant logistical challenge: Claude was integrated into many federal workflows. Likely alternatives include OpenAI's GPT models, Google's Gemini, and Grok, from Elon Musk's xAI.
## What This Means for Claude Users
### Impact on Claude's Consumer Service
For individual users and private businesses, nothing changes. The ban only applies to US federal agencies and government contractors. Claude remains fully available for individuals, businesses, and developers through claude.ai, the API, and Claude Code.
### Claude vs ChatGPT: A Divergence in Ethics
This affair crystallizes the difference in positioning between Claude and ChatGPT. Anthropic chose to maintain its safety principles even at the cost of a $200 million contract and a direct confrontation with the executive branch. OpenAI, meanwhile, seized the business opportunity. For users who value the ethics and safety of their AI tools, this positioning is a significant factor in choosing between platforms.
## Anthropic's Future After the Ban
Despite losing the government contract, Anthropic remains in a strong position. The company has raised over $7 billion, serves millions of users worldwide, and its Claude models are recognized as among the most capable on the market. The lawsuit against the Pentagon could set an important legal precedent regarding AI companies' rights when facing government demands.
In the longer term, this affair could actually strengthen the Anthropic brand among users who prioritize ethics and safety — particularly in Europe, where regulations on military AI are stricter.
This article will be updated as the situation develops. Last updated: February 28, 2026.