In early March 2026, the United States Department of Defense did something unprecedented. It classified Anthropic, one of America's leading artificial intelligence companies, as a supply chain risk under 10 U.S.C. § 3252. The authority was designed to block foreign adversaries from infiltrating American defence procurement. It had never been used against a domestic company over a policy disagreement.

The trigger was straightforward. Anthropic refused to strip safety restrictions from its AI model, Claude, for military applications. The Pentagon wanted unrestricted access. Anthropic said no. What followed is now the most consequential legal battle in the brief history of the AI industry.

The first time the US government weaponised supply chain risk authority against an American AI company was not about espionage or sabotage. It was about safety guardrails.

The Blacklist

The classification came swiftly. After negotiations between Anthropic and the Department of Defense broke down, Defence Secretary Pete Hegseth publicly called the company "unpatriotic." President Trump went further, labelling Anthropic "radical" and "woke" for its refusal to comply.

The practical effect was immediate. Anthropic products were barred from use across all federal defence contracts. Any contractor relying on Claude's capabilities, including some of the Pentagon's largest technology partners, was forced to begin planning alternative integrations.

Anthropic filed suit against the Trump administration on March 9, challenging the classification as an unconstitutional overreach of executive authority. The company argues that 10 U.S.C. § 3252 was never intended to coerce private companies into abandoning their product design principles.

The Red Lines

At the core of the dispute are two specific capabilities the Pentagon sought from Claude. First, the ability to conduct mass domestic surveillance, processing and analysing communications data from American citizens at scale. Second, the integration of Claude into autonomous lethal weapons systems, where the AI would make targeting decisions without meaningful human oversight.

Anthropic has maintained that both uses cross ethical boundaries that no commercial pressure can justify. The company's Acceptable Use Policy explicitly prohibits applications involving mass surveillance and autonomous weapons. Removing those restrictions would, in Anthropic's view, undermine the safety-first approach that distinguishes it from competitors.

Anthropic drew a line: no mass domestic surveillance, no autonomous lethal targeting. The Pentagon called it insubordination. The courts will decide what to call it.

As we explored in our analysis of AI's role in agentic commerce security, the question of where to draw safety boundaries in AI deployment is not abstract. It is the defining tension of the industry. This case brings that tension from the commercial world into the military one, with far higher stakes.

The Coalition

What makes this legal fight extraordinary is the breadth of support rallying behind Anthropic.

Microsoft, one of the Pentagon's largest technology contractors, filed an amicus brief arguing that Anthropic's products form a "foundational layer" in its own military offerings. Immediate enforcement, Microsoft warned, would force contractors to reconfigure critical systems mid-operation. The company stated plainly that AI should not be used for domestic mass surveillance or autonomous warfare.

Then came the researchers. Thirty-seven employees from OpenAI, Google, and Google DeepMind, including Google Chief Scientist Jeff Dean, signed a letter supporting Anthropic's position. Their argument was technical: current AI systems hallucinate, lack transparency, and are fundamentally unsuitable for lethal decision-making. Deploying Claude in autonomous weapons without safety guardrails, they argued, would put lives at risk from unreliable technology.

The military establishment weighed in next. Twenty-two former senior officials, including former CIA and NSA Director Michael Hayden and two former Navy Secretaries, signed a brief arguing the classification violates the rule of law. These are not pacifists. They are intelligence and defence veterans who believe the supply chain authority was designed for foreign adversaries, not domestic policy disputes.

Civil rights organisations completed the coalition. FIRE, the Electronic Frontier Foundation, and the Cato Institute filed briefs arguing the blacklist constitutes compelled speech, violating First Amendment protections. Their position: the government cannot force a private company to design its products in a specific way by threatening its commercial viability.

This case will set precedent regardless of outcome. It is the first time supply chain risk authority has been weaponised against an American company in what amounts to a policy disagreement. If the classification stands, any AI company that resists government demands on product design could face similar treatment.

The constitutional questions are significant. Can the executive branch use procurement authority to override a company's design decisions? Does forcing an AI company to remove safety features constitute compelled speech? Where does national security end and regulatory overreach begin?

As we noted in our coverage of the regulatory landscape shaping AI governance, the current administration has shown a willingness to use unconventional tools to advance its technology agenda. This case tests whether the courts will impose limits.

The Anthropic Institute

Amid the legal battle, Anthropic announced the launch of a new internal think tank: the Anthropic Institute. Co-founder Jack Clark, previously the company's head of public policy, will lead the organisation with a new title: head of public benefit.

The Institute merges three existing research teams: the Frontier Red Team (stress-testing AI systems), the Societal Impacts team (studying real-world AI deployment), and the Economic Research team (tracking AI's effect on jobs and economies). Founding members include Matt Botvinick, formerly a Senior Director of Research at Google DeepMind, and Zoë Hitzig, who studied AI's social and economic impacts at OpenAI.

The timing is deliberate. Anthropic is building the institutional infrastructure to argue that AI safety is not an obstacle to national security. It is a prerequisite.

What Comes Next

The case is moving quickly. Oral arguments are expected within weeks, and the coalition of amicus briefs has transformed what could have been a narrow procurement dispute into a landmark test of government authority over AI development.

Three outcomes are worth watching. If Anthropic wins, it establishes that supply chain risk authority cannot be used to coerce AI companies on product design, effectively creating a legal shield for safety-focused development. If the government wins, every AI company in America will face a new calculus: comply with any government demand or risk being cut from federal contracts. A settlement, the most likely outcome, would probably produce a narrower ruling that satisfies neither side but avoids setting broad precedent.

The broader implications extend well beyond defence procurement. Every technology company building AI products is watching this case. The answer to whether a company can say "no" to the government on safety grounds will shape the industry for a generation.

If the government can blacklist an AI company for refusing to remove safety guardrails, what incentive does any company have to build them in the first place?
