When Washington Weaponises Procurement: Anthropic's Fight for the Right to Say No
The AI company's two federal lawsuits challenge a supply chain risk label normally reserved for foreign adversaries, and the implications stretch far beyond defence contracts.
Until now, the "supply chain risk" designation under U.S. defence procurement law had only ever been applied to foreign adversaries. Companies like Huawei and Kaspersky earned the label for suspected ties to hostile governments. Anthropic, the maker of Claude, has just become the first American company in history to receive it.
The company's offence was not espionage, sabotage, or subversion. It was refusing to remove two contractual guardrails: a prohibition on using its AI models for fully autonomous weapons without human oversight, and a prohibition on mass domestic surveillance of U.S. citizens.
On March 9, Anthropic filed two federal lawsuits challenging the designation, one in the U.S. District Court for the Northern District of California and another in the U.S. Court of Appeals for the D.C. Circuit. The company is asking courts to vacate the label, block its enforcement, and require federal agencies to withdraw directives to drop Claude.
When the government can brand an American company a national security threat for exercising contractual discretion, the precedent extends far beyond AI.
The Two Red Lines
The roots of this dispute sit in a contract renegotiation that collapsed in late February. Anthropic signed a deal worth up to $200 million with the Department of Defense in July 2025, making Claude the first frontier AI model approved for use on the Pentagon's classified networks. As part of that contract, the department agreed to abide by Anthropic's acceptable use policy.
When renegotiation began, the Pentagon wanted the restrictions removed. Defence Secretary Pete Hegseth insisted the military needed access to Claude for "any lawful purpose," arguing that private companies cannot dictate how the government uses technology in national security scenarios.
Anthropic CEO Dario Amodei met with Hegseth on February 24. The meeting failed to produce agreement. Amodei's position was straightforward: current AI models are not reliable enough for fully autonomous weapons deployment, and domestic surveillance at scale would violate fundamental rights. These were not negotiating positions. They were hard limits.
On February 27, Hegseth formally issued the supply chain risk designation. Anthropic was officially notified on March 3. In a Truth Social post the same day, President Trump separately directed all federal agencies to stop using Anthropic's technology, writing that the country's fate would not be decided by what he called an "out-of-control, Radical Left AI company." The General Services Administration subsequently terminated Anthropic's "OneGov" contract, ending the availability of its services across the federal government's centralised AI platform.
The Legal Challenge
Anthropic's complaint calls the government's actions "unprecedented and unlawful" and mounts a five-count challenge. The core arguments fall into three categories.
First, First Amendment retaliation. Anthropic argues the designation punishes the company for expressing views on AI safety, both publicly and in direct negotiations with the government. The lawsuit states plainly: "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech." The Pentagon has a right to disagree, even to walk away, but it cannot stigmatise a vendor as a security threat over protected speech.
Second, statutory overreach. The relevant law, 10 U.S.C. § 3252, defines supply chain risk as the danger that "an adversary" may sabotage, subvert, or maliciously introduce unwanted function into a covered system. Legal scholars at Lawfare have argued the designation "exceeds what the statute authorizes," that "the required findings don't hold up," and that Hegseth's own public statements may have undermined the government's legal position before litigation even began. The statute's legislative history points exclusively to foreign threats. A domestic vendor in a contract dispute does not fit the framework Congress created.
Third, procedural failures. The law generally requires agencies to conduct a risk assessment, notify the targeted company, allow it to respond, make a written national security determination, and notify Congress before excluding a vendor. Anthropic's complaint alleges none of these steps were properly followed.
There is also a glaring logical contradiction at the heart of the government's position. If Anthropic genuinely poses an acute supply chain threat requiring emergency exclusion, why is the Pentagon simultaneously allowing a six-month phaseout? As the Lawfare analysis put it, the government "cannot simultaneously claim a vendor poses an acute supply-chain threat requiring emergency exclusion and that it's perfectly safe to keep using the vendor for half a year." Claude has reportedly continued to support military operations, including intelligence assessments in the U.S. conflict with Iran, even after the blacklisting.
The Competitive Scramble
The fallout created immediate market dynamics that would be darkly comic if the stakes were not so high.
Within hours of Anthropic's designation, OpenAI CEO Sam Altman announced a new Pentagon deal to deploy ChatGPT on classified systems. The timing drew sharp criticism. Altman later admitted the move was "opportunistic and sloppy" and said the company "shouldn't have rushed" to get the agreement out. OpenAI then renegotiated its own contract to add explicit prohibitions on domestic surveillance and restrictions on intelligence agency access, provisions that looked remarkably similar to the guardrails Anthropic had been insisting on from the start.
The backlash was not just external. An OpenAI robotics leader, Caitlin Kalinowski, resigned on principle over the deal. Multiple OpenAI employees signed an open letter supporting Anthropic's stance. Chalk graffiti appeared outside OpenAI's San Francisco offices reading "Where are your redlines?" Meanwhile, Claude surged past ChatGPT to become the most downloaded free app on Apple's App Store for the first time, and Anthropic reported more than a million new daily sign-ups.
Elon Musk's xAI has also reportedly been cleared for use on classified Pentagon systems. The irony is thick: Lawfare has previously noted that Grok has a "documented history of biased, misleading, antisemitic, and harmful outputs," yet xAI faces none of the scrutiny directed at the company that insisted on safety guardrails.
What This Means Beyond Defence
For anyone working in payments, commerce, or enterprise technology, this case should command attention. The principle at stake is not specific to AI.
Every technology vendor that serves government buyers, or serves companies that serve government buyers, operates within acceptable use policies. Card networks set rules about what transactions processors can facilitate. Cloud providers maintain terms of service that restrict certain workloads. SaaS companies define boundaries around how their tools can be deployed. These are standard commercial practices, not acts of sabotage.
The question Anthropic's lawsuit forces is whether the government can weaponise procurement law to punish a domestic company for maintaining those boundaries. If the supply chain risk designation survives legal challenge, it creates a template: any vendor that refuses to grant unlimited access to its technology could face the same treatment. As Just Security argued, allowing this designation to stand would let future administrations use procurement authorities as a tool for coercion against any company whose terms of service conflict with government preferences.
The broader structural problem, as Lawfare's Alan Z. Rozenshtein has written, is that the rules governing military AI are being set through ad hoc negotiations between executive officials and individual companies, with no democratic input and no durable framework. Congress, not the Pentagon or any single AI lab, should be setting these boundaries. Without legislation, the rules change with every administration and every contract negotiation.
Anthropic's legal fight will take months, possibly years, to resolve. But the designation has already achieved one thing the Pentagon likely did not intend: it has made the company's safety commitments the most publicly visible and commercially valuable brand position in the AI industry.
If a government can blacklist a domestic technology company for refusing to remove safety guardrails, what does that mean for every vendor that sets terms on how its products are used?
