Most companies have a quiet week or a loud one. Anthropic had a week that touched national security, enterprise economics, and product capability in the same five-day span. Each story on its own would be significant. Together, they paint a picture of a company operating under extraordinary pressure from multiple directions simultaneously.
Anthropic is fighting the government with one hand and shipping product with the other.
The Pentagon Blacklist
On March 4, the US Department of War formally designated Anthropic as a supply chain risk. It was the first time a US company had received this classification over its AI models' ethical guardrails.
The reasoning was blunt. Emil Michael, the Department of War's CTO, said Anthropic's Claude models "pollute" the supply chain because they contain "a different policy preference" regarding ethics and safety. He argued that Claude's constitutional approach to AI safety could result in "ineffective weapons, ineffective body armor, ineffective protection" for soldiers.
Anthropic sued. The company has consistently opposed its models being used for mass surveillance and autonomous weapons. The lawsuit challenges the designation on the grounds that the government is penalising a company for building safety into its products.
The support has been notable. Employees of Microsoft, OpenAI, and Google, along with former military personnel, have backed Anthropic's position. Bank Information Security reported that the standoff is being watched across the defence-tech ecosystem as a test case for whether AI safety principles will survive contact with procurement politics.
The framing matters. The administration has positioned this as opposing "woke AI," but as The Decoder noted, the approach mirrors China's efforts to align AI models with state values. The question is not whether AI should serve the military. The question is whether the government can compel a company to remove safety features from its models.
The Pricing Reset
While the legal battle dominated headlines, Anthropic quietly made one of the most significant pricing changes in the AI industry this year.
Claude Opus 4.6 and Claude Sonnet 4.6 no longer charge extra for long-context requests. Previously, any request exceeding 200,000 tokens incurred a surcharge of up to 100 percent. A 900,000-token request could cost double the base rate. That surcharge is gone.
The new flat rates: Opus 4.6 at $5 per million input tokens and $25 per million output tokens. Sonnet 4.6 at $3 per million input tokens and $15 per million output tokens. Processing 900,000 tokens now costs the same per token as processing 9,000.
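To see the difference concretely, here is a back-of-the-envelope comparison for a single 900,000-token Opus request. It assumes the old base rate matched the new flat rate and that the full 100 percent surcharge applied, both simplifying assumptions rather than published figures.

```python
# Back-of-the-envelope input cost for one long-context Opus request.
# Assumes the old base rate equalled the new flat rate and the full
# 100 percent surcharge applied above 200,000 tokens -- simplifying
# assumptions, not published figures.

OPUS_INPUT_PER_M = 5.00   # USD per million input tokens (new flat rate)
SURCHARGE = 1.00          # up to 100 percent over the 200k threshold

def input_cost(tokens: int, surcharged: bool) -> float:
    base = tokens / 1_000_000 * OPUS_INPUT_PER_M
    if surcharged and tokens > 200_000:
        return base * (1 + SURCHARGE)
    return base

request = 900_000
print(f"Old, with surcharge: ${input_cost(request, surcharged=True):.2f}")   # $9.00
print(f"New, flat rate:      ${input_cost(request, surcharged=False):.2f}")  # $4.50
```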
Alongside the pricing change, Anthropic increased the media processing limit from 100 to 600 images or PDF pages per request.
This matters for anyone building agent workflows. As we explored in our analysis of MCP vs agent skills, agents consume context at scale. Every tool call, every document read, every conversation turn adds tokens. The long-context surcharge was a real constraint on how much context an agent could use before the economics broke down.
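To make that concrete, here is a toy model of an agent session. The per-turn token counts are illustrative assumptions, not measurements, but the shape of the curve is the point: because each turn replays the full history, cumulative input tokens grow roughly quadratically, which is exactly where a long-context surcharge used to bite.

```python
# Toy model of context growth in an agent loop. Each turn resends the
# full history, so cumulative input tokens grow roughly quadratically.
# All token counts are illustrative assumptions.

TOOL_RESULT_TOKENS = 4_000   # e.g. a document read or API response
MODEL_REPLY_TOKENS = 500
SONNET_INPUT_PER_M = 3.00    # USD per million input tokens (flat rate)

context = 10_000             # system prompt plus task description
total_cost = 0.0
for turn in range(1, 61):    # a 60-turn agent session
    total_cost += context / 1_000_000 * SONNET_INPUT_PER_M
    context += TOOL_RESULT_TOKENS + MODEL_REPLY_TOKENS

print(f"Final context: {context:,} tokens")        # 280,000 tokens
print(f"Session input cost: ${total_cost:.2f}")    # roughly $25.70
# Under the old scheme, every turn past the 200k mark (turns 44-60
# here) would have been billed at up to double this rate.
```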
The changes are available through Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry, though the media limit expansion does not apply on Bedrock.
The economics of agentic AI just shifted. Long-context workflows that were cost-prohibitive last week are viable today.
Visual Intelligence
Lost in the noise of lawsuits and pricing was a product update that changes how Claude works in practice. Claude can now generate charts, diagrams, and visualisations during conversations.
If Claude determines a visual would be useful based on the context of a conversation, it inserts the image directly into the response. No separate tool needed. No export step. The model reasons about when a visual would communicate more effectively than text and produces it inline.
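A hedged sketch of what this might look like through the Messages API follows. The anthropic Python SDK and its typed content blocks are real; the model ID below and the idea that a visual arrives as a non-text content block are assumptions about how the feature surfaces, not confirmed API details.

```python
# Hedged sketch: how inline visuals might surface through the Messages
# API. The anthropic SDK and text blocks are real; the non-text output
# block and the model ID below are assumptions, not confirmed API.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-opus-4-6",  # hypothetical model ID for Opus 4.6
    max_tokens=2048,
    messages=[{"role": "user", "content": "Summarise Q3 revenue by region."}],
)

# Responses are a list of typed content blocks. If the model decided a
# chart communicates better than prose, an image block (assumed shape)
# would appear interleaved with the text blocks.
for block in message.content:
    if block.type == "text":
        print(block.text)
    else:
        print(f"[non-text block: {block.type}]")  # e.g. an inline chart
```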
For enterprise users, this closes a gap that mattered. Financial analysis, data exploration, architecture discussions, and strategic planning all benefit from visual thinking. Claude can now participate in those conversations with the same modality as a human colleague drawing on a whiteboard.
What Connects These Stories
Three stories, one pattern. Anthropic is simultaneously fighting for the right to build AI its own way, making that AI cheaper to use at scale, and expanding what it can do.
The Pentagon fight is about control. Who decides what values are embedded in AI models? The company that builds them, or the government that wants to deploy them? The answer will shape the entire industry, not just Anthropic.
The pricing change is about adoption. Long-context surcharges were a barrier to the kind of deep, persistent agent interactions that enterprises need. Removing them is a bet that expanded usage will offset the lost revenue per token.
The visual update is about capability. Claude is moving from a text-in, text-out model to a multimodal collaborator. Each expansion in what it can produce makes it harder to replace with a competitor.
As SaaStr noted in their weekly roundup, the "gentle deceleration" narrative is over. The pace of change in AI is accelerating, and Anthropic is at the centre of multiple collisions at once.
As we covered in our Anthropic Pentagon blacklist analysis, the safety-versus-deployment tension has been building for months. This week, it became a court case.
When the government says your AI is too ethical, who gets to decide what safety means?