Anthropic's CEO met Susie Wiles and Scott Bessent on April 17. The meeting was not about one model. It was about who writes the rules when the frontier lab and the federal government disagree on what "safe" means.
On Friday, April 17, Dario Amodei met Susie Wiles, the White House Chief of Staff, and Scott Bessent, the Treasury Secretary, in the West Wing. Axios first reported the meeting. CNN described the conversation as productive. The session capped a two-month standoff that started with Anthropic's refusal to release Mythos, its cyber-capable frontier model, and escalated into a Pentagon procurement dispute that we covered in detail in February.
The outcome of the meeting matters less than the fact that it happened. A frontier AI lab and the federal government could not resolve their disagreement through normal procurement channels. They had to send the CEO to the Chief of Staff.
The Pentagon fight was a product dispute. The West Wing meeting made it a policy dispute. That is a permanent category change.
What Led to the Meeting
The backstory is short and sharp. Anthropic built Mythos, an agent capable of autonomously discovering zero-day vulnerabilities at scale. Anthropic decided the risk of release outweighed the benefit and withheld it. The Department of Defense interpreted that decision as a failure to provide a capability the government had contracted for. The Pentagon began reviewing Anthropic's position on the Defense approved product list.
We wrote in February that the dispute was not really about one model. It was about who defines safe, who bears the risk of release, and whether a lab can unilaterally decide that a capability the government wants is too dangerous to exist.
The MIT Technology Review published a broader piece earlier this month on the "human in the loop" illusion in AI warfare. The framing was sharp. If the model is the loop, the human oversight that regulators and ethicists point to is already cosmetic. That argument gave cover to both sides of the Anthropic-Pentagon fight. Either the human is still meaningfully involved (Pentagon position) or Mythos is already the decision maker (Anthropic position). The two cannot both be right.
The Friday Meeting
Amodei, Wiles, and Bessent met for what multiple outlets described as a thaw rather than a resolution. The specific concessions on either side have not been published. What we know publicly is that Anthropic committed to continued engagement with national security use cases and that the White House committed to working through the specific Mythos concerns rather than moving forward with broader procurement actions.
That framing matters. "Continued engagement" is not the same as "release Mythos to approved defenders." "Working through concerns" is not the same as "suspend the review." Both sides bought time. Neither side conceded the core question.
The involvement of the Treasury Secretary is the detail worth noting. Bessent is not part of the AI policy apparatus in any formal sense. His presence signals that the administration is thinking about frontier AI as an economic and regulatory question, not only a defense question. That is a larger frame than the Pentagon dispute alone.
The Vendor Breach
Five days after the West Wing meeting, the policy question got a new piece of evidence. On April 22, PYMNTS reported that Anthropic was investigating unauthorized access to the Claude Mythos Preview through one of its third-party vendor environments. A small group of users had reached the model via a private online forum, around the time Anthropic announced its controlled release program. The company said it had found no evidence the activity extended beyond the vendor or touched Anthropic's own systems.
Mythos is distributed through Project Glasswing, which gives a limited set of software companies access for security testing. That is where the access happened.
Read the report carefully. Anthropic is the lab that has argued most forcefully that unilateral withholding is the only defensible answer to Mythos-class capability. The vendor breach does not refute that argument. It sharpens it. Even the most restrictive release the lab could design had a hole. The Pentagon position, that approved defenders should have access, has not been refuted either. It has just gotten harder to operationalize without the exact vendor controls Anthropic is now investigating.
Both sides can read the breach as supporting their position. Anthropic can point to it as proof that broad release would be catastrophic when narrow release is already leaking. The Pentagon can point to the same facts and ask why a commercial third party got access before a vetted national security defender. Neither reading is wrong.
The first confirmed Mythos access outside Anthropic was not a Pentagon defender. It was a forum.
That is a detail the policy response will have to carry.
Why This Is Bigger Than Mythos
The Anthropic-OpenAI split on cyber policy is the context. In the two weeks before the Friday meeting, Anthropic declined to ship a cyber-capable frontier model. OpenAI announced Trusted Access for Cyber with GPT-5.4-Cyber and $10 million in credits for vetted defenders. We wrote about that split as a structural asymmetry. One lab withholds. One lab arms a shortlist. Neither is regulation.
What Friday's meeting signals is that the federal government is not going to leave that asymmetry to the labs. Not indefinitely. There will be a policy response. It will probably not be a single law. It will be a set of procurement terms, export controls, liability frameworks, and informal norms that get worked out over the next eighteen months.
The meeting also signals that the meeting itself is the new norm. Frontier labs are now strategic counterparties to the US government in the way that defense contractors and semiconductor foundries have been for decades. The CEO of Lockheed Martin meets the Chief of Staff every few months. The CEO of TSMC meets the Commerce Secretary regularly. Anthropic, OpenAI, and Google DeepMind are entering that tier.
That is a mixed blessing for the labs. It brings access and legitimacy. It also brings the obligation to say yes when the administration needs capabilities for reasons the lab might not fully endorse.
The Commerce Layer
Every time we write about frontier AI policy, the commerce and payments layer gets treated as an afterthought. That is a mistake. The same cyber capabilities that worry the Pentagon are the ones that will decide whether agentic payments and commerce flows can be deployed at scale.
Consider the risk stack. An agent with Mythos-class capabilities could find vulnerabilities in payment authorization logic, tokenization services, or merchant checkout integrations. The payments industry has spent five years pushing fraud detection upstream, specifically because the detection window closed. If the attacker is a Mythos-class model running at scale, the defense has to be a Mythos-class model too. That is why the Pentagon wants access. That is also why arming defenders is consequential.
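To make "pushing fraud detection upstream" concrete, here is a minimal sketch of a pre-authorization risk gate. All names (`PaymentRequest`, `upstream_risk_gate`) and the amount and velocity thresholds are hypothetical illustrations, not any network's actual controls; the point is only that the check runs before the authorization call, because a machine-speed attacker leaves no time for after-the-fact review.

```python
from dataclasses import dataclass


@dataclass
class PaymentRequest:
    """Hypothetical payment attempt as seen at the checkout integration."""
    merchant_id: str
    amount_cents: int
    requests_last_minute: int  # call velocity observed from this caller


def upstream_risk_gate(req: PaymentRequest,
                       amount_limit_cents: int = 50_000,
                       velocity_limit: int = 60) -> bool:
    """Return True if the request may proceed to authorization.

    Both checks run *before* authorization, so an automated attacker
    probing checkout logic is stopped upstream rather than detected
    after settlement, when the window to act has already closed.
    """
    if req.amount_cents > amount_limit_cents:
        return False  # single-transaction amount anomaly
    if req.requests_last_minute > velocity_limit:
        return False  # machine-speed call pattern, not a human shopper
    return True
```

A human-paced request passes (`upstream_risk_gate(PaymentRequest("m1", 2_500, 3))` is `True`), while the same merchant hit 500 times a minute is blocked before any authorization traffic is generated. Real defenses against model-scale attackers would need adaptive models on the defending side too, which is exactly the access question the Pentagon and Anthropic are arguing over.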
The outcome of the Anthropic-White House dialogue will shape what defenders in payments, banking, and commerce get to use. Today that capability is withheld. Tomorrow it might be available to a shortlist of card networks and processors. A year from now it might be baseline for any regulated financial entity. Or it might stay withheld entirely and force a different defensive architecture.
None of those outcomes are off the table this week.
What To Watch
Four specific signals over the next 60 days.
First, whether Anthropic ships a gated version of Mythos to a vetted set of national security or financial-sector defenders. A gated release would confirm that the Friday meeting produced a concrete path. Continued silence would suggest the standoff persists informally.
Second, whether the administration publishes any procurement guidance on frontier AI capabilities. A formal policy document or executive order targeting approved-product lists, export controls, or defender-access frameworks would be the first public trace of the broader policy response.
Third, whether OpenAI's Trusted Access for Cyber program expands or contracts. Expansion would signal that the administration sees the arm-the-defenders model as acceptable. Contraction would suggest the policy pendulum is swinging toward Anthropic's withholding approach.
Fourth, what the vendor breach investigation actually finds. If Anthropic confirms the access stayed inside a single third-party environment and did not reach production systems or Pentagon counterparties, the policy fallout stays contained. If the scope widens, the entire gated-release architecture becomes harder to defend.
The Deeper Question
This is the part no policymaker has answered publicly. Can a frontier AI lab decide, unilaterally, that a capability the US government wants is too dangerous to exist?
Anthropic said yes and withheld Mythos. The Pentagon said no and pushed procurement review. The Friday meeting suggested both sides are looking for a middle ground. The middle ground is probably an informal agreement that the lab can withhold and the government can set procurement terms that penalize withholding, with both sides agreeing not to litigate the underlying principle publicly.
That is functional. It is also fragile. It relies on the personal relationships between a small number of AI executives and a small number of administration officials. It does not survive a change of administration, a cyber incident attributable to Mythos-class capability, or a decision by either side to make the standoff public.
The Friday meeting bought time. It did not answer the question. That question is going to come back.
Charlie Major is a Product Development Manager at Mastercard. The views and opinions expressed in Major Matters are his own and do not represent those of Mastercard.