Anthropic is briefing EU regulators on why it refused to ship Mythos. OpenAI is handing enterprises a cyber-tuned model and $10 million in API credits. The offensive-defensive split just became policy, written by the companies that built the risk.

Anthropic sat down with the European Commission on April 15 to explain why one of its most capable models is not shipping. The next day, OpenAI announced the opposite move: a cyber-tuned version of GPT routed to selected security firms, with $10 million in API grants attached. Same problem, opposite answers, one day apart.

Here is what that split actually means. When a frontier lab decides a model is too dangerous to release, the capability does not disappear. It becomes private. When a competing lab decides the same class of capability should be routed to defenders, the capability also does not become public. It becomes selectively commercial. Neither choice is regulation. Both are editorial decisions made by the labs that built the risk.

The offensive capability and the defensive capability are now both private. The middle, where regulators and banks and payment processors actually live, is where the asymmetry lands.

What Mythos Does

Anthropic unveiled Mythos earlier this month and almost immediately refused to release it. The reason, according to PYMNTS reporting on the EU briefing, is that the model can autonomously discover zero-day vulnerabilities in code that is already in production. Not test code. Not simulated environments. Live systems, including some that Anthropic says were already exposed in the wild.

European Commission officials met Anthropic on Wednesday. More meetings are planned. The Commission has not said what it wants out of the process. Anthropic has not said what it would take to ship the model. That conversation is happening in private, which is its own kind of signal.

We covered the initial Mythos leak when the capability first surfaced. What has changed in the weeks since is that the regulatory response is now active, not theoretical. An EU regulator is sitting across from an AI lab asking why a specific model has not shipped. That is not how this conversation used to work.

What Trusted Access Does

OpenAI's answer runs in the opposite direction. Trusted Access for Cyber, announced April 16, puts GPT-5.4-Cyber in the hands of selected security firms and enterprises, backed by $10 million in API grants. The pitch is clear. If offensive AI capability is going to exist somewhere, defenders should not be the last to get it.

So OpenAI is picking the defenders. Selected ones. That is a reasonable position. It is not the same as a regulatory framework.

OpenAI decides who counts as a defender. OpenAI decides the access terms. The $10 million in credits is a commercial subsidy for a commercial model, useful but not neutral. When your defense depends on one vendor's shortlist, you have not solved the problem. You have concentrated it.

The Policy Gap Nobody Is Filling

No regulator has defined what counts as too capable to ship. The EU's AI Act does not have a specific zero-day-discovery threshold. US frameworks do not either. So labs are writing the rules by acting on them.

Anthropic is writing one rule. If a model can find zero-days autonomously, withhold it. OpenAI is writing a different rule. If offensive capability exists, route the defensive version to people you trust. Both rules are defensible. Neither is enforceable beyond the lab that wrote it.

Read that again. The policy on whether the most dangerous AI capabilities ship is being made by the companies that built them. Not legislators. Not standards bodies. Two labs with very different views on where the line is.

Why This Lands on Payments Infrastructure

Zero-day vulnerabilities in banking and payments codebases are the highest-value target in the whole stack. Authorisation flows. Tokenisation services. Clearing and settlement systems. The places where a single exploit is worth more than most fraud rings make in a year.

If Mythos-class capability exists privately inside Anthropic, and GPT-5.4-Cyber capability exists semi-privately inside a circle of OpenAI customers, then banks and processors are in an unusual position. They have to assume attackers will eventually replicate the offensive capability. They cannot assume they will get the defensive capability on the same timeline.

That is the asymmetry. A well-funded adversary, and plenty of them target payments, can build or buy its way to the same class of capability Anthropic is withholding and OpenAI is renting out. The offensive side does not wait for policy. The defensive side does.

Attackers do not need official access. Defenders do.

The supply-chain side of this story is already live. Our coverage of the LiteLLM breach showed how a single compromised AI infrastructure component can cascade through payment-adjacent systems. Mythos-class capability does not need to ship for that pattern to accelerate. It just has to leak.

What This Says About Governance

Between Anthropic and OpenAI, we now have two unilateral cyber policies and zero ratified ones. That is the uncomfortable part. Governance is being performed by the parties most exposed to the downside of getting it wrong, and also by the parties most exposed to the upside of getting it right.

We wrote earlier about the evidence base around agent security, in our analysis of AI agent security. The pattern is the same here, one level up. Capability outpaces deployment. Deployment outpaces governance. Governance is still being drafted.

The FDIC released updated supervisory guidance this week on payment processing relationships with higher-risk merchants. Useful, but one regulator, one vertical, one layer of the stack. Nothing in it addresses what happens when the attacker running fraud against those merchants is using a model the regulator has never seen.

What To Watch

Three things will tell us which way this goes.

First, whether the EU Commission moves from briefings to a formal position on autonomous vulnerability discovery. If it does, Anthropic's withholding strategy becomes a compliance requirement, not a company choice. That reshapes how every frontier lab thinks about the release question.

Second, whether OpenAI's Trusted Access list expands past a handful of large security vendors. A small list is a concentration play. A broad list starts to look like a public good. The difference matters for every bank that is not on it.

Third, whether a third lab makes a different choice. A model like Mythos is not unique to Anthropic. Someone else will build one. What they do with it tells us whether this week's split becomes an industry norm or an outlier.

When one lab withholds and another arms a chosen few, who decides which capability the attackers actually get to build?

Charlie Major is a Product Development Manager at Mastercard. The views and opinions expressed in Major Matters are his own and do not represent those of Mastercard.
