
The authentication gap between pilot and production is where the real risk lives.
Card networks are racing to pilot agentic commerce. The compliance frameworks to govern it do not exist yet.
Last week, DBS became the first bank in Asia Pacific to pilot Visa Intelligent Commerce for everyday payments. Days earlier, Mastercard completed Australia's first authenticated agentic transactions with Westpac and CommBank using its Agent Pay technology. The demos were polished: cinema tickets purchased via AI agent, hotel rooms booked in Thredbo, food and beverage orders placed in Singapore.
The press releases were confident. The underlying compliance infrastructure was nowhere to be seen.
Agentic payments are arriving faster than the governance frameworks needed to make them safe. The question is not whether AI agents can move money. It is whether anyone has verified they should.
The Pilots Look Impressive. Look Closer.
The Mastercard and Visa pilots share something in common beyond their timing. Both demonstrated agentic payments in tightly controlled, low-risk scenarios. A cinema ticket. A hotel booking. A coffee order. Transactions where the cost of failure is a refund, not a regulatory incident.
This is not a criticism of the technology. It is a criticism of what the pilots leave unsaid.
Mastercard's Agent Pay processed transactions through IPSI using Maincode's sovereign large language model, Matilda. Visa's Intelligent Commerce pilot with DBS demonstrated that AI agents can complete purchases using DBS/POSB credit and debit cards via issuer-controlled flows.
Both networks pointed to research suggesting that fifty-five percent of Australian consumer transactions could be AI-influenced by 2030, worth up to A$670 billion. That is an extraordinary volume of money to route through systems whose authentication, liability, and dispute resolution frameworks have not been stress-tested, regulated, or even fully defined.
The Dual Authentication Crisis
Traditional payment authentication answers one question: is this person who they claim to be? Agentic payments require answering two questions simultaneously, and existing infrastructure cannot handle either of them well.
The first question is intent. Did the human actually authorise this specific agent to make this specific purchase at this specific price? The second is integrity. Is the agent itself operating as designed, free from manipulation, and acting within the boundaries it was given?
Fraud specialist David Barnhardt calls this the "dual authentication crisis", and the data suggests banks are not ready for it.
According to the Cloud Security Alliance and Oasis Security, seventy-eight percent of organisations have no formal policies governing non-human identities. Ninety-two percent lack confidence that their identity and access management tools can handle AI agent authentication. And seventy-nine percent of non-human identities have excessive permissions, meaning the agents that do exist already have more access than they should.
Banks built their authentication for humans. Agentic payments require verifying delegated authority, something the entire identity stack was never designed to do.
This is not a gap that pilots can paper over. Point-in-time authentication checks, the kind banks use today, cannot detect an agent that starts within its boundaries and gradually drifts outside them.
Barnhardt argues that what banks need are revocable cryptographic identities for every AI agent, combined with continuous trust scoring that monitors behaviour in real time and revokes access the moment something deviates. No major bank has deployed this at scale.
The Fifth Player Nobody Planned For
The payments value chain was built around four parties: cardholder, merchant, issuer, acquirer. Every rule governing disputes, chargebacks, and liability assumes this structure. Agentic payments introduce a fifth: the AI platform.
Visa's own T.R. Ramachandran acknowledged this directly, noting that there is now a "fifth player in the value chain" and adding: "You almost have to assume mistakes will happen and create guardrails and protection around that."
This is a refreshingly honest admission from a network executive. It is also deeply concerning. The guardrails he describes do not yet exist.
When an AI agent buys the wrong product, books the wrong hotel, or authorises a payment the consumer did not intend, who bears the cost? The consumer did not make the purchase directly. The merchant fulfilled a legitimate-looking order. The AI platform executed instructions it believed were valid. The issuer approved a transaction that passed authentication. The acquirer processed it normally.
Every party in the chain can point to someone else. And if history is any guide, the party with the least leverage, the merchant, will absorb the losses until regulation forces a different outcome. As GR4VY's analysis of the merchant burden puts it: agentic payments do not remove fraud exposure, chargebacks, or regulatory consequences. In many ways, they increase them.
JPMorgan Chase's Mike Lozanoff raised the most practical concern at Money 20/20: can an agent "hallucinate and buy something we didn't tell it to buy?" The answer, based on how large language models work today, is unambiguously yes. The chargeback rules for that scenario do not exist.
The Attack Surface Nobody Is Discussing
The compliance gap gets worse when you consider what happens when bad actors deliberately target AI agents.
PYMNTS recently documented a newly identified threat called "recommendation poisoning", where hidden prompts embedded in web content manipulate what agentic AI systems recommend and purchase. This is not theoretical. Enterprise AI systems are already being influenced by concealed instructions that steer purchasing decisions toward specific products or vendors.
Meanwhile, Bank Information Security reports that researchers have classified "promptware" as a dangerous new class of attack, distinct from traditional prompt injection. The implications for agentic payments are severe.
An agent making purchasing decisions on behalf of a consumer can be manipulated not just into buying the wrong thing, but into routing payments to fraudulent merchants, accepting inflated prices, or bypassing security checks entirely. Barnhardt warns that the threat landscape is about to shift: where attackers currently focus on stealing credentials, the next frontier is agent compromise. The fraud surface area this creates is orders of magnitude larger than anything current chargeback or dispute systems were built to handle.
The same efficiency that makes agentic commerce attractive will amplify fraud just as quickly, unless identity signals persist beyond the point of entry and adapt to changing behaviour in real time.
What Would Actually Work
The picture is not entirely bleak. Serious work is underway, it is just not keeping pace with the pilots.
Prove has launched a "Know Your Agent" initiative that enables continuous lifecycle identity authentication for AI agents. Mastercard's Agent Suite provides tools for building, testing, and deploying AI agents with built-in security, alongside published agentic commerce standards.
Google DeepMind has proposed a formal framework for intelligent AI delegation that addresses the trust and authority problems head-on. Unlike the network-led pilots, DeepMind's approach starts with governance and works outward to capability.
Nearly fifty percent of banks and insurers are now creating dedicated roles to supervise AI agents, according to Capgemini's World Cloud Report for Financial Services 2026. Accenture's Top Banking Trends report recommends that banks establish an agent identity framework enabling authentication, authorisation, and permission management across operations.
These are the right instincts. But frameworks and standards are not the same as deployed infrastructure. The gap between "we published guidelines" and "our systems can revoke an agent's access in real time when it drifts outside its mandate" is measured in years, not quarters.
What the industry actually needs is a compliance-first approach to agentic payments: revocable cryptographic identities for every agent, continuous behavioural trust scoring, embedded compliance logic within agent workflows, clear liability allocation that does not default to merchants, and regulatory frameworks that address the fifth player before the first major incident forces them to.
The Uncomfortable Truth
The card networks are not wrong that agentic commerce is coming. The 17,000+ MCP servers in production, the SDKs from Anthropic, Microsoft, and Google, the pilots from Visa and Mastercard: these all point to a technology wave that will fundamentally reshape how money moves.
But there is a pattern in payments innovation that repeats itself. New capability ships. Adoption accelerates. Fraud follows. Regulation arrives after the damage. Merchants and consumers absorb the cost in between.
Agentic payments do not have to follow this pattern. The compliance tools exist. The identity frameworks are being built. The research on attack vectors is public. The only question is whether the industry will deploy the governance before or after the first agent-initiated payment goes catastrophically wrong.
Based on the pace of these pilots versus the pace of these frameworks, we know which one is winning.
Related reading from Major Matters:
Sources
The card networks are building the rails for agentic commerce. Who is building the brakes?