Visa published a striking number in its 2026 payments predictions: 80 percent of global consumers were targeted by a scam attempt in the past year. Not a small subset. Not a vulnerable demographic. Four out of five people.
That figure lands at a moment when the payments industry is simultaneously deploying AI as its primary fraud defence and watching criminals deploy the same technology as their primary weapon. According to PYMNTS, 68 percent of banks have now turned to AI-powered systems as traditional rule-based fraud detection fails to keep pace. The technology that was supposed to be the shield has also become the sword.
The AI fraud paradox is simple to state and difficult to solve: the same models that detect fraud at scale also generate it at scale. And the attackers do not have compliance departments.
The Defensive Stack
The case for AI in fraud detection is well established and increasingly urgent.
Traditional rule-based systems work by flagging transactions that match known patterns: unusual amounts, unfamiliar locations, velocity spikes. The problem is that these rules are static. Criminals learn them, adapt, and route around them. By the time a rule is written, the attack vector has already evolved.
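The brittleness of this approach is easy to see in miniature. Below is an illustrative sketch of the kind of static rules the paragraph describes (the thresholds, window sizes, and field names are invented for the example, not drawn from any real system):

```python
from dataclasses import dataclass

@dataclass
class Txn:
    account: str
    amount: float
    country: str
    minute: int  # minutes since some epoch, used for velocity checks

# Static thresholds: once criminals learn them, they simply stay under them.
AMOUNT_LIMIT = 5_000.0
HOME_COUNTRY = "US"
VELOCITY_WINDOW = 10   # minutes
VELOCITY_LIMIT = 5     # transactions per window

def flag(txn, history):
    """Return the list of static rules this transaction trips."""
    reasons = []
    if txn.amount > AMOUNT_LIMIT:
        reasons.append("unusual amount")
    if txn.country != HOME_COUNTRY:
        reasons.append("unfamiliar location")
    recent = [t for t in history
              if t.account == txn.account
              and txn.minute - t.minute <= VELOCITY_WINDOW]
    if len(recent) >= VELOCITY_LIMIT:
        reasons.append("velocity spike")
    return reasons
```

An attacker who keeps each transaction at 4,999 dollars, routes through a domestic proxy, and spaces attempts eleven minutes apart trips none of these rules, which is exactly the adaptation problem the text describes.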
Mastercard has moved aggressively into predictive fraud detection, using AI algorithms trained on network-wide transaction data to identify stolen card details and block compromised accounts. The company says its systems can now detect compromised credentials twice as fast as previous-generation tools. The advantage of operating at network scale is that patterns invisible at the individual bank level become detectable when you can see billions of transactions simultaneously.
Visa has taken a similar path, processing and monitoring more than 125 billion payment transactions in the past year using AI models that update continuously. The shift from batch-processed rules to real-time, model-driven scoring represents a genuine capability leap. Transactions that would have taken hours to flag can now be intercepted in milliseconds.
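The difference between batch rules and continuously-updating scoring can be sketched with a toy online model: a per-account profile that folds every new transaction into a running mean and variance, so the score adapts without a retraining cycle. This is a minimal illustration of the idea, not a description of Visa's actual models:

```python
import math

class OnlineProfile:
    """Exponentially weighted running profile of one account's amounts."""

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # how quickly the profile adapts to new data
        self.mean = None
        self.var = 1.0

    def score(self, amount):
        """Return a z-like anomaly score, then fold the amount in."""
        if self.mean is None:
            self.mean, z = amount, 0.0
        else:
            z = abs(amount - self.mean) / math.sqrt(self.var + 1e-9)
            d = amount - self.mean
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        return z
```

Because the update happens inline with scoring, each transaction both receives a risk score and sharpens the model, which is the essence of the shift from batch-processed rules to real-time, model-driven detection.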
At the bank level, the adoption curve is steep. The 68 percent figure from PYMNTS reflects not just early adopters but a broad industry recognition that rule-based systems are no longer sufficient. The question is no longer whether to deploy AI for fraud detection. It is whether the defensive models can evolve as fast as the offensive ones.
The Offensive Stack
The same large language models that power chatbots, code assistants, and agentic commerce tools are also powering a new generation of fraud.
Visa warned in its 2026 outlook that this year will see a material increase in the sophistication and volume of AI-powered identity attacks. The company described this as a new era for identity fraud, driven by three capabilities that did not exist at scale even two years ago.
First, deepfakes. AI-generated audio and video that can replicate a person's voice and appearance with enough fidelity to bypass voice authentication and video verification systems. What once required a sophisticated lab now requires a laptop and a few minutes of sample audio.
Second, synthetic identity fraud. Rather than stealing an existing person's identity, criminals use AI to construct entirely new identities by combining real and fabricated data. These synthetic identities pass initial verification checks, build credit histories, and then execute fraud at scale before disappearing.
Third, agentic scams. As AI agents become capable of conducting multi-step tasks autonomously, criminals are deploying agents that can initiate and manage fraud operations across multiple targets simultaneously. The human bottleneck in fraud, the time it takes to manually execute each scam, is being removed. A single operator can now run dozens of concurrent social engineering campaigns, each personalised to the target, each adapting in real time to responses.
The economics have shifted dramatically. The cost of mounting a convincing phishing campaign has dropped from thousands of dollars to near zero. Voice cloning that once required studio equipment now requires a three-second audio sample and a consumer-grade laptop. And output quality is improving faster than detection tools can keep pace.

The financial impact is already measurable. The Chicago Federal Reserve noted at its October payments symposium that fraud is a mushrooming problem for companies and consumers. Roughly three quarters of respondents to an Association for Financial Professionals survey said their companies had been targets of actual or attempted fraud in the past year.
The critical shift is from transaction-level fraud to identity-level fraud. Criminals are moving upstream. Instead of stealing one payment at a time, they are stealing entire identities using AI-powered impersonation that is increasingly difficult to distinguish from the real thing.
The Agentic Commerce Dimension
The rise of agentic commerce introduces an entirely new attack surface that the industry is only beginning to grapple with.
When an AI agent makes a purchase on behalf of a consumer, the merchant needs to answer a question that has never existed before in payments: is this agent legitimate? The transaction is not initiated by a human clicking a button or tapping a card. It is initiated by software acting on instructions that may have been given hours or days earlier.
Cloudflare developed Web Bot Auth specifically to address this problem, creating a cryptographic authentication layer that allows agents to prove their identity using HTTP Message Signatures. Both Visa and Mastercard have built on this foundation. Visa's Trusted Agent Protocol and Mastercard's Agent Pay both use Web Bot Auth to help merchants distinguish legitimate shopping agents from malicious bots.
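The core of HTTP Message Signatures is that the agent signs a canonical "signature base" built from selected parts of the request, which the merchant can verify against a published key. The sketch below shows the shape of that signature base. It is deliberately simplified: real Web Bot Auth deployments sign with an asymmetric key (such as Ed25519) listed in a key directory, whereas this example uses HMAC purely to stay within the standard library:

```python
import hmac, hashlib, base64

def signature_base(components, params):
    """Build a simplified RFC 9421-style signature base string."""
    lines = [f'"{name}": {value}' for name, value in components]
    lines.append(f'"@signature-params": {params}')
    return "\n".join(lines)

def sign(key, components, params):
    """Sign the base; illustrative HMAC stands in for an asymmetric key."""
    base = signature_base(components, params)
    mac = hmac.new(key, base.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

def verify(key, components, params, sig):
    """A merchant recomputes the base and checks the signature."""
    return hmac.compare_digest(sign(key, components, params), sig)
```

The point of covering components like the target host and method in the signed base is that a malicious bot cannot replay a legitimate agent's signature against a different merchant or request.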
Mastercard's Verifiable Intent framework, announced on March 5, adds another layer by creating a tamper-resistant cryptographic record linking consumer identity, agent instructions, and transaction outcomes. The goal is to ensure that when a dispute arises, all parties have access to facts rather than guesswork.
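The general idea of a tamper-resistant record linking identity, instructions, and outcomes can be illustrated with a simple hash chain, where each entry's hash covers both its contents and the previous hash, so any later alteration is detectable. This is a generic sketch of the technique, not Mastercard's actual format:

```python
import hashlib, json

def record_hash(record, prev_hash):
    """Hash a record together with the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(prev_hash + payload).hexdigest()

def append(chain, record):
    """Append a record, chaining it to whatever came before."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record,
                  "hash": record_hash(record, prev.encode())})
    return chain

def verify_chain(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        if entry["hash"] != record_hash(entry["record"], prev.encode()):
            return False
        prev = entry["hash"]
    return True
```

In a dispute, all parties can recompute the chain and agree on what the consumer authorised, what the agent was instructed to do, and what actually settled, which is the "facts rather than guesswork" property the framework aims for.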
These are meaningful advances. But they also reveal the scale of the challenge. Every new commerce protocol creates new surface area for fraud. Every new authentication mechanism becomes a new target for circumvention. The history of payments security is a history of escalating countermeasures, and agentic commerce is no exception.
The Arms Race Dynamics
What makes the current moment different from previous fraud cycles is the symmetry of capability.
In earlier eras of payments fraud, the tools available to criminals were materially different from those available to defenders. Skimming devices, phishing emails, and social engineering relied on physical access, scale constraints, and human effort. Banks had structural advantages: more data, better technology, and regulatory frameworks that forced collaboration.
That asymmetry is collapsing. The same foundation models available to Mastercard's fraud detection teams are available to anyone with an API key. The same reasoning capabilities that power legitimate AI agents can power fraudulent ones. The cost of mounting a sophisticated fraud operation has dropped by orders of magnitude while the potential payoff has grown.
Vitus Rotzer, speaking on the Bottomline payments podcast, argued that banks that remain siloed, protecting themselves individually without sharing data across the ecosystem, are going to fail. Fraudsters already operate cross-platform, probing multiple institutions simultaneously. If defenders do not share intelligence at the same speed, they will always be a step behind.
Visa echoed this in its predictions report, calling for the industry to develop shared capabilities and technologies to fight identity fraud collectively. The company acknowledged that no single bank, merchant, fintech, or government can win this fight alone.
The fraud arms race has a structural problem: defenders must protect every entry point while attackers only need to find one. AI amplifies both sides, but the amplification favours the attacker because the cost of probing has dropped to near zero.
What Comes Next
Several developments will shape how this arms race evolves over the next 12 to 18 months.
Digital identity wallets are gaining momentum, particularly in Europe where the revised eIDAS regulation is driving adoption of the EUDI wallet. These wallets aim to give consumers a portable, privacy-preserving way to prove their identity across financial, government, and commercial services. Mastercard has been investing in digital identity verification tools, including biometric solutions that could make in-store checkout seamless while strengthening authentication.
The FIDO Alliance standards, which underpin passkeys and passwordless authentication, are being extended to cover agentic commerce interactions. Mastercard is contributing to the FIDO Payments Working Group to define how verifiable credentials can authenticate both agent and consumer interactions. If successful, this would replace the fragile username-and-password infrastructure that criminals exploit most often.
Data-sharing frameworks between financial institutions are also evolving, though slowly. The Federal Reserve's fraud detection tools, including the FedDetect Notification Services, have seen strong adoption. The Exception Resolution Service has been expanded beyond ACH to cover instant payment transactions on FedNow. These are incremental steps, but they signal a recognition that fraud defence must operate at network level, not institution level.
The real-time payments dimension adds urgency to all of this. As FedNow scales beyond 1,500 participating institutions and transaction limits rise to $10 million, the speed of payments creates a narrower window for fraud detection. A rule-based system that flags a suspicious transaction for manual review within 24 hours is adequate for batch-processed ACH. It is entirely inadequate for instant payments that settle in seconds. The shift to real-time rails demands real-time fraud intelligence, which in turn demands AI models that are continuously learning and adapting.
The industry is also grappling with the human factor. Fraud awareness training has historically focused on teaching employees and consumers to recognise phishing emails and suspicious links. When AI-generated deepfakes can replicate a CEO's voice on a phone call or create a convincing video of a colleague requesting a wire transfer, that playbook breaks down. The next generation of fraud defence will need to account for the fact that human judgment, long the last line of defence, is becoming less reliable against AI-powered deception.
The tension at the heart of all of this is the same tension that runs through every layer of the payments stack: frictionless commerce requires trust, and trust requires verification, and verification creates friction. Every generation of payments technology has had to navigate this trade-off. The AI era simply raises the stakes on both sides.
Defenders and attackers now have access to the same AI models, the same reasoning capabilities, and the same scale. In an arms race where both sides are equally armed, what determines the outcome: technology, collaboration, or regulation?