LexisNexis analyzed more transaction data than any fraud report we have seen. The finding: the same agentic technology powering commerce is powering the attacks. And the fraud is growing faster than the defenses.
Every week brings a new announcement about AI agents that can shop, negotiate, and pay on your behalf. Visa launched Intelligent Commerce Connect. Mastercard rolled out Agent Pay across three continents. Juniper Research just projected that agentic commerce will generate $1.5 trillion globally by 2030.
Here is the other side of that story.
LexisNexis Risk Solutions published its 2026 Cybercrime Report last week, drawing on 116 billion online transactions processed through its Digital Identity Network in 2025. The headline number: an 8 percent global rise in fraud attack rates. The drivers: synthetic identities stitched together from stolen data, and agentic bots sophisticated enough to fool behavioral detection systems designed to catch them.
The same technology stack powering the agentic commerce buildout is powering the fraud that feeds on it. The 8 percent growth rate is not a blip. It is a trend line, and it is steepening.
The Scale of the Data
Let's start with what makes this report different. Most fraud studies draw on survey data or a few billion transactions. The LexisNexis dataset covers 116 billion transactions across the full year of 2025. That volume matters because it captures patterns invisible at smaller scales: seasonal spikes, regional variations, the slow creep of synthetic identities that take months to build before they strike.
The 8 percent headline is a global average. Some sectors got hit far harder. Ecommerce fraud attack rates grew 64 percent year over year. Account takeover attempts at login jumped 216 percent. Gaming and gambling sites saw a 76 percent rise in global attack rates.
Those are not rounding errors. They are sector-level surges happening while the industry talks about building trust frameworks for AI agents.
Synthetic Identities: The Patient Fraud
Synthetic identity fraud now accounts for 11 percent of all fraud globally, an eight-fold year-over-year increase. It is the fastest-growing fraud type LexisNexis tracked.
The mechanics are straightforward. Fraudsters combine real identity fragments (a valid Social Security number here, a fabricated name there) into a person who does not exist but passes verification checks. They nurture these identities for months, establishing credit histories, passing KYC screens, behaving like legitimate customers. Then they cash out.
What makes synthetics particularly dangerous in an agentic commerce context is the authentication problem. The agent protocols being built by Visa, Mastercard, Google, and OpenAI all rely on some form of identity verification to authorize transactions. If the identity itself is fabricated, the authentication layer does exactly what it is designed to do. It authenticates a fiction.
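The structural flaw is easy to sketch. In the toy example below (all names and checks are invented, not drawn from any real KYC system), each identity fragment is validated independently, so an identity assembled from valid pieces clears the whole screen even though no such person exists:

```python
from dataclasses import dataclass

# Hypothetical illustration only: the checks and data are invented,
# not taken from any real KYC provider or verification API.

@dataclass
class Identity:
    ssn: str    # real, stolen fragment
    name: str   # fabricated
    dob: str    # fabricated

def ssn_is_issued(ssn: str) -> bool:
    # A fragment check only asks "is this a real, issued SSN?"
    # (stand-in for a lookup against issued-number ranges)
    return ssn == "078-05-1120"

def name_passes_format(name: str) -> bool:
    return name.replace(" ", "").isalpha()

def naive_kyc(identity: Identity) -> bool:
    # Each fragment is validated in isolation. A synthetic identity
    # built from individually valid pieces passes the whole screen.
    return ssn_is_issued(identity.ssn) and name_passes_format(identity.name)

synthetic = Identity(ssn="078-05-1120", name="Jordan Vale", dob="1991-04-02")
print(naive_kyc(synthetic))  # every fragment checks out; the person does not exist
```

The fix is cross-fragment correlation (does this SSN have a history with this name and birth date?), which is precisely the signal that takes months of nurturing to fake and that fragment-wise screens never ask for.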
The regional patterns are revealing. In Latin America, synthetic identity fraud accounts for 48.3 percent of all fraud. In EMEA, first-party fraud dominates at 51.7 percent. The split tells us something about infrastructure maturity. Where digital identity systems are newer and verification gaps wider, synthetics thrive.
The Federal Reserve Bank of Boston flagged generative AI as a significant accelerant last year. Creating convincing synthetic identities used to require effort. Now it requires a prompt. The barrier to entry collapsed, and the volume followed.
Agentic Bots: 450 Percent Growth in a Single Year
The bot numbers are staggering. Agentic traffic on the LexisNexis Digital Identity Network rose 450 percent between January and December 2025. Malicious bot attacks specifically increased 59 percent.
That distinction matters. Not all agentic traffic is malicious. Legitimate AI agents performing price comparisons, managing subscriptions, and executing purchases are a growing share of commerce traffic. The problem is telling them apart from the ones committing fraud.
"Cybercriminals are experimenting with the same technologies transforming digital commerce," Stephen Topliss, VP of Fraud and Identity at LexisNexis Risk Solutions, said in the report. Organizations now face the challenge of distinguishing between legitimate humans, authorized bots, and malicious agents. That three-way sorting problem did not exist two years ago.
The bots are getting better at impersonation. LexisNexis found that modern agentic bots can mimic genuine human actions, including cursor movements, typing cadence, and login patterns, with enough plausibility to fool behavioral fraud detection tools. The defenses built specifically to catch bots are being defeated by bots that learned from the same behavioral data.
We have covered this arms race dynamic before. AI powers the detection. AI powers the evasion. The LexisNexis data puts a number on where the balance currently sits, and the attackers are gaining ground.
The Agentic Commerce Collision
Here is where the fraud data meets the commerce projections.
Juniper Research estimates agentic commerce spending will reach $1.5 trillion by 2030, up from essentially pilot deployments today. That growth requires AI agents to transact autonomously at scale: browsing, negotiating, purchasing, and paying without human intervention at every step.
Every one of those capabilities is also a fraud capability. An agent that can autonomously initiate a payment can also autonomously initiate a fraudulent payment. An agent that can pass identity verification can pass it with a synthetic identity. An agent that can mimic human browsing behavior to complete a purchase can mimic it to evade detection.
If the current 8 percent annual fraud growth rate holds, and transaction volumes multiply through agentic commerce adoption, the raw fraud numbers compound fast. The industry is not just adding a new channel. It is adding a channel where the attackers have the same tools as the defenders, operate at machine speed, and can scale without adding headcount.
Nasdaq's Verafin unit reported similar patterns earlier this year. We have also examined how the agentic security reckoning extends beyond payments into enterprise AI deployments more broadly. The LexisNexis data confirms what those earlier signals suggested: the fraud surface is expanding faster than the fraud defenses.
At $1.5 trillion in agentic commerce by 2030, even a stable fraud rate would mean tens of billions in losses. A growing fraud rate, against a multiplying transaction base, means the problem is compounding on two axes simultaneously.
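The two-axis arithmetic can be sketched as a back-of-envelope projection. The volume endpoint is Juniper's $1.5 trillion figure; the starting volume, starting loss rate, and the use of the report's 8 percent attack-rate growth as a loss-rate proxy are all assumptions for illustration, not forecasts:

```python
# Back-of-envelope sketch, not a forecast. Assumptions: agentic commerce
# volume grows from an assumed $50B in 2025 to $1.5T in 2030 (Juniper's
# endpoint), and fraud losses start at an assumed 1% of volume, with the
# rate compounding at the report's 8% annual attack-rate growth.

volume_2025, volume_2030 = 50e9, 1.5e12
volume_growth = (volume_2030 / volume_2025) ** (1 / 5)  # ~97% per year

fraud_rate = 0.01    # assumed starting loss rate
rate_growth = 1.08   # 8% annual growth in the rate itself

volume = volume_2025
for year in range(2025, 2031):
    losses = volume * fraud_rate
    print(f"{year}: volume ${volume / 1e9:,.0f}B, est. fraud ${losses / 1e9:,.1f}B")
    volume *= volume_growth
    fraud_rate *= rate_growth
```

Under these assumptions the 2030 loss figure lands above $20 billion, which is the "tens of billions even at modest rates" point: the base multiplies roughly 30x while the rate itself climbs another ~47 percent on top.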
What the Defenses Need
The report is not entirely bleak. North America's overall attack rate held steady at roughly 2.2 percent, suggesting that mature fraud infrastructure can contain the growth even as attack volumes rise. But EMEA's attack rate jumped 27 percent year over year, and APAC's is climbing. The defenses are holding in some markets and failing in others.
The core challenge is classification. Legacy fraud systems were built to distinguish humans from bots. The new requirement is distinguishing authorized agents from unauthorized ones, and real identities from synthetic ones, all while processing transactions at speeds that leave no room for manual review.
That is a fundamentally harder problem. And it is the problem the agentic commerce protocols need to solve before the $1.5 trillion projection becomes real. Trust is not just a feature of agentic commerce. It is the prerequisite.
If agentic commerce scales to $1.5 trillion by 2030 and the fraud growth rate compounds alongside it, who bears the liability when an autonomous agent authenticates a synthetic identity and completes a transaction that never should have happened?