Global losses to fraud and scams climbed 9.2 percent in 2025, according to a new report from Nasdaq's Verafin financial crime-fighting unit. Much of that increase is attributed directly to bad actors leveraging artificial intelligence. Not as a novelty. As infrastructure.

The numbers landed the same week that Bold Security exited stealth with $40 million to build an endpoint security platform for the AI era, and a week after Mastercard announced it was building a virtual C-suite of AI agents for small businesses. The message is consistent: AI is reshaping fraud on both sides of the table, and the pace is accelerating.

The same technology that automates legitimate commerce is automating the fraud that exploits it.

What the Numbers Show

The Verafin report, published by Nasdaq's financial crime intelligence division, tracks global fraud losses across financial services. A 9.2 percent increase in a single year is not a spike. It is a trend line steepening.

The AI contribution is structural. Generative AI has lowered the barrier to creating convincing phishing emails, deepfake identity documents, synthetic voices for authorisation fraud, and automated social engineering at scale. What previously required a skilled fraudster now requires a prompt.

Payments Dive reported that much of the increase is concentrated in identity fraud and authorised push payment scams, two categories where AI-generated content makes detection harder because the fraudulent communications are indistinguishable from legitimate ones.

The Defence Side Is Moving Too

The defence ecosystem is not standing still. The same week the Verafin report landed, several moves signalled that the industry recognises the scale of the problem.

Bold Security launched with $40 million in funding and a specific thesis: cloud-heavy AI models do not scale for enterprise endpoint security. CEO Nati Hazut told Bank Information Security that companies can no longer rely on older controls designed for a pre-AI threat landscape. The platform processes security decisions at the endpoint rather than routing everything through cloud models, reducing latency and improving real-time response.
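To make the tradeoff concrete, here is a minimal sketch of what an endpoint-first decision path can look like: clear-cut events are decided on the device, and only ambiguous ones are escalated to a heavier cloud model. The thresholds, signal names, and scoring stubs are our own illustrative assumptions, not Bold Security's actual implementation.

```python
# A minimal sketch of an endpoint-first decision path: obvious cases are
# decided on-device, ambiguous ones escalate to a cloud model.
# All names, thresholds, and scoring stubs are assumptions for illustration.

BLOCK_THRESHOLD = 0.9
ALLOW_THRESHOLD = 0.2

def local_score(event: dict) -> float:
    """Cheap on-device heuristic or model; runs in-process, no network hop."""
    return 0.95 if event.get("binary_signed") is False else 0.1

def cloud_score(event: dict) -> float:
    """Placeholder for a heavier cloud model call (adds network latency)."""
    return 0.5

def decide(event: dict) -> str:
    score = local_score(event)
    if score >= BLOCK_THRESHOLD:
        return "block"   # decided at the endpoint, no round trip
    if score <= ALLOW_THRESHOLD:
        return "allow"   # also decided locally
    # Only the grey zone pays the latency cost of a cloud call.
    return "block" if cloud_score(event) >= BLOCK_THRESHOLD else "review"

print(decide({"binary_signed": False, "host": "laptop-42"}))  # -> "block"
```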

In our AI Tools Directory, we have reviewed multiple fraud prevention platforms that are deploying AI defensively. Sardine uses device intelligence and behavioural biometrics to detect fraud patterns before transactions complete. Sift applies machine learning across a network of 70,000+ sites to score transaction risk in real time. Both represent the defensive application of the same technology being weaponised by fraudsters.
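For a sense of how this kind of scoring works in practice, here is a toy risk scorer that folds device and behavioural signals into a single score and maps it to an action. The signal names and weights are invented for illustration; neither Sardine nor Sift exposes this exact model.

```python
# A toy version of real-time transaction risk scoring from device and
# behavioural signals. Feature names and weights are illustrative only.

def risk_score(txn: dict) -> float:
    """Return a 0-1 risk score from a handful of weighted signals."""
    signals = {
        "new_device":       0.30,  # device fingerprint never seen for this account
        "vpn_or_proxy":     0.20,
        "copy_paste_card":  0.15,  # card number pasted rather than typed
        "unusual_hour":     0.10,
        "amount_over_p95":  0.25,  # amount above the account's 95th percentile
    }
    return min(1.0, sum(w for name, w in signals.items() if txn.get(name)))

txn = {"new_device": True, "copy_paste_card": True, "amount_over_p95": True}
score = risk_score(txn)
action = "block" if score > 0.8 else "step_up_auth" if score > 0.5 else "approve"
print(f"risk={score:.2f} -> {action}")  # risk=0.70 -> step_up_auth
```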

The Paradox for Payments

The financial services industry faces a specific version of this problem. It is simultaneously deploying AI agents to automate payments, onboarding, and customer service while defending against AI agents designed to exploit those same processes.

JP Morgan Payments is building agentic commerce infrastructure with Mirakl to enable autonomous payments by AI agents. MoonPay is running crypto transactions through AI agents secured by hardware signers. These are legitimate deployments. But every new AI-powered payment channel is also a new AI-powered attack surface.

As we explored in our analysis of the AI fraud paradox in payments, the companies building AI-powered payment systems and the companies building AI-powered fraud defences are often the same companies. The arms race is internal as much as external.

The 9.2 percent increase is not the ceiling. It is the baseline for a new era of AI-accelerated financial crime.

What Comes Next

Three things to watch.

First, regulatory response. A 9.2 percent annual increase in fraud losses will attract attention from financial regulators who are already scrutinising AI deployment in financial services. Expect new guidance on AI-specific fraud controls by Q3 2026.

Second, the endpoint security thesis. Bold Security's argument that cloud-based AI models are too slow for real-time fraud detection at the endpoint will be tested at enterprise scale this year. If correct, it reshapes where fraud decisions are made in the payments stack.

Third, the authentication gap. As AI-generated content becomes indistinguishable from human-created content, the industry needs new authentication mechanisms that do not rely on content analysis. Hardware-based verification, like MoonPay's Ledger signer approach, may become standard rather than optional.
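The underlying pattern is straightforward: trust attaches to a cryptographic key held in hardware, not to how convincing the message content looks. Below is a minimal sketch of verifying a signed payment instruction with an Ed25519 key; the payload format and key handling are illustrative assumptions, not MoonPay's or Ledger's actual design.

```python
# Verifying a signed payment instruction: the general pattern behind
# hardware-signer approaches. Illustrative sketch, not any vendor's design.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In the real pattern the private key never leaves the hardware device;
# we generate one in software purely to make the example runnable.
device_key = Ed25519PrivateKey.generate()
verifying_key = device_key.public_key()   # registered with the payment platform

instruction = b'{"to":"merchant-123","amount":"49.99","currency":"USD","nonce":"8f2c"}'
signature = device_key.sign(instruction)  # produced by the signer device

def accept_payment(payload: bytes, sig: bytes) -> bool:
    try:
        verifying_key.verify(sig, payload)  # raises if payload or signature was altered
        return True
    except InvalidSignature:
        return False

print(accept_payment(instruction, signature))                              # True
print(accept_payment(instruction.replace(b"49.99", b"4999"), signature))   # False
```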


AI is making fraud cheaper, faster, and harder to detect. Is your defence stack keeping pace?
