
Consumers want AI agents to shop for them. They just don't trust them to buy for them. Closing that gap is the defining challenge of 2026.

Here is a number that should keep every commerce executive awake at night: 72 percent of consumers have used AI in their shopping journey, but only 10 percent have actually let an AI agent complete a purchase on their behalf.

That is not a technology problem. The infrastructure is there. The models are capable. The payment rails are being built. This is a trust problem. And in commerce, trust is not a feature you can ship in a software update.

In our previous piece on agentic commerce, we flagged the tension at the centre of this market: the technology is moving faster than the confidence to use it. We promised to go deeper. This is that piece.

The gap between what AI agents can do and what consumers will let them do is not closing. It is the central obstacle to a market that could be worth $500 billion by the end of the decade.

The Enthusiasm Gap

The data tells a remarkably consistent story across multiple research firms: consumers are curious about AI shopping assistants, intrigued by the promise, and then deeply hesitant the moment real money is involved.

Bain & Company published one of the most comprehensive studies on agentic commerce readiness earlier this year. Their findings are stark. While 72 percent of consumers surveyed have used AI in some form during the shopping process, only 24 percent said they were comfortable with an AI agent making a purchase decision. Just 10 percent have actually done it.

Those are not the numbers of a market about to explode. Those are the numbers of a market stuck in the browse-but-don't-buy phase.

Salsify's 2026 Consumer Research Report breaks the hesitation down further. Only 22 percent of shoppers are actively incorporating AI tools into their purchase decisions. Of those, just 14 percent trust AI recommendations on their own. Another 27 percent will use AI suggestions but verify everything manually before buying. A full third say they do not use AI in shopping at all.

The pattern holds across every study we reviewed. Contentsquare found that only 30 percent of consumers are willing to let AI complete a purchase, with 79 percent saying accuracy is the single most important factor in whether they would trust an agent. The Acosta Group found a similar dynamic in grocery: 70 percent of shoppers have used AI tools in some capacity, but only 12 percent trust AI enough to let it handle a purchase.

There is one data point from the Bain study that deserves particular attention: consumers trust AI agents operated by retailers they already shop with at roughly three times the rate they trust third-party AI agents. That is a massive asymmetry. It suggests that trust in agentic commerce is not being built from the technology up. It is being borrowed from existing brand relationships.

Consumers are not asking "is this AI smart enough?" They are asking "do I trust who is behind it?" That distinction changes everything about how this market will develop.

The Fraud Equation

If the enthusiasm gap is the demand-side problem, fraud is the supply-side crisis that could prevent the trust gap from ever closing.

When an AI agent acts on your behalf, a new category of risk opens up. It is not just the traditional question of "is this transaction legitimate?" It is now "is this agent who it claims to be, acting for the person it claims to represent, within the boundaries it was given?" That is a fundamentally harder problem.

Experian's 2026 fraud forecast calls this the year of "Machine-to-Machine Mayhem," marking the first time agentic AI has overtaken human error as the leading predicted cause of fraud escalation. Their research found that 60 percent of companies surveyed have already seen fraud increase, and 72 percent of business leaders now cite AI-enabled fraud as their top security challenge.

The scale of what is at stake is enormous. Global credit card fraud losses are projected to reach $43 billion by 2028, according to the Nilson Report. The FTC reported $12.5 billion in consumer fraud losses in 2024 alone, a 25 percent jump from the prior year. Now add autonomous AI agents to that equation: systems that can browse, compare, negotiate, and transact at machine speed, across hundreds of merchants simultaneously.

The concern is not hypothetical. According to NVIDIA's latest financial services survey, 42 percent of financial institutions are already using or actively assessing agentic AI for fraud detection and risk management, with 21 percent having deployed it in production. They are building defences because the threat is already materialising.

Consider the attack surface that agentic commerce creates. A compromised AI agent does not steal one card number. It can potentially access spending authority across every account it is connected to, execute transactions at machine speed, and adapt its behaviour to evade detection systems that were designed to spot human fraud patterns. The economics of fraud shift dramatically when the attacker and the victim are both machines.

When machines buy from machines, fraud does not just scale. It automates. And the window between a breach and a billion-dollar loss shrinks from weeks to seconds.

The Identity Question

At the core of the fraud problem is a question the payments industry has never had to answer before: how do you verify the identity of an AI agent?

With human commerce, identity verification is imperfect but understood. You have card numbers, CVVs, billing addresses, biometrics, device fingerprints, behavioural patterns. These form layers of confidence that the person making the transaction is who they claim to be.

AI agents break every one of those assumptions. An agent does not have a fingerprint. It does not have a consistent device. It can operate from any server, any location, any IP address. It might be making purchases for one person or for thousands. The existing identity stack was simply not designed for this.

Vouched is one of the first companies to attack this problem head-on. Their Agent Checkpoint platform introduces what they call KYA: Know Your Agent. If KYC (Know Your Customer) became the bedrock of financial compliance over the past two decades, Vouched is betting that KYA will be the equivalent for the agentic era.

The platform includes a new protocol called MCP-I (Model Context Protocol, Identity), which creates a standardised way for AI agents to present verifiable identity credentials to merchants and payment processors. Think of it as a passport system for AI agents: a way for a merchant to confirm that this agent is authorised, by this specific human, to make this specific type of transaction, up to this specific limit.
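To make the passport analogy concrete, here is a minimal sketch of what presenting and verifying an agent credential could look like. This is illustrative only: the field names, scope model, and signature scheme below are assumptions, not the actual MCP-I specification, and a production system would use asymmetric keys rather than a shared secret.

```python
import hmac
import hashlib
import json

# Assumption: a shared secret stands in for a real issuer's signing key.
ISSUER_SECRET = b"demo-issuer-key"

def sign_credential(credential: dict) -> str:
    """Issuer signs the credential so merchants can detect tampering."""
    payload = json.dumps(credential, sort_keys=True).encode()
    return hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()

def verify_credential(credential: dict, signature: str) -> bool:
    """Merchant-side check: is the agent presenting an untampered credential?"""
    expected = sign_credential(credential)
    return hmac.compare_digest(expected, signature)

# A hypothetical "passport" an agent might present at checkout.
credential = {
    "agent_id": "agent-7f3a",          # identity of this agent instance
    "principal": "user-1182",          # the human the agent acts for
    "scope": ["retail.apparel"],       # transaction types it is authorised for
    "spend_limit_minor_units": 15000,  # e.g. 150.00 in cents
    "expires": "2026-12-31T23:59:59Z",
}

signature = sign_credential(credential)
print(verify_credential(credential, signature))  # True: credential intact
credential["spend_limit_minor_units"] = 1_000_000
print(verify_credential(credential, signature))  # False: tampering detected
```

The point of the sketch is the shape of the trust chain, not the cryptography: the merchant never has to trust the agent's self-description, only the issuer's signature over it.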

The timing is telling. Vouched's own data suggests that between 0.5 and 16 percent of website traffic is already coming from AI agents, depending on the industry. That is not a future problem. That is a current one, and most merchants have no way to distinguish an AI agent from a bot from a human.

The Infrastructure Response

The payments industry has recognised the trust gap, and the response is accelerating.

Visa made its biggest move into agentic commerce with the launch of Intelligent Commerce, a suite of tools designed specifically for a world where AI agents transact on behalf of consumers. The centrepiece is agent-specific payment tokens: unique credentials issued to individual AI agents that tie them to a specific consumer's account with defined spending limits and merchant categories.

This is a meaningful architectural decision. Rather than having AI agents use existing card credentials (which creates obvious security risks), Visa is building a parallel token infrastructure purpose-built for agents. Combined with Passkey authentication, which uses biometric verification on the consumer's device to authorise agent actions, and a new Trusted Agent Protocol built in partnership with Akamai, Visa is attempting to create a full trust chain from consumer intent to agent action to merchant settlement.
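A rough way to picture an agent-scoped token is as a credential that carries its own policy. The sketch below is a hypothetical model, not Visa's actual token format, which is not public in this detail; the class name, fields, and authorisation logic are assumptions chosen to show how per-agent limits and merchant-category scoping could be enforced at authorisation time.

```python
from dataclasses import dataclass

@dataclass
class AgentToken:
    """Hypothetical agent-scoped payment credential with consumer-set policy."""
    token_id: str
    consumer_account: str
    allowed_categories: set   # merchant categories the consumer delegated
    spend_limit: float        # per-period ceiling set by the consumer
    spent: float = 0.0

    def authorise(self, amount: float, merchant_category: str) -> bool:
        """Network-side check before settlement: scope and limit both enforced."""
        if merchant_category not in self.allowed_categories:
            return False
        if self.spent + amount > self.spend_limit:
            return False
        self.spent += amount
        return True

token = AgentToken("tok-91c2", "acct-4410", {"grocery", "pharmacy"}, 200.0)
print(token.authorise(80.0, "grocery"))      # True: within scope and limit
print(token.authorise(50.0, "electronics"))  # False: category not delegated
print(token.authorise(150.0, "grocery"))     # False: would exceed the limit
```

The design point this illustrates is why a parallel token infrastructure matters: a compromised agent token is bounded by the policy baked into it, whereas a compromised card credential carries the full authority of the account.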

The programme already has more than 100 partners, with pilots targeted for the 2026 holiday season. If that timeline holds, this Christmas could be the first in which AI agents are transacting through a purpose-built trust layer at meaningful scale.

Mastercard and Stripe have made parallel moves with their own agent-capable payment frameworks, which we covered in our previous article. Mastercard's Agent Pay framework takes a similar approach to Visa's tokenisation strategy but with a focus on interoperability across its network of issuing banks. The underlying bet is the same: the card networks that define how agents authenticate and transact will shape the economics of the entire agentic commerce ecosystem.

The convergence is clear. Every major payment network is now building agent-specific infrastructure, because they recognise that existing rails were designed for human-initiated transactions and will not hold. This is not incremental evolution. It is a new layer in the payments stack, sitting between consumer authorisation and merchant acceptance, that did not exist 18 months ago.

What makes this moment different from previous payments infrastructure shifts is the speed. EMV chip migration took a decade. Contactless payments took years to reach critical mass. The networks are trying to build the agent trust layer in months, because agent traffic is already hitting merchant sites and the absence of a framework is itself a risk.

The trust layer for agentic commerce is not a product. It is an entire infrastructure stack being built in real time, from identity verification through to settlement.

What Actually Builds Trust

The research offers some clear signals about what moves the needle on consumer confidence, and what does not.

Accuracy is non-negotiable. Contentsquare's survey found that 79 percent of consumers ranked accuracy as the single most important factor in trusting an AI agent. Not speed. Not price savings. Not convenience. Accuracy. Get the product wrong once, recommend something the consumer did not ask for, and trust evaporates. The margin for error in agentic commerce is essentially zero.

Brand trust transfers. The Bain finding that consumers trust retailer-operated agents at three times the rate of third-party agents tells us something fundamental: people are not evaluating the AI. They are evaluating the company behind it. This gives established retailers and brands a massive advantage over pure-play AI agent startups. If you already trust Amazon, you are more likely to trust Amazon's agent. If you have never heard of an AI shopping startup, you are not letting it touch your wallet.

First experiences are make-or-break. PYMNTS research found that trust in AI platforms as agentic assistants sits at just 3 percent among people who have never used one. That is essentially zero. But among those who have had a positive first experience, trust rises dramatically. This creates a cold-start problem: consumers will not try agents because they do not trust them, and they cannot build trust without trying them. Breaking this cycle will require a combination of low-stakes first transactions, transparent controls, and the ability to review and override every decision an agent makes.

Regulation is adding pressure. The Colorado AI Act, set to take effect on June 30, 2026, introduces penalties of up to $20,000 per violation for AI systems that cause consumer harm. It is the first US state law to impose direct financial liability on AI deployment in commerce. More will follow. For companies building agentic commerce products, regulatory compliance is not optional, and it may paradoxically help build trust by forcing transparency and accountability into systems that have neither today.

What Comes Next

Bain projects the US agentic commerce market could reach $300 to $500 billion by 2030. That number assumes the trust gap closes. Right now, that assumption is far from guaranteed.

The companies that will capture disproportionate value in this market are not the ones building the most capable AI agents. They are the ones building the most trustworthy ones. That means accuracy that never wavers, identity frameworks that are verifiable and auditable, spending controls that consumers can set and adjust in real time, and full transparency about what the agent is doing and why.

2026 is the proving year. Visa's holiday pilots will be the first large-scale test of purpose-built agent payment infrastructure. Vouched's KYA rollout will test whether standardised agent identity verification can work across a fragmented ecosystem. The Colorado AI Act will test whether regulation helps or hinders consumer confidence. And consumers will vote with their wallets on whether they are ready to let AI agents move from browsing to buying.

The card networks are in a unique position here. They already sit at the intersection of consumer trust and merchant acceptance. They have spent decades building the infrastructure that makes you comfortable tapping your card at a terminal without thinking about it. The question is whether they can replicate that invisible confidence for a world where you never tap anything at all, because your agent did it for you.

The IAB found that 46 percent of consumers already trust AI recommendations. But 89 percent still verify before buying. The distance between "I trust your suggestion" and "go ahead and buy it" is enormous. In that gap sits the entire future of agentic commerce.

We have been here before, in a sense. Consumers were once deeply sceptical of entering their card number on a website. They did not trust contactless payments when they first appeared. They questioned whether mobile wallets were secure. In each case, trust was built through a combination of infrastructure investment, regulatory clarity, and millions of transactions that went right. The difference now is that the timeline is compressed and the stakes are higher, because the entity you are trusting is not a payment terminal or a website checkout. It is an autonomous system making decisions on your behalf.

The companies, the networks, and the regulators that understand this distinction will define the next era of commerce. The ones that treat trust as a marketing problem rather than an infrastructure problem will not.

Trust in commerce has always been built slowly and lost quickly. The agentic era does not change that. It amplifies it.

When the AI agent asks to buy on your behalf, what would it take for you to say yes?