It's not a shortage of trust infrastructure. It's a race to build it.

Last week, Alexa ordered cat food.

We had set this up months ago and mostly forgotten about it. The bag was running low, and Alexa, connected to an Amazon account with voice purchasing enabled, placed the order without asking. The confirmation email arrived during a meeting. By the time anyone noticed, the package was already on its way.

This is not new. Millions of households have some version of this automation running in the background. Subscribe and save. Auto-replenishment. Smart reordering. We have been delegating small purchasing decisions to algorithms for years.

But something has changed.

In the past eighteen months, every major technology company has launched systems that go far beyond reordering consumables. These are AI agents that can browse websites, compare prices, negotiate terms, fill in payment details, and complete purchases on your behalf. Not for cat food. For flights. Hotels. Insurance policies. Investment products. The kind of purchases that used to require judgment, comparison, and a moment of hesitation before clicking "confirm."

The question that kept surfacing as we researched this piece is deceptively simple: when an AI agent spends your money, who actually authorized it?

The Race to Let AI Buy Things

The commercial push into agentic commerce accelerated dramatically through 2024 and 2025. What was theoretical a few years ago is now live, processing real transactions, moving real money.

Perplexity moved first in November 2024, enabling paid subscribers to purchase products directly within its AI search interface. By November 2025, the feature expanded to free users through a PayPal partnership covering over 5,000 merchants.

Amazon launched "Buy for Me" in April 2025, an agentic feature that does something remarkable. It lets shoppers purchase from third party brand websites within the Amazon app. The AI securely provides encrypted customer details to complete checkout on external sites. Amazon's AI can now buy things from Amazon's competitors. Products available through this feature grew from 65,000 at launch to over 500,000 items by late 2025.

OpenAI introduced "Instant Checkout" in September 2025, powered by a new Agentic Commerce Protocol built with Stripe. Initial partners included US Etsy sellers, with access rolling out to over a million Shopify merchants. PayPal adopted the protocol in October 2025.

Google launched its own Agent Payments Protocol (AP2) in September 2025, developed with over 60 partners including Mastercard, PayPal, American Express, Coinbase, and Shopify.

Microsoft transformed Copilot into a transaction ready storefront through its Merchant Program in April 2025.

Apple, notably, remains absent. A major Siri overhaul originally planned for 2025 was pushed to Spring 2026, and no direct AI initiated payment capabilities have been announced. For a company that owns the dominant mobile wallet, this silence is conspicuous.

The numbers suggest real momentum. Adobe data shows that AI driven traffic to US retail sites increased 1,300 percent year over year during the 2024 holiday season, with a 4,700 percent spike by July 2025. Boston Consulting Group projects the agentic commerce market will grow at roughly 45 percent annually through 2030. Industry estimates suggest agentic AI could handle up to 20 percent of e-commerce tasks this year.

This is not a pilot program. This is a live transition.

The Authorization Problem

When you tap your phone at a terminal, you are providing authorization through presence and action. Your face unlocks the device. Your finger confirms the payment. There is a clear chain: you are here, you intended this, the money can move.

When an AI agent makes a purchase, that chain dissolves. The agent is acting on your behalf, but you are not present at the moment of transaction. You may be asleep, or in a meeting, or unaware that your digital delegate is about to spend several hundred dollars on a flight it believes you need.

This is not a hypothetical concern. It is the core design challenge facing every company building agentic payment systems.

The current approach borrows from corporate expense management. Just as companies issue employees credit cards with pre-set budgets for specific categories, users can delegate purchasing authority to AI agents within defined boundaries. Authorize your agent to book flights under 500 dollars. Allow grocery purchases up to 150 dollars per week. Permit subscription renewals but require approval for new services.

Pre-approved limits turn what would otherwise require case-by-case human sign-off into instant, autonomous purchases for any transaction under the specified threshold. The agent can then handle the entire transaction lifecycle, from confirmation to post-purchase activities like tracking shipments and managing returns.
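For the technically curious, the corporate-card model above can be sketched in a few lines of Python. This is purely illustrative, not any network's actual API; the `SpendingPolicy` class, its categories, and its limits are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SpendingPolicy:
    """Hypothetical per-category purchasing authority delegated to an agent."""
    category_limits: dict = field(default_factory=dict)  # category -> max per purchase
    approval_required: set = field(default_factory=set)  # always escalate these

    def evaluate(self, category: str, amount: float) -> str:
        if category in self.approval_required:
            return "require_approval"      # e.g. any new subscription
        limit = self.category_limits.get(category)
        if limit is None:
            return "deny"                  # no delegated authority for this category
        if amount <= limit:
            return "auto_approve"          # within pre-set budget: agent may proceed
        return "require_approval"          # over budget: escalate to the user

policy = SpendingPolicy(
    category_limits={"flights": 500, "groceries": 150},
    approval_required={"new_subscriptions"},
)
print(policy.evaluate("flights", 320))    # auto_approve
print(policy.evaluate("flights", 780))    # require_approval
print(policy.evaluate("insurance", 90))   # deny
```

The point of the sketch is the shape of the decision: the agent never asks "can I?" at transaction time; the answer was encoded when the user delegated authority.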

But spending limits alone do not solve the authorization problem. They answer "how much" but not "how do we know this request actually came from a legitimate agent acting on behalf of a real user?"

This is where things get technically interesting.

Building Trust Infrastructure from Scratch

The payment networks have recognized that agentic commerce requires new infrastructure, not just adaptations of existing systems. Both Visa and Mastercard announced dedicated programs within weeks of each other in April 2025.

Visa's "Intelligent Commerce" program provides APIs for identity verification, spending controls, and tokenized credentials designed specifically for AI agents. The company's Trusted Agent Protocol, built with Cloudflare, creates cryptographically authenticated records for bot initiated transactions. Partners include Microsoft, Shopify, Stripe, Worldpay, and AI companies Anthropic, IBM, Mistral AI, OpenAI, and Perplexity.

Visa's concept of "AI ready cards" replaces static card numbers with tokenized digital credentials limited to specific AI agents. Think of it as giving your agent a restricted version of your card that only works for certain types of purchases from certain merchants.

Mastercard's "Agent Pay" launched the same month in collaboration with Microsoft. It includes an Agentic Tokens system building on existing tokenization infrastructure, plus an Agent Pay Merchant Acceptance Framework providing no-code integration for merchants. Mastercard also introduced "Agent Sign-Up," essentially a registry for AI agents that want to participate in the payment ecosystem.

Google's AP2 protocol takes a different architectural approach. It uses "Mandates," which are cryptographically signed digital contracts providing verifiable proof of user authorization. These are tamper proof records that capture specific constraints: "Buy if price drops below this amount." "Only authorize purchases under this threshold." "Restrict to these merchant categories."
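The tamper-evidence idea behind Mandates can be demonstrated with a simplified sketch. AP2's real Mandates use richer cryptographic credentials and a published schema; this version uses a plain HMAC over the constraint record purely to show why tampering is detectable. All names here are illustrative.

```python
import hashlib
import hmac
import json

def sign_mandate(user_key: bytes, constraints: dict) -> dict:
    """Bind a set of purchase constraints to a signature over their contents."""
    payload = json.dumps(constraints, sort_keys=True).encode()
    sig = hmac.new(user_key, payload, hashlib.sha256).hexdigest()
    return {"constraints": constraints, "signature": sig}

def verify_mandate(user_key: bytes, mandate: dict) -> bool:
    """Recompute the signature; any edit to the constraints breaks the match."""
    payload = json.dumps(mandate["constraints"], sort_keys=True).encode()
    expected = hmac.new(user_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mandate["signature"])

key = b"user-secret-key"
m = sign_mandate(key, {"max_amount": 200, "merchant_category": "groceries"})
print(verify_mandate(key, m))              # True: record is intact
m["constraints"]["max_amount"] = 9999      # tampering with the limit...
print(verify_mandate(key, m))              # False: ...invalidates the mandate
```

A merchant or network holding the verification key can therefore reject any transaction whose mandate has been altered after the user authorized it.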

The technical foundation across all these systems is tokenization. When you add a card to an AI agent, the system never stores or transmits your actual card number. Instead, it creates a unique substitute token that is useless to fraudsters if intercepted but can be validated by the payment network. This is the same technology that secures Apple Pay and Google Pay, now extended to autonomous agents.
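A toy token vault makes the mechanism concrete. Real network tokenization involves issuer participation, cryptograms, and domain controls; this sketch, with a hypothetical `TokenVault` class, only shows the substitution-and-scoping idea.

```python
import secrets

class TokenVault:
    """Illustrative network-side vault: the agent only ever sees the token."""
    def __init__(self):
        self._vault = {}  # token -> (real card number, allowed merchant categories)

    def issue_token(self, card_number: str, allowed_categories: set) -> str:
        token = "tok_" + secrets.token_hex(8)   # opaque substitute, useless if stolen
        self._vault[token] = (card_number, allowed_categories)
        return token

    def authorize(self, token: str, merchant_category: str) -> bool:
        entry = self._vault.get(token)
        if entry is None:
            return False                        # unknown or revoked token
        _, allowed = entry
        return merchant_category in allowed     # scoped to permitted purchase types

vault = TokenVault()
agent_token = vault.issue_token("4111111111111111", {"groceries", "flights"})
print(vault.authorize(agent_token, "groceries"))  # True
print(vault.authorize(agent_token, "casinos"))    # False
```

If the token leaks, the attacker gets a string that only works for scoped categories and can be revoked without reissuing the card.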

But tokenization protects the card data. It does not prove that the agent itself is legitimate.

For that, the industry is building a new identity layer:

  • Decentralized Identifiers (DIDs) give each agent its own "digital passport" that can be cryptographically verified by any party it interacts with

  • Verifiable Credentials (VCs) prove what the agent is allowed to do, functioning like a digital permission slip: "This agent is authorized to spend up to 100 dollars on groceries at these merchants"

The combination of DIDs for identity and VCs for authorization creates a framework where a merchant can verify, in real time, that the agent making a purchase is legitimate, is acting within its granted permissions, and has cryptographic proof of user authorization.
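The merchant-side check described above can be sketched as a two-stage gate: identity first, then permissions. Real systems resolve W3C DIDs and verify signed Verifiable Credentials; here the registry and credential format are simplified stand-ins, and every identifier is hypothetical.

```python
# Hypothetical registry of agent DIDs the merchant trusts.
TRUSTED_AGENTS = {"did:example:agent-123"}

def merchant_check(agent_did: str, credential: dict,
                   merchant: str, amount: float) -> bool:
    """Simplified real-time verification: identity, then scope, then amount."""
    if agent_did not in TRUSTED_AGENTS:
        return False                               # unknown agent: reject
    if credential["subject"] != agent_did:
        return False                               # credential is about someone else
    if merchant not in credential["merchants"]:
        return False                               # merchant outside granted scope
    return amount <= credential["max_amount"]      # amount within granted scope

vc = {"subject": "did:example:agent-123",
      "merchants": {"grocer-a"},
      "max_amount": 100}
print(merchant_check("did:example:agent-123", vc, "grocer-a", 42))  # True
print(merchant_check("did:example:agent-123", vc, "grocer-b", 42))  # False
```

The key property is that each rejection happens at the merchant, in real time, without the merchant ever needing to contact the user.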

This infrastructure did not exist two years ago. It is being built now, in production, handling real transactions.

When Regulations Assume Humans

Here is where the elegant technical solutions run into messy regulatory reality.

In Europe, PSD2 requires Strong Customer Authentication (SCA) for electronic payments. This means two of three factors: something you know (password), something you have (device), or something you are (biometric). The regulation explicitly assumes a human is present at the moment of transaction.

AI agents break this assumption entirely.

There are exemptions in PSD2 for low value transactions, recurring payments, and trusted beneficiaries. None of them explicitly address AI agents. The proposed PSD3 and Payment Services Regulation may clarify exemption criteria, but that legislation is still working through the European Parliament.

In the United States, no AI specific payment regulations exist. The Consumer Financial Protection Bureau stated in August 2024 that existing consumer financial protection laws apply fully to AI. There is no "fancy technology" exemption. The Federal Reserve, FDIC, and OCC have identified model risk and third party risk guidance as relevant frameworks, but specific rules for agentic payments remain undefined.

The EU AI Act, which took effect in 2024, establishes a risk based framework for AI systems. Some agentic commerce platforms could be classified as "high risk" systems subject to strict requirements around transparency, human oversight, and accuracy. But the Act was not designed with payment authorization in mind, and its application to agentic commerce remains uncertain.

This regulatory ambiguity creates real business risk. Merchants may defensively block AI agents using existing fraud controls. Issuers may decline transactions that lack traditional authentication markers. The infrastructure is being built faster than the rules that govern it.

The Fraud Arms Race

Traditional fraud detection evolved to identify humans behaving unusually. Sudden purchases in a new location. Transaction amounts outside normal patterns. Typing rhythms that do not match the account holder.

When the customer is a bot, all of these signals become meaningless. An AI agent has no typing rhythm. It does not have a "usual" location. Its transaction patterns are defined by whatever instructions and permissions it was given.

At the same time, fraudsters are using AI to create increasingly sophisticated attacks. Synthetic identities built from real and fabricated information. Adversarial inputs designed to fool machine learning models. The defenders and attackers are now both powered by AI, creating a high stakes arms race.

The industry response involves profiling the agents themselves. Instead of establishing a baseline for a human user's behavior, fraud systems are learning to recognize the "normal" digital behavior of specific AI agents:

  • The APIs it calls

  • The data centers it operates from

  • The patterns of its requests

  • The typical transaction times and amounts

Deviations from this established agent profile can signal a compromise or a malicious bot masquerading as a legitimate agent. The goal is to move from authenticating humans to continuously authenticating machines.

Real time anomaly detection becomes essential. Algorithms analyze transaction data as it flows, identifying unusual patterns: a purchase from a new category, a transaction amount exceeding typical spending, an interaction with a merchant in an unexpected region. These systems must process massive amounts of data with minimal latency, stopping fraudulent transactions before they complete.
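A minimal version of this profiling is a statistical outlier test against the agent's own history. Production systems weigh many signals at once (APIs called, origin network, timing); this sketch uses transaction amount alone and an assumed z-score threshold.

```python
from statistics import mean, stdev

def is_anomalous(history: list, new_amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a transaction that deviates sharply from the agent's profile."""
    if len(history) < 2:
        return False                     # not enough history to profile yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu          # perfectly regular agent: any change flags
    return abs(new_amount - mu) / sigma > z_threshold

profile = [48.0, 52.0, 50.0, 47.0, 53.0]   # agent's typical grocery orders
print(is_anomalous(profile, 51.0))   # False: in line with the profile
print(is_anomalous(profile, 900.0))  # True: hold for review
```

The same structure generalizes to any numeric signal in the agent's profile; the hard engineering problem is running these checks across every signal at transaction speed.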

The imbalanced dataset problem makes this harder than it sounds. Fraudulent transactions are rare compared to legitimate ones, which makes training accurate detection models difficult. The industry is using generative adversarial networks (GANs) to create synthetic fraud data, allowing defensive systems to learn patterns of novel fraud schemes before they cause damage.

It is AI fighting AI, with your money in the middle.

Lessons from Automation at Scale

We have been here before. Not with AI shopping assistants, but with other systems that execute high stakes financial transactions autonomously.

High frequency trading provides sobering precedent. On May 6, 2010, in what became known as the Flash Crash, the US stock market plummeted and recovered within minutes, an event exacerbated by trading algorithms. As the market fell, automated systems pulled their liquidity while others sold aggressively, creating a feedback loop that accelerated the crash.

More directly relevant is the Knight Capital disaster of August 1, 2012. A faulty deployment of new trading software sent a flood of erroneous orders into the market, rapidly buying stocks at the ask price and selling at the bid price. The firm lost 440 million dollars in just 45 minutes, a loss that effectively bankrupted it.

Knight Capital is a powerful lesson in the importance of rigorous testing, deployment protocols, and kill switches for any system that can autonomously spend money at speed.

Programmatic advertising offers another parallel. Real time bidding systems buy and sell ad impressions in milliseconds. The individual stakes are lower, but the aggregate scale is enormous. The industry has struggled with fraud, lack of transparency from black box platforms, and significant waste. One 2025 report estimated that programmatic ad waste surged to 26.8 billion dollars.

The pattern across these domains is consistent: automation at scale requires transparency, accountability, clear metrics, and circuit breakers that can halt runaway systems before they cause irreversible damage.

The Liability Gap

When your AI agent buys the wrong thing, who pays?

Currently, AI agents have no legal status of their own. They cannot own property, sign contracts, or be sued. Responsibility traces back to the humans behind the technology: the developers who wrote the code, the companies that deployed the agent, or the user who configured it.

Product liability provides one framework. The AI agent or its platform can be considered a product. If a flaw in design or code leads to harm, the developer or provider could be held liable.

Negligence offers another path: arguing that a company failed to exercise reasonable care in designing, testing, or deploying the system.

But the complexity and autonomy of modern AI create what legal scholars call the "AI Liability Gap." When an agent makes a purchasing decision based on its training data, its interpretation of your preferences, and market conditions at the moment of transaction, proving fault under traditional rules becomes difficult. The agent did not malfunction. It did exactly what it was designed to do. It just made a choice you disagree with.

Contract law faces similar challenges. Traditional contract formation requires "mutual assent" or a "meeting of the minds." Existing laws like the Uniform Electronic Transactions Act and the Uniform Commercial Code contain provisions for "electronic agents," stating that contracts can be formed by the interaction of electronic agents without direct human involvement. But these rules were written for deterministic systems following explicit instructions, not probabilistic AI making judgment calls.

The EU has proposed an AI Liability Directive that would modify the burden of proof in fault based claims involving AI, though the proposal's future remains uncertain. In the United States, the FTC has announced it will use existing consumer protection authority to prevent unfair practices involving AI in commerce, but specific frameworks for agentic transactions remain undefined.

For now, we are in a transitional period where the technology is live but the legal frameworks are still catching up.

Would You Trust Your AI to Spend Your Money?

All of this infrastructure, all of these protocols and tokens and cryptographic credentials, exists to answer one question: can we build a system where consumers actually trust AI agents to handle their money?

Industry executives anticipate gradual adoption. Nick Campbell of Xplor Pay notes that consumers need to see completed transactions work before trusting larger purchases. The progression will likely move from low value items to higher value purchases over time. Travel bookings are identified as an early use case where the convenience of agent based comparison and booking could outweigh hesitation.

The spending controls being built into these systems reflect this reality:

  • Users set limits

  • AI manages within those limits

  • The agent can handle a 50 dollar grocery order without asking

  • A 500 dollar flight booking triggers a confirmation request

  • A 5,000 dollar purchase requires explicit approval
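The tiered escalation in the list above reduces to a few lines of logic. The thresholds here are illustrative, taken from the example amounts, not any vendor's actual defaults.

```python
def approval_tier(amount: float, auto_limit: float = 100,
                  confirm_limit: float = 1000) -> str:
    """Graduated trust: silent below one threshold, escalating above it."""
    if amount <= auto_limit:
        return "auto"                  # agent completes the purchase silently
    if amount <= confirm_limit:
        return "confirm"               # one-tap confirmation pushed to the user
    return "explicit_approval"         # full review before the money moves

print(approval_tier(50))     # auto
print(approval_tier(500))    # confirm
print(approval_tier(5000))   # explicit_approval
```

The interesting design question is not the code but where the thresholds sit, and whether users ever revisit them once set.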

This graduated approach mirrors how we built trust with earlier payment innovations. We started with small contactless transactions and gradually increased limits as confidence grew. We tried mobile wallets for coffee before trusting them with larger purchases.

But AI agents introduce a dimension that contactless payments did not: judgment. Your tap to pay card does not decide whether a purchase is a good idea. An AI agent might.

Visa's Rubail Birwadker predicts that 2026 will be "the year we see an enormous amount of material adoption, and consumers really starting to get comfortable in a bunch of different agentic environments."

Whether that prediction proves accurate depends on whether the trust infrastructure can keep pace with the commercial ambition.

The Questions We Have Not Answered Yet

That Alexa cat food order was simple. The product was predetermined. The price was known. The authorization was explicit. It was automation, not agency.

The agentic commerce systems launching now are fundamentally different. They involve AI making choices about what to buy, from whom, at what price. They compress the browsing, comparing, deciding, and purchasing steps into a single delegated action. They ask us to trust machines with both our money and our judgment.

The infrastructure being built is impressive:

  • Tokenization for security

  • Cryptographic identity for agent verification

  • Verifiable credentials for permission management

  • Spending limits for user control

  • Real time anomaly detection for fraud prevention

But the harder questions remain open:

  • How do payment networks adapt their authorization models for non-human customers?

  • How do regulators apply rules designed for human presence to autonomous transactions?

  • How do fraud systems distinguish between legitimate agents and sophisticated attackers when both are AI?

  • How does liability work when the "decision" that led to harm emerged from a model trained on billions of data points rather than a discrete human choice?

And underneath all of it: how do we preserve human agency in a system designed to operate without human involvement?

These are not problems for technologists alone. They require banks, payment networks, merchants, regulators, and consumers to collectively decide what kind of agentic economy we want to build.

What Comes Next

Next issue, we will explore what this means for each player in the payments ecosystem:

  • What should banks do when their customers start delegating purchasing power to AI?

  • How should merchants prepare for customers who are bots?

  • What strategic choices face the card networks as the "moment of payment" fragments into delegated, autonomous transactions?

The AI agent future is not arriving. It is here. The question is whether we will shape it, or simply let it happen to us.

What's your read on this? Would you trust an AI agent to make purchases on your behalf? Where do you draw the line between convenience and control? We'd like to hear from others navigating these questions.
