When AI agents can sign you up for credit, convenience meets consequence.

Last week, Klarna joined Google's Universal Commerce Protocol. The largest buy now, pay later provider in the West is now plugging into the infrastructure that lets AI agents shop on your behalf.

This is not a minor integration. When we wrote about agentic commerce in January, we explored AI agents that can browse websites, compare prices, and complete purchases autonomously. The authorization questions were already complex. Who approved this transaction? What are the spending limits? How do we verify the agent is legitimate?

Now add credit to the mix.

When an AI agent can sign you up for future payments, the stakes of "convenience versus control" become very real, very fast.

The previous piece asked: when an AI agent spends your money, who actually authorized it? This piece asks something harder: when an AI agent borrows money on your behalf, who is responsible for paying it back?

The BNPL Landscape Meets Agentic Commerce

Buy now, pay later has become embedded in online checkout flows worldwide. Klarna, Affirm, Afterpay, and PayPal Pay in 4 are now standard options alongside traditional cards. The model is simple: split a purchase into instalments, often interest-free, and pay over weeks or months instead of all at once.

Regulatory scrutiny has been building. The UK's Financial Conduct Authority is bringing BNPL into formal regulation. The US Consumer Financial Protection Bureau applied credit card protections to BNPL providers in May 2024, requiring them to handle disputes and refunds like traditional lenders.

But the integration announced on February 3, 2026 represents something new.

Klarna is joining Google's Universal Commerce Protocol alongside Mastercard, PayPal, American Express, and Shopify. This means AI agents operating within Google's ecosystem can now offer BNPL as a payment method. Not just recommend it. Select it. Complete the transaction.

The question nobody seems to be asking: can an AI agent sign you up for credit?

The Reckoning We Already Had

The BNPL industry already went through a crisis. It was not that long ago.

During COVID lockdowns, buy now, pay later exploded. Consumers stuck at home turned to online shopping. Klarna, Affirm, and Afterpay embedded themselves into checkout flows across thousands of retailers. By June 2021, Klarna had reached a $45.6 billion valuation, making it Europe's most valuable private tech company.

The bust came fast.

By July 2022, Klarna's valuation had collapsed 85 percent to $6.7 billion. Losses approached $1 billion. The company laid off more than 10 percent of its workforce.

What happened was the same pattern that has followed easy credit throughout history. Aggressive expansion prioritised growth over affordability. Consumers took on debt they could not service. Defaults rose.

The data tells the story. Borrowers with deep subprime credit scores, below 580, had a 3.5 percent default rate and accounted for nearly half of all BNPL originations. The six largest BNPL companies issued more than 277 million loans in 2022, totaling nearly $34 billion in merchandise sales.

Debt compounded. Research from UK charity Citizens Advice found that nearly a third of BNPL customers who had made a recent payment had borrowed the money from another lender. They were using debt to pay debt.

Four in ten BNPL users have missed payments in the past year, up from one in three a year earlier.

The regulatory response was swift. The FCA warned BNPL providers about promoting services during the cost of living crisis. The CFPB applied credit card consumer protections. Senator Richard Blumenthal noted that BNPL plans were not protected under the Truth in Lending Act: "The risks, the fees, the late payments, all are concealed. And consumers have no warning."

When affordability checks fail, vulnerable consumers get hurt first.

Klarna has since recovered through cost cutting, better credit performance, and a path toward an IPO. But the lesson remains.

Now the industry wants to add AI agents to the mix. The question is whether we have learned anything, or whether we are about to automate the same mistakes at scale.

The Authorization Problem, Compounded

In our previous piece, we explored the authorization challenge for AI agent transactions. When you tap your phone at a terminal, there is a clear chain: your face unlocks the device, your finger confirms the payment, you are present at the moment of transaction. When an AI agent makes a purchase, that chain dissolves.

The industry responded with spending controls. Authorize your agent to book flights under $500. Allow grocery purchases up to $150 per week. The model borrows from corporate expense management, where employees get cards with pre-set budgets for specific categories.

BNPL breaks this model.

| Single Transaction | BNPL Transaction |
| --- | --- |
| One authorization, one payment | One authorization, multiple future obligations |
| Spending limits work | Spending limits do not capture total commitment |
| Mistake equals one refund | Mistake equals missed payments, credit impact, collections |

A $200 spending limit does not prevent a $200 BNPL purchase that commits you to $200 over six weeks. The limit controls today's spend, not tomorrow's obligation.
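The gap can be sketched in a few lines of Python. This is an illustrative model, not any real protocol: the names, numbers, and pay-in-4 structure are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Purchase:
    amount_today: float      # what leaves the account at checkout
    total_commitment: float  # full obligation, including future instalments

SPEND_LIMIT = 200.00

def naive_check(p: Purchase) -> bool:
    # Today's typical control: only looks at the immediate charge.
    return p.amount_today <= SPEND_LIMIT

def obligation_aware_check(p: Purchase, outstanding: float) -> bool:
    # Counts the full commitment plus obligations already on the books.
    return p.total_commitment + outstanding <= SPEND_LIMIT

# A $200 item on pay-in-4: only $50 is due now, but $200 is committed.
bnpl = Purchase(amount_today=50.00, total_commitment=200.00)

naive_check(bnpl)                                 # True: sails under the cap
obligation_aware_check(bnpl, outstanding=150.00)  # False: blocked
```

The naive check passes because only the first instalment counts against the limit; the obligation-aware check fails because it sees the full commitment on top of debt already outstanding.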

The authorization questions multiply. Does pre-approving an AI for "purchases under $500" include BNPL for a $500 item paid over four months? Can an AI agent accept terms and conditions on your behalf for a credit product? What happens when an AI selects BNPL because it calculates you would prefer the cash flow, but you would have chosen to pay outright?

The original question was "who authorized it?" BNPL adds a harder one: who is responsible for the payments that follow?

The Affordability Paradox

BNPL providers are required to conduct affordability assessments. This is a human-centric process designed to answer basic questions. Can this person afford these payments? What is their existing debt load? What is the risk of financial hardship?

When the customer is an AI agent, those questions become difficult to answer.

The AI passes the user's credentials, but who answers the affordability questions? Does the AI agent have visibility into the user's full financial picture? Can an AI accurately represent a human's ability to pay?

Consider the uncomfortable scenario. An AI agent, optimising for "get the best deal," signs you up for multiple BNPL plans across different merchants in a single day. A laptop from one retailer. A hotel booking from another. A piece of furniture from a third. Each individual affordability assessment passes. The aggregate is unaffordable.

No current protocol tracks cumulative future commitments across AI agents and merchants. If you have three AI agents, one for travel, one for shopping, one for groceries, each could independently sign you up for BNPL without knowing about the others.

The BNPL industry learned, painfully, that individual affordability checks are not enough when consumers can stack obligations across multiple providers. AI agents could recreate this problem at machine speed.

Trust Infrastructure Under Stress

The trust infrastructure we described in our previous piece was built for immediate transactions. Tokenisation protects payment credentials by replacing card numbers with unique substitutes that are useless if intercepted. Cryptographic identity gives each agent a "digital passport" that merchants can verify. Verifiable credentials prove what the agent is allowed to do.

BNPL stress-tests each layer.

Tokenisation works the same. It protects credentials regardless of what payment method the agent selects.

Spending limits break down. A $200 limit tells the agent what it can spend today. It says nothing about future obligations. The limit does not know the difference between a $200 card payment and a $200 BNPL purchase that creates an ongoing commitment.

Verifiable credentials need expansion. The original framework described these as digital permission slips: "This agent is authorized to spend up to $100 on groceries at these merchants." For BNPL, the credentials need to capture not just what the agent can buy, but how it can pay. Is the agent permitted to use credit products? Is BNPL explicitly in scope or explicitly excluded?
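What an expanded credential might carry can be sketched as a data structure. The field names here are hypothetical; the point is that credit permission and an obligation ceiling become explicit, separate fields rather than being implied by a spend limit.

```python
from dataclasses import dataclass

@dataclass
class AgentCredential:
    # Hypothetical shape of an expanded verifiable credential.
    agent_id: str
    merchant_allowlist: list[str]
    spend_limit: float                   # per-transaction cap on immediate charges
    allow_credit_products: bool = False  # BNPL excluded unless explicitly opted in
    max_total_obligation: float = 0.0    # cap on committed future payments

def permits_bnpl(cred: AgentCredential, total_commitment: float) -> bool:
    # Credit must be both in scope and under the obligation ceiling.
    return cred.allow_credit_products and total_commitment <= cred.max_total_obligation

groceries = AgentCredential(
    agent_id="agent-7",
    merchant_allowlist=["grocer.example"],
    spend_limit=100.00,
)

permits_bnpl(groceries, total_commitment=80.00)  # False: credit not in scope
```

With this shape, the safe default falls out naturally: an agent that was never granted `allow_credit_products` cannot select BNPL even when the amount is within its spend limit.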

Obligation tracking does not exist. This is the missing layer. No current protocol tracks cumulative future commitments across multiple agents and merchants. Visa's Intelligent Commerce program and Mastercard's Agent Pay both focus on authenticating individual transactions. Neither addresses aggregate exposure.
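One way to picture the missing layer is a user-level ledger that every agent must write to before committing the user to future payments. This is a sketch under assumed names and numbers, not a description of any shipping protocol.

```python
from collections import defaultdict

class ObligationLedger:
    """Hypothetical user-level ledger: every agent's BNPL sign-up is
    recorded here, so affordability is checked against the aggregate,
    not each plan in isolation."""

    def __init__(self, affordability_ceiling: float):
        self.ceiling = affordability_ceiling
        self.committed = defaultdict(float)  # agent_id -> outstanding obligations

    def total(self) -> float:
        return sum(self.committed.values())

    def try_commit(self, agent_id: str, amount: float) -> bool:
        if self.total() + amount > self.ceiling:
            return False  # would push aggregate exposure past the ceiling
        self.committed[agent_id] += amount
        return True

ledger = ObligationLedger(affordability_ceiling=500.00)
ledger.try_commit("travel-agent", 400.00)    # True: individually affordable
ledger.try_commit("shopping-agent", 300.00)  # False: aggregate would hit 700
```

Each commitment passes or fails against the user's total exposure, so the second agent is blocked even though its own plan, checked in isolation, would look affordable.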

The infrastructure was built for transactions. BNPL creates obligations. These are not the same thing.

The Regulatory Collision

Two regulatory hot zones are now converging.

BNPL regulation is already in motion. The UK is bringing buy now, pay later into formal FCA oversight. The US CFPB is treating BNPL as credit cards for dispute and refund purposes. The EU's Consumer Credit Directive revisions will tighten requirements further.

Agentic commerce regulation remains nascent. PSD2 requires Strong Customer Authentication for electronic payments, explicitly assuming a human is present. The EU AI Act establishes risk-based frameworks for AI systems. Neither was designed with AI-initiated credit in mind.

The collision creates a gap. BNPL regulations assume a human is making the credit decision. AI regulations assume a human is in the loop for oversight. Agentic BNPL removes the human from both sides.

Specific questions need answers:

Does an AI agent count as a "credit intermediary" under financial services regulation? Who conducts the affordability assessment when the applicant is a bot? How does the "right to explanation" under GDPR and the AI Act apply when an AI chose BNPL on your behalf?

The infrastructure is being built faster than the rules that govern it. Merchants may defensively block AI agents. Issuers may decline transactions that lack traditional authentication markers. The commercial ambition is running ahead of the regulatory clarity.

The Liability Gap Widened

Our previous piece identified what legal scholars call the "AI Liability Gap." When an agent makes a purchasing decision based on training data, interpreted preferences, and market conditions, proving fault under traditional rules becomes difficult. The agent did not malfunction. It did exactly what it was designed to do. It just made a choice the user disagrees with.

BNPL extends this liability gap into ongoing financial harm.

| Scenario | Who is liable? |
| --- | --- |
| AI buys wrong product | User returns it, merchant refunds |
| AI selects BNPL without explicit permission | Unclear |
| AI signs up for multiple BNPL plans user cannot afford | Unclear |
| Missed BNPL payments damage user's credit score | Unclear |
| User disputes, BNPL provider says "your agent agreed to terms" | Unclear |

The contract formation question is central. Existing laws like the Uniform Electronic Transactions Act allow contracts "formed by the interaction of electronic agents." But credit agreements typically require explicit consumer consent and specific disclosures. Can an AI agent legally accept a credit agreement on your behalf?

The EU is advancing an AI Liability Directive that would modify the burden of proof in fault-based claims involving AI. In the United States, the FTC has said it will use existing consumer protection authority to prevent unfair practices involving AI in commerce. Specific frameworks for agentic credit remain undefined.

What the Players Should Do

The stakeholders building agentic commerce need to address BNPL explicitly, not as an afterthought.

BNPL providers should build explicit AI agent policies into their terms of service. They need agent-specific affordability protocols that can assess whether the human behind the agent can actually afford the commitment. A "human in the loop" requirement for first-time BNPL via AI would add friction, but friction is sometimes appropriate.

Payment networks and protocol designers should extend spending controls to include "credit product" permissions as a separate toggle. Obligation tracking needs to be built into agentic commerce protocols. Standards for aggregate exposure across multiple agents would help prevent the stacking problem.

AI platforms including OpenAI, Google, and Microsoft should default to excluding credit products from agent permissions. Explicit user opt-in for BNPL-enabled transactions would make the stakes clear. Surface disclosures when an agent selects a credit option.

Regulators need to clarify whether AI agents can accept credit terms on behalf of users. Affordability assessment guidance needs updating for agent-mediated transactions. A mandatory "pause and confirm" for AI-initiated credit would create a human checkpoint where it matters.

Consumers should audit what payment methods their AI agents can access. Consider whether spending limits account for BNPL obligations. Decide a personal policy: should your AI ever be able to use credit?

What Comes Next

The cat food auto-order we described in January was automation. The flight booking was agency. BNPL is obligation.

Obligation is where convenience and control collide.

The BNPL industry learned hard lessons in 2022. Aggressive growth without adequate affordability checks left vulnerable consumers holding debt they could not manage. Valuations collapsed. Layoffs followed. Regulation tightened.

The agentic commerce industry is now building systems that could repeat those mistakes at scale. AI agents that can sign users up for credit products. Multiple agents operating independently without awareness of each other's commitments. Spending controls that do not account for future obligations.

The infrastructure exists to do this differently. Verifiable credentials can include explicit BNPL permissions. Obligation tracking can be built into protocols. Human checkpoints can be required for credit decisions.

Whether the industry chooses to build these safeguards, or races ahead without them, will determine whether agentic BNPL becomes a convenience or a trap.

Where do you draw the line? Should AI agents ever have access to credit products? We would like to hear from others navigating these questions.
