FTC Chairman Andrew Ferguson sent warning letters to the CEOs of Visa, Mastercard, PayPal, and Stripe over debanking. Those four companies are also building the infrastructure for AI agent commerce. The overlap is not a coincidence.
On March 26, the Federal Trade Commission did something unusual. Chairman Andrew Ferguson sent warning letters to the CEOs of four companies: Visa, Mastercard, PayPal, and Stripe. The message was blunt. Denying consumers access to payment services based on political or religious views could violate the FTC Act. The Commission is watching, and it's willing to investigate.
The letters cite President Trump's August 7, 2025 Executive Order on debanking, which called it "unacceptable to debank law-abiding citizens due to political affiliations, religious beliefs, or lawful business activities." Ferguson's letters turn that executive language into a regulatory signal: the FTC considers ideological access denial a potential unfair trade practice.
Read those four names again. Visa. Mastercard. PayPal. Stripe. These aren't just the companies the FTC chose to warn about debanking. They are the four companies building the foundational infrastructure for agentic commerce. Every one of them shipped a production agentic payments protocol in the past six months.
The regulator is telling the architects of the next payment infrastructure that they can't use their current infrastructure to gatekeep access based on ideology. The question is what happens when access decisions involve agents, not just humans.
What the Letters Actually Say
The FTC's letters warn that "deplatforming customers or denying them access to services inconsistent with published terms of service or customer expectations" could trigger an investigation and enforcement action under Section 5 of the FTC Act, which prohibits unfair or deceptive practices.
The letters to PayPal and Stripe go further. They cite specific media reports about individuals and organisations claiming they were denied service for ideological reasons. After January 6, 2021, both companies stopped processing payments for certain groups, including a Trump campaign fundraising site and a Christian crowdfunding platform.
Visa and Mastercard received broader warnings focused on their role as network operators. As card schemes, they set the rules that processors and acquirers follow. If a network decides a merchant category is unacceptable, the entire downstream ecosystem complies. That is an extraordinary amount of power over who can participate in digital commerce.
Ferguson's letters don't allege specific violations. They're a warning shot, a statement of regulatory intent. But the FTC press release makes the enforcement posture clear: the Commission will "act to protect Americans from unlawful corporate discrimination."
The Operation Choke Point Shadow
This is not the first time the US government has confronted the problem of payment infrastructure being used to control market access.
Operation Choke Point, launched by the Department of Justice in 2013, pressured banks to cut ties with legal-but-disfavoured industries: payday lenders, firearms dealers, adult entertainment. The mechanism was indirect. Regulators didn't ban the businesses. They made banks afraid to serve them. The FDIC eventually acknowledged the programme went too far, and it was formally ended.
The FTC's letters flip the Choke Point dynamic. Instead of the government pressuring private companies to deny access, the government is warning private companies not to deny access on their own initiative. The concern is the same infrastructure bottleneck, but the regulatory direction is reversed.
That distinction matters. Payment processors sit at a chokepoint in the economy by design. Every digital transaction flows through a small number of networks and processors. When those companies make access decisions based on anything other than fraud risk or legal compliance, they are effectively exercising a veto over economic participation. The FTC is saying that veto has limits.
The Agentic Payments Overlap
Here's where this gets interesting for anyone following what these four companies are actually building right now.
Visa launched the Trusted Agent Protocol (TAP), a cryptographic framework for verifying AI agent identity at the network edge. Mastercard deployed Agent Pay, a governance framework for AI-initiated transactions with tokenisation and fraud controls. Stripe built the Machine Payments Protocol (MPP), designed for autonomous machine-to-machine commerce. PayPal is rolling out agentic commerce services, including integration with OpenAI's Agentic Commerce Protocol.
These are not side projects. They're the core strategic bets these companies are making on the future of payments. We've covered each of them extensively, and we mapped the full stack in our Q1 review.
The FTC's debanking warning, though it never mentions AI or agents, lands directly on the companies that will decide how AI agents access the payment system. That's the connection worth paying attention to.
If a payment processor can deny a human being access to financial services based on ideology, what stops it from making the same kind of access decision about an AI agent? Who decides which agents get to transact? On whose behalf? Under what criteria?
We've already identified the trust gap in agentic commerce as one of the defining challenges of this market. Consumer trust in AI-initiated purchases sits at roughly 10 percent. The debanking question adds a new dimension to that problem. It's not just "do I trust AI agents to buy for me?" It's "do I trust the gatekeepers to let my agent through?"
Access Decisions at Machine Speed
The debanking debate for humans is slow. A person gets denied service. They notice. They complain. Maybe the media picks it up. Maybe a regulator investigates. The feedback loop, while imperfect, at least exists.
Agent commerce doesn't work that way.
When Stripe's MPP processes a machine-to-machine transaction, the access decision happens in milliseconds. When Visa's TAP verifies an agent's cryptographic signature, the authentication is binary: trusted or untrusted. There is no appeals process built into the protocol. No explanation for why an agent was denied. No consumer awareness that their agent was rejected at the network edge.
The infrastructure being built for agentic payments is, by necessity, designed for speed and automation. That same design makes it harder to detect and challenge discriminatory access decisions. A processor that denies service to a human customer at least has to explain itself eventually. A protocol that rejects an agent request returns a 403 error and moves on.
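To make the asymmetry concrete, here is a minimal sketch of what a protocol-level access decision looks like from the verifier's side. This is an illustration only: it uses an HMAC over a request payload as a stand-in for the asymmetric signatures that protocols like Visa's TAP actually use, and the key registry, function name, and status codes are all hypothetical. The point is structural, not cryptographic: the decision reduces to a status code, with no explanation and no appeal path encoded anywhere.

```python
import hashlib
import hmac

# Hypothetical registry of agent keys the network trusts.
# (Real schemes use asymmetric keys; a shared secret keeps the sketch runnable.)
TRUSTED_AGENT_KEYS = {"agent-key-001": b"demo-shared-secret"}


def verify_agent_request(key_id: str, payload: bytes, signature: str) -> int:
    """Return an HTTP-style status for an agent's payment request."""
    secret = TRUSTED_AGENT_KEYS.get(key_id)
    if secret is None:
        # Unknown agent: rejected in microseconds, no reason given,
        # no appeals process, no consumer-visible trace.
        return 403
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return 403  # bad signature: the same opaque rejection
    return 200  # trusted: the transaction proceeds at machine speed


payload = b'{"amount": 1999, "merchant": "example.test"}'
good_sig = hmac.new(b"demo-shared-secret", payload, hashlib.sha256).hexdigest()

print(verify_agent_request("agent-key-001", payload, good_sig))  # 200
print(verify_agent_request("agent-key-999", payload, good_sig))  # 403
```

Note what the function does not return: a reason. Whether the 403 reflects a bad signature, an expired key, or a policy decision about who the agent's principal is, the caller sees the same three digits.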
This doesn't mean these companies will abuse agentic infrastructure. But the FTC's letters are a reminder that the same organisations now designing agent access protocols have a documented history of making access decisions that the government considers problematic.
What to Watch
The FTC's letters are not enforcement actions. They carry no fines, no injunctions, no mandated changes. They are a signal. But signals from a regulator with Section 5 authority tend to produce compliance responses.
Expect all four companies to review their terms of service and acceptable use policies in the coming weeks. Bloomberg, American Banker, and PYMNTS all noted that the letters arrive as these companies are actively courting government contracts and regulatory approval for new payment products. Nobody wants an FTC investigation while trying to get agentic commerce protocols adopted.
The deeper question is whether the principles in these letters will extend to agent access. The FTC warned against denying humans access based on ideology. But the same infrastructure handles agent authentication. The same terms of service govern which agents can transact. The same companies make the rules.
No regulator has yet asked the question: can a payment network deny service to an AI agent? On what grounds? With what oversight? The FTC's debanking letters don't answer those questions. But they establish the principle that access to payment infrastructure is not purely a private business decision. It carries public interest obligations.
That principle will matter enormously when the gatekeepers are making millions of access decisions per second, and the entities being granted or denied access aren't humans with the ability to complain. They're agents operating at machine speed, on behalf of humans who may never know their agent was turned away.
The FTC told four companies they can't gatekeep human access to payments based on ideology. Those same four companies are now building the gatekeeping infrastructure for AI agents. When the first agent gets denied access to a payment network, who will even know it happened?