The company that put processors in every smartphone on Earth just entered the data centre silicon market with a chip explicitly designed for agentic AI workloads. The infrastructure story now goes all the way down to the metal.
For 35 years, Arm did one thing. It designed processor architectures and licensed them to companies that built the actual chips. Qualcomm, Apple, Samsung, MediaTek, Broadcom. Arm drew the blueprints. Everyone else poured the concrete.
That ended on March 24, 2026.
Arm announced the AGI CPU, the first in-house silicon product in the company's history. Not a reference design. Not an IP block. A finished chip, ready to order, targeting data centre workloads. Up to 136 Neoverse V3 cores on a 3nm TSMC process. 300 watts. Air-coolable.
The name isn't subtle. AGI. Not "server processor" or "cloud compute." AGI. And when you read the press materials, the positioning is even more direct: Arm calls this "the silicon foundation for the agentic AI cloud era."
That choice of words matters. Arm didn't name its first chip after raw performance or power efficiency. It named it after a category of AI workloads that most of the tech press is still figuring out how to explain.
The same companies buying this chip are the companies building the agentic payments infrastructure. When the silicon layer brands itself around agentic workloads, the infrastructure story is complete from protocol to processor.
What the AGI CPU Actually Is
Strip away the branding and you're looking at a genuine technical achievement.
The AGI CPU packs up to 136 Neoverse V3 cores across two chiplets, fabbed on TSMC's 3nm process. All-core frequency hits 3.2 GHz with a 3.7 GHz boost. Each core gets 2MB of L2 cache. A shared 128MB system-level cache sits across the package. Twelve channels of DDR5 at 8800 MT/s deliver more than 800 GB/s of memory bandwidth.
The instruction set is Armv9.2 with dual 128-bit SVE2 (Scalable Vector Extension 2) units per core, the same scalable vector extension family already shipping in Arm-based cloud processors such as AWS's Graviton.
But the headline number is power. The AGI CPU runs at 300 watts TDP. A comparable Intel Xeon draws 500 watts for 144 cores. AMD's EPYC 192-core parts also sit at 500 watts. Arm's pitch is simple: 0.45 cores per watt versus 0.38 for AMD and 0.29 for Intel. More compute, less heat, smaller electricity bills.
The rack density numbers follow from there. Air cooling gets you 8,160 cores per rack. Liquid cooling pushes that past 45,000. Arm claims 2x performance per rack versus x86, though no independent benchmarks exist yet. Take the claim seriously but not literally until third-party testing lands.
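Those bandwidth, efficiency, and density figures are easy to sanity-check against the quoted specs. Here is a minimal back-of-the-envelope sketch in Python using only the numbers above; the sockets-per-rack and per-rack power lines are our own derivations and count CPU TDP only, not memory, networking, or accelerators.

```python
# Back-of-the-envelope check of the figures quoted above. Inputs are the
# article's numbers; derived lines are labelled and count CPU TDP only.

# Memory bandwidth: 12 DDR5 channels x 8800 MT/s x 8 bytes per transfer.
bandwidth_gb_s = 12 * 8800 * 8 / 1000
print(f"Peak memory bandwidth: {bandwidth_gb_s:.0f} GB/s")   # ~845 GB/s

chips = {
    "Arm AGI CPU":       {"cores": 136, "tdp_w": 300},
    "AMD EPYC (192c)":   {"cores": 192, "tdp_w": 500},
    "Intel Xeon (144c)": {"cores": 144, "tdp_w": 500},
}

for name, c in chips.items():
    print(f"{name:18s} {c['cores'] / c['tdp_w']:.2f} cores per watt")

# Air-cooled rack density claimed by Arm: 8,160 cores per rack.
cores_per_rack = 8_160
sockets_per_rack = cores_per_rack / chips["Arm AGI CPU"]["cores"]        # 60 sockets
cpu_power_per_rack_kw = sockets_per_rack * chips["Arm AGI CPU"]["tdp_w"] / 1_000

print(f"Sockets per rack: {sockets_per_rack:.0f}")
print(f"CPU power per rack: {cpu_power_per_rack_kw:.1f} kW (CPU TDP only)")
```

The cores-per-watt lines reproduce Arm's 0.45 versus 0.38 and 0.29 pitch; the rack works out to roughly 60 sockets and about 18 kW of CPU TDP, which is comfortably inside air-cooling territory.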
Why Arm Called It "AGI"
Here's where the tech coverage missed the point.
Most outlets reported this as either a hardware story ("Arm's first chip") or a financial story ("stock jumped 16 percent"). Both are true. Neither explains why the chip is called AGI or what "agentic AI infrastructure" means at the silicon level.
Arm's own technical positioning describes the chip's role as CPU-side orchestration. In an agentic AI data centre, the GPU or accelerator runs the model. The CPU does everything else: coordinating accelerators, managing data movement between memory and compute, handling the networking stack, running the orchestration frameworks that route tasks across agents.
Think of it this way. When an AI agent needs to check a flight price, verify a user's payment credentials, query an inventory system, and book a ticket, that workflow doesn't run on a single GPU. It runs across multiple services, each with its own compute, connected by a CPU that manages the choreography. The faster and more efficiently that CPU handles orchestration, the more agents you can run per rack.
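To make that concrete, here is a minimal, hypothetical sketch of the pattern: the model decides what to do, but the fan-out, verification, and sequencing of those service calls run as ordinary CPU work. Every function and service name below is an illustrative placeholder, not any real API.

```python
# Minimal sketch of CPU-side agent orchestration: the accelerator runs the model,
# but scheduling, fan-out, and sequencing below run on general-purpose cores.
# All names and endpoints here are hypothetical stand-ins.
import asyncio

async def check_flight_price(route: str) -> float:
    await asyncio.sleep(0.05)          # stand-in for a network call to a fares API
    return 412.00

async def verify_payment_credentials(user_id: str) -> bool:
    await asyncio.sleep(0.03)          # stand-in for a call to the trust/identity layer
    return True

async def query_inventory(route: str) -> bool:
    await asyncio.sleep(0.04)          # stand-in for a seat-availability lookup
    return True

async def book_ticket(route: str, user_id: str) -> str:
    await asyncio.sleep(0.06)          # stand-in for the actual purchase call
    return "confirmation-123"

async def run_booking_agent(route: str, user_id: str) -> str | None:
    # Independent checks fan out in parallel; the CPU schedules and awaits them.
    price, credentials_ok, in_stock = await asyncio.gather(
        check_flight_price(route),
        verify_payment_credentials(user_id),
        query_inventory(route),
    )
    if not (credentials_ok and in_stock and price < 500):
        return None                    # agent declines to purchase
    return await book_ticket(route, user_id)

print(asyncio.run(run_booking_agent("LHR-JFK", "user-42")))
```

Multiply that loop by millions of concurrent agents and the orchestration CPU, not the model, becomes the throughput ceiling Arm is targeting.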
Arm's use case list for the AGI CPU is explicit: accelerator management, agentic orchestration, services and applications for agentic task scale-out, networking and data plane compute. This isn't a general-purpose server chip that happens to work for AI. It's a server chip designed around the assumption that agentic workloads are the primary use case.
That assumption is a bet. A big one. And the launch partner list tells you who's making the same bet.
The Launch Partner Map
Meta co-developed the chip and is the first customer. Meta will deploy AGI CPUs alongside its own MTIA accelerators for running AI workloads across Facebook, Instagram, and WhatsApp. Meta already uses Arm architecture in its custom server designs. Buying finished silicon from Arm directly, instead of designing around Arm IP and having someone else fabricate it, cuts a step out of the supply chain.
OpenAI is a launch partner. The company running the largest consumer AI agent platform, the one that processes "Buy it in ChatGPT" checkout sessions through the Agentic Commerce Protocol, wants Arm silicon in its data centres. The CPU managing those agent orchestration workflows matters at OpenAI's scale.
Cloudflare is a launch partner. This is the company that powers Visa's Trusted Agent Protocol infrastructure at the network edge and co-founded the x402 stablecoin settlement protocol with Coinbase. Cloudflare's edge network handles the cryptographic verification that distinguishes legitimate AI agents from bots. Every TAP request that Visa processes gets validated at Cloudflare's edge before reaching a merchant. More efficient CPUs at the edge means more agent verification per watt.
SAP is a launch partner. If Cloudflare represents the network layer of agentic commerce, SAP represents the enterprise layer. SAP runs procurement systems for the world's largest companies. When agentic AI automates purchase orders, supplier negotiations, and invoice reconciliation, those workflows run through SAP's infrastructure. SAP on Arm silicon means the enterprise backbone of agentic commerce runs on the same architecture as the AI orchestration layer.
Cerebras, the AI accelerator company, is building systems where AGI CPUs manage Cerebras wafer-scale engines. SK Telecom, Positron, Rebellions, and F5 round out the named partners. The broader ecosystem includes over 50 companies: AWS, Broadcom, Google, Marvell, Micron, Microsoft, NVIDIA, Samsung, SK hynix, and TSMC.
Read that list again. The companies building AI models (Meta, OpenAI). The companies running the agentic commerce network layer (Cloudflare). The companies managing enterprise procurement (SAP). The companies fabricating chips (TSMC). The companies providing cloud infrastructure (AWS, Google, Microsoft). The GPU makers (NVIDIA). The memory suppliers (Micron, SK hynix, Samsung).
This isn't a chip launch. It's a supply chain declaration.
The Business Model Revolution
The chip itself is interesting. The business model behind it is potentially transformative.
For 35 years, Arm's economics worked like this: a chip company designs a processor using Arm's instruction set architecture and core designs, then fabricates and sells that chip. On a $1,000 server chip, Arm collects roughly $50 in licensing fees. If the customer uses more of Arm's pre-designed blocks rather than custom cores, Arm might get $100. Either way, Arm captures 5 to 10 percent of the chip's value.
Now Arm sells the finished chip. On that same $1,000 chip, Arm captures roughly $500 in gross profit. The margin shift from IP licensing to silicon sales is the difference between being a toll booth and being the highway.
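For illustration, a quick sketch of the per-chip economics using the article's round numbers; these are illustrative figures, not disclosed financials.

```python
# Illustrative comparison of Arm's per-chip value capture under the two models,
# using the rough round numbers quoted above (not disclosed financials).

chip_price = 1_000              # hypothetical server chip selling price, USD

royalty_per_chip_low  = 50      # licensing the architecture, customer-designed cores
royalty_per_chip_high = 100     # licensing more of Arm's pre-designed blocks
first_party_gross     = 500     # rough gross profit when Arm sells the finished chip

print(f"Licensing capture: {royalty_per_chip_low / chip_price:.0%} "
      f"to {royalty_per_chip_high / chip_price:.0%} of chip value")
print(f"First-party capture: {first_party_gross / chip_price:.0%} of chip value")
print(f"Per-chip uplift vs licensing: {first_party_gross / royalty_per_chip_high:.0f}x "
      f"to {first_party_gross / royalty_per_chip_low:.0f}x")
```

On these numbers, selling the chip captures roughly five to ten times the per-unit economics of licensing it, which is the entire logic of the move.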
Arm's targets are staggering. The company wants $15 billion annually from chip sales within five years. Total revenue target: $25 billion, which would be roughly 5x its current level. Arm will break out chip revenue as a separate reporting line.
The stock jumped 16 percent on the announcement. The market understood immediately.
But there's a tension here that the financial coverage glossed over. Arm is now competing with its own customers. Qualcomm, MediaTek, Marvell, and Ampere all build Arm-based server chips. Arm just entered their market with a product that has one advantage none of them can match: it's designed by the people who designed the architecture.
Arm CEO Rene Haas addressed this directly on Stratechery. The licensing business continues. Customers can still build their own Arm chips. But the message is clear: if you want the best Arm chip, Arm will sell it to you.
The Vertical Integration Pattern
This is where the payments angle gets sharp.
Arm's move from IP licensing to finished silicon is the same kind of vertical integration we've been tracking across the agentic commerce stack. Companies that occupied one layer are expanding into adjacent layers, because the economics of agentic infrastructure reward control over more of the stack.
Visa used to process card payments. Now it's building the Trusted Agent Protocol, a trust and identity layer for AI agents. That's not payment rails. That's governance infrastructure.
Mastercard used to process card payments too. Now it's building Agent Pay and Verifiable Intent, a framework for proving that a human actually authorised what an AI agent did on their behalf. That's not processing. That's liability architecture.
Stripe used to be a payment gateway. Now it's co-authoring the Agentic Commerce Protocol with OpenAI, defining how AI agents discover and purchase products. That's not payment infrastructure. That's commerce choreography.
And now Arm, which used to license chip designs, sells finished silicon positioned explicitly for the agentic AI workloads that those protocols enable. Same pattern. Different layer.
The logic is consistent across all of them. When a new category of infrastructure emerges, the companies closest to it expand vertically to capture more value. Arm saw that the AI hardware buildout was creating demand for a type of CPU that its licensees weren't building fast enough. So it built one.
The Data Centre Maths
The financial case for the AGI CPU is straightforward, and it connects directly to the infrastructure economics we've been covering.
Arm claims the chip could save up to $10 billion in capex per gigawatt of AI data centre capacity. That figure comes from three things: lower power draw per core, higher density per rack, and the fact that air cooling is comfortable at 300 watts per socket and increasingly marginal at 500.
Liquid cooling for data centres costs millions per facility. If Arm's power envelope lets operators stay on air cooling for longer, the savings compound across every rack. At 8,160 cores per air-cooled rack versus whatever x86 delivers at higher wattages, the density advantage translates into fewer racks, fewer facilities, and lower real estate costs.
For a hyperscaler like Meta deploying tens of thousands of servers, those economics matter. For the companies building AI infrastructure at $650 billion in annual capex, even a 10 percent improvement in cores per watt shifts the ROI calculation on every new data centre.
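As a rough sensitivity check on those cores-per-watt figures (not a capex model), here is what a fixed, hypothetical CPU power budget buys at each vendor's quoted efficiency. The 100 MW "CPU share" of a one-gigawatt site is an assumed illustration; in practice most of that gigawatt goes to accelerators, memory, networking, and cooling.

```python
# Illustrative only: cores available within a fixed CPU power budget at each
# vendor's cores-per-watt figure quoted earlier. The 100 MW CPU share is an
# assumption; treat this as a sensitivity check, not a data centre capex model.

cpu_power_budget_mw = 100                      # hypothetical CPU share of a 1 GW site
cores_per_watt = {"Arm AGI CPU": 0.45, "AMD EPYC": 0.38, "Intel Xeon": 0.29}

baseline = cores_per_watt["Intel Xeon"] * cpu_power_budget_mw * 1e6
for name, cpw in cores_per_watt.items():
    cores = cpw * cpu_power_budget_mw * 1e6
    print(f"{name:12s} {cores / 1e6:5.1f}M cores in {cpu_power_budget_mw} MW "
          f"({cores / baseline:.2f}x the Intel baseline)")
```

On the vendor-quoted numbers, the same CPU power budget yields roughly 55 percent more cores than the Intel baseline and about 18 percent more than AMD, which is where the capex-per-gigawatt argument comes from.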
The catch: Arm's performance claims are unverified. No independent benchmarks exist. "2x performance per rack versus x86" is a vendor claim, full stop. Production silicon is ready to order now, with volume shipments expected by end of 2026. The benchmark gap should close as partners like Meta put the chips into production and real-world numbers emerge.
The Competitive Landscape
Intel is struggling. That's not editorial opinion. It's quarterly earnings. Intel's data centre business has lost market share for four consecutive years, its foundry ambitions have consumed billions without delivering competitive nodes, and its latest Xeon generation draws 500 watts while delivering fewer cores per watt than either AMD or Arm.
AMD has been the beneficiary, with its EPYC processors taking meaningful data centre share on performance per watt. But AMD's advantage is incremental. Better x86, still x86. Same instruction set, same software compatibility, same ecosystem.
Arm is entering from a completely different angle. The Arm instruction set already runs on virtually every smartphone, most embedded systems, and a growing share of cloud instances through AWS's Graviton, Google's Axion, and Microsoft's Cobalt processors. The software ecosystem compatibility argument that kept x86 dominant for two decades has eroded.
What Arm adds is something none of its licensees could: a first-party chip with the architecture team's full attention. AWS Graviton is an excellent Arm chip designed by Annapurna Labs. The AGI CPU is an Arm chip designed by Arm. If the architecture company can't build the best implementation of its own architecture, something is wrong. And based on the specs, nothing is wrong.
The real competitive question isn't Arm versus Intel. It's Arm the chipmaker versus Arm's licensees. If the AGI CPU outperforms Graviton, Axion, and Cobalt, why would hyperscalers keep designing their own? Some will, for differentiation. Others will conclude that buying from Arm directly is cheaper than maintaining a silicon design team.
Why Payments Professionals Should Care
If you work in payments, you might be wondering why a server chip matters to you. Fair question. Here's the answer.
The agentic commerce stack we mapped in Q1 2026 has four layers: discovery, trust, processing, and settlement. We've covered every layer. We've written about the protocols, the companies, the gaps. But we treated the silicon layer as a given. Servers exist. Chips compute. The interesting decisions happen above the metal.
Arm just changed that assumption. When the chip itself is designed for agentic orchestration, branded around agentic workloads, and sold to the companies building agentic commerce infrastructure, the silicon layer stops being invisible. It becomes a strategic input.
Cloudflare running Arm CPUs at the edge means Visa's TAP infrastructure can verify more agents per watt. SAP running Arm in its data centres means enterprise procurement agents orchestrate more efficiently. OpenAI running Arm means the AI models powering ChatGPT checkout sessions get more orchestration capacity per rack.
None of this changes how a payment is authorised or settled. But it changes the cost and density of the infrastructure that makes agentic payments possible. And as we've covered repeatedly, the companies that control infrastructure costs control the economics of whatever runs on top of them.
Arm just reached the bottom of the stack. It didn't find empty space. It found the same companies, the same ambitions, and the same pattern of vertical integration that we've been tracking all the way from the protocol layer down.
The infrastructure goes all the way to the metal now. And the companies securing it still have gaps to close.
The agentic commerce stack now reaches from protocol to processor. But Arm is selling chips to its own licensees, Visa and Mastercard are building governance on top of someone else's silicon, and nobody has agreed on how these layers connect. When the infrastructure goes all the way down to the metal, who decides where one company's territory ends and another's begins?