
It's not a shortage. It's a choice.
A 64GB kit of DDR5 memory now costs more than an entire PlayStation 5.
Let that sink in for a moment.
A component. A single part of a computer. More expensive than a complete gaming console with custom silicon, storage, controllers, and a decade of R&D baked in.
DRAM prices are up 171% year-over-year, rising faster than gold. Lenovo is stockpiling RAM with enough inventory to last through 2026. Framework just pulled standalone memory modules from their store to prevent scalping. Some Micro Center locations have reportedly stopped displaying fixed prices on memory because the numbers change too quickly to keep up.
And the experts? They're telling us this is just the beginning. Constraints on both DRAM and NAND are expected to become the new normal throughout 2026 as Big Tech pursues artificial general intelligence at any cost.
We've Seen This Before. Sort Of.
During COVID, the chip shortage became a defining crisis. Automakers couldn't build cars. Console launches were disasters. GPU prices hit absurd highs. The diagnosis seemed straightforward: we didn't have enough manufacturing capacity. The prescription was equally simple: build more fabs.
Governments threw billions at the problem. The CHIPS Act in the US. Similar initiatives in Europe and Asia. Intel announced ambitious expansion plans. TSMC broke ground on new facilities. The message was clear: we'd learned our lesson, and we'd never let supply chain fragility catch us off guard again.
But here's what we got wrong: we assumed the problem was capacity.
The real problem was, and always has been, allocation.
The Allocation Economy
The current memory crisis isn't a manufacturing shortfall in the traditional sense. Samsung, SK Hynix, and Micron, the three companies that dominate global DRAM production, aren't struggling to make chips. They're choosing where those chips go.
And increasingly, they're going to AI.
The numbers tell the story. Memory manufacturers are spending $54 billion on capital expenditure, but that investment is focused on HBM (High Bandwidth Memory) for AI accelerators, not standard DDR5 for the rest of us. HBM requires more wafer area per bit than commodity memory, which means every wafer allocated to HBM is a wafer that isn't producing the memory your laptop needs.
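The tradeoff can be made concrete with a toy allocation model. Every number here is hypothetical; the only claim carried over from the text above is that HBM consumes more wafer area per bit than commodity DRAM:

```python
# Toy model of the HBM-vs-commodity allocation tradeoff.
# All figures are hypothetical illustrations, not industry data.

WAFER_STARTS = 1_000        # wafers per month (hypothetical)
DDR5_GB_PER_WAFER = 6_000   # commodity DDR5 yield per wafer (hypothetical)
HBM_AREA_MULTIPLIER = 3     # assume HBM needs ~3x the wafer area per bit

def commodity_output(hbm_share: float) -> float:
    """GB of commodity DDR5 produced when `hbm_share` of wafers go to HBM."""
    return (1 - hbm_share) * WAFER_STARTS * DDR5_GB_PER_WAFER

def hbm_output(hbm_share: float) -> float:
    """GB of HBM produced from the wafers diverted to it (fewer bits/wafer)."""
    return hbm_share * WAFER_STARTS * DDR5_GB_PER_WAFER / HBM_AREA_MULTIPLIER

# Diverting 20% of wafer starts removes 20% of commodity supply,
# but replaces it with only a third as many bits of HBM.
print(commodity_output(0.0))   # 6,000,000 GB with no HBM
print(commodity_output(0.2))   # 4,800,000 GB of DDR5 remain
print(hbm_output(0.2))         # only 400,000 GB of HBM come back
```

The asymmetry is the point: under these assumptions, each wafer shifted to HBM erases three times as many commodity bits as it creates AI bits, which is why the diversion shows up so quickly in DDR5 prices.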
Nvidia's pivot to LPDDR5X for its Grace and Vera CPUs has created what analysts are calling a "seismic shift" in the supply chain. Each Grace CPU requires 480GB of LPDDR5X memory. A premium smartphone uses 16GB, so a single Grace CPU consumes as much memory as thirty flagship phones. Nvidia is now consuming memory at the scale of a major smartphone manufacturer, except they're not building phones. They're building the infrastructure for an AI arms race with no clear finish line.
The result? US and Chinese hyperscalers are receiving just 70% of the server DRAM they order, despite agreeing to contract price increases of up to 50%. Smaller OEMs and channel players? They're seeing fulfillment rates closer to 35-40%.
Why "Build More Fabs" Doesn't Work
The instinct to solve supply problems with more supply is understandable. It's also dangerously naive.
Look at Intel. The company bet heavily on the "build more capacity" strategy. They're now facing an existential crisis, having burned through cash on expansion while struggling to compete on the products that would justify that capacity. Building fabs is extraordinarily capital-intensive. If demand shifts, or if you've misjudged which products the market actually wants, you're left with expensive infrastructure and no path to profitability.
Memory manufacturers learned this lesson painfully in 2022-2023. Oversupply cratered prices and left all three major producers in dire financial straits. It was great for consumers, as DDR5 prices hit historic lows, but it was devastating for the companies that make the stuff.
Now those same companies are being asked to ramp up production to meet AI demand while simultaneously serving traditional markets. Their response has been telling: they're not doing it. TrendForce reports that memory makers are limiting capital expenditure on additional capacity, instead focusing on the far more lucrative HBM chips that AI accelerators require.
This isn't shortsightedness. It's rational self-preservation. If the AI boom cools, and memory manufacturers clearly have doubts about its longevity, excess capacity becomes a death sentence.
The Cascade Effect
Here's where it gets uncomfortable for anyone who builds, sells, or relies on technology products.
This isn't just a PC enthusiast problem. The same memory constraints affecting desktop DDR5 are rippling through every segment of the electronics industry. Smartphones. Automotive. IoT. Consumer electronics. Enterprise infrastructure. Everything that contains silicon also contains memory, and memory is now being rationed.
SMIC's co-CEO warned that carmakers, smartphone manufacturers, and consumer electronics companies are all "facing pressure from price hikes and supply constraints in the coming year." Some customers are reportedly reluctant to place orders because they can't predict how many memory chips they'll actually receive.
The implications extend beyond component costs. When memory becomes scarce and expensive, product launches get delayed. Manufacturers are already pushing new memory module releases from Q4 2025 into 2026. Device makers face impossible choices: absorb higher costs, pass them to consumers, or ship products with less memory than planned.
And unlike the COVID chip shortage, there's no clear end date. Experts suggest this pricing trend could continue for at least four years, the length of contracts that major buyers have already signed with Samsung and SK Hynix.
What This Means
We've spent over a decade in product development, much of it in fintech and retail technology. We've navigated supply chain disruptions before. But what we're witnessing now feels qualitatively different.
The COVID chip shortage was a demand shock: sudden, unexpected, and ultimately temporary. The current memory crisis is a structural reallocation. The semiconductor industry is reorganizing itself around AI infrastructure, and everyone else is fighting for what's left.
This has profound implications for anyone building technology products:
Cost models are broken. If you're planning product launches for 2026 and beyond, your bill-of-materials (BOM) estimates from six months ago are already obsolete. Memory that cost $7-8 per module is now hovering around $13, with no stabilization in sight.
Supply chain relationships matter more than ever. Lenovo's decision to stockpile 50% more inventory than usual isn't paranoia. It's competitive strategy. Companies with strong supplier relationships and the capital to buy ahead will have meaningful advantages over rivals that cannot.
The "democratization of AI" narrative needs scrutiny. We talk about AI becoming accessible to everyone, but the infrastructure buildout is consuming resources that would otherwise support broader technology development. There's a real tension between AI advancement and the health of the broader tech ecosystem.
Pricing power is shifting. When Micro Center stops displaying fixed prices on memory because they change too fast, something fundamental has changed in the market. We may be entering an era where component pricing becomes as volatile as commodity markets, with all the planning challenges that implies.
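The cost-model point above is easy to quantify with a back-of-the-envelope sketch. The per-module prices are the ones quoted earlier; the module count and non-memory cost are hypothetical:

```python
# Back-of-the-envelope BOM impact of the memory price move described above.
# Only the per-module prices ($7-8, now ~$13) come from the article;
# the device configuration is a made-up example.

def bom_cost(modules: int, price_per_module: float,
             other_components: float) -> float:
    """Total bill-of-materials cost for one unit."""
    return modules * price_per_module + other_components

OLD_PRICE = 7.50    # midpoint of the $7-8 figure
NEW_PRICE = 13.00   # current price quoted above

MODULES_PER_UNIT = 4       # hypothetical device
OTHER_COMPONENTS = 220.00  # hypothetical non-memory BOM

old = bom_cost(MODULES_PER_UNIT, OLD_PRICE, OTHER_COMPONENTS)
new = bom_cost(MODULES_PER_UNIT, NEW_PRICE, OTHER_COMPONENTS)

print(f"old BOM: ${old:.2f}")                 # $250.00
print(f"new BOM: ${new:.2f}")                 # $272.00
print(f"increase: {100 * (new - old) / old:.1f}%")  # 8.8%
```

Even in this modest configuration, where memory is a small slice of the BOM, the price move adds nearly nine percent to unit cost. A memory-heavy product (a server, a workstation) sees a far larger swing, which is exactly why buyers are racing to lock in supply.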
The Uncomfortable Question
Is this sustainable? Can the global technology economy absorb a permanent shift in component economics, or will AI's appetite trigger cascading effects we haven't begun to price in?
We don't have a definitive answer. But we do know that the "just build more fabs" mentality that emerged from COVID isn't going to save us this time. The problem isn't that we can't make enough chips. The problem is that the most powerful players in the industry have decided where those chips should go, and it's not to the rest of us.
We're not in a bubble. We're in a transition. And transitions have casualties.
The companies and products that thrive in this environment will be those that adapt, whether that means securing supply chains, redesigning products around available components, or finding ways to deliver value with less silicon.
The companies that assume this is temporary, that prices will normalize, that the market will self-correct? They're the ones we'd worry about.
What's your read on this? Is the AI buildout a temporary surge that will eventually balance out, or are we looking at a permanent restructuring of how the semiconductor industry allocates resources? We'd like to hear from others navigating these challenges.