AI Memory Is So Expensive That SK Hynix Wants a U.S. Listing

The clearest sign that AI memory has become wildly expensive is not a chip price chart.

It is that SK Hynix wants to raise billions of dollars in the United States to keep up.

On March 25, SK Hynix said it had made a confidential filing for a U.S. listing in the second half of 2026. Reuters reported that the deal could raise as much as $14 billion if the company sells roughly 2% to 3% of its shares, potentially making it the biggest U.S. listing in five years. The company said the plan is not yet final. Still, the message is already clear: AI memory has become such a capital-intensive business that even one of the world's strongest chipmakers wants deeper access to global capital.
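Those reported figures also imply a rough valuation. A minimal back-of-envelope sketch, assuming the full $14 billion would come entirely from selling the stated stake (the figures are the reported upper bound and stake range, not official deal terms):

```python
# Back-of-envelope: if selling a fractional stake s raises proceeds P,
# the implied whole-company value is P / s.
proceeds = 14e9  # USD, reported upper bound of the raise

for stake in (0.02, 0.03):
    implied_value = proceeds / stake
    print(f"Selling {stake:.0%} for $14B implies ~${implied_value / 1e9:.0f}B")
```

At a 2% stake the implied company value is about $700 billion; at 3%, roughly $467 billion.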

That matters because SK Hynix is not a struggling turnaround story looking for rescue money. It is one of the biggest winners of the AI buildout. The company posted record 2025 revenue of 97.1467 trillion won and record operating profit of 47.2063 trillion won, driven by high-value products including HBM, or high-bandwidth memory. Reuters also reported in January that quarterly profit more than doubled and that the company sees the AI boom sustaining rapid memory-chip demand.

So this is not a “we need cash because the business is weak” story. It is almost the opposite. AI memory is now so strategically valuable, capacity-constrained, and equipment-heavy that even the leader wants more financial firepower.

AI memory changed the economics of memory

High-bandwidth memory is not ordinary DRAM with better marketing.

SK Hynix describes HBM as a high-value, high-performance memory that vertically interconnects multiple DRAM chips to dramatically increase data-processing speed versus conventional DRAM. That makes it a crucial ingredient in the GPUs used to train and run advanced AI systems. The company says its Indiana project will mass-produce next-generation HBM for AI products, which shows how central this category has become to the broader AI stack.

That technical edge has turned memory from a familiar cyclical commodity into something closer to strategic infrastructure. Reuters reported in January that customers are increasingly seeking multi-year supply agreements instead of the old one-year contracts as they scramble to secure long-term memory supply. The same report said some memory prices surged more than 300% in the fourth quarter from a year earlier, according to TrendForce, because AI infrastructure demand is straining production capacity.

The strain is not limited to HBM. Reuters reported in October 2025 that SK Hynix said its 2026 DRAM, HBM, and NAND output was already sold out. More recently, an executive at Solidigm, SK Hynix's U.S. storage unit, told Reuters that AI systems expected later this year may require 35% more storage than earlier models and warned that supply strains could last through at least 2030. In other words, AI is not only making premium memory more valuable. It is starting to tighten the whole memory complex around it.

That is why the story in the headline matters. AI memory is expensive not only because the chips sell at better prices, but because making enough of them has become brutally capital-hungry.

SK Hynix is spending like a company racing a supply wall

The best evidence is what the company is buying.

On March 24, Reuters reported that SK Hynix placed an 11.95 trillion won, or about $7.97 billion, order for ASML EUV lithography tools. Reuters called it the largest single order ever publicly disclosed by an ASML customer. Analysts said the tools would be used at the Yongin plant and the M15X facility in Cheongju to make HBM and advanced DRAM.

That is only one layer of the spending wave. Reuters also reported in January that SK Hynix decided to invest 19 trillion won, or about $12.9 billion, in an advanced chip-packaging plant in South Korea to meet rising AI-memory demand. Separately, the company had already announced a $3.87 billion advanced-packaging and R&D facility in Indiana for AI products, the first investment of its kind on U.S. soil, with mass production targeted for the second half of 2028. The U.S. Commerce Department later said the project could receive up to $450 million in proposed CHIPS Act incentives, plus possible loans.

Put those numbers together and the logic gets obvious fast. Even record profits do not make this expansion cheap. SK Hynix is trying to fund new fabs, advanced packaging, U.S. localization, and next-generation HBM capacity at the same time. Reuters reported this week that the company eventually wants to hold more than 100 trillion won in net cash, up from 12.7 trillion won at the end of 2025, to better respond to customer demand and stabilize operations.

That is not the language of a company enjoying a casual upcycle. It is the language of a company in an arms race.

The U.S. listing is about valuation as much as funding

Money is only half the story.

SK Hynix CEO Kwak Noh-jung said the U.S. listing effort is part of trying to have the company’s corporate value reassessed in the United States, the world’s largest equity market where the biggest global semiconductor firms are listed. Reuters reported that analysts see the listing as a way to compare SK Hynix more directly with Micron and close a valuation gap, despite SK Hynix’s stronger profitability and technological position in AI memory.

That makes strategic sense. U.S. investors already know how to value Nvidia, Micron, Broadcom, and other AI-linked infrastructure plays. SK Hynix, by contrast, has been one of the most important companies in the AI hardware chain without having the same direct U.S. market access. A U.S. ADR listing could change that by making it easier for global institutions and ordinary U.S. investors to treat SK Hynix as a core AI infrastructure name rather than a distant Korean memory stock. Reuters and Barron’s both highlighted that comparison angle this week.

There is a deeper point here too. When a company already dominating a key AI bottleneck still thinks it is undervalued, it tells you how powerful the U.S. market’s AI premium has become. SK Hynix is not only trying to fund more capacity. It is trying to move closer to where AI narratives get priced most aggressively.

This also says something bigger about the AI boom

The AI boom is getting more expensive in unglamorous places.

Everyone notices the GPUs. Fewer people notice the memory, wafers, packaging tools, power, and supply contracts sitting underneath them. Reuters quoted SK Group Chairman Chey Tae-won saying, “AI actually wants to have a lot of HBM,” and that once you make HBM, “we have to use a lot of wafers.” He added that wafer shortages could last until 2030. That is a revealing line because it shows where the true constraint may be forming: not only in headline chips, but in the costly memory-and-manufacturing layers beneath them.

You can see the same pattern at Micron. Reuters reported last week that Micron lifted its 2026 capital spending plan by $5 billion to more than $25 billion and said spending would rise further in 2027 as it expands manufacturing for AI-driven memory demand. When the top memory makers all start spending at that pace, the story is no longer “AI is profitable.” It becomes “AI requires an enormous amount of expensive hardware capacity just to stay on schedule.”

That is why SK Hynix’s listing plan is such a useful signal. It shows that AI memory has moved from an attractive niche into a capital structure issue. The products are so valuable, and the competitive race is so intense, that even the market leader wants more capital, more U.S. visibility, and more financial flexibility to stay ahead.

Investors should still see the risks clearly

This is not a simple “buy every AI memory name” message.

First, the proposed U.S. listing could dilute existing shareholders if it uses new shares. Reuters reported that investor groups and fund managers in Korea criticized that possibility and argued SK Hynix could use buybacks or existing shares instead. That debate matters because even a great business can create shareholder tension when it asks current owners to fund the next leg of expansion.

Second, memory remains cyclical even when the cycle looks supercharged. Reuters noted that both SK Hynix and Micron are benefiting from a shortage expected to last until 2027, but longer-term stability is not guaranteed once new capacity arrives. The very capex wave that supports the bull case today can become tomorrow’s oversupply problem if AI demand cools or if too much new production lands at once.

Third, geopolitics is getting closer to the trade. Reuters said the listing comes as SK Hynix navigates new U.S. tariffs on certain AI chips and rising scrutiny of foreign investment. It also reported comments from U.S. officials suggesting South Korean and Taiwanese chipmakers that do not expand in America could face harsher tariff treatment. So the U.S. listing is partly a financing move, but it also looks like a positioning move in a world where semiconductor access is becoming more political.

The bottom line

SK Hynix wants a U.S. listing because AI memory has become too important, too expensive, and too capital-intensive to fund casually.

The company is already posting record profits. Its quarterly profit has more than doubled, its output is effectively spoken for, and its market leadership in AI memory is real. Yet it is still ordering nearly $8 billion of EUV tools, investing billions more in Korea and Indiana, and seeking up to $14 billion from a U.S. listing while aiming to build a much larger net-cash buffer. That is what a hardware bottleneck looks like when it turns into a balance-sheet event.

So the most useful way to read this is simple: AI memory is no longer just a profitable chip category. It is becoming a strategic capital race. And when the leader wants Wall Street money to run faster, that tells you the AI buildout is getting more expensive than the market may fully appreciate.

HypeBucks
XP of the Day: When a company with record profits still wants up to $14 billion more, the real message is usually that the industry’s capex race just got bigger than its current cash flow.
Next Move: Take 10 minutes today to list the AI names you follow and label each one as compute, memory, networking, power, or packaging.
