
Samsung ships its first HBM4E memory samples — the component that will decide whether the next generation of AI chips ships on time

Samsung has begun shipping early samples of its HBM4E memory to chip-design partners, according to reports from Reuters and South Korean trade publication The Elec. The samples mark a tangible step in a high-stakes race: the next generation of AI accelerators, including Nvidia’s Rubin architecture and AMD’s forthcoming MI-series processors, depends on HBM4-class memory to hit its performance targets. If Samsung can prove its enhanced variant works at scale, it stands to reclaim ground lost to rival SK hynix in the most lucrative segment of the memory market. If it stumbles, AI chip schedules across the industry could slip.

The sampling milestone arrives roughly a year after the global semiconductor standards body JEDEC published the specification that makes it all possible.

The standard that unlocked HBM4

On April 16, 2025, JEDEC published JESD270-4, the formal standard for HBM4 memory. The document defines the electrical interface, signaling protocols, and packaging rules that every HBM4 producer must follow. Its most significant change from HBM3E: the per-stack data interface doubles from 1,024 bits to 2,048 bits, a leap that roughly doubles the theoretical bandwidth available to a processor in a single memory stack.
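As a rough sketch of why the wider interface matters, per-stack bandwidth scales linearly with interface width at a fixed per-pin data rate. The 8 Gbit/s pin speed below is an illustrative assumption, not a figure from the JEDEC document:

```python
# Back-of-the-envelope HBM bandwidth: interface width (bits) x per-pin data
# rate (Gbit/s), divided by 8 to convert bits to bytes.
# The 8 Gbit/s pin speed is an assumption for illustration only.

def stack_bandwidth_gbs(interface_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in GB/s for a given width and pin speed."""
    return interface_bits * pin_rate_gbps / 8  # bits -> bytes

PIN_RATE = 8.0  # Gbit/s per pin, assumed
print(f"1,024-bit stack: {stack_bandwidth_gbs(1024, PIN_RATE):,.0f} GB/s")
print(f"2,048-bit stack: {stack_bandwidth_gbs(2048, PIN_RATE):,.0f} GB/s")
```

At the same pin speed, doubling the width from 1,024 to 2,048 bits doubles peak bandwidth, which is the arithmetic behind the claim above.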

That doubling matters because large-language-model training and inference are bottlenecked less by raw compute than by how fast data can move between memory and processing cores. Wider interfaces mean more data per clock cycle, which translates directly into faster model training and lower latency during inference.
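A back-of-the-envelope calculation makes the bottleneck concrete. During autoregressive inference, each generated token must read roughly every model weight from memory once, so token latency has a floor set by bandwidth alone. The model size and bandwidth figures below are illustrative assumptions:

```python
# Rough sketch: token latency floor = bytes of weights read / memory bandwidth.
# A 70B-parameter model at 2 bytes/parameter and 1 TB/s of aggregate bandwidth
# are assumed figures for illustration, not measurements of any real system.

WEIGHT_BYTES = 70e9 * 2          # 70B parameters at fp16 (2 bytes each), assumed
BANDWIDTH_BYTES_PER_S = 1.0e12   # 1 TB/s aggregate memory bandwidth, assumed

min_token_latency = WEIGHT_BYTES / BANDWIDTH_BYTES_PER_S  # seconds per token
print(f"Bandwidth-bound floor: {min_token_latency * 1e3:.0f} ms/token "
      f"(~{1 / min_token_latency:.0f} tokens/s)")
# Doubling bandwidth halves this floor, regardless of compute throughput.
```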

With the standard now locked, chip architects at Nvidia, AMD, Google, Amazon, and others have a stable target. They can finalize physical layouts, packaging strategies, and power-delivery networks without worrying that the memory spec will shift underneath them. For memory makers, the published standard is a green light to move from prototype silicon to production-grade manufacturing.

HBM4 vs. HBM4E: what Samsung is actually sampling

A point of confusion in much of the coverage is the difference between HBM4 and HBM4E. HBM4 is the base specification codified in JESD270-4. HBM4E is an enhanced variant that Samsung and its competitors are developing in parallel, pushing bandwidth, capacity, and power efficiency beyond the baseline. Think of HBM4 as the floor and HBM4E as the ceiling for this generation.

Samsung has not issued a formal press release confirming exact shipping dates, sample volumes, or which partners are receiving HBM4E units. The company has, however, signaled its ambitions publicly. During its Q1 2025 earnings call, Samsung flagged aggressive investment in next-generation HBM as a strategic priority, and CEO-level comments have emphasized closing the gap with SK hynix in AI memory qualification.

As of June 2026, no chip designer has publicly confirmed integrating Samsung’s HBM4E into a shipping product. That gap between sample delivery and confirmed design win is normal in the semiconductor industry, but it means the competitive picture is still developing.

The competitive landscape: SK hynix leads, Micron expands

Samsung’s urgency is driven by its position relative to SK hynix, which has held the lead in HBM qualification with Nvidia for two consecutive generations. SK hynix supplied the bulk of HBM3E used in Nvidia’s H200 and B200 accelerators, and the company has publicly stated it is developing its own HBM4 products on a parallel timeline.

Micron, the third major HBM producer, has taken a different approach. Rather than racing to sample first, Micron has focused on expanding raw production capacity at its facilities in Hiroshima, Japan, and Boise, Idaho. In its most recent earnings disclosure, Micron reported that HBM revenue more than doubled year over year and that its HBM order book was sold out through 2025. The company has confirmed HBM4 development but has shared fewer specifics about its “E” variant timeline.

For AI chip designers, a three-supplier HBM4 market is the ideal outcome. Multiple qualified sources reduce the risk of a single-vendor bottleneck and give buyers leverage on pricing. Whether all three vendors can meet the yield, thermal, and power-efficiency requirements for qualification remains an open question that will play out over the coming quarters.

Which AI chips are waiting on HBM4?

The most prominent program tied to HBM4 is Nvidia’s Rubin platform, the successor to the Blackwell architecture that currently powers the company’s flagship data-center GPUs. Nvidia CEO Jensen Huang has outlined a cadence of annual architecture refreshes, and industry analysts widely expect Rubin-based products to be among the first to ship with HBM4 memory. A higher-end “Rubin Ultra” variant is expected to pair with HBM4E for maximum bandwidth.

AMD’s MI-series accelerators are also in the frame. AMD has not disclosed specific HBM4 integration plans, but the company’s roadmap calls for continued memory-bandwidth improvements in each generation of its Instinct data-center GPUs. Custom silicon programs at Google (TPU), Amazon (Trainium), and Microsoft (Maia) are likewise potential consumers of HBM4-class memory, though these companies rarely disclose component-level sourcing details.

The common thread is that every major AI accelerator roadmap assumes a step-function improvement in memory bandwidth sometime in 2026 or 2027. HBM4 and HBM4E are the components that deliver that improvement. Delays in memory qualification ripple directly into chip launch schedules, server availability, and ultimately the pace at which cloud providers can deploy next-generation AI infrastructure.

Execution risk is the real story

Standards milestones and early samples are necessary steps, but they are not sufficient. The semiconductor industry is littered with examples of promising memory technologies that hit walls during volume ramp. Yield rates on advanced HBM stacks, which involve bonding 8 or 12 individual memory dies with through-silicon vias (TSVs), are notoriously difficult to maintain as layer counts increase. A single defective die in a stack can render the entire unit unusable.
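The arithmetic behind that fragility is unforgiving: if one bad die scraps the stack, yield compounds multiplicatively with layer count. The 99% per-die yield below is an assumed illustration; real figures are closely guarded by memory makers:

```python
# If a single defective die scraps the whole stack, stack yield is the per-die
# yield raised to the layer count (ignoring bonding and TSV defects, which
# would push real yields lower still).

def stack_yield(per_die_yield: float, layers: int) -> float:
    """Probability that every die in the stack is good."""
    return per_die_yield ** layers

for layers in (8, 12):  # the stack heights named above
    print(f"{layers}-high stack at 99% per-die yield: "
          f"{stack_yield(0.99, layers):.1%}")
```

Even at an optimistic 99% per die, a 12-high stack yields under 89%, and every additional layer compounds the loss.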

Thermal performance is another hurdle. HBM4’s wider interface pushes more data through a physically compact package, generating more heat in a space that is already thermally constrained. Memory stacks sit directly adjacent to the processor die in modern AI accelerators, and any thermal issue in the memory can degrade the performance or reliability of the entire package.
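The heat problem follows directly from the bandwidth gain: interface power scales roughly with traffic times energy per bit moved. Both figures in the sketch below are assumptions for illustration; real values depend on process, signaling, and stack design:

```python
# Approximate memory I/O power: bandwidth (bytes/s) x 8 bits x energy per bit.
# The 5 pJ/bit figure is an assumed illustration, not a vendor specification.

def interface_power_watts(bandwidth_bytes_per_s: float, pj_per_bit: float) -> float:
    """Approximate I/O power for a memory interface at a given traffic level."""
    return bandwidth_bytes_per_s * 8 * pj_per_bit * 1e-12

for tb_per_s in (1.0, 2.0):  # doubling traffic at fixed pJ/bit doubles heat
    watts = interface_power_watts(tb_per_s * 1e12, pj_per_bit=5.0)
    print(f"{tb_per_s:.0f} TB/s at 5 pJ/bit: ~{watts:.0f} W of I/O power")
```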

Then there are the external pressures. U.S. export controls on advanced semiconductor technology to China continue to evolve, and South Korean memory makers must navigate those restrictions while maintaining global supply commitments. Capacity for advanced packaging, particularly at outsourced assembly and test providers like ASE and Amkor, is another potential chokepoint that could slow HBM4 volume production regardless of how well the silicon itself performs.

Samsung’s HBM4E samples represent a credible signal that the company is executing on its roadmap. The JEDEC standard provides the technical foundation the industry needs. But the distance between working samples and qualified, volume-shipping memory remains significant. For the AI chip programs counting on HBM4 to hit their performance marks, the next 12 months of yield data, thermal testing, and qualification results will matter far more than any single sampling announcement.


This article was researched with the help of AI, with human editors creating the final content.