Yes, cooling will be very difficult.
The base die does not touch the thermal interface layer, yet it generates far more heat than the HBM dies.
"Power supply and heat dissipation are also major issues. Since GPU compute cores consume high power and generate significant heat, thermal control could become a bottleneck."

November 26, 2025
Boundaries Between GPU and HBM Collapse… Next-Gen HBM to Embed GPU Cores
A method of mounting Graphics Processing Units (GPUs) onto next-generation High Bandwidth Memory (HBM) is being pursued. This is a new technology being attempted by global big tech companies to improve Artificial Intelligence (AI) performance. It signals that, as memory and system semiconductors converge, the boundaries between semiconductor companies are breaking down.
According to comprehensive reporting on the 26th, Meta and NVIDIA are reviewing plans to mount GPU cores on HBM. Specifically, this involves embedding GPU cores into the base die located at the bottom of the HBM stack, and they are currently exploring cooperation with SK Hynix and Samsung Electronics.
Multiple industry insiders familiar with the matter stated, "Next-generation 'custom HBM' architectures are being discussed, and among them, a structure that directly integrates GPU cores into the HBM base die is being pursued."
HBM is a high-performance memory created by stacking multiple DRAM chips. It was designed for AI applications that need to process massive amounts of data.
Currently, the base die sits at the very bottom of the HBM stack and handles communication between the memory and the outside world. A step beyond this is the inclusion of a 'controller', as implemented in HBM4: the industry is attempting to boost performance and efficiency by adding a semiconductor capable of controlling the memory. HBM4 is scheduled for full-scale mass production starting next year.
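The layered structure described above can be sketched as a small data model. This is purely illustrative: the class names, the 8-high die count, and the base-die flags are assumptions for exposition, not vendor specifications.

```python
# Toy model of an HBM stack: DRAM dies stacked on top of a base die.
# All names and numbers are illustrative, not taken from any product spec.
from dataclasses import dataclass, field


@dataclass
class BaseDie:
    """Bottom layer: routes traffic between the stack and the host."""
    has_controller: bool = False  # HBM4-style embedded memory controller
    gpu_cores: int = 0            # the proposed next step: embedded compute


@dataclass
class HBMStack:
    dram_dies: int = 8            # e.g. an 8-high stack of DRAM chips
    base: BaseDie = field(default_factory=BaseDie)

    def describe(self) -> str:
        extras = []
        if self.base.has_controller:
            extras.append("controller")
        if self.base.gpu_cores:
            extras.append(f"{self.base.gpu_cores} GPU cores")
        suffix = f" (base die: {', '.join(extras)})" if extras else ""
        return f"{self.dram_dies}-high HBM stack{suffix}"


print(HBMStack(base=BaseDie(has_controller=True, gpu_cores=4)).describe())
```

In this picture, plain HBM has an empty base die, HBM4 adds the controller flag, and the architecture under discussion would additionally set a nonzero core count.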
Embedding GPU cores is seen as a technology several steps beyond the HBM4 controller. In GPUs and CPUs, a core is the basic unit capable of independent computation. For example, a 4-core GPU has four cores that can compute in parallel; the more cores, the higher the potential compute throughput.
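As a rough illustration (not from the article), throughput scales linearly with core count only in the ideal case, while Amdahl's law caps the real speedup when part of the workload is serial. The core counts and per-core rates below are made-up numbers.

```python
# Illustrative sketch only; all figures are hypothetical.

def ideal_throughput(cores: int, ops_per_core: float) -> float:
    """Total throughput if every core contributes fully (linear scaling)."""
    return cores * ops_per_core


def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Speedup when only a fraction of the workload parallelizes (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)


print(ideal_throughput(4, 2.0))           # 4 cores at 2.0 GFLOPS each -> 8.0
print(round(amdahl_speedup(4, 0.95), 2))  # ~3.48x if 95% of the work parallelizes
```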
Putting cores into HBM is an attempt to distribute computational work that was concentrated in the GPU out to the memory, reducing data movement and lightening the load on the host GPU.
An industry official explained, "In AI computation, an important factor is not just speed but also energy efficiency," adding, "Reducing the physical distance between memory and computational units can reduce both data transfer latency and power consumption."
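The official's point can be made concrete with a back-of-envelope energy model. The picojoule-per-bit figures below are rough, commonly cited orders of magnitude for data movement, not measurements of any specific product; only the ratio matters here.

```python
# Back-of-envelope: why shorter memory-to-compute distance saves energy.
# The pJ/bit constants are illustrative orders of magnitude, not measured values.
PJ_PER_BIT_OFFCHIP = 10.0  # moving a bit over a long off-chip link
PJ_PER_BIT_STACKED = 1.0   # moving a bit over a short in-stack path


def transfer_energy_joules(num_bytes: int, pj_per_bit: float) -> float:
    """Energy to move num_bytes at the given picojoules-per-bit cost."""
    return num_bytes * 8 * pj_per_bit * 1e-12


gib = 1 << 30  # one gibibyte
far = transfer_energy_joules(gib, PJ_PER_BIT_OFFCHIP)
near = transfer_energy_joules(gib, PJ_PER_BIT_STACKED)
print(f"off-chip: {far:.3f} J, in-stack: {near:.3f} J, saving: {far / near:.0f}x")
```

Under these assumed constants, keeping compute next to the memory cuts transfer energy by the ratio of the two per-bit costs (10x here), and it removes the corresponding link latency as well.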
However, technical challenges remain. Due to the characteristics of the Through-Silicon Via (TSV) process, the space available to contain GPU cores in the HBM base die is very limited. Power supply and heat dissipation are also major issues. Since GPU computational cores consume high power and generate significant heat, thermal control could become a bottleneck.
This development could be both an opportunity and a crisis for the domestic semiconductor industry. If domestic companies possess the foundry or packaging capability to implement CPU or GPU logic, it becomes an opportunity to advance HBM further and continue leading the AI semiconductor market. If that capability is lacking, however, there is a concern they could become subordinate to the system semiconductor industry.
Kim Joung-ho, a professor in the School of Electrical Engineering at KAIST, said, "The speed of technological transition where the boundary between memory and system semiconductors collapses for AI advancement will accelerate," and added, "Domestic companies must expand their ecosystem beyond memory into the logic sector to preempt the next-generation HBM market."

@_ueaj How do you design the motherboard?