“HBM is really a new class of memory chip. It’s really specifically for the large kind of AI language models, and they require a lot of bandwidth, which is what HBM provides. So while it’s not the biggest part of our business currently, and it will take a long while before it gets to that level, it’s definitely a very fast-moving part,” says De Backer during a site tour of a Micron facility in Singapore.
US semiconductor firm Micron Technology, like its peers, is angling for a piece of the growing pie driven by soaring demand for AI applications. Specifically, it is ramping up its production of high-bandwidth memory (HBM) chips, a crucial component used with AI market leader Nvidia’s graphics processing units (GPUs).
According to Koen de Backer, corporate vice president of smart manufacturing and AI, Micron’s HBM complements GPUs by enhancing processing power and enabling AI models to scale — a synergy vital for tackling increasingly complex challenges.

