The stellar rise of Generative Artificial Intelligence continues to shake up the entire IT industry. At the end of last week, the world’s largest semiconductor foundry, Taiwan Semiconductor (TSMC), and the world’s second-largest memory maker, SK Hynix, announced a new alliance around the development of AI chips.
This news, which has gone almost unnoticed, is very significant, as it aims to address one of the major bottlenecks currently limiting AI training: memory access.
Back in July 2023, we wrote a report about the memory wall, mentioning that one of the paths leading to a better and more efficient computing architecture would be to integrate the memory modules directly into the processing chips and link them together with a high-speed interface. This specific type of memory is called HBM (High Bandwidth Memory) and is exactly the focus of the TSMC-Hynix alliance.
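To see why this wide, directly integrated interface matters, here is a minimal back-of-the-envelope sketch. The bus widths and per-pin rates below are ballpark figures we assume purely for illustration, not official specification numbers; the point is simply that HBM trades a narrow, fast interface for a very wide one, multiplying peak bandwidth.

```python
# Illustrative arithmetic only: assumed ballpark parameters, not spec figures.
def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits * per-pin rate in Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# An HBM stack exposes a very wide (here, 1024-bit) interface at a modest
# per-pin rate, while a conventional graphics memory chip uses a narrow
# (here, 32-bit) interface clocked much faster.
hbm = bandwidth_gb_s(1024, 6.4)   # wide and "slow"
gddr = bandwidth_gb_s(32, 24.0)   # narrow and fast

print(f"HBM stack (assumed):  {hbm:.0f} GB/s")
print(f"GDDR chip (assumed):  {gddr:.0f} GB/s")
```

Under these assumed numbers, a single HBM stack delivers roughly eight times the bandwidth of a conventional chip, which is why training accelerators surround their GPUs with several such stacks.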
TSMC manufactures almost 100% of the world’s AI training chips (for Nvidia, AMD and many startups), while SK Hynix currently dominates the HBM segment (market share above 50%); this alliance will therefore have a deep impact on the two remaining players of the “memory cartel”, Samsung Electronics and Micron.
To fully appreciate the impact and the rationale of this alliance, we must dig a bit into the manufacturing process. First, HBM modules are manufactured in 3D, with memory dies stacked on top of each other (see image above). This “sandwich” configuration saves a lot of footprint on the chip but also comes with many challenges, both mechanical (heat dissipation, packaging) and electrical (insulation and connections, also called bonding). Second, the memory modules must be “glued” to the rest of the chip and then directly interconnected to the GPU/CPU, which requires thousands of microscopic links/wires.
These advanced packaging and hybrid bonding techniques have become crucial to the whole semiconductor industry, which is on the verge of entering the chiplet era (see our previous report). Since TSMC already produces the logic part (GPU/CPU) with a modular approach, it is natural for Hynix to intervene during the manufacturing process by directly integrating its own HBM chiplets inside TSMC’s customers’ chips.
HBM memory is in such high demand that prices have quintupled in just one year. The direct integration of HBM into CPUs and GPUs will continue at an unabated rate, with memory makers, especially SK Hynix (one of our main positions), well positioned to benefit from this rush.
A pick-and-shovel strategy can also be played by selecting companies that offer advanced packaging and hybrid bonding manufacturing and/or testing equipment, such as BE Semiconductor, Towa or ASMPT.
More broadly, these technological evolutions in chip architecture and manufacturing are expected to buoy the AI industry even further by sparking major improvements in the performance, efficiency and production of AI chips.