Two days ago, AMD unveiled its long-awaited server-class GPU chip called the Instinct MI300X. This chip is positioned to be a direct competitor to the successful H100, Nvidia’s current Generative AI workhorse.
As GenAI models keep growing in size, memory capacity and the bandwidth to access it have become key areas of focus for chip designers. The MI300X clearly follows that trend: 192GB of onboard HBM3, a 50% increase over the previous generation, alongside nearly a doubling of computing power.
AMD presented a set of very flattering benchmarks in which the MI300X leaves the H100 in the dust, tests that were obviously cherry-picked (a common practice in the industry…) to leverage the AMD chip's larger memory. Nvidia is actually already a step ahead with the forthcoming H200 and will release a new architecture, called Blackwell, in 2024. In any case, it will not only be a battle of performance but also of pricing, with AMD having room to price aggressively.
For now, AMD has landed a number of top-class cloud endorsers including Microsoft, Oracle and Meta. Even if this was largely expected by investors, there were nevertheless some positive surprises in the MI300X reveal event. Notably, Meta, one of the largest purchasers of Nvidia H100 GPUs in 2023, sounded upbeat about deploying MI300X chips in its data centers.
And the announcement that OpenAI will support AMD's new generation of GPUs is a significant achievement on the software-ecosystem front, lending extra credibility to the company.
Overall, the imminent release of the MI300X marks a significant change in the landscape of AI hardware, potentially ushering in a new era of competition, as AMD now appears to be a legitimate alternative to Nvidia and its near-monopoly (an estimated 90% share). Other competitive alternatives should follow in 2024 with Intel's Gaudi 3 as well as hyperscalers' in-house solutions (Microsoft's Maia, Google's TPU v5…).
While AMD did not provide financial projections for MI300X shipments, it raised its estimate for the AI chip market size to $400 billion by 2027, up from $150 billion just six months ago! This expectation should be taken with a grain of salt but nevertheless confirms our view that the market will keep growing much faster than expected for the foreseeable future.
Obviously, Nvidia’s market share should gradually decline, but this should be largely offset by much stronger than expected market growth. Meanwhile, AMD has a massive revenue opportunity ahead, given its tiny current share of the AI accelerator market, and could thus enjoy significant revenue/EPS upside in coming quarters. Both stocks are held across several of our portfolios.
As noted above, memory plays a central role in the newest generation of AI chips: the growing speed gap between microprocessors and memory leaves processing units idle until the requested data becomes available for computing, an issue exacerbated by the recent explosion of large language models. Accordingly, HBM (high-bandwidth memory) makers SK Hynix, Samsung Electronics and Micron appear to be attractive collateral plays on the AI chip gold rush.
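The memory bottleneck described above can be illustrated with a back-of-the-envelope "roofline" calculation: attainable throughput is capped either by the chip's peak compute or by memory bandwidth times the arithmetic intensity of the workload. The numbers below are illustrative assumptions, not official specifications of any chip.

```python
# Roofline sketch: is a workload compute-bound or memory-bound?
# All figures are hypothetical, for illustration only.

def attainable_tflops(peak_tflops: float, mem_bw_tbps: float,
                      flops_per_byte: float) -> float:
    """Roofline model: performance is limited by either peak compute
    or by memory bandwidth x arithmetic intensity (FLOPs per byte)."""
    return min(peak_tflops, mem_bw_tbps * flops_per_byte)

# Hypothetical accelerator: 1000 TFLOP/s peak, 5 TB/s HBM bandwidth.
PEAK, BW = 1000.0, 5.0

# LLM token generation is dominated by matrix-vector products: each
# weight byte is read once and reused very little, so arithmetic
# intensity is roughly 1 FLOP/byte -> heavily memory-bound.
decode = attainable_tflops(PEAK, BW, 1.0)

# Large-batch training GEMMs reuse each byte hundreds of times,
# pushing the workload up against the compute ceiling instead.
train = attainable_tflops(PEAK, BW, 300.0)

print(f"decode-like workload: {decode:.0f} TFLOP/s attainable")
print(f"training-like GEMM:   {train:.0f} TFLOP/s attainable")
```

Under these assumed figures, the low-intensity workload reaches only a tiny fraction of peak compute, which is precisely why more HBM capacity and bandwidth, rather than raw FLOPs alone, drive real-world LLM performance.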