Over the last few months, some cloud software vendors have faced an unexpected and counter-intuitive risk: the rising efficiency of cloud computing platforms such as Amazon’s AWS, driven by improved chip performance.
Cloud-based data storage company Snowflake notably came under the spotlight. It uses the AWS cloud to store and process its customers’ data, and those customers pay for the service on a usage basis (usage-based consumption accounts for 93% of Snowflake’s revenue). When AWS’ processing efficiency made a leap thanks to a new generation of chips, Snowflake’s customers were able to run their data analyses and queries much faster and cut their computing time. Snowflake decided to pass the savings on to customers, resulting in lower revenue for the company.
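To make the mechanics concrete, here is a minimal back-of-the-envelope sketch of usage-based billing on faster chips. The credit prices, volumes and speedup below are purely illustrative assumptions, not Snowflake’s actual figures.

```python
# Minimal illustration of the usage-based billing mechanic described above.
# All numbers are assumptions chosen for clarity, not Snowflake's actual figures.

credits_per_query = 10     # compute credits a typical workload consumed on the older chips (assumed)
price_per_credit = 3.0     # dollars billed per credit (assumed)
queries_per_month = 1_000  # monthly workload volume (assumed)

old_bill = credits_per_query * price_per_credit * queries_per_month

# Newer, faster chips finish the same workload with fewer billable credits.
speedup = 1.25             # assumed 25% faster processing on the new chip generation
new_bill = (credits_per_query / speedup) * price_per_credit * queries_per_month

print(f"Customer's monthly bill before: ${old_bill:,.0f}")
print(f"Customer's monthly bill after:  ${new_bill:,.0f} "
      f"({1 - new_bill / old_bill:.0%} less usage-based revenue for the vendor)")
```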
Even if Snowflake is an isolated case for now, this relatively new headwind for software vendors must be closely followed, as many companies rely on cloud platforms. Among them are database vendor MongoDB, CrowdStrike in cybersecurity, and Datadog and Teradata in data analytics.
That said, the impact, which has been material for Snowflake (-5% to -8% on revenue), could be much lower for other SaaS companies, depending on how quickly their software is adapted to the new AWS chips and on their pricing/commercial strategy: retaining the AWS cost savings to boost margins, or passing the savings on to customers to stimulate demand.
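The trade-off between these two strategies can be sketched with equally illustrative numbers; the revenue base, cloud-cost share, exposure and efficiency gain below are assumptions, not any vendor’s reported figures.

```python
# Illustrative comparison of the two commercial responses mentioned above.
# Every figure is an assumption, not any vendor's actual number.

revenue = 100.0          # current usage-based revenue (indexed, assumed)
cloud_cost = 40.0        # share of that revenue spent on AWS compute (assumed)
efficiency_gain = 0.20   # assumed 20% less compute needed on the new chips
revenue_exposure = 0.35  # assumed share of revenue sensitive to compute-time pricing

# Strategy A: retain the savings -> revenue unchanged, cloud bill falls, margin expands.
cost_a = cloud_cost * (1 - efficiency_gain)
margin_a = (revenue - cost_a) / revenue

# Strategy B: pass the savings on -> the exposed part of revenue shrinks with the gain.
revenue_b = revenue * (1 - efficiency_gain * revenue_exposure)
cost_b = cloud_cost * (1 - efficiency_gain)
margin_b = (revenue_b - cost_b) / revenue_b

print(f"Retain savings:  revenue {revenue:.1f}, gross margin {margin_a:.1%}")
print(f"Pass savings on: revenue {revenue_b:.1f}, gross margin {margin_b:.1%}")
```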
Another major implication of this development lies in the main driver of this significantly improved computing efficiency: Amazon’s proprietary microprocessor (CPU), called Graviton. In 2015, the e-commerce giant acquired Annapurna Labs, a chip design company, to pursue its goal of delivering more efficient (cloud) computing power to AWS customers.
Back then, the only server-class CPUs available were sold by Intel and, to a lesser extent, AMD. These chips were designed to maximize raw computing power, with little regard for their electricity consumption. For cloud computing companies, which operate dozens of data centers around the world (each running several hundred thousand CPUs), the energy bill is one of the largest operating costs. Designing a CPU with a better performance-per-watt ratio than Intel’s and/or AMD’s offerings was therefore a no-brainer. Designing a “cloud-native” CPU optimized for the cloud’s very specific tasks was also a top priority.
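A rough, purely illustrative calculation shows why even a moderate performance-per-watt advantage matters at this scale; the fleet size, power draw, efficiency gain and electricity price below are assumptions.

```python
# Back-of-the-envelope sketch of performance per watt at data-center scale.
# Every figure below is an assumption for illustration only.

servers = 300_000          # CPUs in a hypothetical hyperscale fleet (assumed)
watts_per_cpu = 200        # assumed average draw of a general-purpose server CPU
perf_per_watt_gain = 1.4   # assumed 40% better performance per watt for the cloud-native chip
electricity_price = 0.08   # dollars per kWh (assumed)
hours_per_year = 24 * 365

def yearly_energy_cost(watts_each: float) -> float:
    """Annual electricity cost for the whole fleet at a constant power draw."""
    return servers * watts_each / 1000 * hours_per_year * electricity_price

# Delivering the same throughput with a chip that is 1.4x more efficient
# requires roughly watts / 1.4 of power.
cost_incumbent = yearly_energy_cost(watts_per_cpu)
cost_cloud_native = yearly_energy_cost(watts_per_cpu / perf_per_watt_gain)

print(f"Annual energy bill, incumbent CPUs:    ${cost_incumbent / 1e6:,.0f}M")
print(f"Annual energy bill, cloud-native CPUs: ${cost_cloud_native / 1e6:,.0f}M")
print(f"Savings: ${(cost_incumbent - cost_cloud_native) / 1e6:,.0f}M per year")
```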
Amazon’s first Graviton processor went into production in 2018, but this first iteration did not deliver a technological and economic gap large enough to be rolled out massively. The breakthrough came two years later with Graviton’s second generation, which offered AWS’ clients an immediate and considerable improvement in both costs and processing speed.
The ARM CPU architecture, upon which the Graviton chip relies, made this technological leap possible, as it consumes less energy than Intel’s and AMD’s designs for the same computing power. For example, Ampere’s upcoming ARM-based Altra chip is expected to cut computing costs by a factor of 2 to 4. ARM’s unmatched power efficiency is also the reason why mobile devices, from smartphones to Internet-of-Things gadgets, all run on ARM-based chips.
ARM’s penetration in data centers is currently quite low, as this segment is largely dominated by Intel, AMD and Nvidia. Nevertheless, ARM’s share should increase over the years, as all the hyperscalers (AWS, Microsoft Azure, Google Cloud…) clearly see the advantages of developing a proprietary CPU solution to optimize their cloud offerings.
Does this tectonic shift in computing architecture mean trouble ahead for the Intel-AMD-Nvidia oligopoly? To us, it is clear that the trio’s data-center hegemony will erode somewhat. But the position of these three unavoidable players will remain dominant for several reasons: 1) general-purpose CPUs will always be needed, as ARM server chips are optimized for very specific cloud computing tasks, and Intel’s and AMD’s next server CPU generations will also incorporate cloud-native optimizations; 2) no ARM chip will, in the foreseeable future, deliver the computing power needed to train large-scale AI models, a field currently dominated by Nvidia (which, by the way, tried to acquire ARM in 2020…); and 3) many tech companies are embracing open-source alternatives to ARM’s designs, such as OpenRISC and RISC-V (heavily backed by Intel), which carry no licenses or royalties.
As a concluding remark, it is worth noting that ARM Holdings is expected to be listed again this year (after having been taken private in 2016), a financial operation that will be closely followed by the whole semiconductor industry: chatter about a consortium of companies (Intel, Samsung, Qualcomm…) willing to take a significant stake in ARM, or even to acquire it outright, has been circulating for several quarters.
After having reached full domination of the mobile segment, ARM’s “financial” comeback will also mark the company’s entry into the cloud computing space, a venture that could well succeed this time (after several failed attempts over the past decade) given the backing of many tech heavyweights.