The rise of the Chinese AI assistant DeepSeek to the top of the iPhone download charts has sent shockwaves through global markets—and, more specifically, the entire AI value chain. The app’s AI model is widely regarded as highly competitive with OpenAI’s ChatGPT and other leading models, all while being significantly less expensive to train and operate.
Reports claiming that DeepSeek needed only a few million dollars to train its model are highly questionable, as we lack a clear picture of the chip infrastructure used for training, as well as the full costs of research and algorithmic experimentation. Notably, it’s unclear whether the Chinese startup had access to Nvidia’s high-end GPUs.
That said, the model does appear to deliver superior efficiency in AI inference, with a cost per token 85-95% lower than that of the various ChatGPT models. The emergence of a superior model requiring fewer resources represents the very essence of technology in our view: better and cheaper. We have always thought that a shock could come from the “model” side rather than from the hardware or software side (as everything is quasi-open source). In DeepSeek’s case, it is the post-training phase that differs, enabling excellent results while limiting the use of costly, human-performed Supervised Fine-Tuning (SFT), among many other techniques such as distillation and Mixture-of-Experts.
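As a back-of-the-envelope illustration of the inference-cost gap cited above, the sketch below applies the 85-95% reduction to a baseline price per million tokens. The $10 baseline is a hypothetical placeholder for illustration only, not a quoted rate for any model:

```python
# Back-of-the-envelope only: the baseline price is a hypothetical
# placeholder; the 85-95% reduction is the range cited in the text.
def discounted_cost_range(baseline_per_m_tokens: float) -> tuple[float, float]:
    """Cost per million tokens at 85-95% below the given baseline."""
    return (baseline_per_m_tokens * 0.05, baseline_per_m_tokens * 0.15)

# Assume a hypothetical $10 per million tokens for the incumbent model.
low, high = discounted_cost_range(10.0)
print(f"${low:.2f} - ${high:.2f} per million tokens")
```

The point of the arithmetic is simply that an order-of-magnitude drop in unit cost changes which use cases are economical, which underpins the adoption argument that follows.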
The good news here is that DeepSeek has respected the “academic codes” and published all these methods as open source/open research (which, unfortunately, is no longer the case for OpenAI). As a result, everyone will be able to replicate them and improve them even further.
Consequently, we think that US AI giants will look to integrate some of DeepSeek’s techniques into their own models, leading to greater efficiency. While the initial reaction might be to fear a slowdown in AI capex spending, we believe the DeepSeek breakthrough actually comes with a couple of major positives.
First, reduced costs for AI training and inference are likely to drive higher and faster AI adoption, and we would notably expect enterprise users to leverage AI to develop new apps/use cases and/or improve their processes. These new use cases would require massive computing in the near term.
In any case, whatever the method of learning and inference, AI is and will remain a story of data and computing power. Even if models become more efficient, computing will remain key to model performance.
Second, reduced costs for AI training and inference could lower barriers to entry in AI and reshape a competitive landscape that has been dominated by Tech giants or a handful of startups backed by them. While this could put the largest developers of AI models (OpenAI, Anthropic, xAI…) at risk, it could also open the way for a new generation of AI startups that will bring their own spending. Notably, the expansion of Chinese models will need to be monitored given their recent performance and price competitiveness.
Against this backdrop of increased competition, we clearly don’t see Tech giants slowing down on their capex intentions: we are still in the early stages of AI, suggesting the race is far from over, and AI is critical to various parts of their business. Accordingly, we believe that any model efficiency improvement will push Tech giants to keep exploiting their technological and balance-sheet advantages and to refocus their fundamental research on these new methodologies. Interestingly, these companies will report their earnings in the coming days and will probably share their views on the DeepSeek breakthrough and its implications.
Provided these management comments prove reassuring, today’s correction, which is reminiscent of last summer’s “AI monetization panic”, could be an opportunity to increase exposure to the AI theme. On the hardware side, we favor optics and SiPho (Silicon Photonics), the companies that will make it possible to move terabytes of data quickly and at lower energy levels. Software could also regain some momentum should the faster-AI-adoption scenario unfold.