
Bloomberg reported yesterday that Meta and EssilorLuxottica are considering doubling annual production capacity of their Meta Ray-Ban smart glasses to 20 million units by year-end, up from 10 million currently, with the option to scale further to 30 million units if demand warrants.
This move, which points to strong demand for AI-powered wearables, follows Meta's recent decision to delay the international rollout of its premium Ray-Ban Display model to prioritize the U.S. market amid inventory constraints and "unprecedented demand." The Display model retains the core features of earlier Meta Ray-Ban glasses—hands-free photo and video capture, open-ear audio, microphones and an AI assistant—while adding an augmented reality (AR) microdisplay embedded in the right lens that shows navigation, messages and other visual information.
While some may argue that the early success of the $799 Ray-Ban Display reflects limited supply and early-adopter demand, we view this momentum as another strong signal that smart glasses are emerging as a compelling new computing platform. Nearly all major tech players are now investing aggressively in the category: Google has unveiled partnerships with leading eyewear brands and plans to launch Android XR-powered glasses this year, while Apple is reportedly shifting focus from VR headsets toward more mainstream AR glasses, potentially leveraging OpenAI or Google Gemini AI technology. Chinese tech giants are also active, with Alibaba and Xiaomi among those racing to develop smart-glasses offerings.
Ultimately, smart glasses are shaping up to be the next frontier for AI hardware. Their hands-free form factor and all-day access to contextual data make them an ideal interface for AI agents and position them as a powerful catalyst for mass adoption of AI applications.
