A couple of weeks ago, we commented that the Augmented Reality (AR) theme was heating up, with several major new games (Pikmin, The Witcher…) and devices (Snapchat Spectacles) hitting the market, and that AR glasses could take over from smartphones as the main gaming platform in the near future, as they offer more immersive AR experiences and allow players to stay hands-free.
As a reminder, AR devices overlay computer-generated information, such as images and sounds, onto the user's view of the real world. Aside from gaming, AR is also expected to become a killer app in many industries and daily activities, ranging from training and support in the medical and construction fields to retail shopping, where it should dramatically enhance users' experience and efficiency in performing tasks.
However, some technical challenges must be overcome before we see widespread adoption of AR.
First, accurately superimposing computer-generated images over transparent glasses requires a very precise optical guide (called the combiner). Several techniques are competing here, and the winner will be the one offering the best compromise between visual precision and suitability for mass production at a reasonable cost.
Second, the AR projection system must offer a wide color gamut and high resolution, contrast, and brightness in order to come as close as possible to real-world images.
The leading approach today is waveguide technology, which essentially consists of a set of reflective surfaces carrying images from a tiny emitting display located in the glasses' frame to the user's eyes. This is not without obstacles: making the virtual image look like part of the scenery (depth of field, head tracking, luminosity, and contrast) while keeping it in focus for the eye requires substantial optics and hardware capability.
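To make the optics a little more concrete, the short Python sketch below illustrates the basic geometry behind a waveguide combiner using purely illustrative numbers (the refractive index, thickness, length, and angle are our own assumptions, not the specifications of any actual product): light injected beyond the critical angle is trapped by total internal reflection and zig-zags along the glass from the frame toward the eye.

```python
import math

# Illustrative values only -- not the specifications of any real waveguide.
n_glass = 1.7        # refractive index of a high-index waveguide substrate
n_air = 1.0          # surrounding medium
thickness_mm = 1.0   # waveguide thickness
length_mm = 40.0     # in-coupler (near the frame) to out-coupler (in front of the eye)

# Light stays trapped inside the glass by total internal reflection
# whenever it hits a surface beyond the critical angle.
critical_angle_deg = math.degrees(math.asin(n_air / n_glass))

# A ray propagating at a given angle from the surface normal advances
# thickness * tan(angle) sideways between two successive reflections.
propagation_angle_deg = 60.0  # must exceed the critical angle to stay trapped
advance_per_bounce_mm = thickness_mm * math.tan(math.radians(propagation_angle_deg))
bounces = length_mm / advance_per_bounce_mm

print(f"Critical angle: {critical_angle_deg:.1f} degrees")
print(f"Advance per bounce: {advance_per_bounce_mm:.2f} mm -> ~{bounces:.0f} reflections")
```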
A lot of work also remains to be done on the primary emitting display. Several display types exist, each with its own trade-offs in energy efficiency, luminosity, contrast, and resolution. Standard LCD and even OLED screens are not good enough here: they remain too bulky and not bright enough for AR. Liquid Crystal on Silicon (LCOS) and MicroLED technologies are currently the most promising solutions and are at the heart of many discussions and R&D investments.
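A quick back-of-the-envelope calculation suggests why micro-displays are the focus. Assuming roughly 60 pixels per degree (around the limit of human visual acuity), a hypothetical 40-degree field of view, and a panel about 8 mm wide (all our assumptions, not figures from any manufacturer), the required pixel pitch falls to a few micrometers, far below what conventional smartphone LCD or OLED panels achieve:

```python
# Back-of-the-envelope sketch; all numbers are illustrative assumptions.
ppd_target = 60          # pixels per degree, roughly the limit of human visual acuity
fov_deg = 40             # hypothetical horizontal field of view
panel_width_mm = 8.0     # hypothetical active width of a micro-display

pixels_needed = ppd_target * fov_deg                      # horizontal pixel count
pixel_pitch_um = panel_width_mm * 1000 / pixels_needed    # pitch in micrometers

print(f"Horizontal pixels needed: {pixels_needed}")       # 2400
print(f"Required pixel pitch: {pixel_pitch_um:.1f} um")   # ~3.3 um
# Typical smartphone OLED pitches are in the tens of micrometers, which is
# why LCOS and MicroLED micro-displays attract so much R&D attention.
```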
Then, a long-lasting (and thin enough) battery is also needed in order to enjoy AR without having to look for a power outlet every 30 minutes (which is the battery life of the just-released AR glasses from Snapchat…).
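The arithmetic behind that figure is simple, as the rough sketch below shows; the battery capacity used here is our own assumption for what might fit into a glasses frame, not a published specification.

```python
# Rough power-budget arithmetic; capacity and runtime are illustrative assumptions.
battery_capacity_wh = 0.55   # e.g. a ~150 mAh cell at 3.7 V
runtime_hours = 0.5          # the ~30-minute autonomy mentioned above

average_draw_w = battery_capacity_wh / runtime_hours
print(f"Implied average power draw: {average_draw_w:.1f} W")   # ~1.1 W

# Running a full day (say 8 hours) on the same cell would require the display,
# compute, and radio stack to average only:
budget_mw = battery_capacity_wh / 8 * 1000
print(f"Power budget for 8 h: {budget_mw:.0f} mW")              # ~70 mW
```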
The currently available AR solutions are bulky, hence uncomfortable to wear and, most of the time, not very good-looking. This brings us to the last but crucial criterion that the main components of AR glasses must fulfill: miniaturization. Mass adoption of AR will only occur once manufacturers are able to integrate the electronics (including the batteries) and the optical waveguide into regular (sun)glasses.
As AR glasses have the same disruptive potential as smartphones, it is no wonder that major tech companies like Facebook, Google, and Apple are investing heavily in R&D and racing to meet the combined objectives of image quality, integration, and an affordable selling price. Given the pace of technological advances, many experts believe this equation will be solved in the next 2 to 3 years (Apple's glasses are expected in 2023 at the earliest).
Even though it is still early days, AR glasses could well become a mass-market device in the medium term, with implications for many of the themes and technology providers we follow. Aside from mobile gaming, which is expected to get a boost from new AR experiences, 5G (to be effective, AR needs high-speed communication networks), semiconductors (mini displays, miniaturized chips…), computer vision, and small, efficient batteries should all play a major role in the advent of AR.