Augmented reality has been promising to hit the mainstream for years, but as soon as an app or concept launches, attention fades, only to be inflated again by the next big thing. Pokémon Go may still be the best mass-market example, yet its AR is more eye candy than experience-defining. Google’s Tango is unquestionably powerful, but with just two devices available, it has a long way to go.
The recent Augmented World Expo (AWE) in Santa Clara, California, saw attendance double from 2016 to more than 5,000 people, and it underlined the potential of the full spectrum from AR to VR (what I’ll refer to here as extended reality, or XR). But while the show revealed countless use cases and reassuring signs of progress, it also illustrated how much still needs to be achieved before XR reaches scale beyond industry verticals.
Apple’s ARKit: A much-needed kick in the pants
Apple’s announcement of ARKit is exactly what AR badly needs. With a sizable addressable market consisting of iPhones and iPads with A9 or A10 chips (iPhone 6S and later), it offers developers immediate scale and a clear incentive to invest.
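Part of that incentive is how little code it takes to get started. The sketch below, assuming a standard SceneKit-backed setup and using the class names from Apple’s shipping ARKit framework, shows the minimal world-tracking session a developer would spin up; it is an illustration rather than a complete app.

```swift
import UIKit
import ARKit

// Minimal ARKit world-tracking setup: an ARSCNView renders SceneKit content
// over the live camera feed while ARKit tracks the device's motion in 6DoF.
class ARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking requires an A9 or A10 device.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}
```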
However, it’s just one step in a journey. The technology has clear scope to evolve into form factors such as a heads-up display and ultimately a head-worn device. ODG is arguably the company that is closest to delivering this vision today, but like Google Glass, it faces an enormous hurdle of consumer acceptance.
Nonetheless, this is where the real potential lies. AR and VR are largely considered to be two distinct use cases, but CCS Insight believes they will ultimately merge. In this scenario, a single head-worn device would seamlessly switch between an opaque screen for VR and a transparent one for AR applications. It could become a converged solution that complements and potentially even replaces the smartphone, depending on the context.
This is a grand vision for the next decade, but there are considerable technical challenges to overcome. Qualcomm’s Tim Leland highlighted some of these during an AWE keynote, with a call for wider industry collaboration.
Display technology and software need to advance significantly, for one thing. The field of view must be widened, with at least 190 degrees supported horizontally and 130 degrees vertically. The fatigue caused by the disparity between the surface of the screen and the focal point of objects in VR also needs solving; a potential solution is the simultaneous transmission of images at multiple focal planes to the user’s eyes.
Perhaps most challenging is enabling displays that can switch between largely transparent operation for AR and opaque operation for VR. Brightness and refresh rate will also need to improve by orders of magnitude. Any discussion of 4K displays on mobile devices may seem like overkill today, but place that screen centimeters from your eyes and the limitations of anything less become apparent.
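Some back-of-the-envelope arithmetic, using my own illustrative figures rather than anything from the keynote, shows why: spread a 4K panel across the kind of field of view described above and the density per degree falls far short of what the eye can resolve.

```swift
// Back-of-the-envelope angular resolution check (illustrative figures only).
// Human visual acuity is commonly put at roughly 60 pixels per degree.
let horizontalPixels = 3840.0        // 4K UHD panel width
let horizontalFOVDegrees = 190.0     // target horizontal field of view
let acuityPixelsPerDegree = 60.0

let pixelsPerDegree = horizontalPixels / horizontalFOVDegrees
let pixelsNeeded = acuityPixelsPerDegree * horizontalFOVDegrees

print("Pixels per degree at 4K: \(pixelsPerDegree)")          // ≈ 20, well short of 60
print("Horizontal pixels needed to match acuity: \(Int(pixelsNeeded))")  // 11,400
```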
The challenge of realism
Realistic application of light, shade and reflection is what makes virtual objects look natural rather than fake. This is a tough problem to solve, as the final color of every pixel must be defined and then updated in real time based on movement and light sources. It will require cross-industry collaboration to create new APIs governing the interaction between cameras, sensors and graphics rendering engines.
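To make the scale of that per-pixel work concrete, even the simplest diffuse shading model has to be re-evaluated for every pixel of every frame as the user and the light sources move. The toy Lambertian example below is my own illustration under those assumptions, not a production renderer or any API discussed at the show.

```swift
import simd

// Toy diffuse (Lambertian) shading: a pixel's colour depends on the surface
// normal, the light direction and the light colour, all of which change as
// the user or the light source moves.
func shadePixel(albedo: SIMD3<Float>,
                surfaceNormal: SIMD3<Float>,
                lightDirection: SIMD3<Float>,
                lightColor: SIMD3<Float>) -> SIMD3<Float> {
    let n = simd_normalize(surfaceNormal)
    let l = simd_normalize(lightDirection)
    let diffuse = max(simd_dot(n, l), 0)   // surfaces facing away from the light go dark
    return albedo * lightColor * diffuse
}

// This runs for every pixel of every frame; at 4K and 120 fps that is roughly
// 3840 * 2160 * 120 ≈ 1 billion shading evaluations per second.
let color = shadePixel(albedo: SIMD3<Float>(0.8, 0.2, 0.2),
                       surfaceNormal: SIMD3<Float>(0, 1, 0),
                       lightDirection: SIMD3<Float>(0.3, 1, 0.2),
                       lightColor: SIMD3<Float>(1, 1, 1))
```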
Motion tracking is perhaps the area seeing the most advancement today, with six degrees of freedom and inside-out tracking that removes the need for external sensors. What is critical here is sub-10-millisecond motion-to-photon latency, that is, the time between a head movement and the virtual scene updating to account for it; latency beyond that threshold is the biggest cause of nausea. Eye tracking will also be critical for foveated rendering, depth of field and increased accuracy in targeting and interaction.
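A quick illustration of why the 10-millisecond figure matters, again using my own numbers rather than Qualcomm’s: during a moderately fast head turn, every millisecond of delay translates directly into angular error between where the world is and where it is drawn.

```swift
// Why sub-10 ms motion-to-photon latency matters (illustrative figures).
// A casual head turn is on the order of 100 degrees per second.
let headTurnDegreesPerSecond = 100.0
let latenciesMs = [5.0, 10.0, 20.0, 50.0]

for latency in latenciesMs {
    // Angular error: how far the rendered scene lags behind the real head pose.
    let errorDegrees = headTurnDegreesPerSecond * latency / 1000.0
    print("\(Int(latency)) ms latency -> \(errorDegrees) degrees of lag")
}
// At 10 ms the scene already lags by a full degree, roughly twice the
// apparent diameter of the full moon.
```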
Connectivity—and cooperation—are key
The vision of extended reality can only be realized with the high throughput and low latency promised by 5G. It is hard to imagine today, but exciting developments are coming, such as interactive 8K HDR video streamed at 120 frames per second directly to extended reality headsets. This could transform the way we watch sports, movies and other video content, but it puts considerable strain on the network. For those focused on delivering extended reality, the question is not “what are the use cases for 5G?” but “will the networks be ready, with sufficient capacity and a consistent experience even at the cell edge?”
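To give a sense of that strain, here is a rough estimate under my own assumptions about bit depth and codec efficiency; the exact figures will vary, but the order of magnitude is the point.

```swift
// Rough bandwidth estimate for 8K HDR at 120 fps (illustrative assumptions:
// 10-bit colour, 4:2:0 chroma subsampling, ~200:1 video compression).
let width = 7680.0, height = 4320.0
let framesPerSecond = 120.0
let bitsPerPixel = 15.0            // 10-bit 4:2:0 ≈ 15 bits per pixel uncompressed
let compressionRatio = 200.0       // assumed codec efficiency

let rawBitsPerSecond = width * height * bitsPerPixel * framesPerSecond
let compressedBitsPerSecond = rawBitsPerSecond / compressionRatio

print("Raw:        \(rawBitsPerSecond / 1e9) Gbit/s")        // ≈ 59.7 Gbit/s
print("Compressed: \(compressedBitsPerSecond / 1e6) Mbit/s") // ≈ 300 Mbit/s
```

Even with aggressive compression, a single stream of this kind sits in the hundreds of megabits per second, which is exactly the sort of sustained cell-edge throughput that 5G will need to deliver consistently.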
All this must be delivered in a package with high power and thermal efficiency. Microprocessors based on architectures conceived for much less stringent PC power constraints are too power-hungry for this market. Even the level of heat generated by smartphones today won’t be acceptable in a future glasses-based form factor worn on a person’s head. Battery capacity will also have to improve considerably to be acceptable in a head-worn device.
This laundry list of requirements for extended reality is by no means exhaustive. Products such as the ODG R8 show how far the industry has come, but they are only the first step; many more technology leaps will be required over the next decade and beyond to deliver extended reality devices and experiences that could eventually create a multibillion-dollar market.
The last 10 years of mobile proved the value of partnership and ecosystem. The next 10 years will require an unprecedented level of cooperation, but the impact could be no less transformative.
Geoff Blaber is vice president of research for the Americas at CCS Insight. Based in California, Blaber heads CCS Insight’s Americas business and supports the range of clients located in that territory. Blaber's research focus spans a broad spectrum of mobility and technology, including the lead role in semiconductors. He is a well-known member of the analyst community and provides regular commentary to leading news organizations such as Reuters, the Financial Times and The Economist. You can follow him on Twitter @geoffblaber.