
A recent paper from the Stanford Computational Imaging Lab is a step towards an extremely light form factor for AR glasses. The team achieved this by combining AI and AR to create a new type of display: https://lnkd.in/gvKha2hK
The convergence of AI and AR has been a hot topic, and it is great to see research showing what that convergence will look like. It isn’t an easy read, but the team has developed an AI-driven algorithm for generating holograms. It combines neural networks, physics modelling and camera feedback so that the algorithm can precisely predict how a hologram will look after passing through the waveguide: the ultra-thin glass optical element that uses metasurfaces to guide the light waves carrying 3D holographic information. That compact design allows realistic 3D image overlays in a pair of AR glasses.
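For readers curious what "optimizing a hologram against a propagation model" means in practice, here is a toy sketch of the classic Gerchberg–Saxton phase-retrieval loop, which iteratively adjusts a phase-only hologram so that its propagated image matches a target. This is an illustrative simplification, not the authors' code: the paper's approach, roughly, replaces the idealized FFT propagation used below with a learned, camera-calibrated model of the real waveguide.

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=50, seed=0):
    """Toy phase-retrieval loop: find a phase-only hologram whose
    propagated (here: FFT) amplitude approximates a target image.
    The FFT stands in for a real propagation model; a camera-calibrated
    learned model would replace it in practice (assumption for illustration)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)  # random start
    for _ in range(iters):
        field = np.exp(1j * phase)                   # phase-only hologram plane
        img = np.fft.fft2(field)                     # idealized propagation
        img = target_amp * np.exp(1j * np.angle(img))  # impose target amplitude
        back = np.fft.ifft2(img)                     # propagate back
        phase = np.angle(back)                       # keep only the phase
    return phase

# Toy target: a bright square on a dark background.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0

phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase)))  # what the "eye" would see
```

The key idea the sketch conveys: the hologram is found by optimization against a forward model, so the better the model predicts the real optics (which is where the AI and camera feedback come in), the better the final image.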
The most important takeaways:
-This will allow hardware developers to create compact form-factor glasses. AR glasses can be lightweight and come in a wide range of styles, which means anyone in the eyewear field can start thinking about adding AR elements without the bulk of current AR/MR headsets like the Apple Vision Pro or the Meta Quest 3.
-Rendering accurate 3D depth cues and focus blur will reduce visual discomfort and allow for more realistic AR experiences.