
Facebook yesterday released DeepFocus, a new “AI-powered rendering system” that works with Half Dome, a special prototype headset that the Facebook Reality Labs (FRL) team has been working on for the past three years.

Half Dome is an example of a “varifocal” head-mounted display (HMD): it comprises eye-tracking camera systems, wide-field-of-view optics, and adjustable displays that move forward and backward to match your eye movements. This makes the VR experience a lot more comfortable, natural, and immersive. However, Half Dome needs software to work to its full potential, and that is where DeepFocus comes into the picture.

“Our eyes are like tiny cameras: When they focus on a given object, the parts of the scene that are at a different depth look blurry. Those blurry regions help our visual system make sense of the three-dimensional structure of the world and help us decide where to focus our eyes next. While varifocal VR headsets can deliver a crisp image anywhere the viewer looks, DeepFocus allows us to render the rest of the scene just the way it looks in the real world: naturally blurry,” mentions Marina Zannoli, a vision scientist at FRL.

Facebook is also open-sourcing DeepFocus, making the system’s code and the data set used to train it available to help other VR researchers incorporate it into their work. “By making our DeepFocus source and training data available, we’ve provided a framework not just for engineers developing new VR systems, but also for vision scientists and other researchers studying long-standing perceptual questions,” say the researchers.

DeepFocus

A research paper presented at SIGGRAPH Asia 2018 explains that DeepFocus is a unified rendering and optimization framework, based on convolutional neural networks, that solves a full range of computational tasks and enables real-time operation of accommodation-supporting HMDs. The CNN comprises “volume-preserving” interleaving layers that help it quickly figure out the high-level features within an image. The paper reports that it accurately synthesizes defocus blur, focal stacks, multilayer decompositions, and multiview imagery. Moreover, it uses only commonly available RGB-D images, which enables real-time, near-correct depiction of retinal blur.
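
The “volume-preserving interleaving” the paper refers to is essentially a space-to-depth rearrangement: spatial resolution is reduced without discarding any pixel values, so the network can run on smaller feature maps cheaply. Here is a minimal sketch of that idea, written in PyTorch purely for illustration (the open-sourced DeepFocus code is the authoritative implementation), using the built-in PixelUnshuffle/PixelShuffle pair:

```python
# Illustrative sketch only: this is NOT the DeepFocus code, just a demo of a
# "volume-preserving" interleave (space-to-depth), which shrinks spatial
# resolution without throwing away any pixel values.
import torch
import torch.nn as nn

interleave = nn.PixelUnshuffle(downscale_factor=2)   # space-to-depth
deinterleave = nn.PixelShuffle(upscale_factor=2)     # depth-to-space (inverse)

# A batch of RGB-D frames: 3 color channels + 1 depth channel, 256x256 pixels.
rgbd = torch.rand(1, 4, 256, 256)

packed = interleave(rgbd)        # -> (1, 16, 128, 128): half the resolution,
                                 #    four times the channels, same "volume"
restored = deinterleave(packed)  # -> (1, 4, 256, 256): exactly recovered

print(packed.shape, restored.shape)
print(torch.equal(rgbd, restored))  # True: no image detail was lost
```

Because the rearrangement is invertible, nothing is lost on the way down, which is what lets such a network keep runtimes low while still producing a full-detail output.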

Researchers explain that DeepFocus is “tailored to support real-time image synthesis… and… includes volume-preserving interleaving layers… to reduce the spatial dimensions of the input, while fully preserving image details, allowing for significantly improved runtimes”. This model is more efficient than the traditional AI systems used for deep-learning-based image analysis, as DeepFocus can process the visuals while preserving the ultrasharp image resolution necessary for delivering a high-quality VR experience. The researchers mention that DeepFocus can also grasp complex image effects and relations, including foreground and background defocusing.
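
Putting those pieces together, the pattern the researchers describe can be sketched roughly as: interleave to shrink the input, run the convolutions at the reduced resolution, then de-interleave back to a full-resolution image. The toy model below illustrates that pattern only; the layer counts, channel widths, and the TinyDefocusNet name are all made up and do not reflect the published DeepFocus network:

```python
# Hypothetical toy model illustrating the interleave -> convolve -> de-interleave
# pattern described by the researchers. Not the actual DeepFocus architecture.
import torch
import torch.nn as nn

class TinyDefocusNet(nn.Module):
    def __init__(self, in_channels=4, hidden=32, factor=2):
        super().__init__()
        self.interleave = nn.PixelUnshuffle(factor)           # 4 -> 16 channels, half resolution
        self.body = nn.Sequential(                            # cheap convolutions at low resolution
            nn.Conv2d(in_channels * factor**2, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3 * factor**2, 3, padding=1),   # predict RGB, still in packed form
        )
        self.deinterleave = nn.PixelShuffle(factor)           # back to full resolution

    def forward(self, rgbd):
        return self.deinterleave(self.body(self.interleave(rgbd)))

net = TinyDefocusNet()
rgbd = torch.rand(1, 4, 256, 256)   # RGB-D frame: color + depth
output = net(rgbd)                  # (1, 3, 256, 256) full-resolution output
print(output.shape)
```

The design point is that the expensive convolutional work happens at a quarter of the pixel count, while the final de-interleave restores the sharp, full-resolution image the display needs.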

However, DeepFocus isn’t limited to Oculus HMDs. Since DeepFocus supports high-quality image synthesis for multifocal and light-field displays, it is applicable to a complete range of next-gen head-mounted display technologies. “DeepFocus may have provided the last piece of the puzzle for rendering real-time blur, but the cutting-edge research that our system will power is only just beginning,” say the researchers.

For more information, check out the official Oculus Blog. 

Read Next

Magic Leap unveils Mica, a human-like AI in augmented reality

Magic Leap acquires Computes Inc to enhance spatial computing

Oculus Connect 5 2018: Day 1 highlights include Oculus Quest, Vader Immortal and more!
