
Immersive computing has been touted as a crucial innovation that will transform the way we interact with software. But, like every trend, a set of core technologies lies at its center, driving it forward. In the context of immersive computing, Google ARCore is one of those technologies. Of course, it’s no surprise to see Google somewhere at the heart of one of the most exciting developments in tech. But what is Google ARCore, exactly? And how is it going to help drive immersive computing into the mainstream?

But first, let’s take a look at exactly what immersive computing is. After that, we’ll explore how Google ARCore is helping to drive it forward, and some examples of how to put it into practice with some motion tracking and light estimation projects.

What is Immersive Computing?

Immersive computing is a term used to describe applications that provide an immersive experience for the user. This may come in the form of an augmented or virtual reality experience. In order to better understand the spectrum of immersive computing, let’s take a look at this diagram:

The Immersive Computing Spectrum

The preceding diagram illustrates how the level of immersion affects the user experience: the left-hand side represents traditional applications with little or no immersion, while the right represents fully immersive virtual reality applications. In this article, we will stay in the middle sweet spot and work on developing augmented reality applications.

Why use Google ARCore for Augmented Reality?

Augmented reality applications are unique in that they annotate or augment the reality of the user. This is typically done visually by having the AR app overlay a view of the real world with computer graphics. Google ARCore is designed primarily for providing this type of visual annotation for the user. An example of a demo ARCore application is shown here:

Google ARCore demo application; the dog is real

The screenshot is even more impressive when you realize that it was rendered in real time on a mobile device. It isn’t the result of painstaking hours in Photoshop or other media effects tools. What you see in that image is the superposition of a virtual object, the lion, onto the user’s reality. More impressive still is the quality of immersion. Note the details, such as the lighting and shadows on the lion, the shadows on the ground, and the way the object maintains its position in reality even though it isn’t really there. Without those visual enhancements, all you would see is a floating lion superimposed on the screen. It is those visual details that provide the immersion. Google developed ARCore as a way to help developers incorporate those visual enhancements when building AR applications.

Google developed ARCore for Android as a way to compete against Apple’s ARKit for iOS. The fact that two of the biggest tech giants today are vying for position in AR indicates the push to build new and innovative immersive applications.

Google ARCore has its origins in Tango, an earlier and more advanced AR toolkit that relied on special sensors built into the device. In order to make AR more accessible and mainstream, Google developed ARCore as an AR toolkit designed for Android devices not equipped with any special sensors. Where Tango depended on special hardware, ARCore uses software to accomplish the same core enhancements. For ARCore, Google has identified three core areas to address with this toolkit, and they are as follows:

  • Motion tracking
  • Environmental understanding
  • Light estimation

In the next three sections, we will go through each of those core areas in more detail and understand how they enhance the user experience.

Motion tracking

Tracking a user’s motion and ultimately their position in 2D and 3D space is fundamental to any AR application. Google ARCore allows you to track position changes by identifying and tracking visual feature points from the device’s camera image. An example of how this works is shown in this figure:

Feature point tracking in ARCore

In the figure, we can see how the user’s position is tracked in relation to the feature points identified on the real couch. Previously, in order to successfully track motion (position), we needed to pre-register or pre-train our feature points. If you have ever used the Vuforia AR tools, you will be very familiar with having to train images or target markers.

Now, ARCore does all of this automatically for us, in real time, with no training. However, this tracking technology is still young and has limitations: it relies on the camera picking out distinct visual feature points, so it can struggle on blank, textureless surfaces and in low light.
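To build some intuition for what feature-point tracking gives us, here is a deliberately simplified sketch in Python. It is not the ARCore API: a real tracker solves a full 6-degrees-of-freedom problem and fuses inertial sensor data, but the core idea, that camera motion can be inferred from how matched feature points appear to move between frames, can be shown with plain 2D arithmetic.

```python
# Illustrative sketch (not the ARCore API): estimating apparent camera
# motion from matched feature points across two consecutive frames.

def estimate_translation(prev_points, curr_points):
    """Estimate 2D camera motion from matched feature points.

    Each argument is a list of (x, y) image coordinates for the same
    physical feature points, matched across two frames. If the camera
    pans right, the features appear to move left, so the camera motion
    is the negative of the average feature displacement.
    """
    if len(prev_points) != len(curr_points) or not prev_points:
        raise ValueError("need equally sized, non-empty point sets")
    n = len(prev_points)
    dx = sum(c[0] - p[0] for p, c in zip(prev_points, curr_points)) / n
    dy = sum(c[1] - p[1] for p, c in zip(prev_points, curr_points)) / n
    return (-dx, -dy)  # camera motion is opposite the feature motion

prev = [(100, 200), (150, 220), (300, 180)]
curr = [(90, 200), (140, 220), (290, 180)]  # every feature shifted 10px left
print(estimate_translation(prev, curr))     # camera panned 10px to the right
```

Averaging displacements like this also hints at why texture matters: with no distinct feature points to match, there is nothing to average, and tracking is lost.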

Environmental understanding

The better an AR application understands the user’s reality or the environment around them, the more successful the immersion. We already saw how Google ARCore uses feature identification in order to track a user’s motion. Tracking motion is only the first part. What we need is a way to identify physical objects or surfaces in the user’s reality. ARCore does this using a technique called meshing.

This is what meshing looks like in action:

Google image showing meshing in action

What we see happening in the preceding image is an AR application that has identified a real-world surface through meshing. The plane is identified by the white dots. In the background, we can see how the user has already placed various virtual objects on the surface.

Environmental understanding and meshing are essential for creating the illusion of blended realities. Where motion tracking uses identified features to track the user’s position, environmental understanding uses meshing to identify the surfaces on which virtual objects can be placed and anchored in the user’s reality.
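A toy version of surface detection can make this concrete. The sketch below is not how ARCore's meshing actually works internally (ARCore also estimates plane extents and orientation, and exposes detected surfaces as `Plane` trackables); it simply shows the underlying intuition that a horizontal surface reveals itself as a cluster of feature points at roughly the same height. The `bin_size` and `min_points` parameters are invented for this illustration.

```python
# Illustrative sketch (not the ARCore API): finding a horizontal plane
# in a cloud of 3D feature points by looking for a dominant height.

from collections import Counter

def detect_horizontal_plane(points, bin_size=0.05, min_points=3):
    """Return the estimated height (y) of a horizontal surface, or None.

    points: list of (x, y, z) positions in metres. Heights are bucketed
    into bins of `bin_size`; if the fullest bin contains at least
    `min_points` features, it is treated as a surface and the mean
    height of its members is returned.
    """
    bins = Counter(round(y / bin_size) for _, y, _ in points)
    bucket, count = bins.most_common(1)[0]
    if count < min_points:
        return None
    members = [y for _, y, _ in points if round(y / bin_size) == bucket]
    return sum(members) / len(members)

# Points roughly on a table top at y ~ 0.70m, plus two stray points.
cloud = [(0.1, 0.70, 0.2), (0.3, 0.71, 0.1), (0.2, 0.69, 0.4),
         (0.5, 0.70, 0.3), (0.0, 1.40, 0.9), (0.9, 0.10, 0.2)]
print(detect_horizontal_plane(cloud))  # close to 0.70
```

Once a surface like this has been identified, the app can let the user place virtual objects on it, exactly as in the demo image above.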

Light estimation

Magicians work to be masters of trickery and visual illusion. They understand that perspective and good lighting are everything in a great illusion, and developing great AR apps is no exception.

Take a second and flip back to the scene with the virtual lion. Note the lighting and detail in the shadows on the lion and ground. Did you note that the lion is casting a shadow on the ground, even though it’s not really there? That extra level of lighting detail is only made possible by combining the tracking of the user’s position with the environmental understanding of the virtual object’s position and a way to read light levels.

Fortunately, Google ARCore provides us with a way to read or estimate the light in a scene. We can then use this lighting information in order to light and shadow virtual AR objects. Here’s an image of an ARCore demo app showing subdued lighting on an AR object:

Google image of demo ARCore app showing off subdued lighting

The effects of lighting, or lack thereof, will become more obvious as we start developing our startup applications.
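As a rough sketch of the idea, the snippet below estimates scene brightness as the average perceived luminance of the camera pixels and uses it to shade a virtual object's color. This mirrors the concept behind ARCore's `LightEstimate.getPixelIntensity()`, but the functions, the Rec. 601 luma weights, and the shading model here are simplifications chosen for illustration, not ARCore's implementation.

```python
# Illustrative sketch (not the ARCore API): estimating scene brightness
# from camera pixels and using it to shade a virtual object.

def pixel_intensity(pixels):
    """Average perceived luminance of (r, g, b) pixels, each in [0, 1].

    Uses the common Rec. 601 luma weights (0.299, 0.587, 0.114).
    """
    if not pixels:
        raise ValueError("no pixels")
    total = sum(0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels)
    return total / len(pixels)

def shade(base_color, intensity):
    """Darken a virtual object's base colour to match the estimated light."""
    return tuple(round(c * intensity, 3) for c in base_color)

# A dim scene: mostly dark grey pixels with a couple of bright ones.
frame = [(0.2, 0.2, 0.2)] * 8 + [(0.8, 0.8, 0.8)] * 2
light = pixel_intensity(frame)        # about 0.32 for this frame
print(shade((1.0, 0.9, 0.8), light))  # the lion's colour, dimmed to match
```

Shading the virtual object with the measured intensity is what stops it from glowing unnaturally in a dark room, one of the small details that sells the illusion.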

To summarize, we took a very quick look at what immersive computing and AR are all about. We learned that augmented reality covers the middle ground of the immersive computing spectrum, and that AR is a careful blend of illusions used to trick the user into believing that their reality has been combined with a virtual one.

We saw that Google developed ARCore as a way to provide a better set of tools for constructing those illusions and to keep Android competitive in the AR market. We then looked at each of the core concepts ARCore was designed to address: motion tracking, environmental understanding, and light estimation, in a little more detail.

This has been taken from Learn ARCore – Fundamentals of Google ARCore. Find it here.

Read More

Getting started with building an ARCore application for Android

Types of Augmented Reality targets


Content Marketing Editor at Packt Hub. I blog about new and upcoming tech trends ranging from Data science, Web development, Programming, Cloud & Networking, IoT, Security and Game development.
