
In this article, by Raymond C. H. Lo and William C. Y. Lo, authors of the book OpenGL Data Visualization Cookbook, we will demonstrate how to visualize data with stunning stereoscopic 3D technology using OpenGL. Stereoscopic 3D devices are becoming increasingly popular, and the latest generation of wearable computing devices (such as the 3D vision glasses from NVIDIA and Epson, and more recently, the augmented reality 3D glasses from Meta) can now support this feature natively.

The ability to visualize data in a stereoscopic 3D environment provides a powerful and highly intuitive platform for the interactive display of data in many applications. For example, we may acquire data from a 3D scan of a model (as in architecture, engineering, dentistry, or medicine) and want to visualize or manipulate the resulting 3D objects in real time.

Unfortunately, OpenGL does not provide any mechanism for loading, saving, or manipulating 3D models. To support this, we will integrate a new library, the Open Asset Import Library (Assimp), into our code. The main dependencies include the GLFW library, which requires OpenGL version 3.2 or higher.
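
For context, here is a minimal sketch of how a model can be loaded with Assimp; the file name, function name, and post-processing flags are illustrative assumptions, and the obj_loader object used later in this article wraps logic along these lines:

#include <assimp/Importer.hpp>
#include <assimp/postprocess.h>
#include <assimp/scene.h>
#include <cstdio>

//import a model file and triangulate it for rendering with GL_TRIANGLES
bool loadModel(const char* path){
  Assimp::Importer importer;
  const aiScene* scene = importer.ReadFile(path,
    aiProcess_Triangulate | aiProcess_GenSmoothNormals);
  if(!scene){
    fprintf(stderr, "Assimp error: %s\n", importer.GetErrorString());
    return false;
  }
  //scene->mMeshes now holds triangulated vertex data ready for upload;
  //note that the importer owns the scene and frees it on destruction
  return true;
}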


Stereoscopic 3D rendering

3D televisions and 3D glasses are becoming much more prevalent with the latest trends in consumer electronics and advances in wearable computing. There are currently many hardware options on the market that allow us to visualize information with stereoscopic 3D technology. One common format is side-by-side 3D, which is supported by many 3D glasses: each eye sees an image of the same scene rendered from a slightly different perspective. In OpenGL, side-by-side 3D rendering requires an asymmetric frustum adjustment as well as a viewport adjustment (that is, of the area to be rendered). The asymmetric frustum parallel projection, equivalent to a lens shift in photography, introduces no vertical parallax and is widely adopted in stereoscopic rendering. To illustrate this concept, the following diagram shows the geometry of the scene that a user sees from the right eye:

[Figure: Geometry of the asymmetric frustum for the right-eye view]

The intraocular distance (IOD) is the distance between the two eyes. As we can see from the diagram, the frustum shift represents the amount of skew/shift applied for the asymmetric frustum adjustment. For the left-eye image, we perform the same transformation with a mirrored setting. The implementation of this setup is described in the next section.
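
By similar triangles, the shift measured at the near clipping plane scales the half-IOD by the ratio of the near plane distance to the zero-parallax (image plane) distance:

frustumShift = (IOD / 2) × nearZ / depthZ

For example, with an illustrative IOD of 0.5, a near plane at 1.0, and an image plane at 5.0, the shift is 0.25 × 1.0 / 5.0 = 0.05 for each eye (applied with opposite signs). These numbers are assumptions for the sake of the example; the code in the next section computes the same quantity from its runtime parameters.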

How to do it…

The following code illustrates the steps to construct the projection and view matrices for stereoscopic 3D visualization. The code uses the intraocular distance, the distance of the image plane, and the distance of the near clipping plane to compute the appropriate frustum shift value. In the source file common/controls.cpp, we add the implementation for the stereo 3D matrix setup:

void computeStereoViewProjectionMatrices(GLFWwindow* window, float IOD, float depthZ, bool left_eye){
  int width, height;
  glfwGetWindowSize(window, &width, &height);
  //up vector
  glm::vec3 up = glm::vec3(0, -1, 0);
  glm::vec3 direction_z(0, 0, -1);
  //mirror the parameters for the right eye
  float left_right_direction = -1.0f;
  if(left_eye)
    left_right_direction = 1.0f;
  float aspect_ratio = (float)width/(float)height;
  float nearZ = 1.0f;
  float farZ = 100.0f;
  double frustumshift = (IOD/2)*nearZ/depthZ;
  //half-height of the near plane, from the vertical field of view
  float top = tan(g_initial_fov/2)*nearZ;
  //skew the left and right clipping planes by the frustum shift
  float right = aspect_ratio*top + frustumshift*left_right_direction;
  float left = -aspect_ratio*top + frustumshift*left_right_direction;
  float bottom = -top;
  g_projection_matrix = glm::frustum(left, right, bottom, top, nearZ, farZ);
  //update the view matrix: offset the eye by half the IOD
  g_view_matrix =
    glm::lookAt(
      g_position - direction_z +
        glm::vec3(left_right_direction*IOD/2, 0, 0), //eye position
      g_position +
        glm::vec3(left_right_direction*IOD/2, 0, 0), //centre position
      up //up direction
    );
}
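
The rendering loop shown next retrieves these matrices through the getProjectionMatrix and getViewMatrix helpers. A minimal sketch of these accessors, assuming they simply return the globals set above, would be:

glm::mat4 getProjectionMatrix(){
  //assumed accessor: returns the projection matrix computed above
  return g_projection_matrix;
}
glm::mat4 getViewMatrix(){
  //assumed accessor: returns the view matrix computed above
  return g_view_matrix;
}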

In the rendering loop in main.cpp, we define the viewports for each eye (left and right) and set up the projection and view matrices accordingly. For each eye, we translate our camera position by half of the intraocular distance, as illustrated in the previous figure:

if(stereo){
  //draw the LEFT eye, left half of the screen
  glViewport(0, 0, width/2, height);
  //compute the MVP matrix from the IOD and virtual image plane distance
  computeStereoViewProjectionMatrices(g_window, IOD, depthZ, true);
  //get the projection and view matrices and apply them to the rendering
  glm::mat4 projection_matrix = getProjectionMatrix();
  glm::mat4 view_matrix = getViewMatrix();
  glm::mat4 model_matrix = glm::mat4(1.0);
  model_matrix = glm::translate(model_matrix, glm::vec3(0.0f, 0.0f, -depthZ));
  model_matrix = glm::rotate(model_matrix, glm::pi<float>() * rotateY, glm::vec3(0.0f, 1.0f, 0.0f));
  model_matrix = glm::rotate(model_matrix, glm::pi<float>() * rotateX, glm::vec3(1.0f, 0.0f, 0.0f));
  glm::mat4 mvp = projection_matrix * view_matrix * model_matrix;
  //send our transformation to the currently bound shader,
  //in the "MVP" uniform variable
  glUniformMatrix4fv(matrix_id, 1, GL_FALSE, &mvp[0][0]);
  //render the scene with different drawing modes
  if(drawTriangles)
    obj_loader->draw(GL_TRIANGLES);
  if(drawPoints)
    obj_loader->draw(GL_POINTS);
  if(drawLines)
    obj_loader->draw(GL_LINES);

  //draw the RIGHT eye, right half of the screen
  glViewport(width/2, 0, width/2, height);
  computeStereoViewProjectionMatrices(g_window, IOD, depthZ, false);
  projection_matrix = getProjectionMatrix();
  view_matrix = getViewMatrix();
  model_matrix = glm::mat4(1.0);
  model_matrix = glm::translate(model_matrix, glm::vec3(0.0f, 0.0f, -depthZ));
  model_matrix = glm::rotate(model_matrix, glm::pi<float>() * rotateY, glm::vec3(0.0f, 1.0f, 0.0f));
  model_matrix = glm::rotate(model_matrix, glm::pi<float>() * rotateX, glm::vec3(1.0f, 0.0f, 0.0f));
  mvp = projection_matrix * view_matrix * model_matrix;
  glUniformMatrix4fv(matrix_id, 1, GL_FALSE, &mvp[0][0]);
  if(drawTriangles)
    obj_loader->draw(GL_TRIANGLES);
  if(drawPoints)
    obj_loader->draw(GL_POINTS);
  if(drawLines)
    obj_loader->draw(GL_LINES);
}

The final rendering result consists of two separate images, one on each half of the display, with each image compressed horizontally by a factor of two. Depending on the specifications of the display, some systems instead require each half to preserve the original aspect ratio.
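
For such displays, one possible workaround (an assumption on our part, not code from the book) is to build the frustum from the aspect ratio of the per-eye viewport rather than the full window, so that each half-width view renders with correct proportions:

//each eye renders into a (width/2) x height viewport, so use
//the viewport's own aspect ratio instead of the full window's
float aspect_ratio = ((float)width/2.0f)/(float)height;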

Here are the final screenshots of the same models rendered in stereoscopic 3D:

[Figure: The models rendered in stereoscopic 3D]

Here’s the rendering of the architectural model in stereoscopic 3D:

[Figure: The architectural model rendered in stereoscopic 3D]

How it works…

The stereoscopic 3D rendering technique is based on the parallel axis and asymmetric frustum perspective projection principle. In simpler terms, we render a separate image for each eye as if the object were seen from a different eye position but viewed on the same plane. Parameters such as the intraocular distance and the frustum shift can be dynamically adjusted to provide the desired 3D stereo effect.

For example, by increasing or decreasing the frustum asymmetry parameter, the object will appear to move in front of or behind the plane of the screen. By default, the zero-parallax plane is set to the middle of the view volume. That is, the object is set up so that its center is positioned at the screen level, with some parts of the object appearing in front of or behind the screen. By increasing the frustum asymmetry (that is, positive parallax), the scene will appear to be pushed behind the screen. Likewise, by decreasing the frustum asymmetry (that is, negative parallax), the scene will appear to be pulled in front of the screen.
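
As a hedged illustration (the parallax_scale parameter is our own addition, not part of the book's code), the frustum asymmetry could be exposed as a runtime control inside computeStereoViewProjectionMatrices:

//hypothetical runtime control over the frustum asymmetry:
//values above 1.0 push the scene behind the screen (positive parallax),
//values below 1.0 pull it in front (negative parallax)
float parallax_scale = 1.0f;
double frustumshift = parallax_scale * (IOD/2) * nearZ / depthZ;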

The glm::frustum function sets up the projection matrix, and we implemented the asymmetric frustum projection concept illustrated in the earlier diagram. Then, we use the glm::lookAt function to adjust the eye position based on the IOD value we have selected.

To render the images side by side, we use the glViewport function to constrain the area within which the graphics are rendered. The function basically performs an affine transformation (that is, a scale and translation) that maps normalized device coordinates to window coordinates. Note that the final result is a side-by-side image in which each view is compressed horizontally by a factor of two. Depending on the hardware configuration, we may need to adjust the aspect ratio.
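
For reference, the mapping that glViewport(x, y, w, h) applies to normalized device coordinates (x_nd, y_nd) is:

x_w = (x_nd + 1) × (w / 2) + x
y_w = (y_nd + 1) × (h / 2) + y

So the call glViewport(width/2, 0, width/2, height) above maps the right eye's [-1, 1] range onto the right half of the window.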

The current implementation supports side-by-side 3D, which is commonly used in most wearable augmented reality (AR) or virtual reality (VR) glasses. Fundamentally, the rendering technique, namely the asymmetric frustum perspective projection described in this article, is platform-independent. For example, we have successfully tested our implementation on the Meta 1 Developer Kit (https://www.getameta.com/products) and rendered the final results on its optical see-through stereoscopic 3D display:

[Figure: Stereoscopic 3D rendering on the Meta 1 optical see-through display]

Here is the front view of the Meta 1 Developer Kit, showing the optical see-through stereoscopic 3D display and 3D range-sensing camera:

[Figure: Front view of the Meta 1 Developer Kit, showing the optical see-through stereoscopic 3D display and 3D range-sensing camera]

The result is shown as follows, with the stereoscopic 3D graphics rendered onto the real world (which forms the basis of augmented reality):

[Figures: Stereoscopic 3D graphics overlaid on the real-world view]

See also

In addition, we can easily extend our code to support shutter glasses-based 3D monitors by using the quad-buffered OpenGL API (refer to the GL_BACK_LEFT and GL_BACK_RIGHT buffers accepted by the glDrawBuffer function), as sketched below. Unfortunately, this 3D format requires specific hardware synchronization and often a higher refresh rate (for example, 120 Hz) as well as a professional graphics card. Further information on how to implement stereoscopic 3D in your application can be found at http://www.nvidia.com/content/GTC-2010/pdfs/2010_GTC2010.pdf.
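
As a minimal sketch (assuming a stereo-capable context, which with GLFW is requested via glfwWindowHint(GLFW_STEREO, GLFW_TRUE) before window creation), the per-eye rendering would target the left and right back buffers instead of two viewports:

//left eye: render into the left back buffer at full viewport size
glDrawBuffer(GL_BACK_LEFT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
computeStereoViewProjectionMatrices(g_window, IOD, depthZ, true);
//... draw the scene as before ...

//right eye: render into the right back buffer
glDrawBuffer(GL_BACK_RIGHT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
computeStereoViewProjectionMatrices(g_window, IOD, depthZ, false);
//... draw the scene as before ...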

Summary

In this article, we covered how to visualize data with stunning stereoscopic 3D technology using OpenGL. Since OpenGL does not provide any mechanism to load, save, or manipulate 3D models, we integrated a new library, Assimp, into the code to support this.
