

The Open Graphics Library (OpenGL) can be simply defined as a software interface to the graphics hardware. It is a 3D graphics and modeling library that is highly portable and extremely fast. Using the OpenGL graphics API, you can create brilliant graphics that are capable of representing 2D and 3D data.

The OpenGL library is a multi-purpose, open-source graphics library that supports applications for 2D and 3D digital content creation, mechanical and architectural design, virtual prototyping, flight simulation, and video games. It allows application developers to configure a 3D graphics pipeline and submit data to it.

An object is defined by connected vertices. The vertices of the object are then transformed, lit, and assembled into primitives, and rasterized to create a 2D image that can be sent directly to the underlying graphics hardware. Rendering there is typically very fast, because the hardware is dedicated to processing graphics commands.

We have some fantastic stuff to cover in this article, so let’s get started.

Understanding the new workflow feature within Xcode

In this section, we will be taking a look at the improvements that have been made to the Xcode 4 development environment, and how these enable us to debug OpenGL ES applications much more easily than in previous versions of Xcode.

We will look at how we can use the frame capture feature of the debugger to capture all of the frame objects that are included within an OpenGL ES application. This tool enables you to list every frame object being used by your application at a given point in time.

We will familiarize ourselves with the new OpenGL ES debugger within Xcode, to enable us to track down specific issues relating to OpenGL ES within the code.

Creating a simple project to debug an OpenGL ES application

Before we can proceed, we first need to create our OpenGLESExample project.

  1. Launch Xcode from the /Developer/Applications folder.
  2. Select the OpenGL Game template from the Project template dialog box.

  3. Then, click on the Next button to proceed to the next step in the wizard. This will allow you to enter the Product Name and your Company Identifier.

  4. Enter OpenGLESExample for the Product Name, and ensure that you have selected iPhone from the Device Family dropdown box.
  5. Next, click on the Next button to proceed to the final step in the wizard.
  6. Choose the folder location where you would like to save your project.
  7. Then, click on the Create button to save your project at the location specifed.

Once your project has been created, you will be presented with the Xcode development interface, along with the project files that the template created for you within the Project Navigator window.

Now that we have our project created, we need to configure it to enable us to debug the state of its objects.

Detecting OpenGL ES state information and objects

To enable us to detect and monitor the state of the objects within our application, we need to enable this feature through the Edit Scheme… section of our project, as shown in the following screenshot:

From the Edit Scheme section, as shown in the following screenshot, select the Run OpenGLESExampleDebug action, click on the Options tab, and then select the OpenGL ES Enable frame capture checkbox.

For this feature to work, you must run the application on an iOS device running iOS 5.0 or later; this feature will not work within the iOS Simulator. After you have attached your device, you will need to restart Xcode for this option to become available.

When you have configured your project correctly, click on the OK button to accept the changes, and close the dialog box. Next, build and run your OpenGL ES application. When you run your application, you will see two colored, three-dimensional cubes.

When you run your application on the iOS device, you will notice that the frame capture button appears within the Xcode 4 debug bar, as shown in the following screenshot:

The OpenGL ES debugging features of Xcode 4.2 enable you to do the following:

  • Inspect OpenGL ES state information.
  • Introspect OpenGL ES objects, such as view textures and shaders.
  • Step through draw calls and watch the changes that each call produces.
  • Step through the state calls that precede each draw call, to see exactly how the image is constructed.

The following screenshot displays the captured frame of our sample application. The debug navigator contains a list of every draw call and state call associated with that particular frame.

The buffers associated with the frame are shown within the editor pane, and the state information is shown in the debug pane. The default view when the OpenGL ES frame capture is launched is the Auto view. This view displays the color portion of the image, which is Renderbuffer #1, as well as its grayscale equivalent, which is Renderbuffer #2.

You can also toggle the visibility of each of the red, green, and blue channels, as well as the alpha channel, and then use the Range control to adjust the color range. This can be done easily by selecting each of the cog buttons shown in the previous screenshot.

You can also step through each of the draw calls in the debug navigator, or use the double arrows and slider in the debug bar.

When using the draw call arrows or the slider, you can have Xcode select the stepped-to draw call in the debug navigator. This can be achieved by Control + clicking below the captured frame, and choosing Reveal in Debug Navigator from the shortcut menu.

You can also use the shortcut menu to toggle between the standard rendered view of the image and a wireframe view of the object, by selecting the Show Wireframe option from the pop-up menu, as shown in the previous screenshot.

The wireframe view highlights the element that is being drawn by the selected draw call. To turn off the wireframe feature and return the image to its normal state, select the Hide Wireframe option from the pop-up menu, as shown in the following screenshot:

Now that you have a reasonable understanding of debugging through an OpenGL ES application and its draw calls, let’s take a look at how we can view the textures associated with an OpenGL ES application.

View textures

When referring to textures in OpenGL ES 2.0, a texture is basically an image that can be sampled by the graphics pipeline, and is used to map a colored image onto a surface. To view objects that have been captured by the frame capture button, follow these simple steps:

  1. Open the Assistant Editor to see the objects associated with the captured frame. In this view, you can choose to see all of the objects, only bound objects, or the stack. This can be accessed from the View | Assistant Editor | Show Assistant Editor menu, as shown in the following screenshot:

  2. Open a secondary assistant editor pane, so that you can see both the objects and the stack frame at the same time. This can be accessed from the View | Assistant Editor | Add Assistant Editor menu shown previously, or by clicking on the + symbol, as shown in the following screenshot:

To see details about any object contained within the OpenGL ES assistant editor, double-click on the object, or choose the item from the pop-up list, as shown in the next screenshot.

It is worth mentioning that, from within this view, you have the ability to change the orientation of any object that has been captured and rendered to the view. To change the orientation, locate the Orientation options shown at the bottom-right of the screen. Objects can be changed to appear in one or more views as needed, and the options are as follows:

  • Rotate clockwise
  • Rotate counter-clockwise
  • Flip orientation vertically
  • Flip orientation horizontally

For example, if you want to see information about the vertex array object (VAO), you would double-click on it to see it in more detail, as shown in the following screenshot.

This displays all of the X, Y, and Z axis values required to construct each of our objects.
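To make sense of what the VAO view is showing, it helps to picture the layout of the vertex data behind it. The following is a minimal sketch of interleaved per-vertex data for a single triangle, using the same position-plus-normal layout as the template's cube data; the array name gTriangleVertexData is our own, for illustration only:

    // Interleaved vertex data: position (x, y, z) followed by normal (x, y, z).
    // Each line below describes one vertex of a single triangle.
    static const GLfloat gTriangleVertexData[] =
    {
        // posX,  posY,  posZ,   normX, normY, normZ
          -0.5f, -0.5f,  0.0f,    0.0f,  0.0f,  1.0f,
           0.5f, -0.5f,  0.0f,    0.0f,  0.0f,  1.0f,
           0.0f,  0.5f,  0.0f,    0.0f,  0.0f,  1.0f,
    };

Next, we will take a look at how shaders are constructed.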

Shaders

There are two types of shaders that you can write for OpenGL ES; these are Vertex shaders and Fragment shaders.

These two shaders make up the programmable portion of the OpenGL ES 2.0 pipeline, and are written in a C-like syntax called the OpenGL ES Shading Language (GLSL).

The following screenshot outlines the OpenGL ES 2.0 programmable pipeline, which incorporates a version of the OpenGL Shading Language for programming the Vertex Shader and Fragment Shader that has been adapted for embedded platforms such as iOS devices:

Shaders are not new; they have been used in a variety of games that use OpenGL. Games that come to mind are Doom 3 and Quake 4, as well as several flight simulators, such as Microsoft’s Flight Simulator X.

One thing to note about shaders is that they are not compiled when your application is built. The source code of the shader is stored within your application bundle as a text file, or defined within your code as a string literal, that is:

vertShaderPathname = [[NSBundle mainBundle] pathForResource:@"Shader" ofType:@"vsh"];
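For comparison, the following is a minimal sketch of the string-literal approach mentioned above; the variable name kVertexShaderSource and the shader body are our own, for illustration only:

    // A hypothetical vertex shader defined directly in code as a string
    // literal, rather than being loaded from a .vsh file in the bundle.
    static const GLchar *kVertexShaderSource =
        "attribute vec4 position;                                \n"
        "uniform mat4 modelViewProjectionMatrix;                 \n"
        "void main()                                             \n"
        "{                                                       \n"
        "    gl_Position = modelViewProjectionMatrix * position; \n"
        "}                                                       \n";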


 

Before you can use your shaders, your application has to load and compile each of them. This is done to preserve device independence.

For example, if Apple decided to change to a different GPU manufacturer for a future release of the iPhone, the compiled shaders might not work on the new GPU. Having your application defer compilation to runtime avoids this problem, and any later version of the GPU will be fully supported without the need for you to rebuild your application.
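As a minimal sketch of what this runtime loading and compiling looks like, the following method is closely modeled on the compileShader:type:file: helper that the OpenGL Game template generates; treat it as an outline rather than a definitive implementation:

    // Loads a shader source file from the bundle and compiles it at runtime.
    // Requires the OpenGL ES 2.0 headers (<OpenGLES/ES2/gl.h>).
    - (BOOL)compileShader:(GLuint *)shader type:(GLenum)type file:(NSString *)file
    {
        // Read the shader source from the text file stored in the bundle.
        const GLchar *source = (GLchar *)[[NSString stringWithContentsOfFile:file
                                                    encoding:NSUTF8StringEncoding
                                                    error:nil] UTF8String];
        if (!source)
        {
            NSLog(@"Failed to load shader file: %@", file);
            return NO;
        }

        // Create a shader object of the requested type (GL_VERTEX_SHADER or
        // GL_FRAGMENT_SHADER), hand it the source, and compile it.
        *shader = glCreateShader(type);
        glShaderSource(*shader, 1, &source, NULL);
        glCompileShader(*shader);

        // Check the compile status, and clean up if compilation failed.
        GLint status;
        glGetShaderiv(*shader, GL_COMPILE_STATUS, &status);
        if (status == 0)
        {
            glDeleteShader(*shader);
            return NO;
        }
        return YES;
    }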

The differences between the two shaders are explained as follows:

  • Vertex shaders: These are programs that get called once per vertex in your scene. For example, if you were rendering a simple scene with a single square, with one vertex at each corner, the vertex shader would be called four times. Its job is to perform calculations such as lighting and geometry transforms (moving, scaling, and rotating objects), to simulate realism.
  • Fragment shaders: These are programs that get called once per pixel in your scene. So, if you were rendering that same simple scene with a single square, the fragment shader would be called once for each pixel that the square covers. Fragment shaders can also perform lighting calculations, and so on, but their most important job is to set the final color for the pixel.

Next, we will examine the implementation of the vertex shader that the OpenGL template created for us. You will notice that these shaders are code files implemented using a C-like syntax. Let’s start by examining each section of the vertex shader file, by following these simple steps:

  1. Open the Shader.vsh vertex shader file located within the OpenGLESExample folder of the Project Navigator window, and examine the following code snippet.

    attribute vec4 position;
    attribute vec3 normal;

    varying lowp vec4 colorVarying;

    uniform mat4 modelViewProjectionMatrix;
    uniform mat3 normalMatrix;

    void main()
    {
        vec3 eyeNormal = normalize(normalMatrix * normal);
        vec3 lightPosition = vec3(0.0, 0.0, 1.0);
        vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);

        float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

        colorVarying = diffuseColor * nDotVP;
        gl_Position = modelViewProjectionMatrix * position;
    }

  2. Next, we will take a look at what this piece of code is doing, and explain what is actually going on. So let’s start.
    The attribute keyword declares that this shader is going to be passed an input variable called position, which will be used to indicate the position of the vertex. You will notice that the position variable has been declared of type vec4, which means that each vertex contains four floating-point values. The second attribute input variable, declared with the variable name normal, has been declared of type vec3, which means that each vertex contains three floating-point values representing the surface normal, which is used in the lighting calculation. (A sketch of how application code feeds these attributes appears after the fragment shader listing, later in this section.)
    The local variable diffuseColor, declared inside main, defines the color to be used for the vertex. We declare another variable called colorVarying. You will notice that it doesn’t contain the attribute keyword; this is because it is an output variable that will be passed to the fragment shader.
    The varying keyword tells us the value for a particular vertex. This basically means that you can specify a different color for each vertex, and all the values in between will be interpolated into a neat gradient that you will see in the final output. We have declared this as vec4, because colors are composed of four component values.
  3. Finally, we declare two uniform variables, called modelViewProjectionMatrix and normalMatrix. The model, view, and projection matrices are three separate matrices: model maps from an object’s local coordinate space into world space, view from world space to camera space, and projection from camera space to screen space.
    When all three are combined, you can use the one result to map all the way from object space to screen space, enabling you to work out what you need to pass on to the next stage of the programmable pipeline from the incoming vertex positions.
    The normal matrix is used to transform the normal vectors, which determine how much light is received at the specified vertex or surface. Uniforms are a second form of data that allow you to pass values from your application code to the shaders. Uniforms are available to both vertex and fragment shaders, unlike attributes, which are only available to the vertex shader. (The sketch after the fragment shader listing also shows how these uniforms are uploaded from application code.)
    The value of a uniform cannot be changed by the shaders, and it will have the same value every time a shader runs for a given trip through the pipeline. Uniforms can contain any kind of data that you want to pass along for use in your shader.
  4. Next, we assign the computed per-vertex color to the varying variable colorVarying. This value will then be available in the fragment shader in interpolated form.
  5. Finally, we set the gl_Position output variable by multiplying the incoming vertex position by the modelViewProjectionMatrix uniform, transforming the vertex all the way from object space to screen space, as described previously.
    Next, we will take a look at the fragment shader that the OpenGL ES template created for us.
  6. Open the Shader.fsh fragment shader file located within the OpenGLESExample folder of the Project Navigator window, and examine the following code snippet.

    varying lowp vec4 colorVarying;

    void main()
    {
        gl_FragColor = colorVarying;
    }
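To see how the attributes and uniforms discussed above are fed from the application side, here is a minimal sketch of the setup performed before a draw call. It assumes a GLKViewController subclass in which _program holds a linked program built from these two shaders, and that a buffer of interleaved position/normal data (as sketched earlier) is currently bound; names such as mvpUniform are our own:

    // Wire the interleaved vertex data to the position and normal attributes
    // declared in the vertex shader. Stride is 6 floats (24 bytes) per vertex.
    GLint positionAttrib = glGetAttribLocation(_program, "position");
    GLint normalAttrib = glGetAttribLocation(_program, "normal");
    glEnableVertexAttribArray((GLuint)positionAttrib);
    glVertexAttribPointer((GLuint)positionAttrib, 3, GL_FLOAT, GL_FALSE, 24, (const GLvoid *)0);
    glEnableVertexAttribArray((GLuint)normalAttrib);
    glVertexAttribPointer((GLuint)normalAttrib, 3, GL_FLOAT, GL_FALSE, 24, (const GLvoid *)12);

    // Build the model-view-projection matrix with GLKit, and upload it to the
    // modelViewProjectionMatrix uniform used by the vertex shader.
    GLfloat aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
    GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f),
                                                            aspect, 0.1f, 100.0f);
    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);
    GLKMatrix4 mvpMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);

    GLint mvpUniform = glGetUniformLocation(_program, "modelViewProjectionMatrix");
    glUniformMatrix4fv(mvpUniform, 1, GL_FALSE, mvpMatrix.m);
    // (The normalMatrix uniform is uploaded the same way, with glUniformMatrix3fv.)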

We will now take a look at the fragment shader snippet, and explain what is actually going on here.

You will notice that, within the fragment shader, the varying variable colorVarying is declared with the same name that it had in the vertex shader. This is very important: if these names were different, OpenGL ES wouldn’t realize that it is the same variable, and your program would produce unexpected results.

The type is also very important; it has to be the same data type as was declared within the vertex shader. The lowp qualifier is a GLSL keyword that is used to specify the precision, that is, the number of bits used to represent a number.

From a programming point of view, the more bits that are used to represent a number, the fewer problems you are likely to have with the rounding of floating-point calculations. GLSL allows you to apply precision qualifiers any time a variable is declared, and a precision must be declared within this file; failure to declare it within the fragment shader will result in your shader failing to compile.

The lowp keyword gives you the best performance with the least accuracy during interpolation. This is the better option when dealing with colors, where small rounding errors don’t matter. Should the lack of precision cause problems within your application, you can increase the precision by using the mediump or highp qualifiers instead.
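If you are unsure what lowp, mediump, and highp actually provide on a particular device, you can query them at runtime with the standard OpenGL ES 2.0 glGetShaderPrecisionFormat call. The following is a minimal sketch, assuming a current OpenGL ES 2.0 context:

    // Query the range and precision of each float qualifier in the fragment
    // shader. The two range values are the log2 of the smallest and largest
    // representable magnitudes, and precision is the log2 of the precision.
    GLint range[2], precision;

    glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_LOW_FLOAT, range, &precision);
    NSLog(@"lowp float: range 2^%d to 2^%d, precision 2^-%d", range[0], range[1], precision);

    glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_MEDIUM_FLOAT, range, &precision);
    NSLog(@"mediump float: range 2^%d to 2^%d, precision 2^-%d", range[0], range[1], precision);

    glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_FLOAT, range, &precision);
    NSLog(@"highp float: range 2^%d to 2^%d, precision 2^-%d", range[0], range[1], precision);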

For more information on the OpenGL ES Shading Language (GLSL) or the precision qualifiers, refer to the documentation located at http://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf.
