
In this article by Parminder Singh, author of OpenGL ES 3.0 Cookbook, we will program shaders in the OpenGL ES shading language 3.0; load, compile, and link a shader program; check errors in OpenGL ES 3.0; use per-vertex attributes and uniform variables to send data to a shader; and program an OpenGL ES 3.0 Hello World Triangle.


OpenGL ES 3.0 stands for Open Graphics Library for embedded systems version 3.0. It is a set of standard API specifications established by the Khronos Group. The Khronos Group is an association of members and organizations that are focused on producing open standards for royalty-free APIs. OpenGL ES 3.0 specifications were publicly released in August 2012. These specifications are backward compatible with OpenGL ES 2.0, which is a well-known de facto standard for embedded systems to render 2D and 3D graphics. Embedded operating systems such as Android, iOS, BlackBerry, Bada, Windows, and many others support OpenGL ES.

OpenGL ES 3.0 is a programmable pipeline. A pipeline is a set of events that occur in a predefined fixed sequence, from the moment input data is given to the graphic engine to the output generated data for rendering the frame. A frame refers to an image produced as an output on the screen by the graphics engine.

This article covers OpenGL ES 3.0 development using C/C++; you can refer to the book OpenGL ES 3.0 Cookbook for more information on building OpenGL ES 3.0 applications on the Android and iOS platforms. We will begin this article by understanding the basic programming of OpenGL ES 3.0 with the help of a simple example that renders a triangle on the screen. You will learn how to set up and create your first application on both platforms step by step.

  • Understanding EGL: The OpenGL ES APIs require EGL as a prerequisite before they can effectively be used on hardware devices. EGL provides an interface between the OpenGL ES APIs and the underlying native windowing system. Different OS vendors have their own ways to manage the creation of drawing surfaces, communication with hardware devices, and other configurations to manage the rendering context. EGL abstracts these underlying system details behind a platform-independent interface.

The EGL provides two important things to OpenGL ES APIs:

  • Rendering context: This stores the data structures and important OpenGL ES states that are essentially required for rendering purposes
  • Drawing surface: This provides the drawing surface to render primitives

The following diagram shows the OpenGL ES 3.0 programmable pipeline architecture:

[Image: OpenGL ES 3.0 programmable pipeline architecture]

EGL has the following responsibilities:

  • Checking the available configuration to create rendering context of the device windowing system
  • Creating the OpenGL rendering surface for drawing
  • Compatibility and interfacing with other graphics APIs such as OpenVG, OpenAL, and so on
  • Managing resources such as texture mapping

Programming shaders in OpenGL ES shading language 3.0

OpenGL ES shading language 3.0 (also called GLSL) is a C-like language that allows us to write shaders for the programmable processors in the OpenGL ES processing pipeline. Shaders are small programs that run on the GPU in parallel.

OpenGL ES 3.0 supports two types of shaders: the vertex shader and the fragment shader. Each shader has specific responsibilities. More specifically, the vertex shader processes the vertex information by applying 2D/3D transformations. The output of the vertex shader goes to the rasterizer, where the fragments are produced. The fragments are processed by the fragment shader, which is responsible for coloring them.

The order of execution of the shaders is fixed; the vertex shader is always executed first, followed by the fragment shader. Each shader can share its processed data with the next stage in the pipeline.

Getting ready

There are two types of processors in the OpenGL ES 3.0 processing pipeline to execute the vertex shader and fragment shader executables; these are called programmable processing units:

  • Vertex processor: The vertex processor is a programmable unit that operates on the incoming vertices and related data. The vertex shader needs to be programmed, compiled, and linked first in order to generate an executable, which can then be run on the vertex processor.
  • Fragment processor: The fragment processor uses the fragment shader executable to process fragment or pixel data. The fragment processor is responsible for calculating the color of each fragment. It cannot change the position of a fragment, nor can it access neighboring fragments; however, it can discard pixels. The computed color values from this shader are used to update the framebuffer memory and texture memory.

How to do it…

Here are the sample codes for vertex and fragment shaders:

  1. Program the following vertex shader and store it into the vertexShader character type array variable:
    #version 300 es
    in vec4 VertexPosition, VertexColor;
    uniform float RadianAngle;
    out vec4 TriangleColor;
    mat2 rotation = mat2(cos(RadianAngle), sin(RadianAngle),
                        -sin(RadianAngle), cos(RadianAngle));
    void main() {
      gl_Position   = mat4(rotation)*VertexPosition;
      TriangleColor = VertexColor;
    }
  2. Program the following fragment shader and store it into another character array type variable called fragmentShader:
    #version 300 es
    precision mediump float;
    in vec4  TriangleColor;
    out vec4 FragColor;
    void main() {
      FragColor = TriangleColor;
    }

How it works…

Like most languages, a shader program starts its control flow from the main() function. In both shader programs, the first line, #version 300 es, specifies the GLES shading language version number, which is 3.0 in the present case. The vertex shader receives a per-vertex input variable, VertexPosition. The data type of this variable is vec4, one of the inbuilt data types provided by the OpenGL ES Shading Language. The in keyword at the beginning of the declaration specifies that it is an incoming variable that receives data from outside the scope of the current shader program. Similarly, the out keyword specifies that the variable is used to send data to the next stage of the shader. The color information is received in VertexColor and passed to TriangleColor, which sends it to the fragment shader, the next stage of the processing pipeline.

The RadianAngle is a uniform type of variable that contains the rotation angle. This angle is used to calculate the rotation matrix to make the rendering triangle revolve.

The input values received by VertexPosition are multiplied by the rotation matrix, which rotates the geometry of our triangle. This value is assigned to gl_Position, an inbuilt variable of the vertex shader that holds the vertex position in homogeneous form. This value can be used by any of the fixed-functionality stages, such as primitive assembly, rasterization, and culling.

In the fragment shader, the precision keyword sets the default precision of all floating-point types (and aggregates, such as mat4 and vec4) to mediump. The values of such declared types must fall within the range specified by the declared precision. The OpenGL ES Shading Language supports three precision qualifiers: lowp, mediump, and highp. Specifying a default floating-point precision in the fragment shader is compulsory; in the vertex shader, if the precision is not specified, it defaults to the highest (highp).

The FragColor is an out variable, which sends the calculated color values for each fragment to the next stage. It accepts the value in the RGBA color format.

There’s more…

As mentioned, there are three precision qualifiers; their range and precision are shown in the following table:

[Table: range and precision of the lowp, mediump, and highp qualifiers]

Loading and compiling a shader program

The shader program created needs to be loaded and compiled into a binary form. This recipe covers the procedure for loading and compiling a shader program.

Getting ready

Compiling and linking a shader is necessary so that these programs are understandable and executable by the underlying graphics hardware/platform (that is, the vertex and fragment processors).


How to do it…

In order to load and compile the shader source, use the following steps:

  1. Create a NativeTemplate.h/NativeTemplate.cpp and define a function named loadAndCompileShader in it. Use the following code, and proceed to the next step for detailed information about this function:
    GLuint loadAndCompileShader(GLenum shaderType, const char* sourceCode) {
    GLuint shader = glCreateShader(shaderType); // Create the shader
    if ( shader ) {
         // Pass the shader source code
         glShaderSource(shader, 1, &sourceCode, NULL);
         glCompileShader(shader); // Compile the shader source code
         // Check the status of compilation
         GLint compiled = 0;
         glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
         if (!compiled) {
           GLint infoLen = 0;
           glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
           if (infoLen) {
             char* buf = (char*) malloc(infoLen);
             if (buf) {
               glGetShaderInfoLog(shader, infoLen, NULL, buf);
               printf("Could not compile shader: %s", buf);
               free(buf);
             }
           }
           glDeleteShader(shader); // Delete the shader object
           shader = 0;
         }
    }
    return shader;
    }

    This function is responsible for loading and compiling a shader source. The argument shaderType accepts the type of shader that needs to be loaded and compiled; it can be GL_VERTEX_SHADER or GL_FRAGMENT_SHADER. The sourceCode specifies the source program of the corresponding shader.

  2. Create an empty shader object using the glCreateShader OpenGL ES 3.0 API. This API returns a non-zero value if the object is successfully created. This value is used as a handle to reference this object. On failure, this function returns 0. The shaderType argument specifies the type of the shader to be created. It must be either GL_VERTEX_SHADER or GL_FRAGMENT_SHADER:
    GLuint shader = glCreateShader(shaderType);

    Unlike in C++, where object creation is transparent, in OpenGL ES, objects are created behind the scenes. You can access, use, and delete them as and when required. Every object is identified by a unique identifier, which can be used for programming purposes.

    The created empty shader object (shader) needs to be bound first with the shader source in order to compile it. This binding is performed by using the glShaderSource API:

    // Load the shader source code
    glShaderSource(shader, 1, &sourceCode, NULL);

    The API sets the shader code string in the shader object, shader. The source string is simply copied into the shader object; it is not parsed or scanned.

  3. Compile the shader using the glCompileShader API. It accepts a shader object handle shader:
           glCompileShader(shader);   // Compile the shader
  4. The compilation status of the shader is stored as a state of the shader object. This state can be retrieved using the glGetShaderiv OpenGL ES API:
         GLint compiled = 0;   // Check compilation status
         glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);

    The glGetShaderiv API accepts the handle of the shader and GL_COMPILE_STATUS as arguments to check the compilation status. It retrieves the status in the compiled variable: compiled is set to GL_TRUE if the last compilation was successful, and GL_FALSE otherwise.

  5. Use glGetShaderInfoLog to get the error report.
  6. The shader is deleted if the shader source cannot be compiled. Delete the shader object using the glDeleteShader API.
  7. Return the shader object ID if the shader is compiled successfully:
    return shader; // Return the shader object ID

How it works…

The loadAndCompileShader function first creates an empty shader object, referenced by the shader variable. This object is bound to the source code of the corresponding shader, which is compiled using the glCompileShader API. The status of the compilation can be checked using glGetShaderiv with GL_COMPILE_STATUS. If the compilation succeeds, the shader object handle is returned; otherwise, the function returns 0 after deleting the shader object explicitly with glDeleteShader.

There’s more…

In order to differentiate among the various versions of OpenGL ES and the GL shading language, it is useful to query this information from your device's current driver. This helps make the program robust and manageable by avoiding errors caused by version upgrades or by the application being installed on older versions of OpenGL ES and GLSL. Other vital information can also be queried from the current driver, such as the vendor, the renderer, and the available extensions supported by the device driver. This information can be queried using the glGetString API, which accepts a symbolic constant and returns the queried system metric as a string. The printGLString wrapper function in our program helps in printing device metrics:

static void printGLString(const char *name, GLenum s) {
   printf("GL %s = %s\n", name, (const char *) glGetString(s));
}

Linking a shader program

Linking is the process of aggregating a set of shaders (vertex and fragment) into one program that maps to the entirety of the programmable phases of the OpenGL ES 3.0 graphics pipeline. The shaders are compiled using shader objects, which are then attached to a special object called a program object in order to link them to the OpenGL ES 3.0 pipeline.

How to do it…

The following instructions provide a step-by-step procedure to link a shader:

  1. Create a new function, linkShader, in NativeTemplate.cpp. This will be the wrapper function to link a shader program to the OpenGL ES 3.0 pipeline. Follow these steps to understand this program in detail:
    GLuint linkShader(GLuint vertShaderID, GLuint fragShaderID){
    if (!vertShaderID || !fragShaderID){ // Fails! return
       return 0;
    }
    // Create an empty program object
    GLuint program = glCreateProgram();
    if (program) {
       // Attach vertex and fragment shader to it
       glAttachShader(program, vertShaderID);
       glAttachShader(program, fragShaderID);
       // Link the program
       glLinkProgram(program);
       GLint linkStatus = GL_FALSE;
       glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
       if (linkStatus != GL_TRUE) {
          GLint bufLength = 0;
          glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength);
          if (bufLength) {
             char* buf = (char*) malloc(bufLength);
             if (buf) {
                glGetProgramInfoLog(program, bufLength, NULL, buf);
                printf("Could not link program:\n%s\n", buf);
                free(buf);
             }
          }
          glDeleteProgram(program); // Delete the program object
          program = 0;
       }
    }
    return program;
    }
  2. Create a program object with glCreateProgram. This API creates an empty program object using which the shader objects will be linked:
    GLuint program = glCreateProgram(); //Create shader program
  3. Attach shader objects to the program object using the glAttachShader API. It is necessary to attach the shaders to the program object in order to create the program executable:
    glAttachShader(program, vertShaderID);
    glAttachShader(program, fragShaderID);

How it works…

The linkShader wrapper function links the shader. It accepts two parameters, vertShaderID and fragShaderID, which are the identifiers of the compiled shader objects. The glCreateProgram API creates a program object, another OpenGL ES object to which shader objects are attached using glAttachShader. The shader objects can be detached from the program object when they are no longer needed. The program object is responsible for creating the executable program that runs on the programmable processors. A program in OpenGL ES is an executable in the OpenGL ES 3.0 pipeline that runs on the vertex and fragment processors.

The program object is linked using glLinkProgram. If the linking fails, the program object must be deleted using glDeleteProgram. When a program object is deleted, it automatically detaches the shader objects associated with it; the shader objects themselves still need to be deleted explicitly. If a program object is requested for deletion while it is being used by another rendering context in the current OpenGL ES state, it is not deleted until it is no longer in use.

If the program object links successfully, one or more executables are created, depending on the number of shaders attached to the program. The executable can be used at runtime with the help of the glUseProgram API, which makes it a part of the current OpenGL ES state.

Checking errors in OpenGL ES 3.0

While programming, it is very common to get unexpected results or errors in the programmed source code. It’s important to make sure that the program does not generate any error. In such a case, you would like to handle the error gracefully.

OpenGL ES 3.0 allows us to check for errors using a simple routine called glGetError. The following wrapper function prints all the errors that have occurred:

static void checkGlError(const char* op) {
   for(GLint error = glGetError(); error; error = glGetError()){
      printf("after %s() glError (0x%x)\n", op, error);
   }
}

Here are a few examples of code that produce OpenGL ES errors:

glEnable(GL_TRIANGLES);   // Gives a GL_INVALID_ENUM error

How it works…

When OpenGL ES detects an error, it records it in an error flag. Each error has a unique numeric code and symbolic name. For performance reasons, OpenGL ES does not actively report each error as it occurs; instead, an error flag remains set until the glGetError routine is called, which returns the code and clears the flag. If no error has been detected, this routine returns GL_NO_ERROR. In a distributed environment, there may be several error flags; therefore, it is advisable to call glGetError in a loop, as multiple error flags may have been recorded.

Using the per-vertex attribute to send data to a shader

The per-vertex attribute in shader programming helps receive data in the vertex shader from the OpenGL ES program for each unique vertex. The received data value is not shared among vertices. Vertex coordinates, normal coordinates, texture coordinates, and color information are examples of per-vertex attributes. Per-vertex attributes are meant for vertex shaders only; they are not directly available to the fragment shader. Instead, the vertex shader passes them on through out variables.

Typically, shaders are executed on the GPU, whose many cores allow several vertices to be processed in parallel at the same time. In order to process vertex information in the vertex shader, we need a mechanism that sends the data residing on the client side (CPU) to the shader on the server side (GPU). This recipe demonstrates the use of per-vertex attributes for that purpose.

Getting ready

The vertex shader contains two per-vertex attributes named VertexPosition and VertexColor:

// Incoming vertex info from program to vertex shader
in vec4 VertexPosition;
in vec4 VertexColor;

The VertexPosition contains the 3D coordinates of the triangle that defines the shape of the object that we intend to draw on the screen. The VertexColor contains the color information on each vertex of this geometry.

In the vertex shader, a non-negative attribute location ID uniquely identifies each vertex attribute. This attribute location is assigned at compile time if not specified in the vertex shader program. Basically, the logic of sending data to the shader is very simple. It’s a two-step process:

  • Query attribute: Query the vertex attribute location ID from the shader.
  • Attach data to the attribute: Attach this ID to the data. This will create a bridge between the data and the per-vertex attribute specified using the ID. The OpenGL ES processing pipeline takes care of sending data.

How to do it…

Follow this procedure to send data to a shader using the per-vertex attribute:

  1. Declare two global variables in NativeTemplate.cpp to store the queried attribute location IDs of VertexPosition and VertexColor:
    GLuint positionAttribHandle;
    GLuint colorAttribHandle;
  2. Query the vertex attribute location using the glGetAttribLocation API:
    positionAttribHandle = glGetAttribLocation
    (programID, "VertexPosition");
    colorAttribHandle    = glGetAttribLocation
    (programID, "VertexColor");

    This API provides a convenient way to query an attribute location from a shader. The return value must be greater than or equal to 0 to ensure that an attribute with the given name exists.

  3. Send the data to the shader using the glVertexAttribPointer OpenGL ES API:
    // Send data to shader using queried attrib location
    glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT,
    GL_FALSE, 0, gTriangleVertices);
    glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, 
    GL_FALSE, 0, gTriangleColors);

    The data associated with geometry is passed in the form of an array using the generic vertex attribute with the help of the glVertexAttribPointer API.

  4. It’s important to enable the attribute locations using glEnableVertexAttribArray. This allows us to access the data on the shader side. By default, the vertex attributes are disabled:
    glEnableVertexAttribArray(positionAttribHandle);
    glEnableVertexAttribArray(colorAttribHandle);
  5. Similarly, the attribute can be disabled using glDisableVertexAttribArray. This API has the same syntax as that of glEnableVertexAttribArray.
  6. Store the incoming per-vertex attribute color VertexColor into the outgoing attribute TriangleColor in order to send it to the next stage (fragment shader):
    in vec4 VertexColor; // Incoming data from CPU
    out vec4 TriangleColor; // Outgoing to next stage
    void main() {
      . . .
      TriangleColor = VertexColor;
    }
  7. Receive the color information from the vertex shader and set the fragment color:
    in vec4 TriangleColor; // Incoming from vertex shader
    out vec4 FragColor; // The fragment color
    void main() {
      FragColor = TriangleColor;
    }

How it works…

The per-vertex attribute variables VertexPosition and VertexColor defined in the vertex shader are the lifelines of the vertex shader. These lifelines constantly provide data from the client side (the OpenGL ES program on the CPU) to the server side (GPU). Each per-vertex attribute has a unique attribute location in the shader that can be queried using glGetAttribLocation. The queried attribute locations are stored in positionAttribHandle and colorAttribHandle, and each must be bound to its data using glVertexAttribPointer. This API establishes a logical connection between the client and server side. Now the data is ready to flow from our data structures to the shader. The last important thing is enabling the attributes on the shader side; by default, all attributes are disabled, so even if the data is supplied from the client side, it is not visible on the server side. The glEnableVertexAttribArray API allows us to enable the per-vertex attributes on the shader side.

Using uniform variables to send data to a shader

Uniform variables contain data values that are global. They are shared by all vertices and fragments in the vertex and fragment shaders. Generally, information that is not specific to a particular vertex is passed in the form of uniform variables. A uniform variable can exist in both the vertex and fragment shaders.

Getting ready

The vertex shader we programmed in the Programming shaders in OpenGL ES shading language 3.0 recipe contains a uniform variable RadianAngle. This variable is used to rotate the rendered triangle:

// Uniform variable for rotating triangle
uniform float RadianAngle;

This variable is updated on the client side (CPU) and sent to the shader on the server side (GPU) using special OpenGL ES 3.0 APIs. As with per-vertex attributes, we need to query the uniform's location and bind data to it in order to make it available in the shader.

How to do it…

Follow these steps to send data to a shader using uniform variables:

  1. Declare a global variable in NativeTemplate.cpp to store the queried uniform location ID of RadianAngle:
    GLuint radianAngle;
  2. Query the uniform variable location using the glGetUniformLocation API:
    radianAngle = glGetUniformLocation(programID, "RadianAngle");
  3. Send the updated radian value to the shader using the glUniform1f API:
    float degree = 0; // Global degree variable
    float radian; // Global radian variable
    radian = degree++/57.2957795; // Update angle and convert it into radian
    glUniform1f(radianAngle, radian); // Send updated data to the vertex shader uniform
  4. Use a general form of 2D rotation to apply on the entire incoming vertex coordinates:
    . . . .
    uniform float RadianAngle;
    mat2 rotation = mat2(cos(RadianAngle), sin(RadianAngle),
                        -sin(RadianAngle), cos(RadianAngle));
    void main() {
      gl_Position = mat4(rotation)*VertexPosition;
      . . . . .
    }

How it works…

The uniform variable RadianAngle defined in the vertex shader is used to apply a rotation transformation to the incoming per-vertex attribute VertexPosition. On the client side, this uniform variable is queried using glGetUniformLocation. This API returns the index of the uniform variable and stores it in radianAngle. This index is used to bind the updated data stored in radian using the glUniform1f OpenGL ES 3.0 API. Finally, the updated data reaches the vertex shader executable, where the general form of the Euler rotation is calculated:

mat2 rotation = mat2(cos(RadianAngle), sin(RadianAngle),
                    -sin(RadianAngle), cos(RadianAngle));

The rotation transformation is calculated as a 2 x 2 rotation matrix, which is promoted to a 4 x 4 matrix when multiplied with VertexPosition. The resulting vertices rotate the triangle in 2D space.

Programming OpenGL ES 3.0 Hello World Triangle

The NativeTemplate.h/cpp file contains OpenGL ES 3.0 code, which demonstrates a rotating colored triangle. The output of this file is not an executable on its own. It needs a host application that provides the necessary OpenGL ES 3.0 prerequisites to render this program on a device screen, as covered in the following recipes:

  • Developing Android OpenGL ES 3.0 application
  • Developing iOS OpenGL ES 3.0 application

These provide all the necessary prerequisites required to set up OpenGL ES, render, and query the necessary attributes from shaders to render our OpenGL ES 3.0 “Hello World Triangle” program. In this program, we will render a simple colored triangle on the screen.

Getting ready

OpenGL ES requires a physical size (pixels) to define a 2D rendering surface called a viewport. This is used to define the OpenGL ES Framebuffer size.

A buffer in OpenGL ES is a 2D array in memory that represents pixels in the viewport region. OpenGL ES has three types of buffers: the color buffer, depth buffer, and stencil buffer. These buffers are collectively known as the framebuffer. All drawing commands affect the information in the framebuffer.

The life cycle of this is broadly divided into three states:

  • Initialization: Shaders are compiled and linked to create program objects
  • Resizing: This state defines the viewport size of rendering surface
  • Rendering: This state uses the shader program object to render geometry on screen

How to do it…

Follow these steps to program this:

  1. Use the NativeTemplate.cpp file and create a createProgramExec function. This is a high-level function to load, compile, and link a shader program. This function will return the program object ID after successful execution:
    GLuint createProgramExec(const char* VS, const char* FS) {
    GLuint vsID = loadAndCompileShader(GL_VERTEX_SHADER, VS);
    GLuint fsID = loadAndCompileShader(GL_FRAGMENT_SHADER, FS);
    return linkShader(vsID, fsID);
    }
  2. See the Loading and compiling a shader program and Linking a shader program recipes for more information on the working of loadAndCompileShader and linkShader.
  3. Use NativeTemplate.cpp, create a function GraphicsInit, and create the shader program object by calling createProgramExec:
    GLuint programID; // Global shader program handle
    bool GraphicsInit(){
    printOpenGLESInfo(); // Print GLES3.0 system metrics
    // Create program object and cache the ID
    programID = createProgramExec(vertexShader, fragmentShader);
    if (!programID) { // Failure !!! return
       printf("Could not create program.");
       return false;
    }
    checkGlError("GraphicsInit"); // Check for errors
    return true;
    }
  4. Create a new function GraphicsResize. This will set the viewport region:
    bool GraphicsResize( int width, int height ){
    glViewport(0, 0, width, height);
    return true;
    }
  5. The viewport determines the portion of the OpenGL ES surface window on which the rendering of the primitives will be performed. The viewport in OpenGL ES is set using the glViewport API.
  6. Create the gTriangleVertices global variable that contains the vertices of the triangle:
    GLfloat gTriangleVertices[] = {  0.0f,  0.5f,    // Vertex 1
                                    -0.5f, -0.5f,    // Vertex 2
                                     0.5f, -0.5f };  // Vertex 3
  7. Create the GraphicsRender renderer function. This function is responsible for rendering the scene. Add the following code in it and perform the following steps to understand this function:
    bool GraphicsRender(){
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // Clear with black color
    glClear( GL_COLOR_BUFFER_BIT ); // Which buffer to clear? – color buffer
    glUseProgram( programID ); // Use shader program and apply

    // Query and send the uniform variable
    radian = degree++/57.2957795;
    radianAngle = glGetUniformLocation(programID, "RadianAngle");
    glUniform1f(radianAngle, radian);

    // Query 'VertexPosition' and 'VertexColor' from vertex shader
    positionAttribHandle = glGetAttribLocation(programID, "VertexPosition");
    colorAttribHandle    = glGetAttribLocation(programID, "VertexColor");

    // Send data to shader using queried attribute locations
    glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
    glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors);

    // Enable the vertex position and color attributes
    glEnableVertexAttribArray(positionAttribHandle);
    glEnableVertexAttribArray(colorAttribHandle);

    glDrawArrays(GL_TRIANGLES, 0, 3); // Draw 3 triangle vertices from 0th index
    return true;
    }
  8. Choose the appropriate buffers from the framebuffer (color, depth, and stencil) to clear each time a frame is rendered using the glClear API. Here, we want to clear the color buffer. The glClear API accepts a bitwise OR argument mask that can be used to select any combination of buffers.
  9. Query the VertexPosition generic vertex attribute location ID from the vertex shader into positionAttribHandle using glGetAttribLocation. This location will be used to send the triangle vertex data stored in gTriangleVertices to the shader using glVertexAttribPointer. Follow the same instructions to get the handle of VertexColor into colorAttribHandle:
    positionAttribHandle = glGetAttribLocation(programID, "VertexPosition");
    colorAttribHandle    = glGetAttribLocation(programID, "VertexColor");
    glVertexAttribPointer(positionAttribHandle, 2, GL_FLOAT, GL_FALSE, 0, gTriangleVertices);
    glVertexAttribPointer(colorAttribHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleColors);
  10. Enable the generic vertex attribute location using positionAttribHandle before the rendering call and render the triangle geometry. Similarly, for the per-vertex color information, use colorAttribHandle:
    glEnableVertexAttribArray(positionAttribHandle);
    glEnableVertexAttribArray(colorAttribHandle);
    glDrawArrays(GL_TRIANGLES, 0, 3);

How it works…

When the application starts, control begins with GraphicsInit, where the system metrics are printed out to make sure that the device supports OpenGL ES 3.0. The OpenGL ES programmable pipeline requires vertex shader and fragment shader program executables in the rendering pipeline. The program object contains one or more executables after the compiled shader objects are attached and linked to it. In the createProgramExec function, the vertex and fragment shaders are compiled and linked in order to generate the program object.

The GraphicsResize function generates the viewport of the given dimensions. This is used internally by OpenGL ES 3.0 to maintain the framebuffer. In our current application, it is used to manage the color buffer.

Finally, the rendering of the scene is performed by GraphicsRender. This function clears the color buffer with a black background and renders the triangle on the screen. It uses the shader program object and sets it as the current rendering state using the glUseProgram API.

Each time a frame is rendered, data is sent from the client side (CPU) to the shader executable on the server side (GPU) using glVertexAttribPointer. This function uses the queried generic vertex attribute to bind the data with OpenGL ES pipeline.


There’s more…

Other buffers are also available in OpenGL ES 3.0:

  • Depth buffer: This is used to prevent background pixels from rendering if there is a closer pixel available. The rule of prevention of the pixels can be controlled using special depth rules provided by OpenGL ES 3.0.
  • Stencil buffer: The stencil buffer stores the per-pixel information and is used to limit the area of rendering.

The OpenGL ES API allows us to control each buffer separately. These buffers can be enabled and disabled as the rendering requires. Each of these buffers (including the color buffer) can be cleared to a preset value using OpenGL ES APIs such as glClearColor, glClearDepthf, and glClearStencil.


This article covered different aspects of OpenGL ES 3.0.
