
Shaders were first introduced into OpenGL in version 2.0, bringing programmability to the formerly fixed-function OpenGL pipeline. Shaders are written in the OpenGL Shading Language (GLSL), which is syntactically similar to C and should therefore be easy for experienced OpenGL programmers to pick up. Due to the nature of this text, I won’t present a thorough introduction to GLSL here. Instead, if you’re new to GLSL, reading through these recipes should help you learn the language by example. If you are already comfortable with GLSL, but don’t have experience with version 4.0, you’ll see how to implement these techniques using the newer API. However, before we jump into GLSL programming, let’s take a quick look at how vertex and fragment shaders fit within the OpenGL pipeline.

Vertex and fragment shaders

In OpenGL version 4.0, there are five shader stages: vertex, geometry, tessellation control, tessellation evaluation, and fragment. In this article we’ll focus only on the vertex and fragment stages.

Shaders replace parts of the OpenGL pipeline. More specifically, they make those parts of the pipeline programmable. The following block diagram shows a simplified view of the OpenGL pipeline with only the vertex and fragment shaders installed.

[Figure: simplified view of the OpenGL pipeline with only the vertex and fragment shaders installed]

Vertex data is sent down the pipeline and arrives at the vertex shader via shader input variables. The vertex shader’s input variables correspond to vertex attributes (see Sending data to a shader using per-vertex attributes and vertex buffer objects). In general, a shader receives its input via programmer-defined input variables, and the data for those variables comes either from the main OpenGL application or previous pipeline stages (other shaders). For example, a fragment shader’s input variables might be fed from the output variables of the vertex shader. Data can also be provided to any shader stage using uniform variables (see Sending data to a shader using uniform variables). These are used for information that changes less often than vertex attributes (for example, matrices, light position, and other settings). The following figure shows a simplified view of the relationships between input and output variables when there are two shaders active (vertex and fragment).

[Figure: input and output variables of the vertex and fragment shaders when both are active]

The vertex shader is executed once for each vertex, possibly in parallel. The data corresponding to vertex position must be transformed into clip coordinates and assigned to the output variable gl_Position before the vertex shader finishes execution. The vertex shader can send other information down the pipeline using shader output variables. For example, the vertex shader might also compute the color associated with the vertex. That color would be passed to later stages via an appropriate output variable.

Between the vertex and fragment shader, the vertices are assembled into primitives, clipping takes place, and the viewport transformation is applied (among other operations). The rasterization process then takes place and the polygon is filled (if necessary). The fragment shader is executed once for each fragment (pixel) of the polygon being rendered (typically in parallel). Data provided from the vertex shader is (by default) interpolated in a perspective correct manner, and provided to the fragment shader via shader input variables. The fragment shader determines the appropriate color for the pixel and sends it to the frame buffer using output variables. The depth information is handled automatically.

Replicating the old fixed functionality

Programmable shaders give us tremendous power and flexibility. However, in some cases we might just want to re-implement the basic shading techniques that were used in the default fixed-function pipeline, or perhaps use them as a basis for other shading techniques. Studying the basic shading algorithm of the old fixed-function pipeline can also be a good way to get started when learning about shader programming.

In this article, we’ll look at the basic techniques for implementing shading similar to that of the old fixed-function pipeline. We’ll cover the standard ambient, diffuse, and specular (ADS) shading algorithm, the implementation of two-sided rendering, and flat shading. In the next article, we’ll also see some examples of other GLSL features such as functions, subroutines, and the discard keyword.

Implementing diffuse, per-vertex shading with a single point light source

One of the simplest shading techniques is to assume that the surface exhibits purely diffuse reflection. That is, the surface appears to scatter light equally in all directions, regardless of the viewing direction. Incoming light strikes the surface and penetrates slightly before being re-radiated in all directions. Of course, the incoming light interacts with the surface before it is scattered, causing some wavelengths to be fully or partially absorbed and others to be scattered. A typical example of a diffuse surface is one that has been painted with a matte paint; it has a dull look with no shine at all.

The following image shows a torus rendered with diffuse shading.

[Figure: a torus rendered with diffuse, per-vertex shading]

The mathematical model for diffuse reflection involves two vectors: the direction from the surface point to the light source (s), and the normal vector at the surface point (n). The vectors are represented in the following diagram.

[Figure: the direction from the surface point toward the light source (s) and the surface normal (n)]

The amount of incoming light (or radiance) that reaches the surface is partially dependent on the orientation of the surface with respect to the light source. The physics of the situation tells us that the amount of radiation that reaches a point on a surface is maximal when the light arrives along the direction of the normal vector, and zero when the light is perpendicular to the normal. In between, it is proportional to the cosine of the angle between the direction towards the light source and the normal vector. So, since the dot product is proportional to the cosine of the angle between two vectors, we can express the amount of radiation striking the surface as the product of the light intensity and the dot product of s and n.

Ld (s · n)

Where Ld is the intensity of the light source, and the vectors s and n are assumed to be normalized. You may recall that the dot product of two unit vectors is equal to the cosine of the angle between them.

As stated previously, some of the incoming light is absorbed before it is re-emitted. We can model this interaction by using a reflection coefficient (Kd), which represents the fraction of the incoming light that is scattered. This is sometimes referred to as the diffuse reflectivity, or the diffuse reflection coefficient. The diffuse reflectivity becomes a scaling factor for the incoming radiation, so the intensity of the outgoing light can be expressed as follows:

L = Kd Ld (s · n)

Because this model depends only on the direction towards the light source and the normal to the surface, not on the direction towards the viewer, we have a model that represents uniform (omnidirectional) scattering.

In this recipe, we’ll evaluate this equation at each vertex in the vertex shader and interpolate the resulting color across the face.

In this and the following recipes, light intensities and material reflectivity coefficients are represented by 3-component (RGB) vectors. Therefore, the equations should be treated as component-wise operations, applied to each of the three components separately. Luckily, GLSL makes this nearly transparent, because the relevant operators work component-wise on vector variables.

Getting ready

Start with an OpenGL application that provides the vertex position in attribute location 0 and the vertex normal in attribute location 1 (see Sending data to a shader using per-vertex attributes and vertex buffer objects). The OpenGL application should also provide the standard transformation matrices (projection, model-view, and normal) via uniform variables.

The light position (in eye coordinates), Kd, and Ld should also be provided by the OpenGL application via uniform variables. Note that Kd and Ld are of type vec3; a vec3 can store an RGB color just as well as a vector or point.
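By way of illustration, the following sketch shows one way the application might upload these uniforms using GLM. The program handle, parameter names, and light values are illustrative assumptions rather than part of the recipe, and an OpenGL function loader is assumed to be in place.

    #include <glm/glm.hpp>
    #include <glm/gtc/type_ptr.hpp>

    // Upload the uniforms expected by the diffuse shader.
    // "programHandle" is a hypothetical handle to the already-linked program.
    void setSceneUniforms( GLuint programHandle,
                           const glm::mat4 &modelView,
                           const glm::mat3 &normalMatrix,
                           const glm::mat4 &projection )
    {
        glUseProgram( programHandle );

        glm::mat4 mvp = projection * modelView;
        glUniformMatrix4fv( glGetUniformLocation(programHandle, "ModelViewMatrix"),
                            1, GL_FALSE, glm::value_ptr(modelView) );
        glUniformMatrix3fv( glGetUniformLocation(programHandle, "NormalMatrix"),
                            1, GL_FALSE, glm::value_ptr(normalMatrix) );
        glUniformMatrix4fv( glGetUniformLocation(programHandle, "ProjectionMatrix"),
                            1, GL_FALSE, glm::value_ptr(projection) );
        glUniformMatrix4fv( glGetUniformLocation(programHandle, "MVP"),
                            1, GL_FALSE, glm::value_ptr(mvp) );

        // Light position is given in eye coordinates; Kd and Ld are RGB values.
        // These particular numbers are arbitrary examples.
        glm::vec4 lightPos( 5.0f, 5.0f, 2.0f, 1.0f );
        glUniform4fv( glGetUniformLocation(programHandle, "LightPosition"),
                      1, glm::value_ptr(lightPos) );
        glUniform3f( glGetUniformLocation(programHandle, "Kd"), 0.9f, 0.5f, 0.3f );
        glUniform3f( glGetUniformLocation(programHandle, "Ld"), 1.0f, 1.0f, 1.0f );
    }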

How to do it…

To create a shader pair that implements diffuse shading, use the following code:

  1. Use the following code for the vertex shader.

    #version 400

    layout (location = 0) in vec3 VertexPosition;
    layout (location = 1) in vec3 VertexNormal;

    out vec3 LightIntensity;

    uniform vec4 LightPosition; // Light position in eye coords.
    uniform vec3 Kd; // Diffuse reflectivity
    uniform vec3 Ld; // Light source intensity

    uniform mat4 ModelViewMatrix;
    uniform mat3 NormalMatrix;
    uniform mat4 ProjectionMatrix;
    uniform mat4 MVP; // Projection * ModelView

    void main()
    {
        // Convert the normal and position to eye coordinates
        vec3 tnorm = normalize( NormalMatrix * VertexNormal );
        vec4 eyeCoords = ModelViewMatrix * vec4( VertexPosition, 1.0 );
        vec3 s = normalize( vec3( LightPosition - eyeCoords ) );

        // The diffuse shading equation
        LightIntensity = Ld * Kd * max( dot( s, tnorm ), 0.0 );

        // Convert position to clip coordinates and pass along
        gl_Position = MVP * vec4( VertexPosition, 1.0 );
    }

    
    
  2. Use the following code for the fragment shader.

    #version 400

    in vec3 LightIntensity;

    layout( location = 0 ) out vec4 FragColor;

    void main()
    {
        FragColor = vec4( LightIntensity, 1.0 );
    }

    
    
  3. Compile and link both shaders within the OpenGL application, and install the shader program prior to rendering; a minimal sketch follows this list. See Tips and Tricks for Getting Started with OpenGL and GLSL 4.0 for details about compiling, linking, and installing shaders.
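For quick reference, here is a minimal compile-and-link sketch. It assumes the two GLSL sources are already available as null-terminated strings vertSrc and fragSrc (hypothetical names) and that an OpenGL function loader has been set up; error checking with glGetShaderiv/glGetProgramiv is omitted for brevity.

    // Build and install the shader pair. Error checking omitted for brevity.
    GLuint compileStage( GLenum type, const char *src )
    {
        GLuint shader = glCreateShader( type );
        glShaderSource( shader, 1, &src, nullptr );
        glCompileShader( shader );
        return shader;
    }

    GLuint buildProgram( const char *vertSrc, const char *fragSrc )
    {
        GLuint program = glCreateProgram();
        glAttachShader( program, compileStage(GL_VERTEX_SHADER, vertSrc) );
        glAttachShader( program, compileStage(GL_FRAGMENT_SHADER, fragSrc) );
        glLinkProgram( program );
        return program;
    }

    // Prior to rendering:
    // glUseProgram( buildProgram(vertSrc, fragSrc) );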

How it works…

The vertex shader does all of the work in this example. The diffuse reflection is computed in eye coordinates by first transforming the normal vector using the normal matrix, normalizing, and storing the result in tnorm. Note that the normalization here may not be necessary if your normal vectors are already normalized and the normal matrix does not do any scaling.

The normal matrix is typically the inverse transpose of the upper-left 3×3 portion of the model-view matrix. We use the inverse transpose because normal vectors transform differently than the vertex position. For a more thorough discussion of the normal matrix, and the reasons why, see any introductory computer graphics textbook. (A good choice would be Computer Graphics with OpenGL by Hearn and Baker.) If your model-view matrix does not include any non-uniform scalings, then you can use the upper-left 3×3 of the model-view matrix in place of the normal matrix to transform your normal vectors. However, if your model-view matrix does include (uniform) scalings, you’ll still need to (re)normalize your normal vectors after transforming them.
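If you are using a matrix library such as GLM (an assumption on my part; the recipe itself does not mandate one), the inverse transpose can be computed in a single line. The variable names here are illustrative.

    #include <glm/glm.hpp>

    // Normal matrix: inverse transpose of the upper-left 3x3 of the model-view matrix.
    // If the model-view matrix contains no non-uniform scaling, glm::mat3(modelView)
    // alone would also work (followed by re-normalization in the shader).
    glm::mat3 normalMatrix = glm::transpose( glm::inverse( glm::mat3(modelView) ) );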

The next step converts the vertex position to eye (camera) coordinates by transforming it via the model-view matrix. Then we compute the direction towards the light source by subtracting the vertex position from the light position and storing the result in s.

Next, we compute the scattered light intensity using the equation described above and store the result in the output variable LightIntensity. Note the use of the max function here. If the dot product is less than zero, then the angle between the normal vector and the light direction is greater than 90 degrees. This means that the incoming light is coming from inside the surface. Since such a situation is not physically possible (for a closed mesh), we use a value of 0.0. However, you may decide that you want to properly light both sides of your surface, in which case the normal vector needs to be reversed for those situations where the light is striking the back side of the surface (see Implementing two-sided shading).

Finally, we convert the vertex position to clip coordinates by multiplying it by the model-view-projection matrix (MVP = projection * modelview) and store the result in the built-in output variable gl_Position.

gl_Position = MVP * vec4(VertexPosition,1.0);


The subsequent stage of the OpenGL pipeline expects that the vertex position will be provided in clip coordinates in the output variable gl_Position. This variable does not directly correspond to any input variable in the fragment shader, but is used by the OpenGL pipeline in the primitive assembly, clipping, and rasterization stages that follow the vertex shader. It is important that we always provide a valid value for this variable.

Since LightIntensity is an output variable from the vertex shader, its value is interpolated across the face and passed into the fragment shader. The fragment shader then simply assigns the value to the output fragment.

There’s more…

Diffuse shading is a technique that models only a very limited range of surfaces. It is best used for surfaces that have a “matte” appearance. Additionally, with the technique used above, the dark areas may look a bit too dark. In fact, those areas that are not directly illuminated are completely black. In real scenes, there is typically some light that has been reflected about the room that brightens these surfaces. In the following recipes, we’ll look at ways to model more surface types, as well as provide some light for those dark parts of the surface.
