Using Image Processing Techniques



In most of the examples, we will use the following famous test image widely used to illustrate computer vision algorithms and techniques:

You can download Lenna's image from Wikipedia.

Transforming image contrast and brightness

In this recipe we will cover basic image color transformations using the Surface class for pixel manipulation.

How to do it…

We will create an application with a simple GUI for contrast and brightness manipulation of the sample image. Perform the following steps to do so:

  1. Include necessary headers:

    #include "cinder/gl/gl.h"
    #include "cinder/gl/Texture.h"
    #include "cinder/Surface.h"
    #include "cinder/ImageIo.h"
    #include "cinder/params/Params.h" // for the InterfaceGl window used in step 5

  2. Add properties to the main class:

    float mContrast, mContrastOld;
    float mBrightness, mBrightnessOld;
    Surface32f mImage, mImageOutput;
    params::InterfaceGl mParams; // used in step 5

  3. In the setup method an image is loaded for processing and the Surface object is prepared to store the processed image:

    mImage = loadImage( loadAsset("image.png") );
    mImageOutput = Surface32f( mImage.getWidth(), mImage.getHeight(), false );

  4. Set window size to default values:

    setWindowSize(1025, 512);
    mContrast = 0.f;
    mContrastOld = -1.f;
    mBrightness = 0.f;
    mBrightnessOld = -1.f;

  5. Add parameter controls to the InterfaceGl window:

    mParams.addParam("Contrast", &mContrast, "min=-0.5 max=1.0 step=0.01");
    mParams.addParam("Brightness", &mBrightness, "min=-0.5 max=0.5 step=0.01");

  6. Implement the update method as follows:

    if( mContrastOld != mContrast || mBrightnessOld != mBrightness ) {
      float c = 1.f + mContrast;
      Surface32f::Iter pixelIter = mImage.getIter();
      Surface32f::Iter pixelOutIter = mImageOutput.getIter();
      while( pixelIter.line() ) {
        pixelOutIter.line();
        while( pixelIter.pixel() ) {
          pixelOutIter.pixel();
          // contrast transformation
          pixelOutIter.r() = (pixelIter.r() - 0.5f) * c + 0.5f;
          pixelOutIter.g() = (pixelIter.g() - 0.5f) * c + 0.5f;
          pixelOutIter.b() = (pixelIter.b() - 0.5f) * c + 0.5f;
          // brightness transformation
          pixelOutIter.r() += mBrightness;
          pixelOutIter.g() += mBrightness;
          pixelOutIter.b() += mBrightness;
        }
      }
      mContrastOld = mContrast;
      mBrightnessOld = mBrightness;
    }

  7. Lastly, we will draw the original and processed images by adding the following lines of code inside the draw method:

    gl::draw(mImage);
    gl::draw(mImageOutput, Vec2f(512.f + 1.f, 0.f));

How it works…

The most important part is inside the update method. In step 6 we check whether the contrast or brightness parameters have changed; if they have, we iterate through all the pixels of the original image and store the recalculated color values in mImageOutput. While modifying the brightness is just a matter of increasing or decreasing each color component, calculating the contrast is a little more complicated. For each color component we apply the formula color = (color - 0.5) * contrast + 0.5, where contrast is a number between 0.5 and 2.0. In the GUI we set a value between -0.5 and 1.0, which is a more natural range; it is recalculated at the beginning of step 6 as c = 1.f + mContrast. While processing the image we have to change the color value of every pixel, so later in step 6 we iterate through the consecutive pixels of each row using two nested while loops: invoking the line method on a Surface iterator moves to the next row, and the pixel method moves to the next pixel within the current row. This approach is much faster than using, for example, the getPixel and setPixel methods.

Our application is rendering the original image on the left-hand side and the processed image on the right-hand side, so you can compare the results of color adjustment.

Integrating with OpenCV

OpenCV is a very powerful open-source library for computer vision. The library is written in C++ so it can be easily integrated into your Cinder application. There is a very useful OpenCV Cinder block provided within the Cinder package, available at the GitHub repository.

Getting ready

Make sure you have Xcode up and running with a Cinder project opened.

How to do it…

We will add the OpenCV Cinder block to your project, which also illustrates the usual way of adding any other Cinder block. Perform the following steps to do so:

  1. Add a new group to our Xcode project root and name it Blocks. Next, drag the opencv folder inside the Blocks group. Be sure to select the Create groups for any added folders radio button, as shown in the following screenshot:

  2. You will need only the include folder inside the opencv folder in your project structure, so delete any reference to others. The final project structure should look like the following screenshot:

  3. Add the paths to the OpenCV library files in the Other Linker Flags section of your project’s build settings, for example:

    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_imgproc.a
    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_core.a
    $(CINDER_PATH)/blocks/opencv/lib/macosx/libopencv_objdetect.a

    These paths are shown in the following screenshot:

  4. Add the paths to the OpenCV Cinder block headers you are going to use in the User Header Search Paths section of your project’s build settings:


    This path is shown in the following screenshot:

  5. Include OpenCV Cinder block header file:

    #include "CinderOpenCV.h"

How it works…

The OpenCV Cinder block provides the toOcv and fromOcv functions for data exchange between Cinder and OpenCV. After setting up your project you can use them, as shown in the following short example:

Surface mImage, mImageOutput;
mImage = loadImage( loadAsset("image.png") );
cv::Mat ocvImage( toOcv( mImage ) );
cv::cvtColor( ocvImage, ocvImage, CV_BGR2GRAY );
mImageOutput = Surface( fromOcv( ocvImage ) );

You can use the toOcv and fromOcv functions to convert between Cinder and OpenCV types; image data stored in types such as Surface or Channel is handled through the ImageSourceRef type, and several other types are supported as well, as shown in the following table:

Cinder types

OpenCV types

In this example we are linking against the following three files from the OpenCV package:

  • libopencv_imgproc.a: This image processing module includes image manipulation functions, filters, feature detection, and more
  • libopencv_core.a: This module provides core functionality and data structures
  • libopencv_objdetect.a: This module has object detection tools such as cascade classifiers

You can find documentation on all OpenCV modules on the official OpenCV website.

There’s more…

Some features are not available in the precompiled OpenCV libraries packaged with the OpenCV Cinder block, but you can always compile your own OpenCV libraries and still use the exchange functions from the OpenCV Cinder block in your project.

Detecting edges

In this recipe, we will demonstrate how to use the edge detection function, one of the image processing functions implemented directly in Cinder.

Getting ready

Make sure you have Xcode up and running with an empty Cinder project opened. We will need a sample image to proceed, so save it in your assets folder as image.png.

How to do it…

We will process the sample image with the edge detection function. Perform the following steps to do so:

  1. Include necessary headers:

    #include "cinder/gl/Texture.h"
    #include "cinder/Surface.h"
    #include "cinder/ImageIo.h"
    #include "cinder/ip/EdgeDetect.h"
    #include "cinder/ip/Grayscale.h"

  2. Add two properties to your main class:

    Surface8u mImage, mImageOutput;

  3. Load the source image and set up Surface for processed images inside the setup method:

    mImage = loadImage( loadAsset("image.png") );
    mImageOutput = Surface8u( mImage.getWidth(), mImage.getHeight(), false );

  4. Use image processing functions:

    ip::grayscale(mImage, &mImage);
    ip::edgeDetectSobel(mImage, &mImageOutput);

  5. Inside the draw method add the following two lines of code for drawing images:

    gl::draw(mImage);
    gl::draw(mImageOutput, Vec2f(512.f + 1.f, 0.f));

How it works…

As you can see, detecting edges in Cinder is pretty easy because basic image processing functions are implemented directly in Cinder, so you don't have to include any third-party libraries. In this case we use the grayscale function to convert the original image's color space to grayscale. This is a common first step in image processing because many algorithms work more efficiently on grayscale images, or are even designed to work only with grayscale source images. The edge detection is implemented by the edgeDetectSobel function, which uses the Sobel algorithm. In this case, the first parameter is the original grayscale source image and the second parameter is the output Surface object in which the result will be stored.

Inside the draw method we are drawing both images, as shown in the following screenshot:

There’s more…

You may find the image processing functions implemented in Cinder insufficient, in which case you can include a third-party library such as OpenCV in your project. We explained how to use Cinder and OpenCV together in the preceding recipe, Integrating with OpenCV.

Other useful functions in the context of edge detection are Canny and findContours. The following is an example of how they can be used:

vector<vector<cv::Point> > contours;
cv::Mat inputMat( toOcv( frame ) );
cv::cvtColor( inputMat, inputMat, CV_BGR2GRAY );
// blur
cv::Mat blurMat;
cv::medianBlur( inputMat, blurMat, 11 );
// threshold
cv::Mat thresholdMat;
cv::threshold( blurMat, thresholdMat, 50, 255, CV_THRESH_BINARY );
// erode (with the default 3x3 structuring element)
cv::Mat erodeMat;
cv::erode( thresholdMat, erodeMat, cv::Mat() );
// detect edges
cv::Mat cannyMat;
int thresh = 100;
cv::Canny( erodeMat, cannyMat, thresh, thresh * 2, 3 );
// find contours
cv::findContours( cannyMat, contours, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE );

After executing the preceding code, the points that form the contours are stored in the contours variable.

