
In the world of computer vision, image filtering is used to modify images. These modifications essentially allow you to clarify an image in order to get the information you want. This could involve anything from extracting edges from an image, blurring it, or removing unwanted objects. 

There are, of course, lots of reasons why you might want to use image filtering to modify an image. For example, taking a picture in bright sunlight or in darkness will affect the image's clarity, and you can use image filters to modify the image to get what you want from it. Similarly, you might have a blurred or 'noisy' image that needs clarification and focus. Let's use an example to see how to do image filtering in OpenCV.

This image filtering tutorial is an extract from Practical Computer Vision.

Here’s an example with considerable salt and pepper noise. This occurs when there is a disturbance in the quality of the signal that’s used to generate the image.

salt and pepper noise

The image above can be easily generated using OpenCV as follows:

import cv2
import numpy as np

# initialize noise image with zeros
noise = np.zeros((400, 600))
# fill the image with random numbers in given range
cv2.randu(noise, 0, 256)

Let’s add weighted noise to a grayscale image (on the left) so the resulting image will look like the one on the right:

greyscale image

The code for this is as follows:

# add noise to the existing grayscale image
# (cv2.add saturates at 255 instead of wrapping around)
noisy_gray = cv2.add(gray, np.array(0.2*noise, dtype=np.uint8))

Here, 0.2 is used as the weighting parameter; increase or decrease this value to create noise of different intensity.
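
If you want to compare intensities directly, here is a minimal sketch (assuming the same ../figures/flower.png figure used later in this tutorial) that generates noisy versions at a few different weights; cv2.add is used so that values saturate at 255 instead of wrapping around:

import cv2
import numpy as np

# read a grayscale image and create a matching uniform-noise image
gray = cv2.imread('../figures/flower.png', cv2.IMREAD_GRAYSCALE)
noise = np.zeros(gray.shape)
cv2.randu(noise, 0, 256)

# larger weights produce stronger noise; 0.1, 0.2 and 0.5 are arbitrary examples
noisy_versions = [cv2.add(gray, (w * noise).astype(np.uint8)) for w in (0.1, 0.2, 0.5)]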

In several applications, noise plays an important role in improving a system's capabilities. This is particularly true when you're using deep learning models: noise becomes a way of testing the robustness of the deep learning application and of building that robustness into the computer vision algorithm.

Linear image filtering

The simplest filter is a point operator. Each pixel value is multiplied by a scalar value. This operation can be written as follows:

g(i, j) = K × f(i, j)

Here:

  • The input image is F and the value of pixel at (i,j) is denoted as f(i,j)
  • The output image is G and the value of pixel at (i,j) is denoted as g(i,j)
  • K is a scalar constant

This type of operation on an image is what is known as a linear filter. In addition to multiplication by a scalar value, each pixel can also be increased or decreased by a constant value. So the overall point operation can be written like this:

g(i, j) = K × f(i, j) + L

This operation can be applied both to grayscale images and to RGB images. For RGB images, each channel is modified with this operation separately. The following is the result of varying both K and L. The first image, on the left, is the input. In the second image, K=0.5 and L=0.0, while in the third image, K is set to 1.0 and L is 10. For the final image on the right, K=0.7 and L=25. As you can see, varying K changes the contrast of the image and varying L changes its brightness:

greyscale image

This image can be generated with the following code:

import numpy as np
import matplotlib.pyplot as plt
import cv2

def point_operation(img, K, L):
    """
    Applies point operation to given grayscale image
    """
    img = np.asarray(img, dtype=float)
    img = img * K + L
    # clip pixel values to the valid 0-255 range
    img[img > 255] = 255
    img[img < 0] = 0
    return np.asarray(img, dtype=np.uint8)

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # K = 0.5, L = 0
    out1 = point_operation(gray, 0.5, 0)
    # K = 1.0, L = 10
    out2 = point_operation(gray, 1., 10)
    # K = 0.7, L = 25
    out3 = point_operation(gray, 0.7, 25)
    res = np.hstack([gray, out1, out2, out3])
    plt.imshow(res, cmap='gray')
    plt.axis('off')
    plt.show()

if __name__ == '__main__':
    main()
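
Since point_operation works element-wise, the same function can also be applied directly to the 3-channel BGR image, which is equivalent to modifying each channel separately as described above:

img = cv2.imread('../figures/flower.png')
# the same gain (K) and bias (L) are applied to every channel
out_color = point_operation(img, 0.7, 25)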

2D linear image filtering

While the preceding filter is a point-based filter, image pixels also carry information from their neighborhood. In the previous image of the flower, the pixel values in the petal are all yellow: if we choose a petal pixel and move around it, the values will be quite close. This gives us more information about the image. To extract this information during filtering, there are several neighborhood filters.

In neighborhood filters, there is a kernel matrix which captures local region information around a pixel. To explain these filters, let’s start with an input image, as follows:

2D linear filters

This is a simple binary image of the number 2. To get certain information from this image, we could use all the pixel values directly. Instead, to simplify, we can apply filters to it. We define a matrix smaller than the given image which operates in the neighborhood of a target pixel. This matrix is termed a kernel; an example is given as follows:

binary image

The operation is defined by first superimposing the kernel matrix on the original image, then taking the product of the corresponding pixels, and returning the summation of all the products. In the following figure, the lower 3 x 3 area in the original image is superimposed with the given kernel matrix, and the corresponding pixel values from the kernel and the image are multiplied. The resulting image is shown on the right and is the summation of all the pixel products:

summation of pixel products

This operation is repeated by sliding the kernel along the image rows and then the image columns. It can be implemented as in the following code; we will see the effects of applying this to an image in the coming sections.

# design a kernel matrix, here is uniform 5×5
kernel = np.ones((5, 5), np.float32) / 25
# apply on the input image, here grayscale input
dst = cv2.filter2D(gray, -1, kernel)
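
To make the sliding-window operation concrete, here is a naive NumPy sketch of what filter2D computes at each position. It ignores border handling (so the output is slightly smaller than the input) and is for illustration only, since cv2.filter2D is far faster:

import numpy as np

def naive_filter2d(image, kernel):
    # slide the kernel over the image and sum the element-wise products
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out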

However, near the corners and edges the kernel partially falls outside the image region, so the output there is not well defined. Computing only the fully overlapping positions results in a smaller image; otherwise, a black region, or holes, appears along the boundary of the image. To rectify this, there are some common techniques used:

  • Padding the borders with a constant value, such as 0 or 255 (BORDER_CONSTANT in OpenCV)
  • Mirroring the pixels along the edge into the external area (a reflected border is what OpenCV's filtering functions use by default)
  • Creating a repeating pattern of pixels around the image

The choice among these depends on the task at hand; in most cases, padding generates satisfactory results, as sketched below.
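
As a quick illustration of these border choices, the following sketch uses cv2.copyMakeBorder to visualize the padded borders on the flower image; the 10-pixel border width is an arbitrary choice, and the same kind of choice can be passed to cv2.filter2D through its borderType argument:

import cv2

img = cv2.imread('../figures/flower.png')
pad = 10  # border width in pixels, arbitrary for illustration

constant = cv2.copyMakeBorder(img, pad, pad, pad, pad, cv2.BORDER_CONSTANT, value=0)
mirrored = cv2.copyMakeBorder(img, pad, pad, pad, pad, cv2.BORDER_REFLECT)
wrapped = cv2.copyMakeBorder(img, pad, pad, pad, pad, cv2.BORDER_WRAP)

# the same choice can be passed to filter2D, for example:
# dst = cv2.filter2D(img, -1, kernel, borderType=cv2.BORDER_REFLECT)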

The choice of kernel values is the most crucial factor, since changing them changes the output significantly. We will first look at simple kernel-based filters and see how changing the kernel size affects the output.

Box filtering

This filter averages out the pixel values in its neighborhood: the kernel is a matrix of ones divided by the number of elements, so for a 5×5 kernel each entry is 1/25. The kernel matrix is denoted as follows:

kernel matrix

Applying this filter results in blurring the image. The results are shown as follows:

Box filter

In frequency domain analysis of the image, this filter is a low-pass filter. Frequency domain analysis is done using the Fourier transformation of the image, which is beyond the scope of this introduction. We can see that, on changing the kernel size, the image gets more and more blurred:

Box filter

As we increase the size of the kernel, you can see that the resulting image gets more blurred. This is due to the averaging out of peak values in the small neighbourhood where the kernel is applied. The result of applying a kernel of size 20×20 can be seen in the following image.

Box filter

However, if we use a very small filter of size (3,3), there is a negligible effect on the output, because the kernel size is quite small compared to the size of the photo. In most applications, the kernel size is set heuristically according to the image size:

Box filter

The complete code to generate box filtered photos is as follows:

import cv2
import matplotlib.pyplot as plt

def plot_cv_img(input_image, output_image):
    """
    Converts an image from BGR to RGB and plots
    """
    fig, ax = plt.subplots(nrows=1, ncols=2)
    ax[0].imshow(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB))
    ax[0].set_title('Input Image')
    ax[0].axis('off')
    ax[1].imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))
    ax[1].set_title('Box Filter (5,5)')
    ax[1].axis('off')
    plt.show()

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    # to try a different kernel, change the size here
    kernel_size = (5, 5)
    # opencv has an implementation for kernel-based box blurring
    blur = cv2.blur(img, kernel_size)
    # do plot
    plot_cv_img(img, blur)

if __name__ == '__main__':
    main()
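
To reproduce the comparison across kernel sizes discussed above, a small variation of main() can simply loop over several sizes; the sizes below are arbitrary examples:

import cv2

img = cv2.imread('../figures/flower.png')
# blur with progressively larger box kernels; larger kernels give stronger blurring
blurred = {k: cv2.blur(img, (k, k)) for k in (3, 5, 11, 21)}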

Properties of linear filters

Several computer vision applications are composed of step-by-step transformations of an input photo into an output. This is made easy by several properties associated with a common type of filter, the linear filter:

  • Linear filters are commutative: we can convolve filters in any order and the result remains the same:

a * b = b * a

  • They are associative in nature, which means the grouping of the filtering operations does not affect the final outcome:

(a * b) * c = a * (b * c)

  • They are distributive over addition: we can sum two filters and then apply the combination, or apply each filter individually and then sum the results; the overall outcome remains the same:

a * (b + c) = a * b + a * c

  • Applying a scaling factor to one filter before convolving it with another is equivalent to convolving the two filters first and then applying the scaling factor:

k(a * b) = (ka) * b = a * (kb)

These properties play a significant role in other computer vision tasks such as object detection and segmentation. A suitable combination of these filters enhances the quality of information extraction and as a result, improves the accuracy.
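
As a quick numerical sanity check of these properties, the following sketch uses SciPy's convolve2d (full convolution, so there are no border effects) on an arbitrary random image and two small kernels:

import numpy as np
from scipy.signal import convolve2d

img = np.random.rand(32, 32)        # arbitrary test image
a = np.ones((3, 3)) / 9.0           # box kernel
b = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]]) / 16.0    # Gaussian-like kernel

# commutative: a * b equals b * a
print(np.allclose(convolve2d(a, b), convolve2d(b, a)))
# associative: (img * a) * b equals img * (a * b)
print(np.allclose(convolve2d(convolve2d(img, a), b),
                  convolve2d(img, convolve2d(a, b))))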

Non-linear image filtering

While in many cases linear filters are sufficient to get the required results, in several other use cases performance can be significantly increased by using non-linear image filtering. Non-linear image filtering is more complex than linear filtering, but this complexity can give you more control and better results in your computer vision tasks.

Let’s take a look at how non-linear image filtering works when applied to different images.

Smoothing a photo

Applying a box filter with hard edges doesn't result in a smooth blur on the output photo. To improve this, the filter can be made smoother around the edges. One of the most popular such filters is the Gaussian filter (note that, since it is applied by convolution, the Gaussian filter is in fact a linear filter). Its kernel enhances the effect of the center pixel and gradually reduces the effect as pixels get farther from the center. Mathematically, a Gaussian function is given as:

G(x) = (1 / (σ√(2π))) × exp(−(x − μ)² / (2σ²))

where μ is the mean and σ² is the variance.

An example kernel matrix for this kind of filter in the 2D discrete domain is given as follows:

2D discrete domain

This 2D array is used in normalized form, and the effect of the filter also depends on its width: changing the kernel width has varying effects on the output, as discussed in the following sections. Applying a Gaussian kernel as a filter removes high-frequency components, which results in the removal of strong edges and hence a blurred photo:

Gaussian blurred

While this filter performs better blurring than a box filter, the implementation is also quite simple with OpenCV:

import cv2
import matplotlib.pyplot as plt

def plot_cv_img(input_image, output_image):
    """
    Converts an image from BGR to RGB and plots
    """
    fig, ax = plt.subplots(nrows=1, ncols=2)
    ax[0].imshow(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB))
    ax[0].set_title('Input Image')
    ax[0].axis('off')
    ax[1].imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))
    ax[1].set_title('Gaussian Blurred')
    ax[1].axis('off')
    plt.show()

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    # apply gaussian blur with a kernel of size 5x5;
    # change here for other sizes
    kernel_size = (5, 5)
    # sigmaX of 0 lets OpenCV compute sigma from the kernel size,
    # and the same sigma is used in both directions
    blur = cv2.GaussianBlur(img, kernel_size, 0)
    plot_cv_img(img, blur)

if __name__ == '__main__':
    main()
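
To inspect the Gaussian kernel itself and see how its shape changes with sigma, cv2.getGaussianKernel returns the 1D normalized coefficients, and the 2D kernel is their outer product; the sizes and sigma values below are arbitrary examples:

import cv2

for sigma in (0.5, 1.0, 2.0):
    g = cv2.getGaussianKernel(5, sigma)   # 5x1 column of normalized coefficients
    kernel_2d = g @ g.T                   # separable, so the 2D kernel is the outer product
    print(sigma, kernel_2d.sum())         # sums to 1.0, i.e. the kernel is normalized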

The histogram equalization technique

The basic point operations for changing brightness and contrast help improve photo quality but require manual tuning. Using the histogram equalization technique, suitable values can be found algorithmically, creating a better-looking photo. Intuitively, this method tries to set the brightest pixels to white and the darkest pixels to black, while the remaining pixel values are rescaled accordingly. This rescaling is performed by transforming the original intensity distribution so that it covers the full range of intensities. An example of this equalization is as follows:

histogram equalization

The preceding image is an example of histogram equalization. On the right is the output and, as you can see, the contrast is increased significantly. The input histogram is shown in the bottom-left figure, and it can be observed that not all intensities are present in the image. After applying equalization, the resulting histogram is as shown in the bottom-right figure. To visualize the results of equalization on the image itself, the input and the result are stacked together in the following figure.

Histogram equalized

Code for the preceding photos is as follows:

import cv2
import matplotlib.pyplot as plt

def plot_gray(input_image, output_image):
    """
    Plots grayscale input and output images side by side
    """
    fig, ax = plt.subplots(nrows=1, ncols=2)
    ax[0].imshow(input_image, cmap='gray')
    ax[0].set_title('Input Image')
    ax[0].axis('off')
    ax[1].imshow(output_image, cmap='gray')
    ax[1].set_title('Histogram Equalized')
    ax[1].axis('off')
    plt.savefig('../figures/03_histogram_equalized.png')
    plt.show()

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    # grayscale image is used for equalization
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # the following function performs equalization on the input image
    equ = cv2.equalizeHist(gray)
    # visualize input and output side by side
    plot_gray(gray, equ)

if __name__ == '__main__':
    main()
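
Under the hood, cv2.equalizeHist maps each intensity through the normalized cumulative histogram. A rough NumPy sketch of that mapping for an 8-bit grayscale image looks like the following (for illustration only; OpenCV's implementation differs in details such as how it handles the lowest occupied bin):

import numpy as np

def equalize_sketch(gray):
    # histogram of the 8-bit intensities
    hist, _ = np.histogram(gray.flatten(), bins=256, range=(0, 256))
    # cumulative distribution, rescaled to the 0-255 range
    cdf = hist.cumsum()
    cdf = 255.0 * cdf / cdf[-1]
    # map every pixel through the rescaled CDF
    equalized = np.interp(gray.flatten(), np.arange(256), cdf)
    return equalized.reshape(gray.shape).astype(np.uint8)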

Median image filtering

Median image filtering is a technique similar to neighborhood filtering. The key element here, of course, is the use of a median value, which makes the filter non-linear. It is quite useful in removing sharp noise such as salt and pepper noise.

Instead of using a product or sum of neighborhood pixel values, this filter computes the median value of the region. This results in the removal of random peak values in the region, which can be caused by noise such as salt and pepper noise. This is shown further in the following figure, where different kernel sizes are used to create the output.
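
To see why the median suppresses such spikes, consider a tiny hypothetical 3×3 neighborhood containing a single 'salt' pixel: the mean is pulled up by the outlier, while the median is unaffected:

import numpy as np

# a hypothetical 3x3 neighborhood with one salt (255) spike
patch = np.array([[12, 14, 13],
                  [11, 255, 12],
                  [13, 12, 14]])

print(np.mean(patch))    # about 39.6, pulled up by the spike
print(np.median(patch))  # 13.0, unaffected by the spike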

In this example, the input is first combined with channel-wise random noise as follows:

# read the image
flower = cv2.imread('../figures/flower.png')
# initialize noise image with zeros
noise = np.zeros(flower.shape[:2])
# fill the image with random numbers in given range
cv2.randu(noise, 0, 256)
# add noise to the existing image, applied channel-wise
noise_factor = 0.1
noisy_flower = np.zeros(flower.shape)
for i in range(flower.shape[2]):
    noisy_flower[:, :, i] = flower[:, :, i] + np.array(noise_factor * noise, dtype=int)
# convert data type for use
noisy_flower = np.asarray(noisy_flower, dtype=np.uint8)

The noisy image created is then used for median image filtering as follows:

# apply median filter of kernel size 5
kernel_5 = 5
median_5 = cv2.medianBlur(noisy_flower, kernel_5)
# apply median filter of kernel size 3
kernel_3 = 3
median_3 = cv2.medianBlur(noisy_flower, kernel_3)

In the following figure, you can see the resulting photos after varying the kernel size (indicated in brackets). The rightmost photo is the smoothest of them all:

Median filter

A common application for median blur is in smartphone apps that filter the input image and add artifacts to create artistic effects.

The code to generate the preceding photograph is as follows:

import cv2
import matplotlib.pyplot as plt

def plot_cv_img(input_image, output_image1, output_image2, output_image3):
    """
    Converts images from BGR to RGB and plots them side by side
    """
    fig, ax = plt.subplots(nrows=1, ncols=4)
    ax[0].imshow(cv2.cvtColor(input_image, cv2.COLOR_BGR2RGB))
    ax[0].set_title('Input Image')
    ax[0].axis('off')
    ax[1].imshow(cv2.cvtColor(output_image1, cv2.COLOR_BGR2RGB))
    ax[1].set_title('Median Filter (3,3)')
    ax[1].axis('off')
    ax[2].imshow(cv2.cvtColor(output_image2, cv2.COLOR_BGR2RGB))
    ax[2].set_title('Median Filter (5,5)')
    ax[2].axis('off')
    ax[3].imshow(cv2.cvtColor(output_image3, cv2.COLOR_BGR2RGB))
    ax[3].set_title('Median Filter (7,7)')
    ax[3].axis('off')
    plt.show()

def main():
    # read an image
    img = cv2.imread('../figures/flower.png')
    # compute median filtered images with varying kernel size
    median1 = cv2.medianBlur(img, 3)
    median2 = cv2.medianBlur(img, 5)
    median3 = cv2.medianBlur(img, 7)
    # do plot
    plot_cv_img(img, median1, median2, median3)

if __name__ == '__main__':
    main()

Image filtering and image gradients

Image gradients capture edges, or sharp changes, in a photograph, and they are widely used in object detection and segmentation tasks. In this section, we will look at how to compute image gradients. An image derivative is computed by applying a kernel matrix that measures the change in a given direction.

The Sobel filter is one such filter, and its kernel in the x-direction is given as follows:

[ -1  0  +1 ]
[ -2  0  +2 ]
[ -1  0  +1 ]

Here, in the y-direction:

[ -1  -2  -1 ]
[  0   0   0 ]
[ +1  +2  +1 ]

The kernel is applied in a similar fashion to the linear box filter, by computing values on a kernel superimposed on the photo. The filter is then shifted along the image to compute all the values. The following are some example results, where X and Y denote the direction of the Sobel kernel:

Sobel kernel

This is also termed an image derivative with respect to a given direction (here X or Y). In the resulting photographs (middle and right), lighter regions denote positive gradients, darker regions denote negative gradients, and gray denotes zero.

While Sobel filters correspond to first-order derivatives of a photo, the Laplacian filter gives the second-order derivative of a photo. The Laplacian filter is applied in a similar way to Sobel:

Laplacian filter

The code to get Sobel and Laplacian filters is as follows:

# sobel
x_sobel = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5)
y_sobel = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=5)
# laplacian
lapl = cv2.Laplacian(img, cv2.CV_64F, ksize=5)
# gaussian blur
blur = cv2.GaussianBlur(img, (5, 5), 0)
# laplacian of gaussian
log = cv2.Laplacian(blur, cv2.CV_64F, ksize=5)
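
A common follow-up, not part of the snippet above, is to combine the two Sobel derivatives into a gradient magnitude, which highlights edges in any direction. Here is a minimal sketch, assuming the same flower figure used throughout and an example output filename:

import cv2
import numpy as np

# read the input as grayscale
img = cv2.imread('../figures/flower.png', cv2.IMREAD_GRAYSCALE)

# first-order derivatives in x and y
x_sobel = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=5)
y_sobel = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=5)

# gradient magnitude: large values indicate strong edges in any direction
magnitude = np.sqrt(x_sobel ** 2 + y_sobel ** 2)

# normalize to 0-255 for display and save
magnitude = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite('../figures/flower_gradient_magnitude.png', magnitude)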

We have learnt about types of filters and how to perform image filtering in OpenCV. To learn more about image transformations and 3D computer vision, check out the book Practical Computer Vision.

For more, check out:

Fingerprint detection using OpenCV 3

3 ways to deploy a QT and OpenCV application

OpenCV 4.0 is on schedule for July release

Practical Computer Vision

 
