
An introduction to image filtering

Morphological operations and edge detection are actually types of image filtering, even though we used them as black boxes, without really looking under the hood. Hopefully, that head start will get you accustomed to the details of image filtering a little faster.

First of all, let’s give a general definition of image filtering: it is the process of modifying the values of an image’s pixels using a function that is typically applied on a local neighborhood of each pixel. In many situations, applying the function on a neighborhood involves a special operation, called convolution, with an operand called a kernel. In this sense, you have already applied such a process in the case of erosion or dilation, and even in the case of edge detection. The former processes used the strel function to create a kernel, while the latter used a kernel based on your choice of edge detection method. But let’s not get ahead of ourselves. We will take things one step at a time, starting by explaining neighborhood processing.

Processing neighborhoods of pixels

In the previous paragraph, we mentioned that the filtering process typically takes place on a specific neighborhood of pixels. When this neighborhood process is applied to all pixels, it is called a sliding neighborhood operation. In it, we slide a rectangular neighborhood window through all possible positions in the image and modify the central pixel of the window using a function of the pixels in the neighborhood.
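To make this concrete, MATLAB’s Image Processing Toolbox provides the nlfilter function for exactly this kind of sliding neighborhood operation. As a minimal sketch (assuming a grayscale image img is already in your workspace), the following replaces every pixel with the rounded mean of its 3×3 neighborhood:

>> img = double(img);  % Work in double precision
>> result = nlfilter(img, [3 3], @(block) round(mean(block(:)))); % Slide a 3x3 window

Note that nlfilter pads the image borders with zeros, a choice we will discuss shortly.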

Let’s see how this is done, using a numeric example. We’ll start with something simple: a linear filtering process, namely averaging. Suppose that we have a small image, sized 8×8 pixels, and we want to modify its pixel values so that each one is assigned the rounded average of the pixel values in its 3×3 neighborhood.

Let’s explain what happens in the step shown in the following image, in which the central pixel of the highlighted 3×3 neighborhood (in the fourth row and sixth column) is replaced by the average value of all the pixels in the neighborhood (rounded to the nearest integer):

Let the image be called I. The result for pixel I(4,6) will be:

I(4,6) = round( (I(3,5) + I(3,6) + I(3,7) + I(4,5) + I(4,6) + I(4,7) + I(5,5) + I(5,6) + I(5,7)) / 9 )

Substituting the values of the pixels, we can calculate the average value:

I(4,6) = round( (134 + 132 + 134 + 121 + 115 + 107 + 119 + 113 + 113) / 9 ) = round(1088 / 9) = round(120.89)

Hence, the value of the central pixel of the neighborhood becomes 121 (the closest integer to 120.89).
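If you want to double-check this in MATLAB (assuming the 8×8 matrix I from the figure is in your workspace; it is the same matrix we will create as original a little later), the mean2 function computes the mean of all elements of a matrix:

>> round(mean2(I(3:5,5:7))) % Rows 3 to 5, columns 5 to 7: the 3x3 neighborhood of I(4,6)

This returns 121, as expected.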

By repeating the process described previously for all the pixels of the image, we get a result commonly known as mean filtering or average filtering. The final result of the entire process is shown in the following figure:

You may be wondering now: the choice of neighborhood in the example was very convenient, but what happens when we want to change the value of a pixel on the border of the image, say pixel I(1,4)? Why was it set to 77, as shown in the image?

This is indeed a valid and natural question, and you are very intuitive if you have already thought about it. The answer is that, when you want your resulting image to be the same size as your original, the way to tackle this problem is to include only the neighboring pixels that actually exist in your calculations. However, since in our example the calculation to be performed is an average over the neighborhood, the denominator will still be 9; hence, it is as if we pad the missing part of the neighborhood with zeros. Let’s demonstrate this with an example as well:

As shown in the previous image, the central pixel value is evaluated as follows:

I(1,4) = round( (I(0,3) + I(0,4) + I(0,5) + I(1,3) + I(1,4) + I(1,5) + I(2,3) + I(2,4) + I(2,5)) / 9 )

Of course, since there is no 0th row, the first three operands of the addition are non-existent, hence set to zero:

I(1,4) = round( (0 + 0 + 0 + 101 + 107 + 115 + 122 + 120 + 129) / 9 ) = round(694 / 9) = round(77.11)

Therefore, the result of the averaging process for the aforementioned neighborhood is 77 (as shown in the image). This approach is not the only one available for the image borders. We could assign the maximum possible value (255 in our example) to the non-existent pixels, or assign them the mean value of the rest of the neighborhood, and so on. The choice we make affects the quality of the borders of the image, as we will see on real pictures later on.
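Alternative border strategies are easy to experiment with using the padarray function, which pads a matrix before filtering; here is a sketch, assuming the 8×8 matrix I from our example:

>> padded_zero = padarray(I,[1 1],0);           % Zero padding, as described above
>> padded_max  = padarray(I,[1 1],255);         % Pad with the maximum possible value
>> padded_rep  = padarray(I,[1 1],'replicate'); % Repeat the border pixels outwards

Averaging a padded version and keeping only its central part would then yield an 8×8 result whose borders follow the chosen strategy.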

The basics of convolution

The process described previously was performed on overlapping neighborhoods of the image, but no use of a kernel was mentioned. So, what is this all about? And how does convolution fit into this framework? Well, the truth is that the process described previously captures the essence of convolution: passing a kernel over all possible equally sized neighborhoods of the image and using it to modify the value of the central pixel. The only problem in our case is that we did not use a specific kernel in the process described. Or did we? Let’s try to find out, using MATLAB code to perform two-dimensional convolution.

The 3×3 neighborhood we used for the described process can be replaced by a 3×3 kernel, as long as the final result remains the same. The kernel that accomplishes this effect is a 3×3 matrix with all elements set to 1/9. Convolving this kernel with the original image produces the same result as the aforementioned example. To demonstrate the process, we can use the two-dimensional convolution MATLAB function conv2 as follows:

>> original = [132 101 101 107 115 121 110  92;
               120 124 122 120 129 123 121 129;
               134 146 144 134 134 132 134 138;
               143 147 136 121 121 115 107 107;
               145 147 138 129 119 113 113 122;
               162 155 152 149 142 129 118 122;
               127 122 115 113 117 102  95  94;
                67  74  78  80  89  89 107 109]; % Create the original image
>> kernel = ones(3,3)*(1/9);                      % Create the averaging kernel
>> conv_result = conv2(original, kernel, 'same'); % Perform convolution
>> final_result = round(conv_result)              % Round the result

The final result obtained is as follows:

final_result =

    53    78    75    77    79    80    77    50
    84   125   122   123   124   124   122    80
    90   135   133   129   125   124   123    82
    96   142   138   131   124   121   120    80
   100   147   142   134   126   120   116    77
    95   140   136   130   124   116   112    74
    79   117   115   115   112   110   107    72
    43    65    65    66    66    67    66    45

As expected, the result is the same as the one calculated using the analytical process described before. The convolution kernel has done its job. In our process, we used an 8×8 original image and a 3×3 kernel with all values equal to 1/9 (which is what you get when you create a 3×3 matrix of ones and multiply it by 1/9, as we did), and finally instructed the conv2 function to produce the result using the zero-padding process described earlier for the borders, hence calculating a result with the same dimensions as the original.

But how did it do it? What exactly is convolution? Now it is time to fully understand it. First, you must get acquainted with its mathematical definition. Since learning math is not the purpose of this book, we will give you just the basics, so that you get an idea of what this operation is all about, as it is invaluable for image filtering.

The ugly mathematical truth

Let’s start with the mathematical definition of convolution for discrete functions (since in digital image processing all functions are discrete). To pose our problem in a signal processing sense, we can define it as passing an input image I through a Linear Space Invariant (LSI) system that performs convolution with a kernel h (also called a filter) to produce an output image g. Hence, we get the following block diagram:

This process is described mathematically by the following equation, which is the definition of two-dimensional discrete convolution:

g(x, y) = I(x, y) * h(x, y) = Σm Σn I(m, n) h(x − m, y − n)

where * is the symbol for convolution and the large Σ denotes a sum. The reason we have two sums is that our process is two-dimensional. Without going into too much detail, we can summarize the process described previously using the following steps, which are also followed in the implementation of conv2:

  1. Rotate the convolution kernel by 180 degrees, to match the minus signs in the indices of the double sum of the equation.
  2. Determine the central pixel of the neighborhood. This is straightforward when the neighborhood has an odd number of rows and columns, but must be based on some rule if either of the dimensions is even.
  3. Apply the rotated kernel to the neighborhood of each pixel of the input image. This means multiplying each element of the rotated kernel by the corresponding pixel of the neighborhood being processed and summing the products; the result can be thought of as a weighted sum of the neighborhood pixels (see the sketch right after this list).
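To make these steps tangible, here is a minimal sketch of the computation for a single output pixel, where I is the image, h is a 3×3 kernel, and (r,c) are the coordinates of a non-border pixel (this is only an illustration of the weighted sum, not the actual implementation of conv2):

>> h_rot = rot90(h,2);                    % Step 1: rotate the kernel by 180 degrees
>> neighborhood = I(r-1:r+1,c-1:c+1);     % Step 2: the 3x3 neighborhood centered at (r,c)
>> g_rc = sum(sum(h_rot .* neighborhood)) % Step 3: weighted sum of the neighborhood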

The result of conv2 can be any of the following three choices:

  • full: Larger than the original image, taking into account every position at which the kernel overlaps the image, even if the kernel’s center falls outside the image. This is the default choice for the function.
  • same: Same size as the original image, using zero padding to calculate the border pixel values.
  • valid: Smaller than the original image, using only the pixels whose neighborhoods fall entirely inside the image.

This means that when you want to produce a convolution result with the same size as the original image, you will have to use same as an input, as we did in our previous example.
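You can verify the three sizes directly on our 8×8 example, using the original and kernel variables from the earlier snippet:

>> size(conv2(original,kernel))         % 'full': returns [10 10]
>> size(conv2(original,kernel,'same'))  % 'same': returns [8 8]
>> size(conv2(original,kernel,'valid')) % 'valid': returns [6 6]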

By now, those of you who are not very much into math may be tempted to stop reading. So, let’s drop the mathematical jargon and dive into practical examples. We know what a convolution does, and we have seen an example on the pixels of a very small image, using an averaging convolution kernel. So, what does this process really do to an image?

Time for action – applying averaging filters in images

We will start off with an easy-to-follow example, so that all the theory described previously is demonstrated. In this example, we will also introduce some new MATLAB functions, to facilitate your understanding. Let’s start:

  1. First, we load our image, which is holiday_image2.bmp:

    >> img = imread('holiday_image2.bmp');

  2. Then, we generate our convolution kernel, using the fspecial function, and then rotate it by 180 degrees:

    >> kernel = fspecial('average',3); % Create a 3x3 averaging kernel
    >> kernel = rot90(kernel,2)        % Rotate it by 180 degrees

  3. The output of the code will be as follows:

    kernel =

        0.1111    0.1111    0.1111
        0.1111    0.1111    0.1111
        0.1111    0.1111    0.1111

  4. Now, it is time to use the three different ways of convolving our image:

    >> con1 = conv2(img,kernel);         % Default usage ('full')
    >> con2 = conv2(img,kernel,'same');  % Convolution using 'same'
    >> con3 = conv2(img,kernel,'valid'); % Convolution using 'valid'

  5. In the previous step, you probably got a warning saying:

    Warning: CONV2 on values of class UINT8 is obsolete. Use CONV2(DOUBLE(A),DOUBLE(B)) or CONV2(SINGLE(A),SINGLE(B)) instead.

  6. This actually means that the uint8 type will not be supported by conv2 in the future. To be on the safe side, you might want to follow MATLAB’s suggestion and convert your image to single prior to convolving it:

    >> img = single(img);                % Convert the image to single precision
    >> kernel = fspecial('average',3);   % Create 3x3 averaging kernel
    >> con1 = conv2(img,kernel);         % Default usage ('full')
    >> con2 = conv2(img,kernel,'same');  % Convolution using 'same'
    >> con3 = conv2(img,kernel,'valid'); % Convolution using 'valid'

  7. Now, we can show our results in one figure, along with the original image. This time, we are going to use an empty matrix as the second argument in imshow, to avoid having to convert our results to uint8:

    >> figure; subplot(2,2,1), imshow(img,[]), title('Original')
    >> subplot(2,2,2), imshow(con1,[]), title('full')
    >> subplot(2,2,3), imshow(con2,[]), title('same')
    >> subplot(2,2,4), imshow(con3,[]), title('valid')

  8. At first glance, the three results look identical, but there is a small detail: their sizes are not the same. So let’s see if we got what we expected. In the Workspace window, you can see the difference in sizes:
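    You can also confirm this from the command line, using the variables from step 6:

    >> size(con1) % 'full': two rows and two columns larger than the original
    >> size(con2) % 'same': exactly the size of the original image
    >> size(con3) % 'valid': two rows and two columns smaller than the original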

  9. Let’s now discuss the physical, qualitative meaning of averaging an image. What exactly does it do? The answer is that it blurs the image. To examine this effect, we can crop the tower from our original and averaged images and display the results. The tower can be cropped using the following coordinates:

    >> tower_original = img(51:210,321:440);
    >> tower_blurred = con2(51:210,321:440);
    >> figure
    >> subplot(1,2,1), imshow(tower_original,[]), title('Original tower')
    >> subplot(1,2,2), imshow(tower_blurred,[]), title('Blurred tower')

  10. The original image and the blurred image are as follows:

What just happened?

The process in the previous example demonstrated the usage of convolution in its various implementations, using the averaging kernel produced by fspecial. This function is designed to generate kernels for popular filtering tasks, as we will further analyze in the following sections. In our case, we created a 3×3 kernel with all values equal to 1/9 (approximately 0.1111, hence the result shown in step 3). Then, the three different choices of convolution were applied and the results were displayed along with the original image. Of course, a detail such as the size of the borders cannot be easily observed at full scale, so we compared the sizes of the results instead. Finally, we displayed a part of the original image next to the same part of the ‘same’ convolution result, to show that the result of the averaging process is a blurred version of the image.

Alternatives to convolution

Convolution is not the only way to perform image filtering. There is also correlation, which can give us the same result (provided the kernel is flipped appropriately). Filtering an image using correlation can be accomplished with the MATLAB function filter2, which performs, as its name implies, two-dimensional filtering. The first input in this case is the kernel (filter) and the second input is the image (or, in the general case, a two-dimensional matrix). We will not go into detail here; we will just point out that the main difference between the two methods is that correlation does not rotate the kernel. The border issue remains, with the same three approaches as in the case of convolution using conv2. A demonstration of the equivalence of the two functions is given by typing the following commands:

>> img = imread('holiday_image2.bmp');
>> img = img(51:210,321:440);
>> kernel = fspecial('average',3);
>> kernel180 = rot90(kernel,2);                  % Rotate the kernel by 180 degrees
>> conv_result = conv2(double(img),kernel180,'same');
>> corr_result = filter2(kernel,double(img),'same');
>> subplot(1,3,1), imshow(img), title('Original')
>> subplot(1,3,2), imshow(uint8(conv_result)), title('Blurred - conv2')
>> subplot(1,3,3), imshow(uint8(corr_result)), title('Blurred - filter2')

The result of the preceding code is displayed as follows:

In our example, the two kernels used by conv2 and filter2 are identical, since the averaging filter is square (3×3) and all its elements are equal, so rotating it by 180 degrees leaves it unchanged. The generalized process shown here becomes necessary when we have a kernel that is not symmetric.
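To see why the rotation matters, you could repeat the comparison with a kernel that is not 180-degree symmetric, for example the sobel kernel produced by fspecial (a sketch, reusing the img variable from the previous snippet):

>> kernel = fspecial('sobel');             % [1 2 1; 0 0 0; -1 -2 -1], not 180-degree symmetric
>> conv_result = conv2(double(img),rot90(kernel,2),'same');
>> corr_result = filter2(kernel,double(img),'same');
>> max(abs(conv_result(:)-corr_result(:))) % Zero: the two results still match

Omit the rot90 call, and the two results will no longer be equal, since conv2 flips the kernel as part of the convolution while filter2 does not.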

Using imfilter

The two alternative solutions for performing image filtering presented so far have their origin in general two-dimensional signal processing theory. This means that they must be extended to three-dimensional signals when we have to deal with color image filtering. The process is pretty straightforward and involves repeating the filtering for each of the three color channels separately, as sketched below. But why do that, when we have a function that takes care of checking the image before applying the filter and then selects the correct method?
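A sketch of this channel-by-channel approach could look like the following (assuming a color image stored in an M×N×3 array rgb and the 3×3 averaging kernel from before; the variable names are ours):

>> rgb = double(rgb);          % Work in double precision
>> filtered = zeros(size(rgb));
>> for c = 1:3                 % Repeat the filtering for each color channel
       filtered(:,:,c) = conv2(rgb(:,:,c),kernel,'same');
   end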

This specialized function is called imfilter, and it is designed for handling images, regardless of whether they are grayscale or color. It can implement both filtering methods described in the previous paragraphs, and it can also produce a ‘same’ or ‘full’ result. Its extra functionality lies in the selection of how it handles boundary values and in the automatic processing of color images. Furthermore, this function performs the needed conversions in case the image input is integer-valued. Combined with the fspecial function, it will probably be your most valuable tool in MATLAB when it comes to image filtering.
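As a taste of its convenience, the blurring we performed earlier shrinks to two lines that work unchanged on grayscale or color input (here we pick the 'replicate' boundary option, one of the boundary choices imfilter offers):

>> kernel = fspecial('average',3);             % 3x3 averaging kernel
>> blurred = imfilter(img,kernel,'replicate'); % Output has the same class and size as img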
