
In this article by Joseph Howse, Quan Hua, Steven Puttemans, and Utkarsh Sinha, the authors of OpenCV Blueprints, we delve into fingerprint detection using OpenCV.


Fingerprint identification, how is it done?

We have already discussed the use of the first biometric, which is the face of the person trying to log in to the system. However, since we mentioned that relying on a single biometric can be quite risky, we suggest adding a secondary biometric check to the system, such as the person's fingerprint. There are many off-the-shelf fingerprint scanners that are quite cheap and return the scanned image. However, you will still have to write your own registration software for these scanners, which can be done using OpenCV. Examples of such fingerprint images can be found below.

Examples of a single individual's thumb fingerprint in different scanning positions

This dataset can be downloaded from the FVC2002 competition website released by the University of Bologna. The website (http://bias.csr.unibo.it/fvc2002/databases.asp) contains four fingerprint databases available for public download, with the following structure:

  • Four fingerprint capturing devices, DB1-DB4.
  • For each device, the prints of 10 individuals are available.
  • For each person, 8 different positions of prints were recorded.

We will use this publicly available dataset to build our system upon. We will focus on the first capturing device, using up to four fingerprints of each individual to train the system and make an average descriptor of the fingerprint. We will then use the other four fingerprints to evaluate our system and make sure that each person is still recognized by it.

You could apply exactly the same approach to the data grabbed from the other devices if you would like to investigate the difference between a system that captures almost binary images and one that captures grayscale images. However, we will provide techniques for doing the binarization yourself.

Implementing the approach in OpenCV 3

The complete fingerprint software for processing fingerprints derived from a fingerprint scanner can be found at https://github.com/OpenCVBlueprints/OpenCVBlueprints/tree/master/chapter_6/source_code/fingerprint/fingerprint_process/.

In this subsection, we will describe how you can implement this approach using the OpenCV interface. We will start by grabbing the image from the fingerprint system and applying binarization. This will enable us to remove undesired noise from the image, as well as improve the contrast between the skin and the wrinkled surface of the finger.

// Start by reading in an image
Mat input = imread("/data/fingerprints/image1.png", IMREAD_GRAYSCALE);
// Binarize the image through global thresholding with Otsu's method
Mat input_binary;
threshold(input, input_binary, 0, 255, THRESH_BINARY | THRESH_OTSU);

The Otsu thresholding will automatically choose the best generic threshold for the image, to obtain a good contrast between foreground and background information. This works because the image has a bimodal distribution of pixel values (that is, its histogram contains two peaks), so we can take a value roughly in the middle of those peaks as the threshold value. (For images that are not bimodal, this binarization won't be accurate.) Otsu allows us to avoid using a fixed threshold value, making the system more general across capturing devices. However, we do acknowledge that if you have only a single capturing device, then playing around with a fixed threshold value could result in a better image for that specific setup. The result of the thresholding can be seen below.
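If you are curious which threshold Otsu's method actually selected, the threshold function returns the value it applied, so you can simply inspect its return value (a small sketch, reusing the input image from the previous snippet):

// threshold() returns the threshold value that was actually applied,
// which here is the value automatically selected by Otsu's method
double otsu_value = threshold(input, input_binary, 0, 255, THRESH_BINARY | THRESH_OTSU);
cout << "Otsu threshold selected: " << otsu_value << endl;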

In order to make the thinning in the upcoming skeletonization step as effective as possible, we need the inverse binary image.
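Inverting the binary image is a one-liner in OpenCV. The snippet below is a minimal sketch, assuming the input_binary matrix from the previous step; alternatively, you can pass THRESH_BINARY_INV instead of THRESH_BINARY to the threshold call above and obtain the inverted result in one step.

// Invert the binary image so that the ridges become foreground (white) pixels
Mat input_binary_inverted;
bitwise_not(input_binary, input_binary_inverted);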

Comparison between grayscale and binarized fingerprint image

Once we have a binary image, we are actually all set to calculate our feature points and feature point descriptors. However, in order to improve the process a bit more, we suggest skeletonizing the image. This will create more unique and stronger interest points. The following piece of code applies skeletonization on top of the binary image. It is based on the Zhang-Suen line thinning approach.

Special thanks to @bsdNoobz of the OpenCV Q&A forum who supplied this iteration approach.

#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

using namespace std;
using namespace cv;

// Perform a single thinning iteration, which is repeated until the skeletonization is finalized
void thinningIteration(Mat& im, int iter)
{
    Mat marker = Mat::zeros(im.size(), CV_8UC1);
    for (int i = 1; i < im.rows-1; i++)
    {
        for (int j = 1; j < im.cols-1; j++)
        {
            // The eight neighbours of the current pixel, in clockwise order
            uchar p2 = im.at<uchar>(i-1, j);
            uchar p3 = im.at<uchar>(i-1, j+1);
            uchar p4 = im.at<uchar>(i, j+1);
            uchar p5 = im.at<uchar>(i+1, j+1);
            uchar p6 = im.at<uchar>(i+1, j);
            uchar p7 = im.at<uchar>(i+1, j-1);
            uchar p8 = im.at<uchar>(i, j-1);
            uchar p9 = im.at<uchar>(i-1, j-1);

            // A = number of 0->1 transitions in the circular neighbour sequence
            int A = (p2 == 0 && p3 == 1) + (p3 == 0 && p4 == 1) +
                    (p4 == 0 && p5 == 1) + (p5 == 0 && p6 == 1) +
                    (p6 == 0 && p7 == 1) + (p7 == 0 && p8 == 1) +
                    (p8 == 0 && p9 == 1) + (p9 == 0 && p2 == 1);
            // B = number of non-zero neighbours
            int B = p2 + p3 + p4 + p5 + p6 + p7 + p8 + p9;
            int m1 = iter == 0 ? (p2 * p4 * p6) : (p2 * p4 * p8);
            int m2 = iter == 0 ? (p4 * p6 * p8) : (p2 * p6 * p8);

            // Mark the pixel for deletion when all Zhang-Suen conditions hold
            if (A == 1 && (B >= 2 && B <= 6) && m1 == 0 && m2 == 0)
                marker.at<uchar>(i,j) = 1;
        }
    }
    // Remove all marked pixels from the image
    im &= ~marker;
}

// Function for thinning any given binary image within the range 0-255.
// If your image is in a different range, first rescale it to 0-255!
void thinning(Mat& im)
{
    // Scale the 0-255 binary image down to the 0-1 range expected by the iterations
    im /= 255;

    Mat prev = Mat::zeros(im.size(), CV_8UC1);
    Mat diff;

    // Keep alternating the two sub-iterations until the image stops changing
    do {
        thinningIteration(im, 0);
        thinningIteration(im, 1);
        absdiff(im, prev, diff);
        im.copyTo(prev);
    }
    while (countNonZero(diff) > 0);

    // Scale the result back up to the 0-255 range
    im *= 255;
}

The code above can then simply be applied to our previous steps by calling the thinning function on the binary image we generated earlier. The code for this is:

// Apply the thinning algorithm
Mat input_thinned = input_binary.clone();
thinning(input_thinned);

This will result in the following output:

Comparison between binarized and thinned fingerprint image using skeletonization techniques

When we have this skeleton image, the next step is to look for crossing points on the ridges of the fingerprint, which are called minutiae points. We can do this with a keypoint detector that looks for large changes in local contrast, like the Harris corner detector. Since the Harris corner detector is able to detect both strong corners and edges, it is ideally suited to the fingerprint problem, where the most important minutiae are ridge endings and bifurcations, the positions where ridges end or come together.

More information about minutiae points and Harris corner detection can be found in the following publications:

Ross, Arun A., Jidnya Shah, and Anil K. Jain. “Toward reconstructing fingerprints from minutiae points.” Defense and Security. International Society for Optics and Photonics, 2005.

Harris, Chris, and Mike Stephens. “A combined corner and edge detector.” Alvey vision conference. Vol. 15. 1988.

Calling the Harris corner operation on a skeletonized and binarized image in OpenCV is quite straightforward. The Harris corners are stored as positions in the image, together with their corresponding cornerness response values. If we want to detect only points with a certain cornerness, then we simply threshold the response map.

Mat harris_corners, harris_normalised;
harris_corners = Mat::zeros(input_thinned.size(), CV_32FC1);
cornerHarris(input_thinned, harris_corners, 2, 3, 0.04, BORDER_DEFAULT);
normalize(harris_corners, harris_normalised, 0, 255, NORM_MINMAX, CV_32FC1, Mat());

We now have a map with all the corner responses rescaled to the range [0, 255] and stored as float values. We can now manually define a threshold that generates a good amount of keypoints for our application; playing around with this parameter may improve performance in other cases. This can be done using the following code snippet:

float harris_threshold = 125.0;
vector<KeyPoint> keypoints;
// Make an 8-bit, three-channel copy of the response map so we can draw coloured markers on it
Mat rescaled;
convertScaleAbs(harris_normalised, rescaled);
Mat harris_c(rescaled.rows, rescaled.cols, CV_8UC3);
Mat in[] = { rescaled, rescaled, rescaled };
int from_to[] = { 0,0, 1,1, 2,2 };
mixChannels( in, 3, &harris_c, 1, from_to, 3 );
// Select all positions where the normalised cornerness response exceeds the threshold
for(int x=0; x<harris_normalised.cols; x++){
    for(int y=0; y<harris_normalised.rows; y++){
        if ( harris_normalised.at<float>(y, x) > harris_threshold ){
            // Draw or store the keypoint location here, as you prefer
            // In our case we store the location and visualize it on the response map
            circle(harris_c, Point(x, y), 5, Scalar(0,255,0), 1);
            circle(harris_c, Point(x, y), 1, Scalar(0,0,255), 1);
            keypoints.push_back( KeyPoint(x, y, 1) );
        }
    }
}

Comparison between thinned fingerprint and Harris corner response, as well as the selected Harris corners

Now that we have a list of keypoints, we need to create a formal descriptor of the local region around each keypoint, to be able to uniquely identify it among other keypoints.

Chapter 3, Recognizing facial expressions with machine learning, discusses the wide range of keypoint detectors and descriptors in more detail. In this article, we will mainly focus on the process. Feel free to adapt the interface to other keypoint detectors and descriptors, for better or worse performance.

Since we have an application where the orientation of the thumb can differ (it is not in a fixed position), we want a keypoint descriptor that is robust at handling these slight differences. One of the most widely used descriptors for this is SIFT, which stands for scale-invariant feature transform. However, SIFT is not under a BSD license and can thus pose problems for use in commercial software. A good alternative in OpenCV is the ORB descriptor, which you can apply in the following way:

Ptr<Feature2D> orb_descriptor = ORB::create();
Mat descriptors;
orb_descriptor->compute(input_thinned, keypoints, descriptors);

This will enable us to calculate only the descriptors using the ORB approach, since we already retrieved the location of the keypoints using the Harris corner approach.

At this point, we can retrieve a descriptor for each detected keypoint of any given fingerprint. The descriptors matrix will contain one row per keypoint, holding that keypoint's representation.
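As a quick sanity check, you can inspect the dimensions of the resulting matrix (a small sketch; with the default ORB settings, each row is a 32-byte binary descriptor):

// Each row is one keypoint's descriptor; for default ORB this is 32 bytes (256 bits)
cout << "Number of descriptors: " << descriptors.rows << endl;
cout << "Descriptor length in bytes: " << descriptors.cols << endl;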

Let us now start with the case where we have only a single reference image for each fingerprint. In that case, we have a database containing a set of feature descriptors for each enrolled person. We then have a single new entry, consisting of multiple descriptors for the keypoints found at registration time. We now have to match these descriptors against the descriptors stored in the database, to see which one gives the best match.

The simplest way is to perform brute-force matching using the Hamming distance criterion between the descriptors of different keypoints.

// Imagine we have a vector of single entry descriptors as a database
// These descriptors would be filled beforehand, using the code snippets above
vector<Mat> database_descriptors;
Mat current_descriptors;
// Create the matcher interface
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");
// Now loop over the database and start the matching
vector< vector< DMatch > > all_matches;
for(size_t entry=0; entry<database_descriptors.size(); entry++){
    vector< DMatch > matches;
    matcher->match(database_descriptors[entry], current_descriptors, matches);
    all_matches.push_back(matches);
}

We now have all the matches stored as DMatch objects. This means that, for each matched pair, we have the original keypoint, the matched keypoint, and a floating-point score representing the distance between the matched descriptors.
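For clarity, this is what a single DMatch object exposes (a tiny sketch, assuming all_matches from the previous snippet is non-empty):

// Inspect the first match of the first database entry
DMatch match = all_matches[0][0];
cout << "Query descriptor index: " << match.queryIdx << endl;
cout << "Train descriptor index: " << match.trainIdx << endl;
cout << "Distance between descriptors: " << match.distance << endl;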

The idea of finding the best match is pretty straightforward. We look at the amount of matches returned by the matching process and weigh them by their distance (the Hamming distance, in our case) in order to add some certainty. We then look for the matching process that yielded the biggest score. This will be our best match, and the match we return as the selected entry from the database.

If you want to avoid an impostor being assigned the best matching score, you can again add a manual threshold on top of the scoring, so that matches that are not good enough are ignored. Take into consideration, however, that if you set this threshold too high, people whose prints have changed slightly will be rejected by the system, for example when someone cuts their finger and thus changes their pattern too drastically.
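Below is a minimal sketch of one possible scoring scheme, combining the weighting and the rejection threshold. The formula (number of matches divided by the mean Hamming distance) and the score_threshold value are our own illustrative choices, not a fixed recipe; tune them for your own setup.

// Score every database entry: more matches and smaller distances give a higher score
int best_entry = -1;
float best_score = 0.0;
float score_threshold = 10.0; // Illustrative value, tune for your own setup
for(size_t entry=0; entry<all_matches.size(); entry++){
    vector<DMatch>& matches = all_matches[entry];
    if(matches.empty()) continue;
    float total_distance = 0.0;
    for(size_t m=0; m<matches.size(); m++){
        total_distance += matches[m].distance;
    }
    float mean_distance = total_distance / matches.size();
    // Weigh the amount of matches by their mean distance
    float score = matches.size() / (mean_distance + 1e-6f);
    if(score > best_score){
        best_score = score;
        best_entry = (int)entry;
    }
}
// Reject even the best match if its score is not convincing enough
if(best_score < score_threshold){
    best_entry = -1; // Treat the entry as unknown, a possible impostor
}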

Fingerprint matching process visualized in OpenCV

To summarize, we saw how to perform fingerprint identification and implement it using OpenCV 3.
