
Computer vision might sound like an exotic term, but it’s actually a piece of technology that you can easily find in your daily life. You know how Facebook can automatically tag your friends in a photo? That’s computer vision. Have you ever tried Google Image Search? That’s computer vision too. Even the QR Code reader app in your phone employs some sort of computer vision technology.

Fortunately, you don’t have to conduct your own research to implement computer vision, since the technology is easily accessible in the form of SDKs and libraries. OpenCV is one of those libraries, and it’s open source too. OpenCV focuses on real-time computer vision, so it feels like a natural fit for Android, a platform whose devices usually have a camera built in.

However, if you’re looking to implement OpenCV in your app, you will find the official documentation for the Android version lagging a bit behind the ever-evolving Android development environment. But don’t worry; this post will help you with that. Together we’re going to add the OpenCV Android library and use some of its basic functions in your app.


Before you get started, make sure you have all of the following requirements:

  • Android Studio v1.2 or above
  • Android 4.4 (API 19) SDK or above
  • OpenCV for Android library v3.1 or above
  • An Android device with a camera

Importing the OpenCV Library

All right, let’s get started. Once you have downloaded the OpenCV library, extract it and you will find a folder named “sdk” inside. This “sdk” folder should contain folders called “java” and “native”. Remember the location of these two folders, since we will get back to them soon enough.

Now create a new project with a blank activity in Android Studio. Make sure to set the minimum required SDK to API 19, which is the lowest version compatible with the library.

Next, import the OpenCV library. Open the File > New > Import Module… menu and point it to the “java” folder mentioned earlier, which will automatically copy the Java library to your project folder.

Now that you have added the library as a module, you need to link the Android project to the module. Open the File > Project Structure… menu and select app. On the dependencies tab, press the + button, choose Module Dependency, and select the OpenCV module on the list that pops up.

Next, you need to make sure that the module will be built with the same settings as your app. Open the build.gradle scripts for both the app and the OpenCV module. Copy the SDK version and build tools version values from the app Gradle script to the OpenCV Gradle script. Once that’s done, sync the Gradle scripts and rebuild the project.

Here are the values from my Gradle script, but your script may differ based on the SDK version you used.

compileSdkVersion 23
buildToolsVersion "23.0.0 rc2"

defaultConfig {
        minSdkVersion 19
        targetSdkVersion 23
}

To finish importing OpenCV, you need to add the C++ libraries to the project. Remember the “native” folder mentioned earlier? There should be a folder named “libs” inside it. Copy the “libs” folder to the <project-name>/OpenCVLibrary/src/main folder and rename it to “jniLibs” so that Android Studio will know that the files inside that folder are C++ libraries.

Sync the project again, and now OpenCV should have been imported properly to your project.

Accessing the Camera

Now that you’re done importing the library, it’s time for the next step: accessing the device’s camera. The OpenCV library has its own camera UI that you can use to easily access the camera data, so let’s use it. Simply replace the layout XML file for your main activity with this one.
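In case the linked layout file isn’t available, a minimal layout along these lines should work — note that the FrameLayout wrapper and the camera_view id are my own choices here, not something the library prescribes:

```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:opencv="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <!-- OpenCV's camera preview widget -->
    <org.opencv.android.JavaCameraView
        android:id="@+id/camera_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        opencv:show_fps="true" />

</FrameLayout>
```

You would then grab the view in onCreate() with findViewById(R.id.camera_view) and store it in the mCamera field used later in this post.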

Then you’ll need to ask the user for permission to access the camera. Add the following line to the app manifest.

<uses-permission android:name="android.permission.CAMERA"/>

And if you’re building for Android 6.0 (API 23) or above, you will also need to request the permission at runtime. Add the following line to the onCreate() method of your main activity to ask for permission.

requestPermissions(new String[] { Manifest.permission.CAMERA }, 1);
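On devices below API 23 the permission is granted at install time and requestPermissions() doesn’t exist, so in practice you may want to guard the call with a version and permission check — a sketch, assuming a plain Activity:

```java
//Only ask at runtime on Android 6.0+; older versions grant the permission at install time
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M
        && checkSelfPermission(Manifest.permission.CAMERA)
                != PackageManager.PERMISSION_GRANTED) {
    requestPermissions(new String[] { Manifest.permission.CAMERA }, 1);
}
```

This needs android.os.Build, android.Manifest, and android.content.pm.PackageManager imported in the activity.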

There are two things to note about the camera UI from the library. First, by default, it will not show anything unless it’s activated in the app by calling the enableView() function. And second, in portrait orientation, the camera will display a rotated view. Fixing this last issue is quite a hassle, so let’s just lock the app to landscape orientation.
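Locking the orientation can be done in the manifest by adding the screenOrientation attribute to your activity entry — the activity name .MainActivity below is an assumption based on a default blank-activity project:

```xml
<activity
    android:name=".MainActivity"
    android:screenOrientation="landscape" />
```

If your activity already has child elements like an intent-filter, just add the attribute to the existing tag instead of replacing it.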

Using OpenCV Library

With the preparation out of the way, let’s start actually using the library. Here’s the code for the app’s main activity if you want to see how the final version works.

To use the library, initialize it by calling the OpenCVLoader.initAsync() method in the activity's onResume() method. This way the activity will check whether the OpenCV library has been initialized every time the app is about to use it.

//Create callback
protected LoaderCallbackInterface mCallback = new BaseLoaderCallback(this) {
      @Override
      public void onManagerConnected(int status) {
            //If not successful, call the base method
            if (status != LoaderCallbackInterface.SUCCESS) {
                super.onManagerConnected(status);
            } else {
                //Enable the camera once connected to the library
                if (mCamera != null) mCamera.enableView();
            }
      }
};

@Override
protected void onResume() {
      super.onResume();

      //Try to initialize OpenCV
      OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_1_0, this, mCallback);
}

The initialization process will check if your phone already has the full OpenCV library. If it doesn’t, it will automatically open the Google Play page for the OpenCV Manager app and ask the user to install it. And if OpenCV has been initialized, it simply activates the camera for further use.
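The camera should also be released when the activity leaves the foreground, otherwise other apps can’t use it. A minimal sketch, using the same mCamera field as above:

```java
@Override
protected void onPause() {
      super.onPause();

      //Release the camera while the app is in the background
      if (mCamera != null) mCamera.disableView();
}
```

Doing the same in onDestroy() is also a common precaution.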


If you noticed, the activity implements the CvCameraViewListener2 interface. This interface gives you access to the onCameraFrame() method, which lets you read the image the camera is capturing and return the image that the view should display.
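Since the activity implements CvCameraViewListener2, it also has to provide two other callbacks besides onCameraFrame(). Empty stubs are enough for this tutorial:

```java
@Override
public void onCameraViewStarted(int width, int height) {
      //Called when the camera preview starts; allocate any per-frame buffers here
}

@Override
public void onCameraViewStopped() {
      //Called when the preview stops; release any Mats allocated above
}
```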

Let’s try some simple image processing and show the result on the screen.

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
      //Get edges from the image
      Mat result = new Mat();
      Imgproc.Canny(inputFrame.rgba(), result, 70, 100);

      //Return the result
      return result;
}

Imgproc.Canny() is an OpenCV function that performs Canny edge detection, a process that detects all the edges in a picture. As you can see, it’s pretty simple: you just pass the image from the camera (inputFrame.rgba()) into the function, and it returns another image that shows only the edges.
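If the edge image looks noisy, a common variation — not part of the original tutorial, just a sketch — is to smooth the grayscale frame with a Gaussian blur before running the detector:

```java
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
      //Use the grayscale frame directly and smooth it to reduce noise
      Mat gray = inputFrame.gray();
      Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);

      //Detect edges on the blurred image
      Mat result = new Mat();
      Imgproc.Canny(gray, result, 70, 100);
      return result;
}
```

This needs org.opencv.core.Size imported; the 5x5 kernel and the 70/100 thresholds are just starting values to tune by eye.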

Here’s what the app’s display will look like.

And that’s it! You’ve implemented a pretty basic feature from the OpenCV library on an Android app. There are still many image processing features that the library has, so check out this exhaustive list of features for more. Good luck!

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

