Wednesday, February 19, 2014

Static background subtraction using OpenCV

With background subtraction you can eliminate the background and focus on the actual object for further processing (detection, recognition, ...). Some algorithms provided by the OpenCV library "learn" the background. The problem: if an object doesn't move for a while, it becomes part of the background as well. If you have a fixed camera - and therefore always the same background - you can simply calculate the difference of two images.

Here's the code I used (you can download it here):
 #include <opencv2/core/core.hpp>
 #include <opencv2/highgui/highgui.hpp>
 #include <opencv2/imgproc/imgproc.hpp>
 #include <iostream>

 using namespace cv;

 int main( int argc, char** argv )
 {
   // Read the stored background and the new image
   Mat src = imread( "/users/christian/documents/programming/other/imgs/background.jpg", 0); // 1: color, 0: grayscale
   Mat dst = imread( "/users/christian/documents/programming/other/imgs/backtest.jpg", 0);

   // Background subtraction
   Mat diff;
   absdiff(src, dst, diff);
   threshold(diff, diff, 10, 255, CV_THRESH_BINARY); // grayscale needed

   // Show the images in windows
   imshow("original", src);
   imshow("new", dst);
   imshow("diff", diff);

   // Wait until the user presses a key
   waitKey();
   return 0;
 }
All it does is load two images, calculate the difference, apply a threshold to highlight the object, and display the three images.


This is just a theoretical example. Because I use a static image to calculate the difference, this method is very sensitive to light: a change in lighting also changes the background, which then no longer matches the stored background exactly.

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0

Thursday, February 13, 2014

Directories

This is partly just a reminder for myself. But maybe some of you have wondered where these libraries are and where Xcode 5 saves the executables you just compiled.

Xcode 5 hides the project files pretty well but you can find them under:
 /users/username/Library/Developer/Xcode/DerivedData  
To access the Library folder, click Go in the Finder menu bar. When you press the alt key, a link to Library will appear.

And where are the OpenCV files you installed using cmake?
The lib-files are in
 /usr/local/lib  
and the headers in
 /usr/local/include 
Again, the usr folder is hidden. The easiest way to get there is to type the path into Finder via Go -> Go to Folder...

It is also important to know where additional libraries installed using MacPorts end up. MacPorts installs everything to
 /opt/local
Nothing is hidden there, so no problem.

Because of the hidden folders and the rather long paths, I created some aliases to access these folders quickly from my working folder.
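Such aliases are just symlinks. Here is a sketch of how to create them; the working folder ~/work and the link names are my own choices, so adjust them to taste:

```shell
# Create symlinks ("aliases") in a working folder that point at the
# hidden library locations. WORKDIR is an assumption - use your own.
WORKDIR="$HOME/work"
mkdir -p "$WORKDIR"
ln -sfn /usr/local/lib     "$WORKDIR/opencv-lib"
ln -sfn /usr/local/include "$WORKDIR/opencv-headers"
ln -sfn /opt/local         "$WORKDIR/macports"
ls -l "$WORKDIR"
```

After that, `cd ~/work/opencv-lib` drops you straight into /usr/local/lib without typing the full path.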

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0
MacPorts: 2.2.1

How to use imageclipper on Mac OS X

In the previous tutorial I explained how to build imageclipper from source. You may have done this or just downloaded the executable file I provided. But you can't just double-click it. If you do, only the help text will be shown:
 Application Usage:  
  Mouse Usage:  
   Left (select)     : Select or initialize a rectangle region.  
   Right (move or resize) : Move by dragging inside the rectangle.  
                Resize by draggin outside the rectangle.  
   Middle or SHIFT + Left : Initialize the watershed marker. Drag it.   
  Keyboard Usage:  
   s (save)        : Save the selected region as an image.  
   f (forward)       : Forward. Show next image.  
   SPACE          : Save and Forward.  
   b (backward)      : Backward.   
   q (quit) or ESC     : Quit.   
   r (rotate) R (opposite) : Rotate rectangle in counter-clockwise.  
   e (expand) E (shrink)  : Expand the recntagle size.  
   + (incl)  - (decl)   : Increment the step size to increment.  
   h (left) j (down) k (up) l (right) : Move rectangle. (vi-like keybinds)  
   y (left) u (down) i (up) o (right) : Resize rectangle. (Move boundaries)  
   n (left) m (down) , (up) . (right) : Shear deformation.  
 Now reading a directory..... No image file exist under a directory .  
 ImageClipper - image clipping helper tool.  
 Command Usage: imgclipper [option]... [arg_reference]  
  <arg_reference = .>  
   <arg_reference> would be a directory or an image or a video filename.  
   For a directory, image files in the directory will be read sequentially.  
   For an image, it starts to read a directory from the specified image file.   
   (A file is judged as an image based on its filename extension.)  
   A file except images is tried to be read as a video and read frame by frame.   
  Options  
   -o <output_format = imgout_format or vidout_format>  
     Determine the output file path format.  
     This is a syntax sugar for -i and -v.   
     Format Expression)  
       %d - dirname of the original  
       %i - filename of the original without extension  
       %e - filename extension of the original  
       %x - upper-left x coord  
       %y - upper-left y coord  
       %w - width  
       %h - height  
       %r - rotation degree  
       %. - shear deformation in x coord  
       %, - shear deformation in y coord  
       %f - frame number (for video)  
     Example) ./$i_%04x_%04y_%04w_%04h.%e  
       Store into software directory and use image type of the original.  
   -i <imgout_format = %d/imageclipper/%i.%e_%04r_%04x_%04y_%04w_%04h.png>  
     Determine the output file path format for image inputs.  
   -v <vidout_format = %d/imageclipper/%i.%e_%04f_%04r_%04x_%04y_%04w_%04h.png>  
     Determine the output file path format for a video input.  
   -f  
   --frame <frame = 1> (video)  
     Determine the frame number of video to start to read.  
   -h  
   --help  
     Show this help  
  Supported Image Types  
    bmp|dib|jpeg|jpg|jpe|png|pbm|pgm|ppm|sr|ras|tiff|exr|jp2  

This already tells you a lot. In short, this is what you have to do:

1. Create a folder and put all the images you want to crop in it, together with the imageclipper executable.

2. Open the Terminal app and navigate to this folder.

3. Once you're in this folder, type this (or whatever the name of your executable is):
 ./imageclipper  
4. A new window should open showing the first picture in this folder.


5. With your mouse, draw a rectangle around the part you want to cut. Another window will open showing what your cropped image will look like.


6. To crop it, hit s; to jump to the next image, hit f. Or hit space to combine these two steps, which makes you even faster.

7. After cropping, the next image in the folder will open automatically. To leave the application, hit esc.

That's it. You will find your cropped images in a separate folder called imageclipper.

Program versions:
OS: Mac OS X 10.9.1
OpenCV: 2.4.8.0

Wednesday, February 12, 2014

Image processing with OpenCV

In the tutorial where I explained how to write your first program, we already used some kind of image processing.

In this post I want to try out some other image processing possibilities.

Before we start we have to include some headers (from now on I will only use the newer headers from OpenCV 2):
 #include <opencv2/highgui/highgui.hpp>  
 #include <opencv2/imgproc/imgproc.hpp>  
After this, a reminder of the standard input and output operations that we will use.

Input and Output of images

Reading an image

We already did this in the tutorial. The command is:
 Mat image = imread( "PATH_TO_IMAGE", 1);  
So let's split this up:
Mat: this is the class for storing images, like int is for numbers
image: this is the name of the variable where the image is stored in
imread: command to read image
PATH_TO_IMAGE: this specifies the path to the image, like "/users/christian/pictures/img.jpg". If the path is wrong, imread returns an empty Mat, which you can detect with image.empty()
1: specifies a color image with 3 channels. Change it to 0 for a grayscale image with just one channel.

Showing an image in a window

To do this we need the following command:
 imshow("Name of Window", image);
imshow: command to show image
Name of Window: this will be the name of the window where the image is shown in
image: variable of the image you want to display

Image Processing

I will just give you some examples. You can find all the possible commands in the OpenCV documentation. Let's start:

Allocation of channels
 vector<Mat>channels;  
 split(image, channels);  
Here we first create a variable called channels. With the command 'split' we split the color channels of the image and store them in channels. After that we can access each channel separately. Note that OpenCV stores color images in BGR order, which is why channels[0] is the blue channel.
 imshow("Red", channels[2]);  
 imshow("Green", channels[1]);  
 imshow("Blue", channels[0]);  

Thresholding

Remember that every pixel of an (8-bit) image is an integer in the range from 0 to 255. This means that a grayscale image can have 256 different shades of gray. The picture below shows this.

 threshold(image, image, 100, 255, CV_THRESH_BINARY);
image: the input and, here, also the output image
100: this is the threshold; every pixel with a value of 100 or below will be set to 0 (black)
255: all pixels above 100 will be set to 255 (white)
CV_THRESH_BINARY: type of the threshold (doc)

For this I am using a grayscale image.


Blurring
 blur(image, image, Size (10, 10));  
image: the input and here as well the output image
Size: blurring kernel size; the higher the numbers, the more blur you'll get


Cutting
 Rect rect = Rect(100, 100, 200, 200); // Rectangle of size 200x200, location x=100 y=100  
 Mat imagecut;  
 image(rect).copyTo(imagecut);         // Copy of the cut-out region  
 image(rect) *= 2;                     // Highlighting of the cut part in the original
Here we create a rectangle called rect with a size of 200x200 pixels and place it at x=y=100 pixels, measured from the upper left corner. This part is copied to a new image called imagecut. After that we highlight the cut part in the original image.


Don't forget to check the documentation for other processing commands!

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0

Reference: whydomath.org

Monday, February 3, 2014

Tutorial: Build imageclipper on Mac OS X 10.9

In object detection using cascade classification, you will get better results the more images you have to train your classifiers. But for the training it is necessary to prepare images that contain examples of the object you want to detect. More clearly, you have to crop loads of images so that only the desired object is visible, without much background. Cropping hundreds or thousands of images by hand is a lengthy process. Exactly for this purpose, Naotoshi Seo created a small, fast, multi-platform tool named imageclipper.


Although the application still works, there are some problems we have to face:

--> He only provides an executable file for Windows. Unix users have to compile it themselves.

I searched a lot for how to do this and found someone who uploaded a compiled version for Mac here - just to face another problem, one already well known to me.

--> The software is outdated and was compiled with an older version of OpenCV (here: version 2.1). Since then, the locations of the libraries and headers have changed. When trying to start imageclipper with a newer version of OpenCV, you will get the following error:

dyld: Library not loaded: libopencv_core.2.1.dylib

So you will have to compile it yourself if you want the speed advantage of imageclipper.

Preparation

Download the imageclipper source code

Get it from here: https://github.com/noplay/imageclipper
It is a fork of the original imageclipper with some Mac-specific updates. Unpack it and navigate to the /src folder inside. Everything you need is in there.

Install boost

To compile the imageclipper source code you need the boost C++ libraries. You can install them easily using MacPorts - just like you installed cmake. Open the Terminal and enter this command:
 sudo port install boost   
After entering your password boost will be downloaded and installed.

Configure Xcode

1. Create a new project - a C++ Command Line Tool - and name it "imageclipper".

2. Copy all the files from the /src folder of the imageclipper fork into the folder of the project you just created.

3. Copy the content of "imageclipper.cpp" into your "main.cpp".

4. Now we have to include the libraries:

4.1 Include the OpenCV libraries like I explained here in point 5 of "Configure Xcode": Tutorial: Configure Xcode for OpenCV programming

4.2 In the same manner, add the boost libraries. They are located in /opt/local/lib. Just add all the .dylib files whose names contain "libboost".

5. Configure search paths:

5.1 Add the following Header Search Paths:
/usr/local/include
/usr/local/include/opencv
/usr/local/lib
/opt/local/include
/opt/local/include/boost

and the path of the project folder into which you copied the imageclipper source files, for example:
/users/christian/documents/programming/imageclipper

It should look like this:

5.2 Add the following Library Search Paths:
/usr/local/lib
/opt/local/lib

6. Compile imageclipper.

That's it. You can now compile imageclipper. You find the executable file in
/Users/USER-NAME/Library/Developer/Xcode/DerivedData/PROJECT-NAME/Build/Products

I have already done this. As long as you followed my instructions for installing OpenCV and your libraries are located in the same paths, you can just download my executable file and use it.

Download imageclipper executable for Mac

In another post, I explain how to use imageclipper.

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0
MacPorts: 2.2.1
boost: 1.55.0

Saturday, February 1, 2014

Tutorial: First OpenCV program

After we installed OpenCV and configured Xcode it's time to create an easy program.

For the first program we refer to an OpenCV tutorial where we change the contrast and brightness of an image. But as I mentioned in my first post, this tutorial is outdated and was written for an older version of OpenCV. So we have to make some small changes to the code.

Edit the code

First edit the "main.cpp" file. Delete everything and copy the following code inside. You can also download it here.

1:  #include <opencv/cv.h>  
2:  #include <opencv/highgui.h>  
3:  #include <iostream>  
4:    
5:  using namespace cv;  
6:    
7:  double alpha; /**< Simple contrast control */  
8:  int beta; /**< Simple brightness control */  
9:    
10:  int main( int argc, char** argv )  
11:  {  
12:    // Read image given by user  
13:    // Change the path here to your image, look at form below  
14:    Mat image = imread( "/users/name/documents/programming/imgs/test1.jpg" );  
15:    Mat new_image = Mat::zeros( image.size(), image.type() );  
16:      
17:    // Initialize values  
18:    std::cout<<" Basic Linear Transforms "<<std::endl;  
19:    std::cout<<"-------------------------"<<std::endl;  
20:    std::cout<<"* Enter the alpha value [1.0-3.0]: ";std::cin>>alpha;  
21:    std::cout<<"* Enter the beta value [0-100]: "; std::cin>>beta;  
22:      
23:    // Do the operation new_image(i,j) = alpha*image(i,j) + beta  
24:    for( int y = 0; y < image.rows; y++ )  
25:    { for( int x = 0; x < image.cols; x++ )  
26:    { for( int c = 0; c < 3; c++ )  
27:    {  
28:      new_image.at<Vec3b>(y,x)[c] =  
29:      saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta );  
30:    }  
31:    }  
32:    }  
33:      
34:    // Create Windows  
35:    namedWindow("Original Image", 1);  
36:    namedWindow("New Image", 1);  
37:      
38:    // Show stuff  
39:    imshow("Original Image", image);  
40:    imshow("New Image", new_image);  
41:      
42:    // Wait until the user presses a key  
43:    waitKey();  
44:    return 0;  
45:  }  

Note that I edited the include files in lines 1 and 2. This is because of a newer version of OpenCV. If you navigate to /usr/local/include you will see that there are now two folders - opencv and opencv2. opencv contains the old header files and opencv2 the newer ones. So we just add the folder name to the path. Newer tutorials from the OpenCV website already include the path and use the opencv2 headers.

Run the program

Now it's time to finally compile and run the program. To do this, go to Product --> Run or just hit ⌘R.

Down in the right corner you can see the console output, where you can enter some values for the image manipulation.


Type in some numbers and hit enter. Here is the result for the values I chose.


Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0

Tutorial: Configure Xcode for OpenCV programming

In this post I want to show you how to configure Xcode to work with OpenCV if you installed it like I explained here.

Start Xcode

1. When you open Xcode, select "Create a new Xcode project".

2. Under OS X Applications select "Command Line Tool".

3. Give your project a name and select as Type "C++".

4. Select a path where you want to save your project.

Configure Xcode

Before actually writing some code, we have to prepare Xcode so that it can use OpenCV. For this we have to include the header and library files.

As I explained in the install tutorial, OpenCV was installed to /usr/local. The header files are in /usr/local/include and the libraries in /usr/local/lib. We have to add these paths to the "Search Paths" in Xcode. To get there:

1. Select your project file in the project navigator on the left.

2. Now you can access the "Build Settings". In the search box type in "Search Paths" or simply scroll down until you see them.

3. Double click on "Header Search Paths" and add two lines:
    /usr/local/include
    /usr/local/lib

4. Double click on "Library Search Paths" and add this line:
    /usr/local/lib

5. Now we just have to add the *.dylib files from OpenCV. To do this:

  5.1 right click on your project file.

  5.2 Select "Add files to "projectname" ...".

  5.3 Don't select a path. Just hit / to open a line where you can specify a path. Enter /usr/local/lib and hit enter.

5.4 Select all the *.dylib files you want to add - just add all of them for now. If you want, you can create a folder and move all the files there.

Finally, it should look like the screenshot below. With this configuration you are ready to write programs with OpenCV.


Remark:
In case you didn't build OpenCV yourself and installed it via MacPorts instead, the procedure for configuring Xcode is the same - just modify the paths. Instead of
/usr/local/lib and /usr/local/include
MacPorts installs the libraries to
/opt/local/lib and /opt/local/include

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0