Saturday, December 13, 2014

Access Raspberry Pi via SSH, VNC and netatalk from your Mac

Originally I planned to use a Mac for my work with OpenCV on the Raspberry Pi. I failed to set up the Mac properly, so I switched to my Ubuntu laptop which was lying around. And it was so much easier, mostly because of the various tutorials available. Recently I haven't had any time to play around with OpenCV, but maybe I'll find some time now and then. And since I don't have my Linux PC with me where I live now, I have to make do with my MacBook Air.

If you want to work with a Raspberry Pi and don't have a monitor available, the first thing you want to do is link it somehow to your PC/Mac and do the work from there. I explained how I did it with my Ubuntu laptop here, and now I want the same functionality with my MacBook.

So first I want to find out the IP address of the Raspberry Pi. I looked in the App Store and found a free app called "IP Scanner" (Link), which shows the IP addresses of all the devices in your network.

Connecting over SSH

To connect to your RPi, open the terminal on your Mac and type this with the IP address of your RPi:
 ssh pi@192.168.0.101  
Type in your password (default: raspberry).

Finally in your terminal you should see:
Now every command you type here is executed by the RPi - not your computer.
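If you connect often, the IP and user name can also go into an SSH config file so that a short alias is enough. A minimal sketch, assuming the alias rpi and the example IP from above (adjust both to your network), placed in ~/.ssh/config on your Mac:

```
Host rpi
    HostName 192.168.0.101
    User pi
```

After that, typing ssh rpi is enough to connect.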

Connecting over VNC

Sometimes this is not enough and you want to see the GUI of the RPi. You can also do this over the network with a VNC server. The part on the RPi is the same and I just copy it from my former tutorial.

On your RPi:
First connect to your RPi over SSH and install tightVNC.
 sudo apt-get update  
 sudo apt-get upgrade  
 sudo apt-get install tightvncserver  
To start the VNC Server use this command:
 vncserver :1 -geometry 1024x600 -depth 16 -pixelformat rgb565  
You must specify a password (maximum 8 characters) that you will need to connect later. Answer no to the view-only question.

On your Mac:
On the Mac you don't need to install anything. Sure, you could install any VNC Viewer you like, but if my computer already comes with the tools, I like to use them.
In Finder press
 cmd+K  
A new window opens where you can type in the IP of your RPi (don't forget to edit the command!):
 vnc://192.168.0.101:5901  

The 1 at the end is the display number of the VNC server on the RPi; when we started the server we chose :1 as well, and VNC listens on port 5900 plus the display number, which gives 5901. After that, type in your password and hit connect.
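The display-to-port mapping can be sketched in the shell (the IP and display number are the examples from above; adjust them to your setup):

```shell
# VNC listens on TCP port 5900 + display number
DISPLAY_NUM=1                      # we started the server with :1
VNC_PORT=$((5900 + DISPLAY_NUM))
echo "vnc://192.168.0.101:$VNC_PORT"   # prints vnc://192.168.0.101:5901
```

So a server started with :2 would be reachable on port 5902, and so on.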

After that 'Screen Sharing' should open and you see this:

This is very convenient, and you can even save your password and the IP.

File transfers with Netatalk

To transfer files I use Netatalk, which is already installed on my Pi. I'll just paste the command from my older tutorial. This time we only have to do it on the RPi.

On your RPi install netatalk:
 sudo apt-get install netatalk  
After that, give it some time to fully start up and then you should see the RPi in your Finder under Shared:

Here you click on 'Connect As ...'. In the window that opens, type in 'pi' as the name and your netatalk password (default: raspberry).


After that I have the same full functionality as I had on Linux.
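As a side note, the same AFP share can also be opened straight from the Terminal with macOS's open command. A dry-run sketch (the IP is the example from above; the leading echo only prints the command, remove it to actually connect):

```shell
PI_IP=192.168.0.101                # example address, adjust to your network
# Remove the leading 'echo' to actually open the share in Finder
echo open "afp://pi@$PI_IP"
```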

Program versions:
Mac OSX 10.10.1
Raspbian wheezy

Thursday, March 6, 2014

Access Raspberry Pi via SSH, VNC and netatalk from your Linux PC

Working on a Raspberry Pi is relatively slow. Because of that I set up a cross compiler on my Linux laptop (Part 1, Part 2). But moving files to your RPi or trying to run a program can also be a pain. I, for example, don't have an HDMI monitor or a USB keyboard. I only have my laptop. But luckily I can do everything I want on my RPi over the network.

For all of this you need the IP of your RPi in your network. To find it, I connected my RPi to my TV and borrowed a USB keyboard from a friend. If even this is not possible for you, here's a solution on how you can search for devices connected to your network.

On the RPi type in the terminal:
 ifconfig  
You should get something like:
 inet addr:192.168.0.101  
Note this!

Connecting over SSH

To connect to your RPi, open the terminal on your Linux computer and type this with the IP address of your RPi:
 ssh pi@192.168.0.101  
Answer the fingerprint question with yes and type in your password (default: raspberry).

Finally in your terminal you should see:

Now every command you type here is executed by the RPi - not your computer.

Connecting over VNC

Sometimes this is not enough and you want to see the GUI of the RPi. You can also do this over the network with a VNC server.

On your RPi:
First connect to your RPi over SSH and install tightVNC.
 sudo apt-get update  
 sudo apt-get upgrade  
 sudo apt-get install tightvncserver  
To start the VNC Server use this command:
 vncserver :1 -geometry 1024x600 -depth 16 -pixelformat rgb565  
You must specify a password (maximum 8 characters) that you will need to connect later. Answer no to the view-only question.

On your computer:
Open another terminal window and install a VNC viewer:
 sudo apt-get update  
 sudo apt-get upgrade  
 sudo apt-get install xtightvncviewer  
Now you can connect to the VNC server on the RPi via (edit the IP!):
 vncviewer 192.168.0.101:5901  
You should see the Desktop of the RPi in a window on your computer.

To stop the viewer just close the window and to stop the server type this on the RPi:
 vncserver -kill :1  

File transfer with netatalk

And how can I put these newly cross-compiled programs on my RPi? Sure, with a USB stick. But since I already do everything over the network, I want to transfer files like this as well. The answer is netatalk.

On your computer and RPi install netatalk:
 sudo apt-get install netatalk  
Then just reboot both. Give them some time to fully boot, and then you should see the RPi in your file explorer under Network:

Double click it and it asks for a username and password. These are:
Username: pi
Password: whatever you chose, default is raspberry

After that you can browse the files on the RPi from your computer and easily copy new files over.
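If you only need to push a single file now and then, scp over the existing SSH connection is an alternative to netatalk. A dry-run sketch (the IP and file name are examples; the leading echo only prints the command, remove it to actually copy):

```shell
PI_IP=192.168.0.101        # example address, adjust to your network
FILE=hello                 # example file, e.g. a cross-compiled binary
# Remove the leading 'echo' to actually copy FILE to pi's home directory
echo scp "$FILE" "pi@$PI_IP:/home/pi/"
```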

Program versions:
Ubuntu 12.04 LTS
Raspbian wheezy 2014-01-07

How to cross compile for Raspberry Pi using Code::Blocks on Linux

In my previous post I described how I managed to build a cross compiler in Ubuntu for a Raspberry Pi. To compile I used a terminal command. But I want to use an IDE. On Linux I use Code::Blocks for now.

So we have to add a custom compiler to Code::Blocks. Go to
Settings --> Compiler ... --> Global compiler settings
Select the GNU GCC Compiler and click Copy. Give your new cross compiler a meaningful name.


Select your newly created compiler and go to Toolchain executables. Under Compiler's installation directory, specify the path where you keep your cross compiler. For me the path looks like this:
/home/christian/Programming/x-tools/arm-unknown-linux-gnueabi

IMPORTANT: Don't specify the path to the bin directory like in the add-to-$PATH tutorial!

After that change the compilers and linkers so it looks like this:


Hit OK and create a new project. Select Console application.


Select your language and your directory where you want to save the project ...


Select your just created compiler...


After that you'll already see a Hello World program. Try to build it. If it works, you get this notification.


Now we want to check if the cross compiler worked. Copy the created file from /bin/debug in your project directory to your home directory on your RPi. On your RPi (I used SSH) open the terminal and type
 ./FILENAME  
If you see a "Hello World!" then your cross compiler works!

Program versions:
OS: Ubuntu 12.04 LTS
crosstool-ng 1.19.0
Code::Blocks 10.05

How to build a cross compiler for Raspberry Pi using crosstool-ng

Normally I'm working with a Mac and OpenCV. But I want to run OpenCV on a Raspberry Pi for a small project. Since I don't want to compile on the Raspberry and lose a lot of time, I was looking for a cross compiler. It looks like this is easier from a Linux machine. I had an old Windows laptop left over, so I installed Ubuntu 12.04 LTS alongside Windows 7 on it.

I found some really good tutorials about this so I won't repeat them and just give you some links:

1. Very clear tutorial. Lightweight for a good overview:
http://www.bootc.net/archives/2012/05/26/how-to-build-a-cross-compiler-for-your-raspberry-pi/
2. More detailed with screenshots:
http://www.kitware.com/blog/home/post/426

I really like the first tutorial but it's missing information about packages you have to install before you can build crosstool-ng. On Ubuntu you do this with this command in the terminal:
 sudo apt-get install PACKAGE_NAME  
The second link already names some packages, but not all of them. I got errors when I tried to build the toolchain, for example subversion is needed to download a package during the build process. So here is a list of what I installed (replace PACKAGE_NAME with the name below to install):
  • libssl-dev 
  • openssh-server 
  • git-core 
  • pkg-config 
  • build-essential 
  • curl
  • gcc 
  • g++
  • bison 
  • flex 
  • gperf 
  • libtool 
  • texinfo 
  • gawk 
  • automake 
  • libncurses5-dev
  • subversion
Lots of stuff, but finally I got the toolchain built.
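For convenience, the whole list above can be handed to apt-get in a single call. A dry-run sketch (the leading echo only prints the command, remove it to actually install):

```shell
# All packages from the list above in one variable
PKGS="libssl-dev openssh-server git-core pkg-config build-essential curl \
gcc g++ bison flex gperf libtool texinfo gawk automake libncurses5-dev subversion"
# Remove the leading 'echo' to actually install
echo sudo apt-get install $PKGS
```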

The tutorial says that we have to add the compiler to our $PATH. This is done with this command (don't forget to edit the path to your compiler):
 export PATH=$PATH:/PATH/TO/x-tools/arm-unknown-linux-gnueabi/bin  
Remember that this is only temporary! The next time you open your terminal you have to add this to your $PATH again.
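To make the change permanent, the export can go into your shell startup file instead. A sketch, assuming the default bash shell and the example path from above (adjust the path to your toolchain); the guard avoids appending the directory twice:

```shell
TOOLCHAIN_BIN=/PATH/TO/x-tools/arm-unknown-linux-gnueabi/bin   # adjust to your toolchain

# Append only if the directory is not already in $PATH
case ":$PATH:" in
  *":$TOOLCHAIN_BIN:"*) ;;                  # already present, do nothing
  *) PATH="$PATH:$TOOLCHAIN_BIN" ;;
esac
export PATH
```

Putting these lines (with your real path) at the end of ~/.bashrc makes the compiler available in every new terminal.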

After this we can compile a "Hello World!" program with this command (source code):
 arm-unknown-linux-gnueabi-c++ hello.cpp -o hello  

Program versions:
OS: Ubuntu 12.04 LTS
crosstool-ng 1.19.0
gcc-linaro 4.8-2013.06-1

Wednesday, February 19, 2014

Static background subtraction using OpenCV

With background subtraction you can eliminate the background and focus on the actual object for further processing (detection, recognition, ...). Some algorithms provided in the OpenCV library "learn" the background. The problem is that if an object doesn't move, it becomes part of the background as well. If you have a fixed camera - and so always the same background - you can just calculate the difference of two images.

Here's the code I used (you can download it here):
 #include <opencv2/core/core.hpp>  
 #include <opencv2/highgui/highgui.hpp>  
 #include <opencv2/imgproc/imgproc.hpp>  
 #include <iostream>  
   
 using namespace cv;  
   
 int main( int argc, char** argv )  
 {  
   // Read images given by user  
   Mat src = imread( "/users/christian/documents/programming/other/imgs/background.jpg", 0); // 1: color, 0: grayscale  
   Mat dst = imread("/users/christian/documents/programming/other/imgs/backtest.jpg", 0);  
   // background subtraction  
   Mat diff;  
   absdiff(src, dst, diff);  
   threshold(diff, diff, 10, 255, CV_THRESH_BINARY); // grayscale needed  
   // Show images in windows  
   imshow("original", src);  
   imshow("new", dst);  
   imshow("diff", diff);  
   // Wait until user presses a key  
   waitKey();  
   return 0;  
 }  
All it does is load two images, calculate the difference, apply a threshold to highlight the object, and display the three images.


This is just a theoretical example. Because I use a static image to calculate the difference here, this method is very sensitive to light. Changing light also changes the background, which then no longer exactly matches the stored background.

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0

Thursday, February 13, 2014

Directories

This is partly just a reminder for myself. But maybe some of you wondered where these libraries are and where Xcode 5 saves the executables that you just compiled.

Xcode 5 hides the project files pretty well but you can find them under:
 /users/username/Library/Developer/Xcode/DerivedData  
To access the Library folder, click Go in the Finder menu bar. When you press the alt key, a link to Library should appear.

And where are the OpenCV files you installed using cmake?
The lib-files are in
 /usr/local/lib  
and the headers in
 /usr/local/include 
Again, the usr folder is hidden. The easiest way to get there is by just typing the path in Finder under Go -> Go to Folder...

Also important to know is the location of additional libraries you installed using MacPorts. MacPorts installs everything to
 /opt/local
Nothing is hidden there, so no problem.

Because of the hidden folders and the rather long paths, I created some aliases for myself to access these folders quickly from my working folder.

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0
MacPorts: 2.2.1

How to use imageclipper on Mac OS X

In the previous tutorial I explained how to build imageclipper from source. You may have done this or just downloaded the executable file I provided. But you can't just double-click it. If you do, only a help text will be presented:
 Application Usage:  
  Mouse Usage:  
   Left (select)     : Select or initialize a rectangle region.  
   Right (move or resize) : Move by dragging inside the rectangle.  
                Resize by draggin outside the rectangle.  
   Middle or SHIFT + Left : Initialize the watershed marker. Drag it.   
  Keyboard Usage:  
   s (save)        : Save the selected region as an image.  
   f (forward)       : Forward. Show next image.  
   SPACE          : Save and Forward.  
   b (backward)      : Backward.   
   q (quit) or ESC     : Quit.   
   r (rotate) R (opposite) : Rotate rectangle in counter-clockwise.  
   e (expand) E (shrink)  : Expand the recntagle size.  
   + (incl)  - (decl)   : Increment the step size to increment.  
   h (left) j (down) k (up) l (right) : Move rectangle. (vi-like keybinds)  
   y (left) u (down) i (up) o (right) : Resize rectangle. (Move boundaries)  
   n (left) m (down) , (up) . (right) : Shear deformation.  
 Now reading a directory..... No image file exist under a directory .  
 ImageClipper - image clipping helper tool.  
 Command Usage: imgclipper [option]... [arg_reference]  
  <arg_reference = .>  
   <arg_reference> would be a directory or an image or a video filename.  
   For a directory, image files in the directory will be read sequentially.  
   For an image, it starts to read a directory from the specified image file.   
   (A file is judged as an image based on its filename extension.)  
   A file except images is tried to be read as a video and read frame by frame.   
  Options  
   -o <output_format = imgout_format or vidout_format>  
     Determine the output file path format.  
     This is a syntax sugar for -i and -v.   
     Format Expression)  
       %d - dirname of the original  
       %i - filename of the original without extension  
       %e - filename extension of the original  
       %x - upper-left x coord  
       %y - upper-left y coord  
       %w - width  
       %h - height  
       %r - rotation degree  
       %. - shear deformation in x coord  
       %, - shear deformation in y coord  
       %f - frame number (for video)  
     Example) ./$i_%04x_%04y_%04w_%04h.%e  
       Store into software directory and use image type of the original.  
   -i <imgout_format = %d/imageclipper/%i.%e_%04r_%04x_%04y_%04w_%04h.png>  
     Determine the output file path format for image inputs.  
   -v <vidout_format = %d/imageclipper/%i.%e_%04f_%04r_%04x_%04y_%04w_%04h.png>  
     Determine the output file path format for a video input.  
   -f  
   --frame <frame = 1> (video)  
     Determine the frame number of video to start to read.  
   -h  
   --help  
     Show this help  
  Supported Image Types  
    bmp|dib|jpeg|jpg|jpe|png|pbm|pgm|ppm|sr|ras|tiff|exr|jp2  

This already tells you a lot. But basically, this is what you have to do:

1. Create a folder and put in all the images you want to cut and the imageclipper executable.

2. Open the Terminal app and navigate to this folder.

3. Once you're in this folder, type this (or whatever your name for the executable is):
 ./imageclipper  
4. A new window should open showing the first picture in this folder.


5. With your mouse, draw a rectangle around the part you want to cut. Another window will open showing you what your cropped image will look like.


6. To crop it, hit s; to jump to the next image, hit f. Or hit space to combine these two steps, which makes you even faster.

7. After cropping, the next image in the folder will open automatically. To leave the application, hit esc.

That's it. You will find your cropped images in a separate folder called imageclipper.

Program versions:
OS: Mac OS X 10.9.1
OpenCV: 2.4.8.0

Wednesday, February 12, 2014

Image processing with OpenCV

In the tutorial where I have explained how to write your first program, we already used some kind of image processing.

In this post I want to try out some other image processing possibilities.

Before we start we have to include some headers (from now on I will only use the newer headers from OpenCV 2):
 #include <opencv2/highgui/highgui.hpp>  
 #include <opencv2/imgproc/imgproc.hpp>  
After this I want to recall the standard input and output operations that we will use.

Input and Output of images

Reading an image

We already did this in the tutorial. The command is:
 Mat image = imread( "PATH_TO_IMAGE", 1);  
So let's split this up:
Mat: this is the class for storing images, like int is for numbers
image: this is the name of the variable where the image is stored in
imread: command to read image
PATH_TO_IMAGE: this specifies the path to the image like "/users/christian/pictures/img.jpg"
1: specifies a color image with 3 channels (note that OpenCV stores them in BGR order). Change it to 0 for a grayscale image with just one channel.

Showing an image in a window

To do this we need the following command:
 imshow("Name of Window", image);
imshow: command to show image
Name of Window: this will be the name of the window where the image is shown in
image: variable of the image you want to display

Image Processing

I will just give you some examples. You can find all the possible commands in the OpenCV documentation. Let's start:

Allocation of channels
 vector<Mat>channels;  
 split(image, channels);  
Here we first create a variable called channels. With the command 'split' we split the color channels of the image and store them in channels. After that we can access each channel separately (remember the BGR order, so channels[2] is red).
 imshow("Red", channels[2]);  
 imshow("Green", channels[1]);  
 imshow("Blue", channels[0]);  

Thresholding

Remember that every pixel of an image can be an integer in a range from 0 to 255. This means that a grayscale image can have 256 different shades of gray. The picture below shows this.

 threshold(image, image, 100, 255, CV_THRESH_BINARY);
image: the input and here as well the output image
100: this is the threshold; every pixel lower than 100 will be set to 0 (black)
255: all pixels over 100 will be set to 255 (white)
CV_THRESH_BINARY: type of the threshold (doc)

For this I am using a grayscale image.


Blurring
 blur(image, image, Size (10, 10));  
image: the input and here as well the output image
Size: blurring kernel size; the higher the number, the more blur you'll get


Cutting
 Rect rect = Rect(100, 100, 200, 200); // Rectangle size 200x200, location x=100 y=100  
 image(rect).copyTo(imagecut);         // Copy of the image  
 image(rect) *= 2;                     // Highlighting of the cut part in the original
Here we create a rectangle called rect with a size of 200x200 pixels and locate it at x=y=100 pixels from the upper left corner. This part is copied to a new image called imagecut. After that we highlight the cut part in the original image.


Don't forget to check the documentation for other processing commands!

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0

Reference: whydomath.org

Monday, February 3, 2014

Tutorial: Build imageclipper on Mac OS X 10.9

In object detection using cascade classification you will get better results the more images you have to train your classifiers. But for the training it is necessary to prepare images that contain examples of the object you want to detect. More precisely, you have to crop loads of images so that only the desired object is visible without much background. But cropping hundreds or thousands of images can be a lengthy process. For exactly this purpose Naotoshi Seo created a small, fast, multi-platform piece of software named imageclipper.


Although the application still works, there are some problems we have to face:

--> He only provides an executable file for Windows. Unix users have to compile it themselves.

I searched a lot for how to do this and found someone who uploaded a compiled version for Mac here - just to face another problem, one already well known to me.

--> The software is outdated and was compiled with an older version of OpenCV (here: version 2.1). Since then the links to the libraries and headers have changed. When trying to start imageclipper with a newer version of OpenCV you will get the following error:

dyld: Library not loaded: libopencv_core.2.1.dylib

So you will have to compile it yourself if you want to use the speed advantage of imageclipper.

Preparation

Download the imageclipper source code

Get it from here: https://github.com/noplay/imageclipper
It is a fork of the original imageclipper with some Mac-specific updates. Unpack it and navigate to the folder /src inside. It contains everything you need.

Install boost

To compile the imageclipper source code you need the boost C++ source libraries. You can install them easily using MacPorts - just like you installed cmake. Just open the Terminal and enter this command:
 sudo port install boost   
After entering your password boost will be downloaded and installed.

Configure Xcode

1. Create a new project - a C++ Command Line Tool - and name it "imageclipper".

2. Copy all the files from the /src folder of the imageclipper fork into the folder of the project you just created.

3. Copy the content of the "imageclipper.cpp" in your "main.cpp".

4. Now we have to include the libraries:

4.1 Include the OpenCV libraries like I explained here in point 5 of "Configure Xcode": Tutorial: Configure Xcode for OpenCV programming

4.2 In the same manner, add the boost libraries. They are located in /opt/local/lib. Just add all the .dylib files whose names contain "libboost".

5. Configure search paths:

5.1 Add the following Header Search Paths:
/usr/local/include
/usr/local/include/opencv
/usr/local/lib
/opt/local/include
/opt/local/include/boost

and the path to your project where you copied the imageclipper source files inside, for example:
/users/christian/documents/programming/imageclipper

It should look like this:

5.2 Add the following Library Search Paths:
/usr/local/lib
/opt/local/lib

6. Compile imageclipper.

That's it. You can now compile imageclipper. You find the executable file in
/Users/USER-NAME/Library/Developer/Xcode/DerivedData/PROJECT-NAME/Build/Products

I have already done this. As long as you followed my instructions for installing OpenCV and your libraries are located in the same path, you can just download my executable file and use it.

Download imageclipper executable for Mac

In another post, I explain how to use imageclipper.

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0
MacPorts: 2.2.1
boost: 1.55.0

Saturday, February 1, 2014

Tutorial: First OpenCV program

After we installed OpenCV and configured Xcode it's time to create an easy program.

For the first program we will refer to an OpenCV tutorial where we change the contrast and brightness of an image. But as I mentioned in my first post, this tutorial is outdated and was written for an older version of OpenCV. So we have to make some small changes to the code.

Edit the code

First edit the "main.cpp" file. Delete everything and copy the following code inside. You can also download it here.

1:  #include <opencv/cv.h>  
2:  #include <opencv/highgui.h>  
3:  #include <iostream>  
4:    
5:  using namespace cv;  
6:    
7:  double alpha; /**< Simple contrast control */  
8:  int beta; /**< Simple brightness control */  
9:    
10:  int main( int argc, char** argv )  
11:  {  
12:    // Read image given by user  
13:    // Change the path here to your image, look at form below  
14:    Mat image = imread( "/users/name/documents/programming/imgs/test1.jpg" );  
15:    Mat new_image = Mat::zeros( image.size(), image.type() );  
16:      
17:    // Initialize values  
18:    std::cout<<" Basic Linear Transforms "<<std::endl;  
19:    std::cout<<"-------------------------"<<std::endl;  
20:    std::cout<<"* Enter the alpha value [1.0-3.0]: ";std::cin>>alpha;  
21:    std::cout<<"* Enter the beta value [0-100]: "; std::cin>>beta;  
22:      
23:    // Do the operation new_image(i,j) = alpha*image(i,j) + beta  
24:    for( int y = 0; y < image.rows; y++ )  
25:    { for( int x = 0; x < image.cols; x++ )  
26:    { for( int c = 0; c < 3; c++ )  
27:    {  
28:      new_image.at<Vec3b>(y,x)[c] =  
29:      saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta );  
30:    }  
31:    }  
32:    }  
33:      
34:    // Create Windows  
35:    namedWindow("Original Image", 1);  
36:    namedWindow("New Image", 1);  
37:      
38:    // Show stuff  
39:    imshow("Original Image", image);  
40:    imshow("New Image", new_image);  
41:      
42:    // Wait until user press some key  
43:    waitKey();  
44:    return 0;  
45:  }  

Note that I edited the include files in lines 1 and 2. This is because of a newer version of OpenCV. If you navigate to /usr/local/include you will see that there are two folders now - opencv and opencv2. opencv contains the old header files and opencv2 the newer ones. So we just add the folder name to the path. Newer tutorials from the OpenCV website already include the path and use the opencv2 headers.

Run the program

Now it's time to finally compile and run the program. For this, go to Product --> Run or just hit ⌘R.

Down in the right corner you can see the console output, where you can enter some values for the image manipulation.


Type in some numbers and hit enter. Here is the result for the values I chose.


Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0

Tutorial: Configure Xcode for OpenCV programming

In this post I want to show you how to configure Xcode to work with OpenCV if you installed it like I explained here.

Start Xcode

1. When you open Xcode, select "Create a new Xcode project".

2. Under OS X Applications select "Command Line Tool".

3. Give your project a name and select as Type "C++".

4. Select a path where you want to save your project.

Configure Xcode

Before actually writing some code, we have to prepare Xcode so that it can use OpenCV. For this we have to include the header and library files.

As I explained in the install tutorial, OpenCV was installed to /usr/local. The header files are in /usr/local/include and the libraries are located in /usr/local/lib. We have to add these paths to the "Search Paths" in Xcode. To get there, you have to:

1. Select your project file in the project navigator on the left.

2. Now you can access the "Build Settings". In the search box type in "Search Paths" or simply scroll down until you see them.

3. Double click on "Header Search Paths" and add two lines:
    /usr/local/include
    /usr/local/lib

4. Double click on "Library Search Paths" and add this line:
    /usr/local/lib

5. Now we just have to add the *.dylib files from OpenCV. To do this:

  5.1 right click on your project file.

  5.2 Select "Add files to "projectname" ...".

  5.3 Don't select a path. Just hit / to open a line where you can specify a path. Enter /usr/local/lib and hit enter.

  5.4 Select all the *.dylib files you want to add. Just add all of them for now. If you want to you can create a folder and move all the files in there.

Finally it should look like the screenshot below. With this configuration you are ready to write programs with OpenCV.


Remark:
In case you didn't build OpenCV yourself and installed it instead via MacPorts, the procedure for configuring Xcode is the same. Just modify the paths. Instead of
/usr/local/lib and /usr/local/include
Macports installs the libraries in
/opt/local/lib and /opt/local/include

Program versions:
OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0

Friday, January 31, 2014

Tutorial: Installing OpenCV on a Mac (OS X 10.9.1 Mavericks)

Since I found it quite hard to get OpenCV up and running and most of the tutorials I found were outdated, I will update them to work with the latest releases of OpenCV and Mac OS X. In every post I will also tell you which versions I used, because I always felt this information was missing. Here I used:

OS: Mac OS X 10.9.1
Xcode: 5.0.2
OpenCV: 2.4.8.0
MacPorts: 2.2.1
cmake: 2.8.12_3

OpenCV is available from various sources. You can get it via package managers like Homebrew or MacPorts, or you can build it yourself from source. It's much easier to get it from one of the mentioned package managers, but in the beginning I got some errors with that when I tried to compile tutorials. Mainly because it is installed in a different location than if you build it yourself. As the OpenCV tutorials always refer to the standard path, I would recommend that beginners build it from source, so you don't have to change the code of the examples and tutorials. So, let's go.

Preparation

Install Xcode

First of all you need to have Xcode installed. Just get it from the Mac App Store. The current version is 5.0.2. In other tutorials you will always find that you need to install the "Command Line Tools" for Xcode. Well, maybe you used to. But Xcode 5 comes with these tools already installed, so nothing to do here.
Before you continue, launch Xcode. When you start it the first time you have to agree to the license agreement. If you don't do this, the following will not work.

Download OpenCV

Download the OpenCV Libraries. For Mac you want to use the Unix version. You will get a .zip file - unzip it.

Get MacPorts

Just download the package from their site MacPorts-2.2.1-10.9-Mavericks
... and install it. For help, refer to the MacPorts Guide.

You can check if it was installed correctly by typing
 port version  
in the terminal. The terminal app is in the Utilities folder in your applications.

Get cmake

For this you want to use the previously installed MacPorts. So launch your terminal app again. Then type this in:
 sudo port install cmake   
This may take a while until everything is downloaded and installed. You can check your installation by typing this in your terminal:
 port installed   
This will show all your installed ports. You should see cmake there.

Build OpenCV

We finally have made all the necessary preparations and are ready to build OpenCV.

1.
Copy the unpacked OpenCV folder to your desktop and rename it to "opencv".

2.
Open your terminal and type the following in:
 cd Desktop/opencv  
 mkdir build  
 cd build  
With these commands you go to your OpenCV folder on the desktop, create a new folder inside named "build" and enter this folder.

3.
Now we can build and install OpenCV:
 cmake -G "Unix Makefiles" ..  
 make -j8  
 sudo make install  

That's it. You just built OpenCV from source and installed it to /usr/local.

In another post I will guide you through the process of creating your first OpenCV project in Xcode.

Source: Tilo Mitra