I already posted the proof of concept for our project in the Interactive Exhibit Design course. Here I combined the Simple Kinect Touch (SKT) example with some of the code Javier has been working on. I took some time to sort out the position of my screen and the Kinect to get a better result. I think this is worth documenting since I am using the latest version of SKT, in which the interface has changed a bit. I hope it is useful to somebody.

A quick configuration of Simple Kinect Touch 2.1

Yesterday, we were playing around with Arduino again. This time Javier and I went for a bigger project: an LED that turns on when the computer detects a face with the webcam. After solving some issues with the Internet connection we divided the work: he took on the task of controlling the Arduino from Processing, and I installed OpenCV and the corresponding libraries for Processing.

 

Here are the steps:

1. Install Processing. Just download, uncompress and run.

2. Install OpenCV. Just open a Terminal and execute the following command:

sudo apt-get install libcv2.1 libcvaux2.1 libhighgui2.1

3. Find the sketchbook folder. To find the Processing sketchbook location on your computer, open the Preferences window from the Processing application and look for the “Sketchbook location” item at the top. So, for example, my sketchbook was located in /home/roberto/sketches.

4. Find or create the libraries folder. Inside the sketchbook folder (/home/roberto/sketches) there should be another folder called libraries. If you don’t find it, just create it.

5. Download the library. Click here or look for the latest version on the official web page.

6. Uncompress the tar.gz file.

7. Copy the whole uncompressed folder into the libraries folder. In my case /home/roberto/sketches/libraries. Normally the installation would finish here; however, it does not work yet because some of the files are named differently (a different version of OpenCV).

8. Open a Terminal and create the following symbolic links with these commands:

sudo ln -s /usr/lib/libcxcore.so /usr/lib/libcxcore.so.1
sudo ln -s /usr/lib/libcv.so /usr/lib/libcv.so.1
sudo ln -s /usr/lib/libcvaux.so /usr/lib/libcvaux.so.1
sudo ln -s /usr/lib/libml.so /usr/lib/libml.so.1
sudo ln -s /usr/lib/libhighgui.so.2.1 /usr/lib/libhighgui.so.1

9. Open Processing and paste the code. Just paste this code into the Processing window.

10. A small change in the previous code. Just change this line

opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );    // load the FRONTALFACE description file

for this one

opencv.cascade( "/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" );    // load the FRONTALFACE description file

Actually, I found more than one cascade file, so it is probably worth trying these other files as well. I gave them a try, and the second one is much faster.

/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt2.xml
/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt_tree.xml
/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_default.xml
/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml

11. Run the code and enjoy.

It’s amazing how easily you can get very cool libraries running and let your imagination fly. Don’t miss Javier’s blog, where he is going to post about the whole mini-project, Arduino included, during this week.

 

I trained an Adaboost classifier to distinguish between two artistic styles. A technical report of my results can be found on my ResearchGate.net account. This sort of tutorial – or more precisely, collection of blog posts – explains the steps and provides the code to create an image classifier from histograms of oriented edges, colors and intensities. You can therefore apply my methodology to other problems.

There are two main steps: (1) produce the features of the images, and (2) train and use the classifier. I started the blog sequence with the classifier that I used (Adaboost), and then continued by explaining how to produce features for big collections. This is probably a strange way of presenting the problem, since I am starting from the last step; however, I found that most of the decisions I took in the process were justified by the input I needed to produce. I also recommend checking the comments, where I have answered multiple questions during the lifetime of these posts.


 

This post follows the same idea as Lots of features from color histograms on a directory of images, but using Edge orientation histograms in global and local features.

Basically I wanted to construct a collection of different edge orientation histograms for a collection of images saved in a directory. The histograms were calculated over different regions so I could get a lot of features. The images were numbered so that the file name coincides with a number. The first thing I noticed was that I shouldn’t use the code of Edge orientation histograms in global and local features directly, because it was very inefficient: there is no need to calculate the gradients each time. It is better to do it just once and then extract the histograms from regions of that initial result. For this reason I divided that function into two functions: extract_edges and histo_edges. Here is the code for both:

# parameters
# - the image
function [data] = extract_edges(im)

% define the filters for the 5 types of edges
f2 = zeros(3,3,5);
f2(:,:,1) = [1 2 1;0 0 0;-1 -2 -1];
f2(:,:,2) = [-1 0 1;-2 0 2;-1 0 1];
f2(:,:,3) = [2 2 -1;2 -1 -1; -1 -1 -1];
f2(:,:,4) = [-1 2 2; -1 -1 2; -1 -1 -1];
f2(:,:,5) = [-1 0 1;0 0 0;1 0 -1];

% the size of the image
ys = size(im,1);
xs = size(im,2);

# The image has to be in gray scale (intensities)
if (isrgb(im))
    im = rgb2gray(im);
endif

# Build a new matrix of the same size of the image
# and 5 dimensions to save the gradients
im2 = zeros(ys,xs,5);

# iterate over the possible directions
for i = 1:5
    # apply the Sobel mask
    im2(:,:,i) = filter2(f2(:,:,i), im);
end

# calculate the max sobel gradient
[mmax, maxp] = max(im2,[],3);
# save just the index (type) of the orientation
# and ignore the value of the gradient
im2 = maxp;

# detect the edges using the default Octave parameters
ime = edge(im, 'canny');

# multiply against the types of orientations detected
# by the Sobel masks
data = im2.*ime;

# parameters
# - the image (used only for its size)
# - the orientation map returned by extract_edges
# - the number of vertical and horizontal divisions
function [data] = histo_edges(im, edges, r)

% size of the image
ys = size(im,1);
xs = size(im,2);

# produce a structure to save all the bins of the
# histogram of each region
eoh = zeros(r,r,6);
# for each region
for j = 1:r
    for i = 1:r
        # extract the subimage
        clip = edges(round((j-1)*ys/r+1):round(j*ys/r),round((i-1)*xs/r+1):round(i*xs/r));
        # calculate the histogram for the region
        eoh(j,i,:) = (hist(makelinear(clip), 0:5)*100)/numel(clip);
    end
end

# take out the zeros
eoh = eoh(:,:,2:6);

# represent all the histograms on one vector
data = zeros(1,numel(eoh));
data(:) = eoh(:);
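
As a quick sanity check, this is how the two functions can be called on a single image (a minimal sketch; the file name and the 3x3 division are just examples):

im = imread("1.jpg");            # load one image
edges = extract_edges(im);       # compute the orientation map only once
h1 = histo_edges(im, edges, 1);  # global histogram: 1*1*5 = 5 values
h3 = histo_edges(im, edges, 3);  # 3x3 local histograms: 3*3*5 = 45 values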

Now, with these two functions and following the idea of Lots of features from color histograms on a directory of images, it is possible to generate the features in just one vector for each of the images:

# parameters
# - the directory that contains the images (named 1, 2, 3, ...)
# - the number of images to process
# - the file where the features will be saved
function [t_set] = extract_eohs(dir, samples, filename)

# region divisions to use, from global (1x1) to increasingly local
fibs = [1,2,3,5,8,13];
total = 0;
ranges = zeros(6, 2);
# for each division size, remember which columns of the feature
# vector its histograms will occupy (5 bins per region)
for fib = 1:size(fibs)(2)
    ranges(fib,1) = total + 1;
    total += 5*fibs(fib)*fibs(fib);
    ranges(fib,2) = total;
endfor

# one row of features per image
histo = zeros(samples, total);

for ind = 1:samples
    # the file name of each image is just its number
    im = imread(strcat(dir, int2str(ind)));
    edges = extract_edges(im);
    # concatenate the histograms of every division size
    for fib = 1:size(fibs)(2)
        histo(ind,ranges(fib,1):ranges(fib,2)) = histo_edges(im, edges, fibs(fib));
    endfor
endfor
save("-text", filename, "histo");
save("-text", "ranges.dat", "ranges");

t_set = ranges;
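
For example (a sketch: the directory and the number of images are hypothetical), a call looks like this; each image contributes one row of 5*(1+4+9+25+64+169) = 1360 features, saved in text format in the file passed as the third argument:

# 120 images named 1, 2, ..., 120 inside ./paintings/
ranges = extract_eohs("./paintings/", 120, "eoh_features.dat");
# ranges(k,:) holds the first and last feature columns produced by the
# k-th division size in fibs = [1,2,3,5,8,13]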

 

This is the last type of histogram I used in my project of training an Adaboost classifier to distinguish two artistic styles.

The basic idea in this step is to build a histogram with the directions of the gradients of the edges (borders or contours). It is possible to detect edges in an image, but in this case we are interested in detecting their orientation. This is possible through Sobel operators. The following five operators give an idea of the strength of the gradient in five particular directions (Fig. 1).

Fig. 1 The Sobel masks for 5 orientations: vertical, horizontal, diagonals and non-directional

Convolving the image with each of these masks produces a matrix of the same size as the original image indicating the gradient (strength) of the edge in that particular direction. It is then possible to take, at each pixel, the orientation with the maximum gradient over the final 5 matrices and use it to build a histogram (Fig. 2).

Fig 2. Edge Orientation Histogram

To avoid the large number of unimportant gradients that this methodology could potentially introduce, one option is to take into account only the edges detected by a very robust method such as the Canny edge detector. This detector returns a matrix of the same size as the image, with a 1 where there is an edge and a 0 where there is not. Basically it returns the contours of the objects in the image. If you only consider the 1’s, you are counting just the most pronounced gradients.
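
Condensed, the whole procedure fits in a few lines of Octave (just a sketch, assuming im is already a gray-scale matrix and f2 holds the five masks defined in the code below):

resp = zeros(size(im,1), size(im,2), 5);
for k = 1:5
    resp(:,:,k) = filter2(f2(:,:,k), im);  # gradient strength for each orientation
end
[mmax, ori] = max(resp, [], 3);            # keep only the index of the winning orientation (1..5)
ori = ori .* edge(im, 'canny');            # zero out everything that is not a real contour
counts = hist(ori(:), 0:5);                # bin 0 = no edge, bins 1..5 = the five orientations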

I am also interested in calculating global and local histograms (I have already talked about this in previous posts). For example, Fig 1, Fig 2 and Fig 3 present the regions for three different region divisions: 1, 3 and 8 respectively.

Fig 1. 1x1 region divisions
Fig 2. 3x3 region divisions
Fig 3. 8x8 region divisions
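
To make the region indexing concrete with a small example: with r = 3 on a 300x300 image, region (j, i) = (2, 2) covers rows round((2-1)*300/3 + 1) = 101 through round(2*300/3) = 200 and the same columns, so each of the nine 100x100 sub-images contributes its own 5-bin histogram to the final feature vector.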

I found this code, but I had to make several modifications because of my particular requirements. The most important are:

  • I needed to work just with gray scale images
  • I took out an initial filter that seemed to be unnecessary
  • I needed to extract histograms of different regions
  • I needed a linear response: just one vector with all the responses together

I am posting the code with all the modifications:

# parameters
# - the image
# - the number of vertical and horizontal divisions
function [data] = edgeOrientationHistogram(im, r)

% define the filters for the 5 types of edges
f2 = zeros(3,3,5);
f2(:,:,1) = [1 2 1;0 0 0;-1 -2 -1];
f2(:,:,2) = [-1 0 1;-2 0 2;-1 0 1];
f2(:,:,3) = [2 2 -1;2 -1 -1; -1 -1 -1];
f2(:,:,4) = [-1 2 2; -1 -1 2; -1 -1 -1];
f2(:,:,5) = [-1 0 1;0 0 0;1 0 -1];

% the size of the image
ys = size(im,1);
xs = size(im,2);

# The image has to be in gray scale (intensities)
if (isrgb(im))
    im = rgb2gray(im);
endif

# Build a new matrix of the same size of the image
# and 5 dimensions to save the gradients
im2 = zeros(ys,xs,5);

# iterate over the possible directions
for i = 1:5
    # apply the Sobel mask
    im2(:,:,i) = filter2(f2(:,:,i), im);
end

# calculate the max sobel gradient
[mmax, maxp] = max(im2,[],3);
# save just the index (type) of the orientation
# and ignore the value of the gradient
im2 = maxp;

# detect the edges using the default Octave parameters
ime = edge(im, 'canny');

# multiply against the types of orientations detected
# by the Sobel masks
im2 = im2.*ime;

# produce a structure to save all the bins of the
# histogram of each region
eoh = zeros(r,r,6);
# for each region
for j = 1:r
    for i = 1:r
        # extract the subimage
        clip = im2(round((j-1)*ys/r+1):round(j*ys/r),round((i-1)*xs/r+1):round(i*xs/r));
        # calculate the histogram for the region
        eoh(j,i,:) = (hist(makelinear(clip), 0:5)*100)/numel(clip);
    end
end

# take out the zeros
eoh = eoh(:,:,2:6);

# represent all the histograms on one vector
data = zeros(1,numel(eoh));
data(:) = eoh(:);

 
The makelinear function doesn’t exist in Octave. All it does is convert a matrix into a vector. If it doesn’t exist on your system, you can use the following implementation.

# makelinear.m
# converts any input matrix into a 1D vector (output)

function data = makelinear(im)
data = zeros(numel(im),1);
data(:) = im(:);
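
As a side note, Octave can do this reshaping directly: im(:) already returns the matrix as a single column vector, so data = im(:); is essentially equivalent. The helper just keeps the intention explicit in the code above (and forces a double result).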