I made a proof of concept of the basic idea of the project my classmate and I have in mind. We want to use Simple Kinect Touch to turn a surface into a multi-touch screen. At some point we are going to have a projector displaying the screen; for now I am just using a piece of cloth in a very rudimentary way. Despite the simplicity, it shows the idea is possible, and maybe it will be helpful to somebody. Please excuse the bad quality of the video and my awful English accent.

Rear Screen Capture with Simple Kinect Touch

The first thing is that Javier and I have decided to go back to 32 bits, since he has had something of a nightmare with 64 bits. Things have gone quite well on my 32-bit Ubuntu 11.10, so we are going to stick to the old, safe setup. In this post, I am installing the libraries needed to use a Kinect on Linux.
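All of the binaries below are the x86 (32-bit) builds, so before downloading anything it is worth double-checking which architecture you are actually running. A quick way to do that on any Ubuntu install:

```shell
# Print the machine architecture: i686 means 32-bit, x86_64 means 64-bit.
uname -m
```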

Simple Kinect Touch allows you to turn any surface into a multi-touch screen. Javier and I are working on an idea that combines this with the OpenCV face recognition implementation for a Digital Exhibit Design project. We are still shaping the idea and will post it once we have it running.

I know Simple Kinect Touch has its own README for the installation, but for some reason it is not easy to follow unless you understand very well certain technical details that it assumes. So I am going to write a step-by-step guide, pointing out some of the things that are not quite straightforward in the README.

A. Open a Terminal (Ctrl + Alt + T) and run the following command to install OpenGL, QT4, LibUsb, g++, Python, Doxygen, Graphviz, and a GTK2 interface library that avoids a warning. All of them are requirements for the installations that follow.

sudo apt-get install freeglut3 freeglut3-dev binutils-gold libqtcore4 libqt4-opengl libqt4-test libusb-1.0-0-dev g++ python doxygen graphviz gtk2-engines-pixbuf

B. Install OpenNI.

  1. Go to the downloads/OpenNI Modules section.
  2. Select "OpenNI Binaries", "Unstable" and "OpenNI Unstable Build for Ubuntu 10.10 x86 (32-bit) v1.5.2.23".
  3. Download it.
  4. Uncompress it.
  5. In the terminal, go to the uncompressed folder.
  6. Execute in terminal: sudo ./install.sh
  7. Go to ./Samples/Bin/x86-Release
  8. Execute in terminal: sudo ./NiViewer

If you get this error: Failed to set USB interface! 

  1. Execute in terminal: sudo rmmod gspca_kinect
  2. Execute in terminal: sudo ./NiViewer
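Note that the rmmod fix only lasts until the next reboot, because the kernel reloads the gspca_kinect webcam driver automatically. If the error keeps coming back, a common workaround (my own suggestion, not from the SKT README) is to blacklist the module with a small file in /etc/modprobe.d/, for example:

```
# /etc/modprobe.d/blacklist-kinect.conf  (hypothetical file name)
blacklist gspca_kinect
```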

C. Install avin2's driver (SensorKinect)

  1. Download and uncompress it from here (if you have a GitHub account, it's easier to just execute "git clone git@github.com:avin2/SensorKinect.git git/SensorKinect").
  2. In the terminal, go to the uncompressed folder.
  3. Then go to Platform/Linux/CreateRedist.
  4. Execute in terminal: chmod +x RedistMaker
  5. Execute in terminal: ./RedistMaker
  6. Move from Platform/Linux/CreateRedist to Platform/Linux/Redist.
  7. Enter the only folder created in that directory. In my case the name was Sensor-Bin-Linux-x86-v5.1.0.25.
  8. Execute in terminal: chmod +x install.sh (file names are case-sensitive on Linux, so if ls shows Install.sh, use that spelling in both commands)
  9. Execute in terminal: sudo ./install.sh

D. Install NITE

  1. Go to the downloads/OpenNI Modules section.
  2. Select "OpenNI Compliant Middleware Binaries", "Unstable" and "PrimeSense NITE Unstable Build for Ubuntu 10.10 x86 (32-bit) v1.5.2.21".
  3. Download it.
  4. Uncompress it.
  5. In the terminal, go to the uncompressed folder.
  6. Execute in terminal: sudo ./install.sh

E. Install and run Simple Kinect Touch

  1. Go to the downloads section and download it.
  2. Uncompress it.
  3. In the terminal, go to the uncompressed folder.
  4. Go to bin/Release.
  5. Execute in terminal: chmod +x SKT-2
  6. Execute in terminal: ./SKT-2

I will post more details once I get the cable to plug in the Kinect, which I am still missing. For now, I am satisfied that everything installed without errors.

Yesterday we were playing around with Arduino again. This time Javier and I went for a bigger project: an LED that turns on when the computer detects a face with the webcam. After solving some issues with the Internet connection, we divided the work: he took on the task of controlling the Arduino from Processing, and I installed OpenCV and the corresponding libraries for Processing.


Here are the steps:

1. Install Processing. Just download, uncompress, and run.

2. Install OpenCV. Just open a Terminal and execute the following command:

sudo apt-get install libcv2.1 libcvaux2.1 libhighgui2.1

3. Find the sketchbook folder. To find the Processing sketchbook location on your computer, open the Preferences window from the Processing application and look for the "Sketchbook location" item at the top. For example, my sketchbook was located in /home/roberto/sketches.

4. Find or create the libraries folder. Inside the sketchbook folder (/home/roberto/sketches) there should be another folder called libraries. If you don't find it, just create it.
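Steps 3 and 4 can also be done from the terminal. Here is a sketch, assuming the sketchbook is at ~/sketches as in my case; substitute the path your Preferences window actually shows:

```shell
# Create the libraries folder inside the sketchbook if it is not there yet.
# ~/sketches is my sketchbook location -- yours may differ.
mkdir -p ~/sketches/libraries
ls -d ~/sketches/libraries
```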

5. Download the library. Click here or look for the latest version on the official web page.

6. Uncompress the tar.gz

7. Copy the whole uncompressed folder into the libraries folder, in my case /home/roberto/sketches/libraries. Normally the installation would finish here; however, it does not work yet, because some of the library files are named differently (we have a different version of OpenCV than the one the wrapper expects).

8. Open a Terminal and create the following symbolic links with these commands:

sudo ln -s /usr/lib/libcxcore.so /usr/lib/libcxcore.so.1
sudo ln -s /usr/lib/libcv.so /usr/lib/libcv.so.1
sudo ln -s /usr/lib/libcvaux.so /usr/lib/libcvaux.so.1
sudo ln -s /usr/lib/libml.so /usr/lib/libml.so.1
sudo ln -s /usr/lib/libhighgui.so.2.1 /usr/lib/libhighgui.so.1
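If you are unsure what these commands do: ln -s TARGET LINKNAME creates LINKNAME pointing at TARGET, so the loader finds the .so.1 names the Processing library asks for. The argument order trips people up, so here is a harmless demonstration in a scratch directory (nothing here touches /usr/lib; the file names are stand-ins):

```shell
# Demo of the ln -s argument order: target first, link name second.
mkdir -p /tmp/linkdemo
touch /tmp/linkdemo/libcv.so                      # stand-in for the real library
ln -sf /tmp/linkdemo/libcv.so /tmp/linkdemo/libcv.so.1
ls -l /tmp/linkdemo/libcv.so.1                    # shows libcv.so.1 -> libcv.so
```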

9. Open Processing and paste the code. Just paste this code into the Processing window.

10. A small change in the previous code. Just change this line

opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );    // load the FRONTALFACE description file

for this one

opencv.cascade( "/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" );    // load the FRONTALFACE description file

Actually, I found more than one cascade file, so it is probably worth trying these others too. I did try them; the second one is much faster.

/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt2.xml
/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt_tree.xml
/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_default.xml
/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml

11. Run the code and enjoy.

It's amazing how easily you can run very cool libraries and let your imagination fly. Don't miss Javier's blog, where he is going to post about the whole mini-project, Arduino included, during this week.


I trained an AdaBoost classifier to distinguish between two artistic styles. A technical report of my results can be found on my ResearchGate.net account. This sort of tutorial, or more precisely collection of blog posts, explains the steps and provides the code to create an image classifier from histograms of oriented edges, colors and intensities, so you can apply my methodology to other problems.

There are two main steps: (1) produce the features of the images, and (2) train and use the classifier. I started the blog sequence with the classifier I used (AdaBoost), and then continued by explaining how to produce features for big collections. This is probably a weird way of viewing the problem, since I am starting from the last step; however, I found that most of the decisions I took along the way were justified by the input I wanted the classifier to receive. I also recommend checking the comments, where I have answered multiple questions over the lifetime of these posts.



This post follows the same idea as Lots of features from color histograms on a directory of images, but using Edge orientation histograms in global and local features.

Basically, I wanted to compute a collection of different edge orientation histograms for a set of images saved in a directory. The histograms were calculated over different regions, so I could get a lot of features. The images were numbered, so each file name coincides with a number. The first thing I noticed was that I shouldn't use the code from Edge orientation histograms in global and local features directly, because it was very inefficient: there is no need to recompute the gradients for each region. It is better to do that just once, and then extract the histograms from regions of that initial result. For this reason I divided that function into two functions, extract_edges and histo_edges. Here are both:

# parameters
# - the image
function [data] = extract_edges(im)

% define the filters for the 5 types of edges
f2 = zeros(3,3,5);
f2(:,:,1) = [1 2 1;0 0 0;-1 -2 -1];
f2(:,:,2) = [-1 0 1;-2 0 2;-1 0 1];
f2(:,:,3) = [2 2 -1;2 -1 -1; -1 -1 -1];
f2(:,:,4) = [-1 2 2; -1 -1 2; -1 -1 -1];
f2(:,:,5) = [-1 0 1;0 0 0;1 0 -1];

% the size of the image
ys = size(im,1);
xs = size(im,2);

# The image has to be in gray scale (intensities)
if (size(im, 3) == 3)   # isrgb() was removed from recent Octave versions
    im = rgb2gray(im);
endif

# Build a new matrix of the same size of the image
# and 5 dimensions to save the gradients
im2 = zeros(ys,xs,5);

# iterate over the posible directions
for i = 1:5
    # apply the sobel mask
    im2(:,:,i) = filter2(f2(:,:,i), im);
end

# calculate the max sobel gradient
[mmax, maxp] = max(im2,[],3);
# save just the index (type) of the orientation
# and ignore the value of the gradient
im2 = maxp;

# detect the edges using the default Octave parameters
ime = edge(im, 'canny');

# multiply against the types of orientations detected
# by the Sobel masks
data = im2.*ime;

function [data] = histo_edges(im, edges, r)

% size of the image
ys = size(im,1);
xs = size(im,2);

# produce a structur to save all the bins of the
# histogram of each region
eoh = zeros(r,r,6);
# for each region
for j = 1:r
    for i = 1:r
        # extract the subimage
        clip = edges(round((j-1)*ys/r+1):round(j*ys/r),round((i-1)*xs/r+1):round(i*xs/r));
        # calculate the histogram for the region; makelinear (from the
        # color histograms post) just flattens the matrix into a vector
        eoh(j,i,:) = (hist(makelinear(clip), 0:5)*100)/numel(clip);
    end
end

# take out the zeros
eoh = eoh(:,:,2:6);

# represent all the histograms on one vector
data = zeros(1,numel(eoh));
data(:) = eoh(:);

Now, with these two functions, and following the idea of Lots of features from color histograms on a directory of images, it is possible to generate the features as a single vector for each of the images:

function [t_set] = extract_eohs(dir, samples, filename)

fibs = [1,2,3,5,8,13];
total = 0;
# one [start, end] column range per grid size
ranges = zeros(size(fibs)(2), 2);
for fib = 1:size(fibs)(2)
    ranges(fib,1) = total + 1;
    total += 5*fibs(fib)*fibs(fib);
    ranges(fib,2) = total;
endfor

histo = zeros(samples, total);

for ind = 1:samples
    im = imread(strcat(dir, int2str(ind)));
    edges = extract_edges(im);
    for fib = 1:size(fibs)(2)
        histo(ind,ranges(fib,1):ranges(fib,2)) = histo_edges(im, edges, fibs(fib));
    endfor
endfor
save("-text", filename, "histo");
save("-text", "ranges.dat", "ranges");

t_set = ranges;