I made a proof of concept of the basic idea of the project my classmate and I have in mind. We want to use Simple Kinect Touch to turn a surface into a multi-touch screen. At some point we are going to have a projector displaying the screen; for now, I am just using a piece of cloth in a very rudimentary way. Despite the simplicity, it shows the idea is possible, and maybe it will be helpful to somebody. I hope you will excuse the bad quality of the video and my awful English accent.

Rear Screen Capture with Simple Kinect Touch

The first thing is that Javier and I have decided to go back to 32 bits, since 64 bits has been kind of a nightmare for him. Things have gone quite well on my 32-bit Ubuntu 11.10, so we are going to stick to the old, safe setup. In this post, I am installing the libraries needed to use a Kinect on Linux.

Simple Kinect Touch allows you to have a multi-touch screen on any surface. Javier and I are working on an idea that combines this with the OpenCV face recognition implementation for a Digital Exhibit Design project. We are still working on the idea, and it will be posted once we have it running.

I know Simple Kinect Touch has its own README for the installation, but for some reason it is not easy to follow unless you understand very well certain technical details that it assumes. So I am going to do a step-by-step guide, pointing out some of the things that are not quite straightforward in the README.

A. Open a Terminal (Ctrl + Alt + T) and run the next command to install OpenGL, Qt4, libusb, g++, Python, Doxygen, Graphviz and a GTK2 engine library (to avoid a warning). All of them are requirements for the following installations.

sudo apt-get install freeglut3 freeglut3-dev binutils-gold libqtcore4 libqt4-opengl libqt4-test libusb-1.0-0-dev g++ python doxygen graphviz gtk2-engines-pixbuf

B. Install OpenNI.

  1. Go to the downloads/OpenNI Modules.
  2. Then select “OpenNI Binaries”, “unstable” and “OpenNI Unstable Build for Ubuntu 10.10 x86 (32-bit) v1.5.2.23”
  3. Then Download it
  4. Uncompress it.
  5. In the terminal, go to the uncompressed folder
  6. Execute in terminal: sudo ./install.sh
  7. Go to ./Samples/Bin/x86-Release
  8. Execute in terminal: sudo ./NiViewer

If you get this error: Failed to set USB interface! 

  1. Execute in terminal: sudo rmmod gspca_kinect
  2. Execute in terminal: sudo ./NiViewer
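The steps of section B boil down to something like the following script. The archive and folder names come from the version listed above; adjust them to whatever your download actually produces.

```shell
# Uncompress the OpenNI download and enter the resulting folder
# (names below assume the v1.5.2.23 x86 build; yours may differ).
tar -xjf OpenNI-Bin-Dev-Linux-x86-v1.5.2.23.tar.bz2
cd OpenNI-Bin-Dev-Linux-x86-v1.5.2.23
sudo ./install.sh                  # install OpenNI system-wide

# Try the sample viewer.
cd Samples/Bin/x86-Release
sudo rmmod gspca_kinect            # only if NiViewer fails with "Failed to set USB interface!"
sudo ./NiViewer
```

The `rmmod` line unloads the stock kernel webcam driver for the Kinect, which otherwise keeps the USB interface busy.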

C. Install avin2's SensorKinect driver

  1. Download and uncompress it from here. (If you have a GitHub account, it’s easier to just execute “git clone git@github.com:avin2/SensorKinect.git git/SensorKinect”.)
  2. In the terminal, go to the uncompressed folder
  3. Then, go to Platform/Linux/CreateRedist
  4. Execute in Terminal: chmod +x RedistMaker
  5. Execute in Terminal: ./RedistMaker
  6. Go from Platform/Linux/CreateRedist to Platform/Linux/Redist
  7. Enter the only folder created in that directory. In my case, the name was Sensor-Bin-Linux-x86-v5.1.0.25
  8. Execute in Terminal: chmod +x install.sh
  9. Execute in Terminal: sudo ./install.sh
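Section C condensed into one script, starting from a fresh anonymous clone (no GitHub account needed for the HTTPS URL). The Redist folder name is the one I got; yours may carry a different version number.

```shell
# Clone the driver sources (or uncompress the downloaded archive instead).
git clone https://github.com/avin2/SensorKinect.git
cd SensorKinect/Platform/Linux/CreateRedist

# Build the redistributable package.
chmod +x RedistMaker
./RedistMaker

# Install from the freshly created Redist folder
# (the only folder there; mine was Sensor-Bin-Linux-x86-v5.1.0.25).
cd ../Redist/Sensor-Bin-Linux-x86-v5.1.0.25
sudo ./install.sh
```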

D. Install NITE

  1. Go to the downloads/OpenNI Modules.
  2. Then select “OpenNI Compliant Middleware Binaries”, “unstable” and “PrimeSense NITE Unstable Build for Ubuntu 10.10 x86 (32-bit) v1.5.2.21”
  3. Then Download it
  4. Uncompress it.
  5. In the terminal, go to the uncompressed folder
  6. Execute sudo ./install.sh
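Section D follows the same pattern as OpenNI; as a script it looks roughly like this (again, the archive and folder names reflect the v1.5.2.21 build I downloaded and may differ for you):

```shell
# Uncompress the NITE download, enter the folder and install.
tar -xjf NITE-Bin-Dev-Linux-x86-v1.5.2.21.tar.bz2
cd NITE-Bin-Dev-Linux-x86-v1.5.2.21
sudo ./install.sh
```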

E. Install and run Simple Kinect Touch

  1. Go to the downloads section and download it.
  2. Uncompress it.
  3. In the terminal, go to the uncompressed folder
  4. Go to bin/Release
  5. chmod +x SKT-2
  6. Execute ./SKT-2
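And section E as commands, assuming the download uncompressed into a folder I will call SKT here (use whatever name your archive actually produces):

```shell
# Enter the release folder, make the binary executable and run it.
cd SKT/bin/Release
chmod +x SKT-2
./SKT-2
```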

I will be posting more details once I get the cable to plug in the Kinect. For now, I am satisfied, since everything installed without errors.

Yesterday, we were playing around with Arduino again. This time Javier and I went for a bigger project: an LED that turns on when the computer detects a face with the webcam. After solving some issues with the Internet connection, we divided the work: he took on the task of controlling the Arduino from Processing, and I installed OpenCV and the respective libraries for Processing.


Here are the steps:

1. Install Processing. Just download, uncompress and run.

2. Install OpenCV. Just open a Terminal and execute the next command:

sudo apt-get install libcv2.1 libcvaux2.1 libhighgui2.1

3. Find the sketchbook folder. To find the Processing sketchbook location on your computer, open the Preferences window from the Processing application and look for the “Sketchbook location” item at the top. For example, my sketchbook was located in /home/roberto/sketches.

4. Find or create the libraries folder. Inside the sketchbook folder (/home/roberto/sketches) there should be another folder called libraries. If you don’t find it, just create it.

5. Download the library. Click here or look for the latest version on the official web page.

6. Uncompress the tar.gz

7. Copy the whole uncompressed folder into the libraries folder, in my case /home/roberto/sketches/libraries. Normally the installation would finish here; however, it does not work yet because some of the library files are named differently (a different version of OpenCV).
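Steps 4, 6 and 7 condensed into commands. The paths are from my machine and the archive/folder names are placeholders: replace /home/roberto/sketches with your own sketchbook location, and the names after tar/cp with whatever your tar.gz is actually called and produces.

```shell
# Create the libraries folder if it does not exist yet.
mkdir -p /home/roberto/sketches/libraries

# Uncompress the downloaded library (placeholder archive name)
# and copy the resulting folder into libraries/.
tar -xzf opencv.tar.gz
cp -r opencv /home/roberto/sketches/libraries/
```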

8. Open a Terminal and create the following symbolic links with these commands:

sudo ln -s /usr/lib/libcxcore.so /usr/lib/libcxcore.so.1
sudo ln -s /usr/lib/libcv.so /usr/lib/libcv.so.1
sudo ln -s /usr/lib/libcvaux.so /usr/lib/libcvaux.so.1
sudo ln -s /usr/lib/libml.so /usr/lib/libml.so.1
sudo ln -s /usr/lib/libhighgui.so.2.1 /usr/lib/libhighgui.so.1
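To check that the links were created and actually resolve to real libraries, you can list them:

```shell
# Each entry should show up as a symlink (an "->" arrow) pointing at an
# existing .so file; most terminals highlight broken links in red.
ls -l /usr/lib/libcxcore.so.1 /usr/lib/libcv.so.1 /usr/lib/libcvaux.so.1 \
      /usr/lib/libml.so.1 /usr/lib/libhighgui.so.1
```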

9. Open Processing and paste the code. Just paste this code in the Processing window.

10. A small change in the previous code.  Just change this line

opencv.cascade( OpenCV.CASCADE_FRONTALFACE_ALT );    // load the FRONTALFACE description file

for this one

opencv.cascade( "/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" );    // load the FRONTALFACE description file

Actually, I found more than one cascade file, so it is probably worth trying the others. I just did! The second one is much faster.
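To see which cascade files are available on your system (assuming the same OpenCV install location as the path in the line above), you can list them:

```shell
# List the frontal-face Haar cascades shipped with OpenCV; any of these
# full paths can be passed to opencv.cascade() in the sketch.
ls /usr/local/share/OpenCV/haarcascades/ | grep frontalface
```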


11. Run the code and enjoy.

It’s amazing how easily you can run very cool libraries and let your imagination fly. Don’t miss Javier’s blog, where he is going to post about the whole mini-project, Arduino included, during this week.


I trained an AdaBoost classifier to distinguish between two artistic styles. A technical report of my results can be found on my ResearchGate.net account. This sort of tutorial – or, more precisely, collection of blog posts – explains the steps and provides the code to create an image classifier from histograms of oriented edges, colors and intensities. Therefore, you can apply my methodology to other problems.

There are two main steps in this: (1) produce the features of the images, and (2) train and use the classifier. I started the blog sequence from the classifier I used (AdaBoost) and then continued by explaining how to produce features for big collections. This is probably a weird way of presenting the problem, because I am starting from the last step; however, I found that most of the decisions I took in the process were justified by the result I wanted to reach. I also recommend checking the comments, where I have answered multiple questions over the lifetime of these posts.