Final Post: Gamex and Faces in Baroque Paintings

Face recognition algorithms (the kind used in digital cameras) allowed us to detect faces in paintings. This has given us the possibility of building a collection of faces from a particular epoch (in this case, the Baroque). However, the results of the algorithms are not perfect when applied to paintings instead of photographs. Gamex gives us the chance to clean this collection. This is very important, since these paintings are the only historical visual inheritance we have from the period. A period that started after the meeting of two worlds.

1. Description

Gamex was born from the merging of different ideas we had at the very beginning of the Interactive Exhibit Design course. It basically combines motion detection, face recognition and games to produce an interactive exhibit of Baroque paintings. The user interacts with the game by touching, or more properly poking, the faces, eyes, ears, noses, mouths and throats of the characters in the painting. The player scores depending on whether or not a face has already been recognized at those points. Beforehand, the database holds a repository with all the information the face recognition algorithms have detected. With this idea, we are able to clean up the mistakes that automatic face recognition has introduced; a sketch of this scoring idea follows.
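
To make the scoring idea concrete, here is a minimal sketch in Python. The function name, the box format and the point values are made up for illustration; they are not taken from the actual Gamex code.

    # Hypothetical scoring check: did the poke land inside a face that
    # the recognition algorithms already found?
    def score_poke(x, y, faces):
        """Award points if the poke lands inside a recognized face box."""
        for (fx, fy, fw, fh) in faces:
            if fx <= x <= fx + fw and fy <= y <= fy + fh:
                return 10   # the detector already knew about this face
        return -1           # miss, or possibly a face the detector missed

    faces = [(120, 80, 60, 60), (300, 95, 55, 55)]  # (x, y, width, height)
    print(score_poke(140, 100, faces))               # prints 10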

The Gamex Set.

2. The Architecture

A Tentative Architecture for Gamex explains the general architecture in more detail. Basically, we have four physical components:

  • A screen. Built with a wooden frame and stretch fabric; the images are projected onto it from the back, and the user interacts by poking it.
  • The projector. It just projects the image onto the screen from behind (rear screen projection).
  • Microsoft Kinect. It captures the deformations of the fabric and sends them to the computer.
  • Computer. It captures the deformations sent by the Kinect device and translates them into touch events (similar to mouse clicks); a sketch of this translation follows the list. These events are used in a game to mark different parts of the faces of people in Baroque paintings. All the information is stored in a database, and we use it to refine a previously calculated set of faces obtained through face recognition algorithms.
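
As a rough illustration of that translation, the sketch below compares each depth frame against a background frame of the resting fabric and reports the pixels pushed toward the Kinect. The thresholds and function name are invented for the example; the real pipeline (Simple Kinect Touch) does something similar but clusters the pixels into blobs before emitting touch events.

    import numpy as np

    def detect_touches(depth, background, near=10, far=40):
        """Return (x, y) pixels pushed toward the Kinect within a poke range."""
        push = background.astype(int) - depth.astype(int)  # + = closer to Kinect
        mask = (push > near) & (push < far)    # ignore noise and deep folds
        ys, xs = np.nonzero(mask)
        return list(zip(xs, ys))  # crude: one "touch" per deformed pixel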

3. The Technology

Several important pieces of technology were involved in this project.

Face Recognition

Recent technologies offer us the possibility of recognizing objects in digital images. In this case, we were interested in recognizing faces. To achieve that, we used the OpenCV and SimpleCV libraries. The second one simply allowed us to use OpenCV from Python, the glue of our project. There are several posts in which we explain in a bit more detail how this technology works and how we used it.
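
For a flavour of what this looks like, here is a minimal detection sketch using OpenCV's Python bindings directly (we actually went through SimpleCV, and the file names here are made up). It assumes a recent opencv-python build, where cv2.data.haarcascades points at the bundled cascade files.

    import cv2

    # Load the stock frontal-face Haar cascade shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    img = cv2.imread('painting.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Each detection is an (x, y, width, height) bounding box.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite('painting_faces.jpg', img)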

Multi Touch Screen

One of the biggest parts of our work involved the multi-touch screen. Probably because it is still a very new technology, where things haven't settled down that much, we had several problems, but fortunately we managed to solve them all. The idea is to have a rear projection screen using the Microsoft Kinect. Initially conceived for the Microsoft Xbox 360 video-game console, there are a lot of people creating hacks (such as Simple Kinect Touch) to take advantage of this device's ability to capture depth. Using infrared light and some arithmetic, it is able to capture the distance from the Kinect to the objects in front of it. It basically returns an image in which each pixel is the distance from the Kinect to the object. All sorts of magic tricks can be performed with this, from recognizing gestures and faces to detecting deformations in a piece of fabric. This last idea is the heart of our project. Again, there are earlier posts explaining how (and how not) to use this technology.
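
Just to show what that depth image looks like in code, here is a tiny sketch using the libfreenect Python bindings (our own setup went through OpenNI and Simple Kinect Touch instead, so take this as an equivalent illustration, not our actual code):

    import freenect

    # Grab one frame from the Kinect: a 480x640 NumPy array where each
    # value encodes the measured distance to whatever is in front of it.
    depth, _timestamp = freenect.sync_get_depth()
    print(depth.shape, depth.min(), depth.max())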

The Gamex Set.

Games

Last but not least, Kivy. Kivy is an open source framework for the development of applications that make use of innovative user interfaces, such as multi-touch applications. So it fits our purposes. As programmers, we have developed interfaces on many different platforms, with Java, Microsoft Visual Studio, Python, C++ and HTML. We found Kivy to be very different from anything we knew before. After struggling for two or three weeks, we came up with our interface. The notable thing about Kivy is that it uses a very different approach which, apart from relying on its own language, the developers claim to be very efficient. In the end we started to like it, and to be fair it has only been out for about a year, so it will probably improve a lot. Finally, it has the advantage that it is straightforward to produce a version for Android and iOS devices.
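
As a taste of the framework, here is a minimal Kivy app in the spirit of ours (the widget name, image file and print-out are invented for this sketch; the real game logic is more involved):

    from kivy.app import App
    from kivy.uix.image import Image

    class PaintingWidget(Image):
        def on_touch_down(self, touch):
            # touch.x / touch.y are the window coordinates of the poke
            print('poke at', touch.x, touch.y)
            return True

    class GamexApp(App):
        def build(self):
            # Show a painting full-window; pokes arrive as touch events.
            return PaintingWidget(source='painting.jpg')

    if __name__ == '__main__':
        GamexApp().run()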

4. Learning

There has been a lot of personal learning in this project. We had never used any of the three main technologies in this project before. We also included a relatively new NoSQL database system called MongoDB, which makes four new technologies. However, Javier and I agree that one of the most difficult parts was building the frame. We tried several approaches: from using my loft bed as a frame to a monstrously big frame (with massive pieces of wood carried from downtown to the university on my bike) that the psycho duck would bring down with the movement of its wings.
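
Since MongoDB is one of those four, here is roughly how storing a touch event looks with a recent pymongo (the database, collection and field names are hypothetical, not from the real Gamex code):

    from pymongo import MongoClient

    client = MongoClient()           # local MongoDB on the default port
    touches = client.gamex.touches   # database 'gamex', collection 'touches'

    # One document per poke: which painting, which body part, where.
    touches.insert_one({
        'painting': 'las_meninas.jpg',
        'part': 'nose',
        'x': 412,
        'y': 178,
    })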

It is also interesting how ideas change over time; some of them we probably forgot. Others we tried, and they didn't work as expected. Most of them changed a little bit, but the spirit of our initial concept is in our project. I guess the creative process is a long road between a driving idea and the hacks needed to get there.

5. The Exhibition

Technology fails on the big day, and on the day of the presentation we couldn't get our video working. But ThatCamp is coming soon: a new opportunity to see users in action. So the video of the final result, although not public yet, is attached here. More will come soon!

[youtube BVYq_cBf8z4]

6. Future Work

This has been a long post, but there are still a few more things to say, and probably much more in the future. We liked the idea so much that we are continuing to work on it, and we would like to mention some ideas that need to be polished and some pending work:

  • Game scoring. We want to build a better system for scores. Our main problem is that the data we score against is incomplete and imperfect (who always has the right answers, anyway?). We want to give a fair solution to this. Our idea is to work with fuzzy logic to lessen the damage in case the computer is not right.
  • Graphics. We need to improve our icons. We consider some of them very cheesy, and they need to be refined. Also, we would like to adapt the size of the icon to the size of the face the computer has already recognized, so the image would be adjusted almost perfectly.
  • Sounds. A nice improvement, but also a lot of work to build a good collection of MIDI or MP3 files if we don't find any publicly available.
  • Mobile versions. Since Kivy offers this possibility, it would be silly not to take advantage of it. After all, we know addictive games are the key to entertaining people on buses. This would turn the application into a real crowdsourcing project, even if it implies building a better system for storing the information, following REST principles with OAuth and API keys.
  • Cleaning the collection. Finally, after gathering enough data, it would be the right time to collect the faces and build the first repository of "The Baroque Face". This will give us a spectrum of what people from the 16th to the 18th centuries looked like. Exciting, isn't it?
  • Visualizations. We will also be able to build some interesting visualizations, like heat maps of where people touched when looking for a mouth, an ear, or a head.

7. Conclusions

In conclusion, we can say that the experience has been awesome. Even better was seeing the really high level of our classmates' projects. In all honesty, we must say that we have a background in Computer Science, so we played with a bit of an advantage. Anyway, the presentation of all the projects was an amazing experience. We really liked the course and we recommend it to future students. Let's see what the future has prepared for Gamex!

Some of the projects.

This post was written and edited together with my classmate Javier, so you can also find it on his blog.

One of the most important ideas in object recognition, and particularly in face recognition, came with the work of Viola and Jones [1]. This work is based on the AdaBoost algorithm [2]. The idea is to use very simple features of the faces that can be calculated very fast, and then to select the best ones by testing them against a previously labelled set of faces. In general, a feature is any value we can extract from a digital image. For example, the value of a single pixel could be a feature. It is also possible to use more sophisticated things like histograms of colors or edges. Viola and Jones use a very simple way of playing with pixels: just as an example, a feature could be the subtraction of the area (sum of pixels) of one region of the image from that of another region.
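
To make that last example concrete, here is a small worked sketch of such a two-rectangle feature computed with an integral image, the trick that makes these features fast (the image and regions are made up; this is not the actual detector):

    import numpy as np

    img = np.random.randint(0, 256, (24, 24))  # a toy grayscale window

    # Integral image: ii[y, x] holds the sum of img[:y, :x], so any
    # rectangle sum takes only four lookups.
    ii = np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

    def rect_sum(y0, x0, y1, x1):
        """Sum of pixels in img[y0:y1, x0:x1] in constant time."""
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    # Two-rectangle feature: left half minus right half of a region.
    feature = rect_sum(4, 4, 20, 12) - rect_sum(4, 12, 20, 20)
    print(feature)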

So, as part of the Interactive Exhibit Design course, we decided to use this. I then processed a lot of old Baroque paintings and extracted the faces. Even though the algorithm is not perfect, I obtained decent results. I have a whole folder of faces, and below are two sections of it: the first is a good section of the folder and the second a not-so-good one. I hope to do something interesting with all of this.
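
The extraction step itself is just a loop over the folder of paintings, reusing the kind of cascade shown earlier; in sketch form (the folder names are invented):

    import os
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    for name in os.listdir('paintings'):
        img = cv2.imread(os.path.join('paintings', name))
        if img is None:
            continue  # skip anything that is not a readable image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        for i, (x, y, w, h) in enumerate(cascade.detectMultiScale(gray, 1.1, 5)):
            # Crop each detected face and save it into the faces folder.
            face = img[y:y + h, x:x + w]
            cv2.imwrite(os.path.join('faces', '%s_%d.png' % (name, i)), face)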

[1] P. Viola and M. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features", CVPR 2001.

[2] Y. Freund and R. E. Schapire, "A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting", 1997.

Recognized Faces

 

Badly Recognized Faces

 

I already posted our proof of concept for our project in the Interactive Exhibit Design course. Here I combined the example from Simple Kinect Touch (SKT) with some of the code Javier has been working on. I took some time to sort out the position of my screen and the Kinect to get a better result. I guess this will be useful, since I am using the latest version of SKT, in which the interface has changed a bit. I hope it helps somebody.

A quick configuration of Simple Kinect Touch 2.1

I made a proof of concept of the basic idea of the project my classmate and I have in mind. We want to use Simple Kinect Touch to build a multi-touch screen. At some point we are going to have a projector displaying onto the screen; in this video I am just using a piece of cloth in a very rudimentary way. In spite of the simplicity, it shows that the idea is possible, and maybe it will be helpful to somebody. I hope to be excused for the bad quality of the video and my awful English accent.

Rear Screen Capture with Simple Kinect Touch

The first thing is that Javier and I have decided to go back to 32 bits, since he has had kind of a nightmare with 64 bits. Things have gone quite well in my 32-bit Ubuntu 11.10, so we are going to stick to the old, safe style. In this post, I am installing the libraries needed to use a Kinect on Linux.

Simple Kinect Touch allows you to have a multi-touch screen on any surface. Javier and I are working on an idea that uses this together with the OpenCV face recognition implementation for a project in the Interactive Exhibit Design course. We are still working on the idea, and it will be posted when we have it running.

I know Simple Kinect Touch has its own README for the installation, but for some reason it is not easy to follow unless you understand very well certain technical details that it assumes. So I am going to give a step-by-step guide, pointing out some of the things that are not quite straightforward in the README.

A. Open a terminal (Ctrl + Alt + T) and run the following command to install OpenGL, Qt4, libusb, g++, Python, Doxygen, Graphviz and a GTK2 engine library (to avoid a warning). All of them are requirements for the following installations.

sudo apt-get install freeglut3 freeglut3-dev binutils-gold libqtcore4 libqt4-opengl libqt4-test libusb-1.0-0-dev g++ python doxygen graphviz gtk2-engines-pixbuf

B. Install OpenNI.

  1. Go to the downloads/OpenNI Modules section.
  2. Select "OpenNI Binaries", "Unstable" and "OpenNI Unstable Build for Ubuntu 10.10 x86 (32-bit) v1.5.2.23".
  3. Download it.
  4. Uncompress it.
  5. In the terminal, go to the uncompressed folder.
  6. Execute in terminal: sudo ./install.sh
  7. Go to ./Samples/Bin/x86-Release
  8. Execute in terminal: sudo ./NiViewer

If you get the error "Failed to set USB interface!":

  1. Execute in terminal: sudo rmmod gspca_kinect
  2. Execute in terminal: sudo ./NiViewer

C. Install avin2/SensorKinect (the Kinect driver)

  1. Download and uncompress it from here (if you have a GitHub account, it's easier to just execute "git clone git@github.com:avin2/SensorKinect.git git/SensorKinect").
  2. In the terminal, go to the uncompressed folder.
  3. Then go to Platform/Linux/CreateRedist.
  4. Execute in terminal: chmod +x RedistMaker
  5. Execute in terminal: ./RedistMaker
  6. Move from Platform/Linux/CreateRedist to Platform/Linux/Redist.
  7. Enter the only folder created in that directory. In my case the name was Sensor-Bin-Linux-x86-v5.1.0.25.
  8. Execute in terminal: chmod +x install.sh
  9. Execute in terminal: sudo ./install.sh

D. Install NITE

  1. Go to the downloads/OpenNI Modules section.
  2. Select "OpenNI Compliant Middleware Binaries", "Unstable" and "PrimeSense NITE Unstable Build for Ubuntu 10.10 x86 (32-bit) v1.5.2.21".
  3. Download it.
  4. Uncompress it.
  5. In the terminal, go to the uncompressed folder.
  6. Execute in terminal: sudo ./install.sh

E. Install and run Simple Kinect Touch

  1. Go to the downloads section and download it.
  2. Uncompress it.
  3. In the terminal, go to the uncompressed folder.
  4. Go to bin/Release.
  5. Execute in terminal: chmod +x SKT-2
  6. Execute in terminal: ./SKT-2

I will be posting more details once I get the missing cable to plug in the Kinect. For now, I am satisfied, since everything installed without errors.