The bins of color or intensity histograms are common features used in many kinds of applications. In my case, I am interested in them because I want to use them as the input of an image classifier (AdaBoost, which I explained in my previous post). For that reason I have a couple of extra requirements that I will explain later in this post.

There are different ways of representing the colors of an image. A very common and natural one is the combination of the three primary colors: red, green and blue. Each pixel in an image is basically a combination of values of these three colors. The higher the value, the higher the intensity. These values are always in the range 0 to 255. It is then possible to think of a histogram of, for example, red for a particular image. We could have at most 256 bins (potentially fewer if we want to merge several bins into one). Each bin would hold the total number of pixels that have a particular intensity of red. Then we repeat the process for green and blue.
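Just to make the counting concrete, here is a minimal sketch of the idea in Python rather than Octave (the tiny 2x2 "image" and the function name are invented for illustration):

```python
# A toy 2x2 "image": each pixel is an (R, G, B) tuple with values in 0..255.
pixels = [(255, 0, 0), (255, 10, 0), (128, 10, 0), (0, 0, 255)]

def channel_histogram(pixels, channel, bins=256):
    """Count how many pixels have each intensity in one channel (0=R, 1=G, 2=B)."""
    hist = [0] * bins
    for p in pixels:
        hist[p[channel]] += 1
    return hist

red = channel_histogram(pixels, 0)
print(red[255])  # two pixels have full-intensity red
```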

I found this code, which is able to build the three histograms, one for each color, with a single call:

% 1. Read the image
im = imread('/path/to/image');
% 2. Calculate the histogram with 256 bins
hist = imHistogram(im,256);

Moreover, you can also calculate intensity histograms by simply transforming the color image into a grayscale image. The main difference is that, instead of three histograms for the three colors, the method returns just one histogram of 256 values with the intensities.

% 1. Read the image
im = imread('/path/to/image');
% 2. verify that it is a color image
if isrgb(im)
% 3. transform it
im = rgb2gray(im);
endif
% 4. calculate the histogram with 256 bins
hist = imHistogram(im,256);
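For reference, rgb2gray in Octave/MATLAB uses the standard luminance weights (roughly 0.2989, 0.5870 and 0.1140 for R, G and B). A quick Python sketch of the per-pixel conversion, just to show what the transformation does:

```python
def rgb_to_gray(r, g, b):
    """Weighted sum used by the standard luminance conversion."""
    return round(0.2989 * r + 0.5870 * g + 0.1140 * b)

print(rgb_to_gray(255, 255, 255))  # white stays 255
print(rgb_to_gray(255, 0, 0))      # pure red becomes a fairly dark gray (76)
```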

This worked perfectly on Octave (a free, open-source alternative to Matlab), but I had two small issues related to my particular problem:

1. I needed a linear representation of the histograms because I wanted to use the values as the input of a classifier (AdaBoost, which I explained in my previous post). Instead of three separate histograms represented in different vectors of 256 bins each, I needed just one big vector of 768 values with the three histograms one after the other. That was not difficult to solve.

% 1. read the image
im = imread('/path/to/image');
% 2. calculate the histogram with 256 bins
hist = imHistogram(im,256);
% 3. make the vector of the right size (768)
linear = zeros(1,numel(hist));
% 4. copy the values to the new vector
linear(:) = hist(:);

2. My second problem was a bit more particular. I needed to process a big collection of images and I found out that some of them were in grayscale. This caused an inconsistency in the size of the vectors. As I said before, a grayscale image has just one value that defines the intensity, instead of the three color values. I solved it by copying the vector three times whenever I found that an image was gray (i.e., imHistogram returned a vector of 256 values).

% 1. read the image
im = imread('/path/to/image');
% 2. check if it is gray
if !isrgb(im)
% 3. calculate the histogram with 256 bins
hist = imHistogram(im, 256);
% 4. create a new vector of the right size
linear = zeros(1,numel(hist));
% 5. copy the values to the new vector
linear(:) = hist(:);
endif
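In Python terms (with invented names, and a plain list standing in for the Octave vector), the fix simply repeats the grayscale histogram three times so that every image ends up with a 768-value feature vector:

```python
def linearize(hist, bins=256):
    """Return a 3*bins feature vector: tile a grayscale histogram,
    pass a color one (already 3*bins long) through unchanged."""
    if len(hist) == bins:      # grayscale image: a single histogram
        return hist * 3        # list repetition -> 768 values
    return list(hist)          # color image: already 768 values

gray_hist = [1] * 256
print(len(linearize(gray_hist)))  # -> 768
```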

My final method looks like this:

%linearHistogram.m
function linear = linearHistogram(im, bins)
% check if it is a color image
if isrgb(im)
% calculate the histogram with the given number of bins per color
hist = imHistogram(im, bins);
% create a new vector of the right size (3*bins)
linear = zeros(1,numel(hist));
% copy the values to the new vector
linear(:) = hist(:);
else
% calculate the histogram
hist = imHistogram(im, bins);
% create a new vector of three times the size
linear = zeros(1,length(hist)*3);
% set the same histogram in the three sections
linear(1:bins) = hist(:);
linear(bins+1:2*bins) = hist(:);
linear(2*bins+1:3*bins) = hist(:);
endif
endfunction

You can see that it receives the image as a parameter, so you need two statements to call it:

% 1. Read the image
im = imread('/path/to/image');
% 2. Calculate the linear histogram with 256 bins
hist = linearHistogram(im,256);

In my last computer vision and learning course, I was working on a project to distinguish between two styles of paintings. I decided to use the AdaBoost algorithm [1]. I am going to describe the steps and code needed to make the algorithm run.

Step 0. The binary classification

This is not really a step, but you have to be clear that this algorithm only classifies between two classes: for example, ones from zeros, faces from non-faces or, in my case, baroque from renaissance paintings.

Step 1. Preparing the files

There are several ways of feeding the samples to the algorithm. I found that the easiest way was using simple CSV files. Also, you DO NOT have to worry about dividing the samples into training and testing sets. Just put them all in the same file; OpenCV is going to split the set, picking the training/testing samples automatically. It is a good idea, though, to put all the samples of the first class at the beginning and those of the second class at the end.

The format is very simple. The first column is going to be the category (although you can specify the exact column if your file does not follow this format). The rest of the columns are going to be the features of your problem. For example, I could have used three features, each of them representing the average of red, green and blue per pixel in the image. My CSV file would then look like this. Note that in the first column I am using a character; I recommend doing that so OpenCV recognizes that it is a category (again, you could specify explicitly that this is a category and not a number).

B,124.34,45.4,12.4
B,64.14,45.23,3.23
B,42.32,125.41,23.8
R,224.4,35.34,163.87
R,14.55,12.423,89.67
...

NOTE: For some strange reason, the OpenCV implementation does not work with fewer than 11 samples, so this file should have at least 11 rows. Just add some more to be safe, and because you will need to specify a testing set as well.
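Generating such a file takes only a few lines in any language. Here is a sketch in Python using the standard csv module (the labels and feature values are made up, following the example above):

```python
import csv

# Hypothetical samples: a label ("B" or "R") followed by the three
# per-pixel color averages used as features in the example above.
samples = [
    ("B", [124.34, 45.4, 12.4]),
    ("B", [64.14, 45.23, 3.23]),
    ("R", [224.4, 35.34, 163.87]),
]

with open("samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for label, features in samples:
        writer.writerow([label] + features)
```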

Step 2. Opening the file

Let’s suppose that the file is called “samples.csv”. This would be the code:

//1. Declare a structure to keep the data
CvMLData cvml;
//2. Read the file
cvml.read_csv("samples.csv");
//3. Indicate which column is the response
cvml.set_response_idx(0);

Step 3. Splitting the samples

Let’s suppose that our file has 100 rows. This code would select 40 of them for training (the second argument tells OpenCV to shuffle the samples before splitting):

//1. Select 40 for the training
CvTrainTestSplit cvtts(40, true);
//2. Assign the division to the data
cvml.set_train_test_split(&cvtts);

Step 4. The training process

Let’s suppose that I have 1000 features (columns in the CSV after the response) and that I want the boosting process to use just 100 weak classifiers (the second parameter in the next code). Since each weak classifier here is a depth-1 tree, this also caps the number of features that actually get used:

//1. Declare the classifier
CvBoost boost;
//2. Train it with 100 features
boost.train(&cvml, CvBoostParams(CvBoost::REAL, 100, 0, 1, false, 0), false);

The description of each of the arguments can be found here.

Step 5. Calculating the testing and training error

The error corresponds to the misclassified samples. There are therefore two possible errors: the training error and the testing error.

// 1. Declare a couple of vectors to save the predictions of each sample
std::vector<float> train_responses, test_responses;
// 2. Calculate the training error
float fl1 = boost.calc_error(&cvml,CV_TRAIN_ERROR,&train_responses);
// 3. Calculate the test error
float fl2 = boost.calc_error(&cvml,CV_TEST_ERROR,&test_responses);

Note that the responses for each sample are saved in the train_responses and test_responses vectors. This is very useful for calculating a confusion matrix (false positives, false negatives, true positives and true negatives) and ROC curves. I'll be posting how to build them with R.
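As a preview, counting the confusion-matrix entries from two label vectors is straightforward. A Python sketch with invented labels (two classes, as in the CSV example):

```python
def confusion_counts(truth, predicted, positive="B"):
    """Return (tp, fp, tn, fn) for a binary problem."""
    tp = fp = tn = fn = 0
    for t, p in zip(truth, predicted):
        if p == positive:
            if t == positive:
                tp += 1
            else:
                fp += 1
        elif t == positive:
            fn += 1
        else:
            tn += 1
    return tp, fp, tn, fn

truth     = ["B", "B", "R", "R", "B"]
predicted = ["B", "R", "R", "B", "B"]
print(confusion_counts(truth, predicted))  # -> (2, 1, 1, 1)
```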

Step 6. Save your classifier!!

You probably won’t mind at the beginning, when training takes a few seconds, but you definitely don’t want to lose a classifier that took hours or days of waiting for the results:

// Save the trained classifier
boost.save("./trained_boost.xml", "boost");

Step 7. Compiling the whole code

The whole code is pasted at the end. To compile it, use this:

g++ -ggdb `pkg-config --cflags opencv` -o main main.cpp `pkg-config --libs opencv`

Here is the file with my code.

[1] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.9855

// main.cpp
#include <cstdlib>
#include "opencv/cv.h"
#include "opencv/ml.h"
#include <vector>

using namespace std;
using namespace cv;
int main(int argc, char** argv) {

/* STEP 2. Opening the file */
//1. Declare a structure to keep the data
CvMLData cvml;
//2. Read the file
cvml.read_csv("samples.csv");
//3. Indicate which column is the response
cvml.set_response_idx(0);

/* STEP 3. Splitting the samples */
//1. Select 40 for the training
CvTrainTestSplit cvtts(40, true);
//2. Assign the division to the data
cvml.set_train_test_split(&cvtts);
printf("Training ... ");

/* STEP 4. The training */
//1. Declare the classifier
CvBoost boost;
//2. Train it with 100 features
boost.train(&cvml, CvBoostParams(CvBoost::REAL, 100, 0, 1, false, 0), false);

/* STEP 5. Calculating the testing and training error */
// 1. Declare a couple of vectors to save the predictions of each sample
std::vector<float> train_responses, test_responses;
// 2. Calculate the training error
float fl1 = boost.calc_error(&cvml,CV_TRAIN_ERROR,&train_responses);
// 3. Calculate the test error
float fl2 = boost.calc_error(&cvml,CV_TEST_ERROR,&test_responses);
printf("Error train %f\n", fl1);
printf("Error test %f\n", fl2);

/* STEP 6. Save your classifier */
// Save the trained classifier
boost.save("./trained_boost.xml", "boost");

return EXIT_SUCCESS;
}