Bag of Features

VLAD: An Extension of Bag of Words

Recently, I participated in TagMe, an image categorization competition conducted by Microsoft and the Indian Institute of Science, Bangalore. The problem statement was to classify a set of given images into five classes: faces, shoes, flowers, buildings and vehicles. As it goes, this is not a trivial problem to solve. So, I decided to try my existing bag-of-words pipeline on it. It worked to an extent: I got an accuracy of approximately 86% with SIFT features and an RBF SVM for classification. In order to improve my score, though, I decided to look at better methods of feature quantization. I had been looking at VLAD (Vector of Locally Aggregated Descriptors), a first-order extension to BoW, for my leaf recognition project.

So, I decided to implement VLAD using OpenCV, writing a small function based on the BoW API currently in OpenCV. The results showed remarkable improvement: an accuracy of 96.5% using SURF descriptors on the validation dataset provided by the organizers.

What is VLAD?

Recalling BoW, it involved simply counting the number of descriptors associated with each cluster in a codebook (vocabulary) and creating a histogram for each set of descriptors from an image, thus representing the information in an image as a compact vector. VLAD is an extension of this concept. We accumulate the residual of each descriptor with respect to its assigned cluster. In simpler terms, we match a descriptor to its closest cluster, and then, for each cluster, we store the sum of the differences between the descriptors assigned to that cluster and its centroid. Let us have a look at the math behind VLAD.

Mathematical Formulation

As with bag of words, we first train a codebook from the descriptors in our training dataset, C=\{c_1, c_2, \ldots, c_k\}, where k is the number of clusters used in k-means. We then associate each d-dimensional local descriptor x from an image with its nearest neighbour in the codebook.
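As a rough sketch, training the codebook with OpenCV's k-means might look like this in Python (train_codebook is a name I'm using for illustration; the iteration and restart parameters are arbitrary choices, not tuned values):

```python
import numpy as np
import cv2

def train_codebook(all_descriptors, k):
    # cv2.kmeans expects a float32 array of shape (N, d)
    data = np.float32(all_descriptors)
    # stop after 100 iterations or when the centers move by less than 1e-4
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 1e-4)
    # 5 restarts with k-means++ seeding; OpenCV returns the best run
    _, _, centroids = cv2.kmeans(data, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    return centroids  # (k, d) codebook C = {c_1, ..., c_k}
```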

The idea behind VLAD feature quantization is that, for each cluster centroid c_i, we accumulate the difference x - c_i over every descriptor x for which c_i = NN(x).

Representing the VLAD vector for each image by v, we have,

v_{ij} = \sum_{x : NN(x) = c_i} (x_j - c_{ij})

where i = 1, 2, \ldots, k and j = 1, 2, \ldots, d.

The vector v is subsequently normalized by its L_2 norm: v = \frac{v}{\|v\|_2}.
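Putting the formulation into code, a minimal NumPy sketch of the aggregation could look like this (compute_vlad is illustrative, not the actual function from my repository):

```python
import numpy as np

def compute_vlad(descriptors, centroids):
    # descriptors: (N, d) local descriptors from one image
    # centroids:   (k, d) codebook learned with k-means
    k, d = centroids.shape
    # hard-assign each descriptor to its nearest centroid: c_i = NN(x)
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    nearest = np.argmin(dists, axis=1)
    vlad = np.zeros((k, d))
    for i in range(k):
        members = descriptors[nearest == i]
        if len(members) > 0:
            # accumulate the residuals (x - c_i) over all x assigned to c_i
            vlad[i] = (members - centroids[i]).sum(axis=0)
    vlad = vlad.ravel()                # flatten to a k*d-dimensional vector
    norm = np.linalg.norm(vlad)        # final L2 normalization
    return vlad / norm if norm > 0 else vlad
```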

Comparison with BoW

The primary advantage of VLAD over BoW is that our feature vector becomes more discriminative: we add the difference of each descriptor from the centroid of its Voronoi cell. This first-order statistic adds more information to our feature vector and hence gives our classifier better discrimination. It also points us to other improvements we can make: adding higher-order statistics to our feature encoding, as well as soft assignment, i.e. assigning each descriptor to multiple centroids, weighted by their distance from the descriptor.

Experiments

Here are a few of my results on the TagMe dataset.

[Figure: classification results on the TagMe dataset]

Improvements to VLAD

There are several extensions possible for VLAD, primarily various normalization options. Arandjelović and Zisserman, in their paper All About VLAD, propose several normalization techniques, including intra-normalization and power normalization, along with a spatial extension, MultiVLAD. Delhumeau et al. propose several different normalization techniques as well as modifications to the VLAD pipeline that achieve near state-of-the-art results.
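To make the normalization options concrete, here is a sketch of intra-normalization followed by power normalization as I understand them from these papers (the function and the choice of alpha = 0.5 are illustrative, not taken from either paper's code):

```python
import numpy as np

def normalize_vlad(vlad, k, d, alpha=0.5):
    v = vlad.reshape(k, d).astype(np.float64)
    # intra-normalization: L2-normalize each cluster's residual sum independently
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    norms[norms == 0] = 1.0            # leave empty clusters untouched
    v = (v / norms).ravel()
    # power normalization: signed component-wise square root (alpha = 0.5)
    v = np.sign(v) * np.abs(v) ** alpha
    # final global L2 normalization
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```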

Other references also stress spatial pooling, i.e. dividing the image into regions and computing a VLAD vector for each tile, to better represent local features and spatial structure. A few also advise soft assignment, which refers to assigning descriptors to multiple clusters, weighted by their distance from each cluster.

Code

Here is a link to my code for TagMe. It was a quick hack job for testing, so it is not very clean, though I am going to clean it up soon.

https://github.com/ameya005/VLAD-Implementation

Also, a few references for those who want to read the papers I referred to:

1. Jégou, H., Perronnin, F., Douze, M., Sánchez, J., Pérez, P., & Schmid, C. (2012). Aggregating local image descriptors into compact codes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9), 1704-1716.

2. Delhumeau, J., Gosselin, P. H., Jégou, H., & Pérez, P. (2013, October). Revisiting the VLAD image representation. In Proceedings of the 21st ACM International Conference on Multimedia (pp. 653-656). ACM.

3. Arandjelović, R., & Zisserman, A. (2013, June). All about VLAD. In 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1578-1585). IEEE.

Well, that concludes this post. Be on the lookout for more on image retrieval – improvements to VLAD and Fisher Vectors.

 

Bag of Words – Object Recognition

Hey guys,

It's been a really long time since my last post, but this series of posts is going to be a really cool one, I hope.

Today we are going to discuss one of the most important problems in Computer Vision: Object Recognition. We humans tend to recognize objects trivially, without consciously paying attention to the fact, or even wondering how exactly we achieve it. You look at a baseball flying towards your face, you recognize it as a baseball about to break your nose, and you duck! All in a matter of a few hundred milliseconds.

But the process that your brain undertakes in those few milliseconds has eluded faithful computational implementation for several years now. Object recognition is, perhaps rightly, considered the primary problem in computer vision. But recent research advances have made strides in this matter.

I recently undertook a project in which I had to classify leaves into the species they come from. And as it sounds, it's not really a trivial problem. It took me a few days to figure out the first steps of such a process. To start off with, I decided to use the Bag-of-Words model, a highly cited method for scene and object classification, for the above problem.

To begin with, I found a really nice dataset to work with here: http://flavia.sourceforge.net/ . The dataset contains images of leaves from 32 species on plain white backgrounds, which simplified my experiment. I am really grateful to them for providing such a comprehensive dataset for free on the web. (Kinda all for Open Access now.)

Bag of Words is basically a simplified representation of an image. It's actually a concept taken from Natural Language Processing, where you represent documents as unordered collections of words, disregarding grammar. Translating this into CV jargon, it means that we simplify images by picking out features from an image and representing it as a collection of features. A good explanation of what features are can be found on my friend Siddharth's blog here.

To get more technical about BoW: we construct a vocabulary of features. We then use this vocabulary to create a histogram of features for each image, and then use a simple machine learning algorithm like SVM or Naive Bayes for classification.

This is the algorithm I followed for BoW. I got a lot of help from Roy’s blog here.

1. We pick out features from all the images in our training dataset. I used SIFT (Scale Invariant Feature Transform).

2. We cluster these features using any clustering algorithm. I used K-Means. (Pretty fast in OpenCV)

3. We use the clusters as a vocabulary to construct histograms. We simply count the number of features from each image belonging to each cluster, then normalize each histogram by dividing it by the number of features in that image. Each image in the dataset is thus represented by one histogram (see the sketch after this list).

4. These histograms are then passed to an SVM for training. I currently use a radial basis function multi-class SVM in OpenCV. Using OpenCV's CvSVM::train_auto() function, we obtain the SVM parameters via cross-validation.
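Here is a minimal sketch of steps 1 to 4 in Python, assuming a recent OpenCV build with SIFT available (cv2.SIFT_create needs OpenCV 4.4 or newer) and using scikit-learn's SVC as a stand-in for OpenCV's CvSVM; the function names are mine, for illustration:

```python
import numpy as np
import cv2
from sklearn.svm import SVC  # stand-in for OpenCV's CvSVM

def extract_sift(image_paths):
    # Step 1: compute SIFT descriptors for each image
    sift = cv2.SIFT_create()
    per_image = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        per_image.append(desc)  # (N_i, 128) array per image
    return per_image

def bow_histogram(descriptors, centroids):
    # Step 3: normalized histogram of cluster assignments for one image
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    counts = np.bincount(np.argmin(dists, axis=1), minlength=len(centroids))
    return counts / float(len(descriptors))

# Step 2: stack all descriptors and cluster them with k-means
# (see the train_codebook sketch in the VLAD post above), then:
#   hists = np.array([bow_histogram(d, centroids) for d in per_image])
# Step 4: train an RBF SVM on the histograms
#   clf = SVC(kernel='rbf').fit(hists, labels)
```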

Now why does Bag of Words work? Why use it rather than simple feature matching? The answer is simple: features provide only local information. Using the bag-of-words model, we create a global representation of an object. Thus, we take a group of features, create a simpler representation of the image, and classify it.

Those were the pros of the algorithm. But there are a few cons associated with this model.

1. As is evident, we cannot localize an object in an image using this model. That is to say, the problem of finding where the object of interest lies remains open and needs other methods.

2. We neglect grammar. In CV terms, this means we neglect the positions of features relative to each other. Thus the concept of a global shape may be lost.

As for our Leaf Recognizer, we are still working on improving the accuracy. We are almost at our goal! The following are some of the images we got as a result of the above algorithm:

[Test images classified by the leaf recognizer]