Kernels Part 1: What is an RBF Kernel? Really?


An intriguing article. Looking at an RBF kernel as a low-pass filter is something novel. It also shows why RBF kernels work brilliantly on high-dimensional images: since image features generally lie in a continuous domain, an RBF kernel can fit smooth solutions and thereby create more relevant separating hyperplanes, especially in the case of multiple classes.

Originally posted on Machine Learning:

My first blog post on machine learning discusses a pet peeve I have about working in the industry, namely why not to apply an RBF kernel to text classification tasks.

I wrote this as a follow-up to a Quora answer on the subject:

I will eventually rewrite this entry once I get better at LaTeX. For now, refer to

Smola, Schölkopf, and Müller, The connection between regularization operators and support vector kernels

I expand on one point: why not to use Radial Basis Function (RBF) kernels for text classification. I encountered this while consulting a few years ago at eBay, where not one but three teams (local, German, and Indian) were all doing this, with no success. They were treating a multi-class text classification problem using an SVM with an RBF kernel. What is worse, they were claiming the RBF calculations…


An Interesting History of Computer Vision

Dr. Fei-Fei Li from Stanford discusses the advent and growth of computer vision in recent years. Particularly interesting is her recent research on multimodal interactions and large-scale visual recognition, made possible primarily by the growth in GPU technology. I hope to try out Theano and Caffe for deep learning in this area soon.


Recent publications from L. Fei-Fei's group:

Giving eyes to a microcontroller: DCMI interface on an STM32F4

It has been another long hiatus between posts, but I have managed to learn and do quite a bit in these last few months, and it has been rewarding to say the least.

Recently, I have had to work on an embedded platform for image processing. It was quite a big deal, as I had never worked with any sort of embedded platform and the work is quite different from anything I have done before. So my first task was to interface a camera with a microcontroller. After consulting my friends Shrenik and Vinod, I decided to use a microcontroller that provides a hardware camera interface instead of writing the complete firmware from scratch for an ATmega, as I had planned earlier.

After some research, I ended up selecting the well-known STM32F4 series of microcontrollers. The STM32F407 is a high-powered μC with an ARM Cortex-M4 processor running at 168 MHz. The development board available has 1 MB of on-chip flash and 192 kB of SRAM. After playing with GBs of RAM, it sure was tough to be excited about a few kB of SRAM, but it was a different challenge to solve the problem using as few resources as possible. The STM32F407 features a hardware camera interface known as DCMI (Digital Camera Interface). It is compatible with a huge range of camera modules on the market.

I also decided to use the OV2640 camera module, as it features an on-board JPEG encoder and is quite well documented. After a few days of familiarizing myself with the basic concepts and fiddling around with the standard peripheral library from ST, I came across this amazing implementation, OpenMV. I found it extremely helpful for understanding the intricacies of image processing on embedded systems.

The primary issues that I faced while working on this project were:

  • Understanding DCMI and DMA interfaces of the STM32 controller.
  • Understanding the communication and synchronization between the camera and the controller.
  • Clock configuration, and ST's provision for gating peripheral clocks when they are not needed, for low-power operation.

I am going to be blogging soon about my foray into the world of microcontrollers and camera control. Until then, here are a few images that I captured with my setup.


Back to Basics: Sparse Coding?


A good introduction to sparse coding. I hope to do some work on this in the future.

Originally posted on the Serious Computer Vision Blog:

by Gooly (Li Yang Ku)

Gabor-like filters

It’s always good to go back, once in a while, to the reason that lured you into computer vision. Mine was to understand the brain, after I astonishingly realized while studying EE in undergrad that computers have no intelligence. In fact, if my mother language had used the translation “computer” instead of “electrical brain”, I would probably be better off.

Anyway, I am currently revisiting some of the first few computer vision papers I read, and to tell the truth I still learn a lot from reading material I have read several times before, which you could also interpret as my never actually having understood a paper.

So back to the papers,

Simoncelli, Eero P., and Bruno A. Olshausen. “Natural image statistics and neural representation.” Annual review of neuroscience 24.1 (2001): 1193-1216.

Olshausen, Bruno A., and David J. Field. “Sparse coding with an…


VLAD: An extension of Bag of Words

Recently, I was a participant in TagMe, an image categorization competition conducted by Microsoft and the Indian Institute of Science, Bangalore. The problem statement was to classify a given set of images into five classes: faces, shoes, flowers, buildings, and vehicles. As it goes, it is not a trivial problem to solve. So I decided to try my existing bag-of-words algorithm on it. It worked to an extent: I got an accuracy of approximately 86% with SIFT features and an RBF SVM for classification. To improve my score, though, I decided to look at better methods of feature quantization. I had already been looking at VLAD (Vector of Locally Aggregated Descriptors), a first-order extension of BoW, for my leaf recognition project.

So I decided to try VLAD with OpenCV and implemented a small function for it based on OpenCV's current BoW API. The results showed a remarkable improvement, with an accuracy of 96.5% using SURF descriptors on the validation dataset provided by the organizers.

What is VLAD?

Recall that BoW simply counts the number of descriptors associated with each cluster in a codebook (vocabulary) and creates a histogram for each set of descriptors from an image, thus representing the information in an image as a compact vector. VLAD is an extension of this concept: we accumulate the residual of each descriptor with respect to its assigned cluster. In simpler terms, we match each descriptor to its closest cluster and then, for each cluster, store the sum of the differences between the descriptors assigned to that cluster and its centroid. Let us have a look at the math behind VLAD.

Mathematical Formulation

As with bag of words, we first train a codebook from the descriptors of our training dataset, C=\{c_1,c_2,\ldots,c_k\}, where k is the number of clusters in k-means. We then associate each d-dimensional local descriptor x from an image with its nearest neighbour in the codebook.

The idea behind VLAD feature quantization is that, for each cluster centroid c_i, we accumulate the difference x-c_i over all descriptors x whose nearest neighbour is c_i, i.e. NN(x) = c_i.

Representing the VLAD vector for each image by v, we have,

v_{ij} = \sum_{x : NN(x) = c_i} (x_j - c_{ij})

where i=1,2,\ldots,k and j=1,2,\ldots,d.

The vector v is subsequently normalized with its L_2 norm as v=\frac{v}{\|v\|_2}
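
For illustration, here is a minimal NumPy sketch of this encoding. It is not the OpenCV-based function mentioned above; the names vlad_encode, descriptors, and codebook are my own, with codebook holding the k-means centroids C and descriptors the n local descriptors of one image.

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """descriptors: (n, d) local descriptors of one image; codebook: (k, d) centroids."""
    k, d = codebook.shape
    # Hard-assign every descriptor to its nearest centroid NN(x).
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)

    # Accumulate the residuals x - c_i inside each Voronoi cell.
    v = np.zeros((k, d))
    for i in range(k):
        assigned = descriptors[nearest == i]
        if len(assigned):
            v[i] = (assigned - codebook[i]).sum(axis=0)

    # Flatten to a k*d vector and L2-normalize, as in the formulation above.
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```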

Comparison with BoW

The primary advantage of VLAD over BoW is that our feature vector becomes more discriminative by including the difference of each descriptor from the centroid of its Voronoi cell. This first-order statistic adds more information to our feature vector and hence gives our classifier better discrimination. It also points us to other possible improvements: adding higher-order statistics to our feature encoding, as well as looking at soft assignment, i.e. assigning each descriptor to multiple centroids, weighted by their distance from the descriptor.


Here are a few of my results on the TagMe dataset.


Improvements to VLAD:

There are several possible extensions for VLAD, primarily various normalization options. Arandjelovic and Zisserman, in their paper All about VLAD, propose several normalization techniques, including intra-normalization and power normalization, along with a spatial extension, MultiVLAD. Delhumeau et al. propose several different normalization techniques as well as a modification to the VLAD pipeline, showing improvements to nearly state-of-the-art performance.

Other references also stress spatial pooling, i.e. dividing the image into regions and computing a VLAD vector for each tile to better represent local features and spatial structure. A few also advise soft assignment, which refers to assigning descriptors to multiple clusters, weighted by their distance from each cluster.
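
As a rough sketch of the power and intra-normalization steps mentioned above, the snippet below operates on the raw k x d residual matrix (i.e. before the flattening step in the earlier sketch); alpha = 0.5 is a common choice in the literature, not a value taken from these papers.

```python
import numpy as np

def normalize_vlad(residuals, alpha=0.5):
    """residuals: (k, d) matrix of per-cluster residual sums."""
    # Power (signed square-root) normalization, applied element-wise.
    v = np.sign(residuals) * np.abs(residuals) ** alpha
    # Intra-normalization: L2-normalize each cluster's residual block separately.
    norms = np.linalg.norm(v, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    v = v / norms
    # Final global L2 normalization of the flattened vector.
    flat = v.ravel()
    n = np.linalg.norm(flat)
    return flat / n if n > 0 else flat
```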


Here is a link to my code for TagMe. It was a quick hack job for testing, so it is not very clean, though I am going to clean it up soon.

Also, a few references for those who want to read the papers I referred to:

1. Jégou, H., Perronnin, F., Douze, M., Sánchez, J., Pérez, P., & Schmid, C. (2012). Aggregating local image descriptors into compact codes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9), 1704-1716.

2. Delhumeau, J., Gosselin, P. H., Jégou, H., & Pérez, P. (2013). Revisiting the VLAD image representation. In Proceedings of the 21st ACM International Conference on Multimedia (pp. 653-656). ACM.

3. Arandjelovic, R., & Zisserman, A. (2013). All about VLAD. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 1578-1585). IEEE.

Well, that concludes this post. Be on the lookout for more on image retrieval: improvements to VLAD and Fisher Vectors.


SuperPixels: SLIC and more

Hey guys,

I recently came across a very cool computer vision concept: superpixels. Superpixels are aggregations of pixels with common features, for example color, illumination, absorbance, and spatial location. They are perhaps a precursor to full object segmentation. Here, I am going to discuss a specific kind of superpixel called SLIC.

SLIC stands for Simple Linear Iterative Clustering. The name says it all: it is essentially a simple, iterative clustering of pixels based on k-means. It was first proposed in the following paper:

Radhakrishna Achanta, Appu Shaji, Kevin Smith, Aurélien Lucchi, Pascal Fua, and Sabine Süsstrunk, SLIC Superpixels, EPFL Technical Report 149300, June 2010.

The basic idea of superpixels is that they carry much more high-level information than individual pixels. Pixels give extremely low-level, local information which often cannot be used without heavy-duty processing. Superpixels, instead, provide less localized information, which makes more sense for problems like object segmentation. A simple analogy is stepping back a bit to look at a slightly bigger picture.

To start off with, let's look at a few example images given in the paper:



As you can see, pixels are clustered according to their location and color. The different images show different superpixel sizes. One of the major advantages of SLIC superpixels is that they show low undersegmentation error and high boundary recall. It is also one of the most efficient methods for generating superpixels.

The algorithm is very intuitive. We start off by converting the image to the L*a*b* color space. We then create a feature point for each pixel consisting of the L*, a*, b* values and the x and y coordinates. These features are then clustered using k-means clustering.

Now, one of the brilliant things about SLIC is that it takes the x-y coordinates into account while clustering, so you get uniform and well-defined superpixels that can then be used for segmentation. The distance function used for clustering, from the paper, is

D_s = d_{lab} + \frac{m}{S} d_{xy}

where d_{lab} is the Euclidean distance in L*a*b* space, d_{xy} is the Euclidean distance in the x-y plane, S is the grid interval between the initial cluster centres, and m controls the compactness of the superpixels.

So, we start off with K equally spaced cluster centres across the image. Each centre is then moved to the lowest-gradient position in a 3×3 neighbourhood around its grid point. We do this to avoid placing centres on edges and to reduce the probability of choosing a noisy pixel. We calculate the image gradients as follows:

G(x,y) = \| I(x+1,y) - I(x-1,y) \|^2 + \| I(x,y+1) - I(x,y-1) \|^2

where I(x,y) is the L*a*b* vector corresponding to the pixel at (x,y) and \|\cdot\| is the L_2 norm.
Each pixel in the image is then associated with the nearest cluster centre in the labxy feature space, using the distance D_s above. Once all pixels are assigned, each cluster centre is recomputed as the average labxy vector of its members. We repeat the assignment and update steps until convergence.

As a result, we will have nicely clustered groups of pixels in both the color and spatial domains. Now, there might be a few stray pixels that end up in the wrong clusters. Though this is pretty rare according to the authors of the paper and my experiments, we enforce a connectivity constraint to remove these outliers. The paper does not discuss this in any great detail, but connectivity is an important condition to impose, especially if we are going to use SLIC superpixels for segmentation.
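
Putting the steps above together, here is a minimal NumPy/OpenCV sketch of the core SLIC loop. This is not the authors' implementation (and not the code I will post later); K, m, and n_iters are parameter names I chose, and the connectivity enforcement step is omitted for brevity.

```python
import numpy as np
import cv2

def slic(image_bgr, K=200, m=10.0, n_iters=10):
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float64)
    h, w = lab.shape[:2]
    S = int(np.sqrt(h * w / K))                    # grid interval between seeds

    # Gradient G(x, y) used to nudge seeds off edges and noisy pixels.
    G = np.full((h, w), np.inf)
    G[1:-1, 1:-1] = (((lab[1:-1, 2:] - lab[1:-1, :-2]) ** 2).sum(axis=2)
                     + ((lab[2:, 1:-1] - lab[:-2, 1:-1]) ** 2).sum(axis=2))

    # Initialise centres on a regular grid, moved to the lowest-gradient
    # position in a 3x3 neighbourhood; each centre is a labxy feature.
    centres = []
    for y in range(S // 2, h, S):
        for x in range(S // 2, w, S):
            patch = G[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            dy, dx = np.unravel_index(np.argmin(patch), patch.shape)
            cy, cx = max(y - 1, 0) + dy, max(x - 1, 0) + dx
            centres.append([lab[cy, cx, 0], lab[cy, cx, 1], lab[cy, cx, 2], cx, cy])
    centres = np.array(centres, dtype=np.float64)

    ys, xs = np.mgrid[0:h, 0:w]
    labels = np.zeros((h, w), dtype=np.int32)
    dist = np.empty((h, w))

    for _ in range(n_iters):
        dist.fill(np.inf)
        # Assignment: each pixel goes to the nearest centre in labxy space,
        # searched only in a window of roughly 2S x 2S around that centre.
        for k, (l, a, b, cx, cy) in enumerate(centres):
            y0, y1 = int(max(cy - S, 0)), int(min(cy + S + 1, h))
            x0, x1 = int(max(cx - S, 0)), int(min(cx + S + 1, w))
            d_lab = np.sqrt(((lab[y0:y1, x0:x1] - [l, a, b]) ** 2).sum(axis=2))
            d_xy = np.sqrt((xs[y0:y1, x0:x1] - cx) ** 2 + (ys[y0:y1, x0:x1] - cy) ** 2)
            D = d_lab + (m / S) * d_xy             # the distance D_s from above
            better = D < dist[y0:y1, x0:x1]
            dist[y0:y1, x0:x1][better] = D[better]
            labels[y0:y1, x0:x1][better] = k
        # Update: recompute each centre as the mean labxy vector of its members.
        for k in range(len(centres)):
            member = labels == k
            if member.any():
                centres[k, :3] = lab[member].mean(axis=0)
                centres[k, 3] = xs[member].mean()
                centres[k, 4] = ys[member].mean()
    return labels
```

The returned label map can then be used for segmentation or visualized by drawing the boundaries between differently labelled regions.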

That's it for the explanation of an amazing technique, folks! I will post my OpenCV code link in a few days and will also discuss a few issues with the code, along with segmentation techniques built on superpixels, in the next few posts.

P.S.: By the way, we will also be looking at Bag of Words in a little more detail in a few days. Be on the lookout for a barrage of posts.

Bag of Words – Object Recognition

Hey guys,

It's been a really long time since my last post. But this series of posts is going to be really cool, I hope.

Today we are going to discuss one of the most important problems in computer vision: object recognition. We humans recognize objects trivially, without consciously paying attention to the fact or even wondering how exactly we achieve it. You look at a baseball flying towards your face, you recognize it as a baseball about to break your nose, and you duck! All in a matter of milliseconds.

But the process that your brain undertakes in those few milliseconds has eluded a perfect computational implementation for several years now. Object recognition is perhaps rightly considered the central problem in computer vision, though recent research has made great strides.

I recently undertook a project in which I had to classify leaves by the species they come from. As it sounds, it's not a trivial problem; it took me a few days to figure out the first steps. To start off with, I decided to use the Bag-of-Words (BoW) model, a highly cited method for scene and object classification, for the above problem.

To begin with, I found a really nice dataset to work with here: . The dataset contains images of leaves from 32 species on plain white backgrounds, which simplified my experiment. I am really grateful to the authors for providing such a comprehensive dataset for free on the web. (Kind of all for open access now.)

Bag of words is basically a simplified representation of an image. It's actually a concept taken from natural language processing, where documents are represented as unordered collections of words, disregarding grammar. Translating this into CV jargon, it means that we simplify an image by picking out features and representing the image as a collection of those features. A good explanation of what features are can be found on my friend Siddharth's blog here.

To get more technical about BoW: we construct a vocabulary of features, use this vocabulary to create a histogram of features for each image, and then use a simple machine learning algorithm like an SVM or Naive Bayes for classification.

This is the algorithm I followed for BoW (a rough code sketch follows the list below). I got a lot of help from Roy's blog here.

1. We pick out features from all the images in our training dataset. I used SIFT (Scale Invariant Feature Transform).

2. We cluster these features using any clustering algorithm. I used K-Means. (Pretty fast in OpenCV)

3. We use the cluster centres as a vocabulary to construct histograms. We simply count the number of features from each image belonging to each cluster, then normalize each histogram by dividing by the total number of features. Thus, each image in the dataset is represented by one histogram.

4. Then these histograms are passed to an SVM for training. I currently use a Radial Basis Function (RBF) multi-class SVM in OpenCV. Using OpenCV's CvSVM::train_auto() function, the SVM parameters are chosen by cross-validation.
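
Here is a rough sketch of the four steps above using OpenCV's Python bindings. The post used the older C++ CvSVM API; cv2.SIFT_create needs OpenCV 4.4 or later, the vocabulary size K = 100 is an arbitrary choice, and train_bow_classifier is a name I made up for this sketch.

```python
import cv2
import numpy as np

def build_vocabulary(descriptor_list, K=100):
    # Step 2: k-means over all training descriptors -> K visual words.
    all_desc = np.vstack(descriptor_list).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, vocab = cv2.kmeans(all_desc, K, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return vocab

def bow_histogram(desc, vocab):
    # Step 3: normalized histogram of nearest visual words for one image.
    dists = np.linalg.norm(desc.astype(np.float32)[:, None, :] - vocab[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(np.float32)
    return hist / hist.sum()

def train_bow_classifier(image_paths, labels, K=100):
    sift = cv2.SIFT_create()
    # Step 1: SIFT descriptors for every training image
    # (assumes every image yields at least one keypoint).
    desc_list = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        desc_list.append(desc)
    vocab = build_vocabulary(desc_list, K)
    X = np.vstack([bow_histogram(d, vocab) for d in desc_list])
    y = np.array(labels, dtype=np.int32)
    # Step 4: RBF SVM; trainAuto searches C and gamma by cross-validation,
    # the Python counterpart of the CvSVM::train_auto() call mentioned above.
    svm = cv2.ml.SVM_create()
    svm.setType(cv2.ml.SVM_C_SVC)
    svm.setKernel(cv2.ml.SVM_RBF)
    svm.trainAuto(X, cv2.ml.ROW_SAMPLE, y)
    return vocab, svm
```

At test time, the same bow_histogram function is applied to a new image's descriptors and the result is passed to svm.predict.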

Now, why does Bag of Words work? Why use it rather than simple feature matching? The answer is simple: features provide only local information. Using the bag-of-words model, we create a global representation of an object. Thus, we take a group of features, create a simpler representation of the image, and classify it.

Those were the pros of the algorithm. But there are a few cons associated with this model.

1. As is evident, we cannot localize an object in an image using this model. That is to say, the problem of finding where the object of interest lies is still open and needs other methods.

2. We neglect grammar. In CV terms, this means we neglect the positions of features relative to each other. Thus the concept of a global shape may be lost.

As for our Leaf Recognizer, we are still working on improving the accuracy. We are almost at our goal! The following are some of the images we got as a result of the above algorithm: