Friday, June 24, 2011

Repeatability experiment with real images

In this experiment we count the number of features extracted using the Laplace-Beltrami (LB) approach and compare them to the features extracted using SIFT. We use a real image and a version of it rotated by π/2. We count the number of points for which the distance between a feature extracted in the rotated image and the projection of the corresponding feature from the original image is smaller than 2 pixels.
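A minimal sketch of this counting step, assuming features come as (x, y) pixel coordinates and that project_rotation is a hypothetical helper encoding the known π/2 ground-truth mapping through the calibration:

```python
import numpy as np
from scipy.spatial import cKDTree

def repeatability(orig_pts, rot_pts, project_rotation, tol=2.0):
    """Count features repeated under a known rotation.

    orig_pts, rot_pts : (N, 2) arrays of (x, y) detections in the
        original and the rotated image.
    project_rotation  : hypothetical helper mapping a point from the
        original image into the rotated one (here, the pi/2 rotation
        applied through the calibration).
    tol : distance threshold in pixels (2 px in this experiment).
    """
    projected = np.array([project_rotation(p) for p in orig_pts])
    tree = cKDTree(rot_pts)            # nearest detected feature
    d, _ = tree.query(projected, k=1)  # distance to the closest match
    return int(np.sum(d < tol))
```

Dividing this count by the smaller of the two detection counts gives the usual repeatability score.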

Original image SIFT


Rotated image SIFT


TOTAL POINTS = 650

Original image LB

Rotated image LB
TOTAL POINTS = 741

Wednesday, June 22, 2011

Matching using LB scale space on real images

In this experiment we use the same calibrated omnidirectional images, but the scale space is computed using our Laplace-Beltrami operator. The first experiment shows the matching under pure rotation.
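For reference, the Laplace-Beltrami operator on the sphere is Δf = (1/sin θ) ∂θ(sin θ ∂θ f) + (1/sin²θ) ∂²φ f. The sketch below shows one straightforward finite-difference discretization on an equirectangular grid, purely as an illustration of the operator, not necessarily the implementation used here:

```python
import numpy as np

def laplace_beltrami(f):
    """Discrete Laplace-Beltrami of a spherical image f(theta, phi)
    sampled on an equirectangular grid (rows = colatitude theta in
    (0, pi), columns = longitude phi in [0, 2*pi)).  Illustrative
    finite-difference version; the grid is offset to avoid the poles.
    """
    n_theta, n_phi = f.shape
    dth = np.pi / n_theta
    dph = 2.0 * np.pi / n_phi
    theta = (np.arange(n_theta) + 0.5) * dth       # stay off the poles
    sin_t = np.sin(theta)[:, None]

    # longitude derivative is periodic
    f_pp = (np.roll(f, -1, axis=1) - 2*f + np.roll(f, 1, axis=1)) / dph**2

    # colatitude term: (1/sin) d/dtheta ( sin * df/dtheta )
    f_t = np.gradient(f, dth, axis=0)
    f_th = np.gradient(sin_t * f_t, dth, axis=0) / sin_t

    return f_th + f_pp / sin_t**2
```

Repeatedly adding a small multiple of this operator to the image diffuses it on the sphere, which is how a scale space can be built from it.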

Matching using LB and Polar Descriptor

Matching using LB and SIFT Descriptor

The next matching experiment is performed between images with rotation and translation.

Matching using LB and Polar Descriptor


Matching using LB and SIFT Descriptor

Friday, June 10, 2011

Matching using real images and the polar descriptor

The first experiment consists of matching two omnidirectional images, where the second is obtained by rotating the first around the z-axis by π/2.
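Assuming the camera's optical axis is the z-axis and the principal point is the image center, a π/2 rotation about z amounts to an exact in-plane rotation of the image, so the rotated test image can be generated without interpolation (a sketch, not necessarily how it was done here):

```python
import numpy as np

def rotate_about_z(img):
    """Exact pi/2 rotation of an omnidirectional image about its
    center, valid when the principal point is the image center and
    the optical axis is the z-axis (assumptions of this sketch)."""
    return np.rot90(img)  # lossless 90-degree in-plane rotation
```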

Matching using Polar Descriptor


Matching using SIFT

The second experiment shows the matching between two different omnidirectional images, with both rotation and translation between them. A single octave is used.

Matching using Polar Descriptor


Matching using SIFT


In the next experiment we use all four octaves.

Matching using Polar Descriptor




Matching using SIFT




We observe that matching with LB across scales causes mismatched features, whereas with the SIFT descriptor the matching is performed correctly. This indicates that the LB approach has problems with matching across scales. More experiments have to be performed to identify the source of this behavior.

The next experiment matches the SIFT descriptors (128-dimensional vectors) using the QC criterion.
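Assuming QC here refers to the Quadratic-Chi histogram distance of Pele and Werman (an assumption on my part), a compact version for comparing 128-dimensional SIFT descriptors could look like this:

```python
import numpy as np

def qc_distance(P, Q, A=None, m=0.9):
    """Quadratic-Chi histogram distance (Pele & Werman, ECCV 2010).

    P, Q : 1-D histograms (e.g. 128-D SIFT descriptors).
    A    : bin-similarity matrix; the identity reduces QC to a
           chi-squared-like distance.
    m    : normalization exponent, 0 <= m < 1.
    """
    P = np.asarray(P, float)
    Q = np.asarray(Q, float)
    if A is None:
        A = np.eye(P.size)
    Z = (P + Q) @ A        # per-bin normalization factors
    Z[Z == 0] = 1.0        # avoid division by zero on empty bins
    D = (P - Q) / Z**m
    return np.sqrt(max(D @ A @ D, 0.0))
```

With A set to the identity, QC behaves like a normalized chi-squared distance; a non-trivial A lets nearby histogram bins count as partially similar.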

Matching SIFT using QC

Wednesday, June 1, 2011

Computing the support region on the sphere

We compute the support region on the sphere, which is required to compute the orientation and the descriptor.

Using the calibration, we map each detected extremum to the sphere and define a neighborhood proportional to sin(σ), where σ is the scale at which the point was detected. This neighborhood is then projected onto the omnidirectional image, and the gradient orientations and magnitudes are computed there. A weighted orientation histogram is built, and its peak is selected as the orientation of the point.
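A sketch of this orientation-assignment step, assuming the gradients of the projected support region have already been collected; the 36-bin histogram and the exact weighting are standard SIFT-style choices and an assumption here:

```python
import numpy as np

def dominant_orientation(grad_mag, grad_ori, weights, n_bins=36):
    """Pick the dominant orientation of a feature from the gradients
    inside its projected support region.

    grad_mag, grad_ori : gradient magnitude and orientation (radians)
        at the support-region pixels.
    weights : per-pixel weights (e.g. a Gaussian of the distance to
        the feature; the exact weighting is an assumption here).
    """
    bins = (grad_ori % (2*np.pi)) / (2*np.pi) * n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.astype(int) % n_bins, grad_mag * weights)
    peak = np.argmax(hist)                   # histogram peak
    return (peak + 0.5) * 2*np.pi / n_bins   # bin center, in radians
```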

Support region on the sphere




We observe that, depending on the position on the sphere, the support region in the image varies from circular to elliptical.

Support region for the descriptor

A similar process is followed to compute the descriptor of each detected feature. In this case the support region is divided into 36 bins. We also verify the correctness of the previously computed orientation.
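The binning is not spelled out in the post; one plausible reading, sketched here purely as an assumption, is 36 angular sectors around the feature with gradient magnitude accumulated per sector:

```python
import numpy as np

def polar_descriptor(dx, dy, grad_mag, feat_ori, n_bins=36):
    """Hypothetical polar descriptor: split the support region into
    36 angular sectors around the feature (dx, dy are pixel offsets
    from the feature) and accumulate gradient magnitude per sector.
    This binning scheme is an assumption, not the post's exact one.
    """
    # measure sector angles relative to the assigned orientation
    angle = (np.arctan2(dy, dx) - feat_ori) % (2*np.pi)
    sector = (angle / (2*np.pi) * n_bins).astype(int) % n_bins
    desc = np.zeros(n_bins)
    np.add.at(desc, sector, grad_mag)
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc
```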

Below are two examples of support regions over two omnidirectional images, shown together with the previously computed orientation.

Good orientation

Image 1



Image 2

Bad orientation

Image 1


Image 2

This incorrect orientation may arise because the gradients are computed on the original image rather than on the smoothed image at the scale where the feature was detected.
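If this hypothesis is right, the fix is to take the gradients from the scale-space level at which the feature was detected; a minimal sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradients_at_scale(img, sigma):
    """Gradient magnitude and orientation computed on the image
    smoothed to the detection scale sigma, instead of on the raw
    image (the suspected fix for the bad orientations)."""
    smoothed = gaussian_filter(img.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```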