Image Stitching (Panorama)

This was the final assignment of COMP558: Fundamentals of Computer Vision, where we had to implement an image stitching (panorama) algorithm from scratch. We were given a set of images taken by rotating the camera vertically and horizontally, and the goal was to stitch them together into a panorama, much like the panorama mode on mobile devices.

For feature extraction we used the SIFT algorithm implemented earlier in this project, with certain modifications (second-order keypoint refinement). Features lying along edges are eliminated using the eigenvalues of the Hessian matrix: an edge-like keypoint has a large eigenvalue across the edge and a small one along it, so keypoints whose eigenvalue ratio is too large are suppressed. Low-contrast features are eliminated by thresholding the value of the second-order Taylor expansion of the Difference-of-Gaussians function at each refined keypoint. Instead of the 36-bin feature histograms used earlier, we now had 128-dimensional feature vectors, which are intuitively better descriptors.
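For the curious, here is a minimal sketch of what the edge-rejection test looks like. This is illustrative code rather than the exact project implementation; `D`, `r`, `c`, and `edgeRatio` are assumed names for the DoG layer, the keypoint location, and the eigenvalue-ratio threshold.

```matlab
% Edge-response test on a Difference-of-Gaussians layer D (a sketch;
% D, r, c, edgeRatio are assumed names, not the project's actual code).
function keep = isNotEdge(D, r, c, edgeRatio)
    % 2x2 Hessian from finite differences around the candidate keypoint
    Dxx = D(r, c+1) + D(r, c-1) - 2*D(r, c);
    Dyy = D(r+1, c) + D(r-1, c) - 2*D(r, c);
    Dxy = (D(r+1, c+1) - D(r+1, c-1) - D(r-1, c+1) + D(r-1, c-1)) / 4;

    trH  = Dxx + Dyy;          % sum of the two eigenvalues
    detH = Dxx*Dyy - Dxy^2;    % product of the two eigenvalues

    % Edge-like points have one large and one small eigenvalue, which makes
    % tr(H)^2 / det(H) large; reject when the ratio exceeds the bound
    % (edgeRatio + 1)^2 / edgeRatio (Lowe's paper uses edgeRatio = 10).
    keep = detH > 0 && trH^2 / detH < (edgeRatio + 1)^2 / edgeRatio;
end
```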

For matching the extracted features, we compared two strategies: MATLAB's built-in matchFeatures function and our own implementation of the Bhattacharyya distance, which requires normalized histograms. We decided to proceed with matchFeatures for its relative simplicity, even though the Bhattacharyya measure was more robust and informative.
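A minimal sketch of the Bhattacharyya-style comparison between two descriptors, assuming each descriptor is first normalized to unit sum. This sketch uses the sqrt(1 - BC) form (sometimes called the Hellinger distance); other definitions such as -ln(BC) are also common, so the details may differ from the project's implementation.

```matlab
% Bhattacharyya-style distance between two descriptors treated as
% normalized histograms (a sketch; d1, d2 are assumed 128-D descriptors).
function d = bhattacharyyaDistance(d1, d2)
    h1 = d1 / sum(d1);            % normalize to unit sum
    h2 = d2 / sum(d2);
    bc = sum(sqrt(h1 .* h2));     % Bhattacharyya coefficient, in [0, 1]
    d  = sqrt(1 - bc);            % 0 for identical histograms
end
```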

Using the feature matches, we implemented a least-squares based Random Sample Consensus (RANSAC) algorithm to find a homography H between corresponding images that brings the matched points into correspondence. This step is called image registration. The homography was found by solving a homogeneous linear system of the form Ah = 0, shown below, using Singular Value Decomposition.

Least Squares Estimation equation for finding Homography
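To make the formulation concrete, here is a minimal sketch of estimating H from matched points via SVD, following the standard Direct Linear Transform. The variable names and the exact row layout of A are illustrative and may differ from what we used in the project; pts1 and pts2 are assumed to be N-by-2 matrices of matched (x, y) coordinates.

```matlab
% Direct Linear Transform: estimate H such that pts2 ~ H * pts1
% (a sketch; pts1, pts2 are assumed N-by-2 matched coordinates).
function H = estimateHomography(pts1, pts2)
    n = size(pts1, 1);
    A = zeros(2*n, 9);
    for i = 1:n
        x = pts1(i, 1);  y = pts1(i, 2);
        u = pts2(i, 1);  v = pts2(i, 2);
        A(2*i-1, :) = [-x, -y, -1,  0,  0,  0, u*x, u*y, u];
        A(2*i,   :) = [ 0,  0,  0, -x, -y, -1, v*x, v*y, v];
    end
    % The least-squares solution of A*h = 0 is the right singular vector
    % of A associated with the smallest singular value.
    [~, ~, V] = svd(A);
    H = reshape(V(:, end), 3, 3)';   % rebuild the 3x3 matrix row-wise
    H = H / H(3, 3);                 % normalize so H(3,3) = 1
end
```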

Solving this system requires only 4 point correspondences, so in our RANSAC algorithm we select 4 random matches at each iteration to estimate a homography. Using this homography matrix, we then find the consensus set, i.e. the matches between the two images that agree with the estimated homography, based on Euclidean distance: for each match we transform the point from the first image with H, compute its distance to the corresponding point in the second image, and threshold it at 0.5 to filter the inliers.
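A rough sketch of this RANSAC loop is shown below. It is illustrative only: matched1 and matched2 are assumed N-by-2 matched keypoint locations, numIters and thresh are hypothetical parameters, and estimateHomography refers to the DLT sketch above.

```matlab
% RANSAC over putative matches (a sketch, not the project's exact code).
function [bestH, bestInliers] = ransacHomography(matched1, matched2, numIters, thresh)
    n = size(matched1, 1);
    bestInliers = [];
    bestH = eye(3);
    for iter = 1:numIters
        % Minimal sample: 4 random correspondences
        idx = randperm(n, 4);
        H = estimateHomography(matched1(idx, :), matched2(idx, :));

        % Project all points from image 1 with H and measure the Euclidean
        % distance to their putative matches in image 2.
        p1 = [matched1, ones(n, 1)]';   % 3 x N homogeneous points
        p2 = H * p1;
        p2 = p2(1:2, :) ./ p2(3, :);    % back to inhomogeneous coordinates
        d  = sqrt(sum((p2' - matched2).^2, 2));

        inliers = find(d < thresh);     % consensus set for this hypothesis
        if numel(inliers) > numel(bestInliers)
            bestInliers = inliers;
            bestH = H;
        end
    end
    % Final least-squares re-estimate over the full consensus set
    if numel(bestInliers) >= 4
        bestH = estimateHomography(matched1(bestInliers, :), matched2(bestInliers, :));
    end
end
```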

Following the sequential image registration, we use the matched features from consecutive images to learn the geometric transformations between them in order to project the images into a panoramic image. This process is called image stitching. To perform it, an empty panorama canvas is created, then the images are aligned using the learned homographies, warped onto the panorama canvas, and blended.
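A minimal sketch of this step using MATLAB's imwarp and imref2d, assuming images is a cell array of RGB frames and tforms is an array of projective2d objects mapping each image into the reference frame (note that projective2d expects the transpose of the column-vector H). For simplicity the overlap handling here is a plain overwrite rather than the blending used in a full pipeline.

```matlab
% Find the bounding box of all warped images in the reference frame
numImages = numel(images);
xLim = zeros(numImages, 2);  yLim = zeros(numImages, 2);
for i = 1:numImages
    [h, w, ~] = size(images{i});
    [xLim(i, :), yLim(i, :)] = outputLimits(tforms(i), [1 w], [1 h]);
end

% Empty panorama canvas large enough to hold every warped image
width  = round(max(xLim(:)) - min(xLim(:)));
height = round(max(yLim(:)) - min(yLim(:)));
panoramaView = imref2d([height width], ...
    [min(xLim(:)) max(xLim(:))], [min(yLim(:)) max(yLim(:))]);
panorama = zeros(height, width, 3, 'like', images{1});

% Warp each image (and a validity mask) onto the shared canvas
for i = 1:numImages
    warped = imwarp(images{i}, tforms(i), 'OutputView', panoramaView);
    mask   = imwarp(true(size(images{i}, 1), size(images{i}, 2)), ...
                    tforms(i), 'OutputView', panoramaView);
    % Simple overwrite "blend": later images replace earlier pixels
    for c = 1:size(panorama, 3)
        plane = panorama(:, :, c);
        src   = warped(:, :, c);
        plane(mask) = src(mask);
        panorama(:, :, c) = plane;
    end
end
```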

Result of our Image stitching algorithm on Real images taken from my OnePlus phone
