Direct Photometric Alignment by Mesh Deformation

Kaimo Lin1,2, Nianjuan Jiang2,4, Shuaicheng Liu3, Loong-Fah Cheong1, Minh Do2, Jiangbo Lu2,4

1. National University of Singapore   2. Advanced Digital Sciences Center, Singapore
3. University of Electronic Science and Technology of China   4. Shenzhen Cloudream Technology, China

Abstract

The choice of motion model is vital in applications such as image/video stitching and video stabilization. Conventional methods have explored approaches ranging from simple global parametric models to complex per-pixel optical flow. Mesh-based warping methods strike a good balance between computational complexity and model flexibility. However, they typically require high-quality feature correspondences, and their performance degrades with feature mismatches and in low-textured image content. In this paper, we propose a mesh-based photometric alignment method that minimizes pixel intensity differences instead of the Euclidean distances of known feature correspondences. The proposed method combines the superior performance of dense photometric alignment with the efficiency of mesh-based image warping. It achieves better global alignment quality than its feature-based counterpart on textured images and, more importantly, remains robust to low-textured image content. Extensive experiments show that our method handles a wide variety of images and videos, and outperforms representative state-of-the-art methods in both image stitching and video stabilization tasks.
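To make the core idea concrete, below is a minimal, self-contained numpy sketch of mesh-driven photometric alignment (not the authors' implementation). A coarse grid of per-vertex 2D offsets is bilinearly interpolated to a dense flow, the source image is backward-warped, and the mean squared intensity difference to the target is minimized directly over the mesh vertices. The grid size, step size, and the finite-difference optimizer are illustrative choices; the paper's actual energy and solver differ.

```python
import numpy as np

def upsample_flow(mesh, H, W):
    """Bilinearly interpolate per-vertex offsets (gh, gw, 2) to a dense (H, W, 2) flow."""
    gh, gw, _ = mesh.shape
    ys = np.linspace(0, gh - 1, H)
    xs = np.linspace(0, gw - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, gh - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, gw - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = mesh[y0][:, x0] * (1 - wx) + mesh[y0][:, x1] * wx
    bot = mesh[y1][:, x0] * (1 - wx) + mesh[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def warp(img, flow):
    """Backward-warp img by a dense (dy, dx) flow with bilinear sampling, clamped at borders."""
    H, W = img.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    sy = np.clip(yy + flow[..., 0], 0, H - 1)
    sx = np.clip(xx + flow[..., 1], 0, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, H - 1); fy = sy - y0
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, W - 1); fx = sx - x0
    return (img[y0, x0] * (1 - fy) * (1 - fx) + img[y0, x1] * (1 - fy) * fx +
            img[y1, x0] * fy * (1 - fx) + img[y1, x1] * fy * fx)

def photometric_cost(mesh, src, tgt):
    """Mean squared intensity difference after warping src by the mesh-induced flow."""
    return np.mean((warp(src, upsample_flow(mesh, *src.shape)) - tgt) ** 2)

def align(src, tgt, grid=(3, 3), iters=60, step=25.0, eps=1e-2):
    """Descend the photometric cost over mesh-vertex offsets via finite differences.

    A toy optimizer for illustration only; any gradient-based solver would do.
    """
    mesh = np.zeros(grid + (2,))
    for _ in range(iters):
        base = photometric_cost(mesh, src, tgt)
        grad = np.zeros_like(mesh)
        for idx in np.ndindex(mesh.shape):
            m = mesh.copy()
            m[idx] += eps
            grad[idx] = (photometric_cost(m, src, tgt) - base) / eps
        mesh -= step * grad
    return mesh
```

Note that no feature correspondences appear anywhere: the objective is driven purely by pixel intensities, which is what makes the approach applicable to low-textured content where feature detectors fail.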

Materials

Examples

BibTex

@inproceedings{lin2017MPA,
        author = {Lin, Kaimo and Jiang, Nianjuan and Liu, Shuaicheng and Cheong, Loong-Fah and Do, Minh and Lu, Jiangbo},
        title = {Direct Photometric Alignment by Mesh Deformation},
        booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition ({CVPR})},
        year = {2017}
}