2010
Conference Paper
Title
Local multi-modal image matching based on self-similarity
Abstract
A fundamental problem in computer vision is the precise determination of correspondences between pairs of images. Many methods have been proposed that work very well for image data from a single modality. However, with the wide availability of sensor systems with different spectral sensitivities, there is a growing demand to automatically fuse information from multiple sensor types. We focus on the problem of finding point and local region correspondences in an inter-modality imaging setup. We use a Generalized Hough Transform to determine small regions with a similar geometric arrangement of local image features and thereby robustly identify correct matches. We additionally refine region correspondences by a fast non-linear optimization of a self-similarity distance measure. For local image regions, this measure outperforms standard multi-modal registration approaches such as mutual information or correlation ratio. The method is evaluated on Visible/Infrared (IR) and Visible/Light Detection and Ranging (LiDAR) intensity image pairs and shows very promising results. Potential applications are numerous and include multi-spectral camera calibration, multi-spectral texturing of 3D models, multi-spectral segmentation, and multi-spectral super-resolution.
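The key idea behind the self-similarity distance is that each region is described only by how its central patch resembles its own surrounding neighborhood, so sensor-specific intensity mappings largely cancel out. The following is a minimal sketch of that idea (in the spirit of local self-similarity descriptors), not the paper's actual implementation; all parameter names and values are illustrative assumptions.

```python
import numpy as np

def self_similarity_descriptor(img, cy, cx, patch=3, radius=8):
    """Correlate a small central patch with every position in its surrounding
    region, producing a similarity surface that encodes the region's internal
    geometric layout rather than its raw intensities.
    (patch/radius values are illustrative assumptions.)"""
    h = patch // 2
    center = img[cy - h:cy + h + 1, cx - h:cx + h + 1].astype(float)
    desc = np.empty((2 * radius + 1, 2 * radius + 1))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            q = img[y - h:y + h + 1, x - h:x + h + 1].astype(float)
            # Sum of squared differences between central and shifted patch.
            desc[dy + radius, dx + radius] = np.sum((center - q) ** 2)
    # Map SSD values to a [0, 1] similarity surface; normalizing by the
    # maximum makes the descriptor invariant to a global intensity scaling.
    sigma = max(desc.max(), 1e-9)
    return np.exp(-desc / sigma)

def self_similarity_distance(img_a, pa, img_b, pb):
    """Distance between two candidate regions as the L2 norm between their
    self-similarity descriptors. Because each descriptor only compares a
    patch with its own neighborhood, it can be compared across modalities."""
    da = self_similarity_descriptor(img_a, *pa)
    db = self_similarity_descriptor(img_b, *pb)
    return float(np.linalg.norm(da - db))
```

For example, inverting an image's intensities (a crude stand-in for a modality change) leaves the self-similarity descriptor unchanged, so corresponding points score a near-zero distance while non-corresponding points do not.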