2006
Conference Paper
Title
Online camera pose estimation in partially known and dynamic scenes
Abstract
One of the key requirements of augmented reality systems is robust real-time camera pose estimation. In this paper we present a robust approach which depends neither on offline pre-processing steps nor on prior knowledge of the entire target scene. The connection between the real and the virtual world is made by a given CAD model of one object in the scene. However, the model is only needed for initialization. A line model is created by rendering the object from a given camera pose and is registered to the image gradient to find the initial pose. In the tracking phase, the camera is no longer restricted to the modeled part of the scene. The scene structure is recovered automatically during tracking. Point features are detected in the images and tracked from frame to frame using a brightness-invariant template matching algorithm. Several template patches are extracted from different levels of an image pyramid, making the 2D feature tracking robust to large changes in scale. Occlusion is already detected at the 2D feature tracking level. The features' 3D locations are roughly initialized by linear triangulation and then refined recursively over time using techniques from the Extended Kalman Filter framework. A quality manager handles the influence of each feature on the estimation of the camera pose. As structure and pose recovery are always performed under uncertainty, statistical methods for estimating and propagating uncertainty have been consistently incorporated into both processes. Finally, validation results on synthetic as well as real video sequences are presented.
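The rough 3D initialization of feature locations by linear triangulation, as mentioned in the abstract, can be sketched as follows. This is an illustrative simplification, not the paper's implementation: instead of the usual DLT formulation, it uses the midpoint method, which triangulates a feature as the point halfway along the shortest segment between the two back-projected viewing rays. All names and values are hypothetical.

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + t*d1 and c2 + s*d2.

    c1, c2: camera centers (3-vectors); d1, d2: viewing ray directions
    toward the observed feature. Returns the estimated 3D feature location.
    """
    # Small helpers for 3-vector arithmetic (kept dependency-free).
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def scale(a, k): return [x * k for x in a]

    # Minimize |(c1 + t*d1) - (c2 + s*d2)|^2 over t, s via the
    # 2x2 normal equations:  a*t - b*s = e,  b*t - c*s = f.
    r = sub(c2, c1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    e, f = dot(d1, r), dot(d2, r)
    denom = a * c - b * b            # zero iff the rays are parallel
    t = (e * c - b * f) / denom
    s = (b * e - a * f) / denom

    p1 = add(c1, scale(d1, t))       # closest point on ray 1
    p2 = add(c2, scale(d2, s))       # closest point on ray 2
    return scale(add(p1, p2), 0.5)   # midpoint = rough 3D estimate
```

With noise-free rays the two closest points coincide and the midpoint is exact; with noisy image measurements it only provides the coarse estimate that, in the paper's pipeline, is subsequently refined recursively by the Extended Kalman Filter together with its uncertainty.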