Reducing the gap between augmented reality and 3D modeling with real-time depth imaging
Although 3D surface models are often used in augmented reality (e.g., for occlusion handling or model-based camera tracking), the creation and the use of such dense 3D models in AR applications are usually two separate processes. The 3D surface models are typically created in an offline preparation step, which makes it difficult to detect changes in the scene and to adapt the 3D model accordingly. This work presents a 3D change detection and model adjustment framework that combines AR techniques with real-time depth imaging to close the loop between dense 3D modeling and augmented reality. The proposed method detects differences between a scene and a 3D model of that scene in real time. The detected geometric differences are then used to update the 3D model, bringing AR and 3D modeling closer together. The accuracy of the geometric difference detection depends on the depth measurement accuracy as well as on the accuracy of the camera's intrinsic and extrinsic parameters. To evaluate the influence of these parameters, several experiments were conducted with simulated ground-truth data. Furthermore, the evaluation demonstrates the applicability of AR and depth-image-based 3D modeling for model-based camera tracking.
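The core comparison described above can be illustrated with a minimal sketch: given a depth map measured by the sensor and a depth map rendered from the current 3D model at the same camera pose, per-pixel disagreement beyond a tolerance marks a geometric change. The function name, the validity handling, and the threshold value are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def detect_geometric_changes(measured_depth, rendered_depth, threshold=0.05):
    """Hypothetical sketch: flag pixels where the measured depth map
    disagrees with the depth map rendered from the 3D model.

    Both inputs are float arrays of the same shape; zero encodes missing
    depth. The threshold is in the sensor's depth units (e.g., meters)
    and stands in for a tolerance derived from sensor and calibration
    accuracy, which the paper evaluates explicitly.
    """
    valid = (measured_depth > 0) & (rendered_depth > 0)  # ignore missing depth
    diff = np.zeros_like(measured_depth)
    diff[valid] = measured_depth[valid] - rendered_depth[valid]
    # Positive diff: the model surface lies in front of the measured scene
    # (geometry removed); negative diff: new geometry in front of the model.
    change_mask = valid & (np.abs(diff) > threshold)
    return change_mask, diff
```

Pixels flagged in `change_mask` would then drive the model update step; how the threshold interacts with depth noise and with intrinsic/extrinsic calibration errors is exactly what the paper's simulated ground-truth experiments quantify.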