3-D reconstruction of a dynamic environment with a fully calibrated background for traffic scenes
Vision-based traffic surveillance systems are increasingly employed for traffic monitoring, collection of statistical data, and traffic control. We present an extension of such a system that additionally uses the captured image content for 3D scene modeling and reconstruction. A basic goal of surveillance systems is to cover the observed area with as few cameras as possible to keep costs low. The 3D reconstruction therefore has to be performed from only a few original views with limited overlap and differing lighting conditions. To cope with these specific restrictions, we developed a model-based 3D reconstruction scheme that exploits a priori knowledge about the scene. The system is fully calibrated offline by estimating camera parameters from measured 3D-2D correspondences. The scene is then divided into static parts, which are modeled offline, and dynamic parts, which are processed online. To this end, we segment all views into moving objects and static background. The background is modeled as multi-textured planes using the original camera textures. Moving objects are segmented and tracked in each view, and all segmented views of a moving object are combined into a 3D object, which is positioned and tracked in 3D. Here we use predefined geometric primitives and map the original textures onto them. Finally, the static and dynamic elements are combined to create the reconstructed 3D scene, in which the user can navigate freely, i.e., choose an arbitrary viewpoint and viewing direction. Additionally, the system allows analysis of the 3D properties of the scene and the moving objects.
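The abstract does not specify how the camera parameters are estimated from the measured 3D-2D correspondences. A standard technique for this kind of offline calibration is the Direct Linear Transform (DLT), which recovers the 3x4 projection matrix from at least six correspondences by solving a homogeneous linear system. The sketch below is illustrative, not the authors' implementation; the function names `calibrate_dlt` and `project` are ours.

```python
import numpy as np

def calibrate_dlt(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from >= 6 measured 3D-2D
    correspondences via the Direct Linear Transform (DLT)."""
    assert len(points_3d) >= 6, "DLT needs at least 6 correspondences"
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each correspondence contributes two linear constraints on P.
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    # The least-squares solution is the right singular vector belonging
    # to the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)
    return P / P[-1, -1]  # fix the projective scale

def project(P, points_3d):
    """Project Nx3 world points with P; returns Nx2 pixel coordinates."""
    X = np.hstack([np.asarray(points_3d, float),
                   np.ones((len(points_3d), 1))])
    x = X @ P.T
    return x[:, :2] / x[:, 2:3]
```

With noiseless correspondences the recovered matrix reproduces the measured image points up to numerical precision; with real measurements one would typically refine the DLT estimate by nonlinear minimization of the reprojection error.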
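The segmentation of each view into moving objects and static background is likewise not detailed in the abstract. A minimal sketch of the idea is a per-pixel background model maintained as an exponential running average, with pixels deviating from the model flagged as foreground; production systems usually use more robust models (e.g., mixtures of Gaussians). The class name and the parameters `alpha` and `threshold` below are our own illustrative choices.

```python
import numpy as np

class RunningAverageBackground:
    """Per-pixel background model: exponential running average plus
    thresholding. A simplified sketch of background/foreground
    segmentation, not the paper's actual method."""

    def __init__(self, first_frame, alpha=0.05, threshold=25.0):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha          # adaptation rate of the model
        self.threshold = threshold  # gray-level deviation for foreground

    def apply(self, frame):
        """Return a boolean mask (True = moving object) and update
        the background model in static regions only."""
        frame = frame.astype(np.float64)
        mask = np.abs(frame - self.bg) > self.threshold
        # Only pixels classified as static update the background, so
        # moving objects are not absorbed into the model.
        self.bg = np.where(mask, self.bg,
                           (1 - self.alpha) * self.bg + self.alpha * frame)
        return mask
```

Updating the model only in static regions also lets the background adapt slowly to the changing lighting conditions mentioned above.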
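For positioning a segmented object in 3D, a common approach in traffic scenes (assuming, as is not stated explicitly here, that vehicles move on a known ground plane) is to map the object's image foot point back onto the plane Z = 0: restricting the calibrated projection matrix to that plane yields a 3x3 homography that can be inverted. The helper name `image_to_ground` is ours.

```python
import numpy as np

def image_to_ground(P, u, v):
    """Map an image point (u, v) to world coordinates on the ground
    plane Z = 0, given a 3x4 projection matrix P. The Z = 0 restriction
    of P is the 3x3 homography formed by columns 0, 1, and 3 of P."""
    H = P[:, [0, 1, 3]]
    # Solve H * [X, Y, 1]^T ~ [u, v, 1]^T and dehomogenize.
    X = np.linalg.solve(H, np.array([u, v, 1.0]))
    return X[:2] / X[2]
```

Tracking the recovered ground-plane coordinates over time then gives the 3D trajectory of the object, onto which a predefined geometric primitive can be placed and textured.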