  • Publication
    Towards situational awareness systems based on semi-stationary multi-camera components
    Automatic scene analysis using multiple cameras connected in a network is an important step towards enhancing the capabilities of future situational awareness tools. In this paper we present a self-adaptive multi-camera component, which can be considered a single node of a camera network. The node consists of three cameras: a high-definition overview camera (the master) with a large field of view, a pan-tilt-zoom camera (the slave), and a long-wave infrared camera. To control the pan-tilt-zoom camera in terms of image coordinates of the master camera, the system learns the relationship between the individual cameras automatically; this incremental learning procedure is based on local image features. The system provides a reliable basis for further generic image processing and situational awareness plugins: blob detection and tracking, person detection and identification, car detection and number-plate recognition, as well as action recognition. On top of the acquired information, a conceptual situation recognition system fuses all available input data and infers potentially interesting situations in the scene, leading to comprehensive situational awareness.
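    The incremental learning idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: each observed correspondence between a master-image coordinate and the PTZ (pan, tilt) pose that centers it is folded into a coarse grid of running averages. The class name, grid resolution, and update rule are all assumptions for exposition.

```python
# Illustrative sketch (not the paper's method): a coarse grid of running
# averages that is updated incrementally from feature correspondences.
class IncrementalMotorMap:
    """Maps master-image pixels to (pan, tilt) via a grid of running means."""

    def __init__(self, width, height, cells=8):
        self.width, self.height, self.cells = width, height, cells
        # Per-cell running sum of (pan, tilt) and observation count.
        self.sum = [[(0.0, 0.0) for _ in range(cells)] for _ in range(cells)]
        self.count = [[0 for _ in range(cells)] for _ in range(cells)]

    def _cell(self, x, y):
        cx = min(int(x * self.cells / self.width), self.cells - 1)
        cy = min(int(y * self.cells / self.height), self.cells - 1)
        return cx, cy

    def update(self, x, y, pan, tilt):
        """Incorporate one correspondence (one incremental learning step)."""
        cx, cy = self._cell(x, y)
        sp, st = self.sum[cy][cx]
        self.sum[cy][cx] = (sp + pan, st + tilt)
        self.count[cy][cx] += 1

    def query(self, x, y):
        """Return the learned (pan, tilt) for a master-image coordinate."""
        cx, cy = self._cell(x, y)
        n = self.count[cy][cx]
        if n == 0:
            raise LookupError("no observations for this region yet")
        sp, st = self.sum[cy][cx]
        return sp / n, st / n
```

    Because each update only touches one cell's sum and count, the map can keep learning while the system runs, which is the essential property of an incremental procedure.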
  • Publication
    Automatic unconstrained online configuration of a master-slave camera system
    Master-slave camera systems - consisting of a wide-angle master camera and an actively controllable pan-tilt-zoom camera - provide both a large field of view, allowing monitoring of the full situational context, and a narrow field of view for capturing sufficient detail. Unconstrained calibration of such a system is a non-trivial task. In this paper a fully automatic and adaptive configuration method is proposed. It learns a motor map relating image coordinates in the master view to motor commands of the slave camera. First, a rough initial configuration is estimated by registering images from the slave camera onto the master view. To remain operational in poorly textured environments, such as hallways, the motor map is then refined online using correspondences originating from moving objects. The accuracy is evaluated in different environments, in both the visual and the infrared spectrum. The overall accuracy is significantly improved by the online refinement.
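    To make the motor-map concept concrete, here is a minimal sketch of one plausible query scheme: (pan, tilt) commands sampled on a regular grid over the master image are bilinearly interpolated at an arbitrary pixel. The grid layout, spacing parameter, and function name are illustrative assumptions; the paper's map is learned automatically, not hand-specified.

```python
# Illustrative sketch: bilinear interpolation of (pan, tilt) from a regular
# grid of calibrated samples over the master image.
def bilinear_motor_query(grid, step, x, y):
    """Interpolate (pan, tilt) at master pixel (x, y).

    grid[j][i] holds the (pan, tilt) command for the grid node located at
    (i * step, j * step); step is the node spacing in pixels.
    """
    i, j = int(x // step), int(y // step)
    # Clamp so the four surrounding nodes always exist.
    i = min(i, len(grid[0]) - 2)
    j = min(j, len(grid) - 2)
    fx, fy = x / step - i, y / step - j
    out = []
    for k in (0, 1):  # interpolate pan (k=0) and tilt (k=1) independently
        top = grid[j][i][k] * (1 - fx) + grid[j][i + 1][k] * fx
        bot = grid[j + 1][i][k] * (1 - fx) + grid[j + 1][i + 1][k] * fx
        out.append(top * (1 - fy) + bot * fy)
    return tuple(out)
```

    Online refinement would then amount to adjusting the grid-node values whenever a moving object yields a fresh master-to-slave correspondence, which is why the method keeps working in scenes too poorly textured for image registration alone.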
  • Publication
    Feature-based automatic configuration of semi-stationary multi-camera components
    Autonomously operating semi-stationary multi-camera components are the core modules of ad-hoc multi-view methods. A situation recognition system needs both an overview of the entire scene, as provided by a wide-angle camera, and close-up views of interesting agents, e.g. from an active pan-tilt-zoom (PTZ) camera, to gather enough information to, for instance, identify those agents. To configure such a system we set the field of view (FOV) of the overview camera in correspondence with the motor configuration of the PTZ camera. Images are captured from a uniformly moving PTZ camera until the entire field of view of the master camera is covered. Along the way, a lookup table (LUT) relating motor coordinates of the PTZ camera to image coordinates in the master camera is generated. To match each pair of images, features (SIFT, SURF, ORB, STAR, FAST, MSER, BRISK, FREAK) are detected, selected by the nearest neighbor distance ratio (NNDR), and matched. A homography is estimated to transform the PTZ image onto the master image. With that information, comprehensive LUTs are calculated via barycentric coordinates and stored for every pixel of the master image. In this paper the robustness, accuracy, and runtime are quantitatively evaluated for the different features.
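    The barycentric-interpolation step can be sketched as follows: given three master-image points with known PTZ motor coordinates (e.g. obtained from matched features), any pixel inside their triangle receives motor coordinates as the barycentric-weighted mix of the vertex values. Function names and example data are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of filling a per-pixel LUT via barycentric coordinates.
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p w.r.t. triangle (a, b, c)."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return w0, w1, 1.0 - w0 - w1

def interpolate_motor(p, tri, motor):
    """Mix the (pan, tilt) of three triangle vertices at master pixel p."""
    w = barycentric_weights(p, *tri)
    return tuple(sum(wi * m[k] for wi, m in zip(w, motor)) for k in (0, 1))
```

    Applying this to every pixel of the master image, triangle by triangle, yields the dense per-pixel LUT the abstract describes, at the cost of one weight computation per pixel.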