
Feature-based automatic configuration of semi-stationary multi-camera components

 
Author(s): Grosselfinger, Ann-Kristin; Münch, David; Hübner, Wolfgang; Arens, Michael

Fulltext: urn:nbn:de:0011-n-2649943 (1.3 MByte PDF)
MD5 Fingerprint: 83585c2f0d5372a991f34fe24e185f5f
Copyright 2013 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
Created on: 5.11.2013


Huckridge, D.A.; Society of Photo-Optical Instrumentation Engineers (SPIE), Bellingham, WA:
Electro-Optical and Infrared Systems: Technology and Applications X: 23 September 2013, Dresden, Germany
Bellingham, WA: SPIE, 2013 (Proceedings of SPIE 8896)
ISBN: 978-0-8194-9765-9
Paper 88960P
Conference "Electro-Optical and Infrared Systems - Technology and Applications", 10th, 2013, Dresden
English
Conference Paper, Electronic Publication
Fraunhofer IOSB
Keywords: master-slave camera system; video surveillance; self-calibration; semi-stationary camera system; weak calibration

Abstract
Autonomously operating semi-stationary multi-camera components are the core modules of ad-hoc multi-view methods. On the one hand, a situation recognition system needs an overview of the entire scene, as provided by a wide-angle camera; on the other hand, close-up views of interesting agents, e.g. from an active pan-tilt-zoom (PTZ) camera, are required to gather further information, e.g. to identify those agents. To configure such a system, we set the field of view (FOV) of the overview camera in correspondence with the motor configuration of the PTZ camera. Images are captured from a uniformly moving PTZ camera until the entire field of view of the master camera is covered. Along the way, a lookup table (LUT) relating motor coordinates of the PTZ camera to image coordinates in the master camera is generated. To match each pair of images, features (SIFT, SURF, ORB, STAR, FAST, MSER, BRISK, FREAK) are detected, selected by the nearest neighbor distance ratio (NNDR), and matched. A homography is estimated to transform the PTZ image to the master image. With this information, comprehensive LUTs are calculated via barycentric coordinates and stored for every pixel of the master image. In this paper, the robustness, accuracy, and runtime are quantitatively evaluated for the different features.
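The pipeline described in the abstract, detecting features in corresponding PTZ and master images, pruning matches with the nearest neighbor distance ratio, estimating a homography, and interpolating pan/tilt values via barycentric coordinates, can be sketched in Python with OpenCV as follows. This is an illustrative sketch, not the authors' implementation; the choice of ORB features, the 0.75 NNDR threshold, the 5.0 pixel RANSAC reprojection threshold, and helper names such as match_ptz_to_master and barycentric_pan_tilt are assumptions made for demonstration.

import cv2
import numpy as np

def match_ptz_to_master(ptz_img, master_img, nndr=0.75):
    """Estimate the homography mapping PTZ image coordinates to master image coordinates."""
    # ORB is one of the feature types compared in the paper; SIFT, SURF, etc. plug in the same way.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ptz, des_ptz = orb.detectAndCompute(ptz_img, None)
    kp_mst, des_mst = orb.detectAndCompute(master_img, None)
    if des_ptz is None or des_mst is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    knn = matcher.knnMatch(des_ptz, des_mst, k=2)

    # Nearest neighbor distance ratio (NNDR) test: keep a match only if it is
    # clearly better than the second-best candidate.
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < nndr * p[1].distance]
    if len(good) < 4:
        return None  # a homography needs at least four correspondences

    src = np.float32([kp_ptz[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_mst[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def ptz_center_in_master(H, ptz_shape):
    """Project the PTZ image center into the master image; this pixel is the
    anchor associated with the current (pan, tilt) motor configuration."""
    h, w = ptz_shape[:2]
    center = np.float32([[[w / 2.0, h / 2.0]]])
    return cv2.perspectiveTransform(center, H)[0, 0]

def barycentric_pan_tilt(p, tri_xy, tri_pt):
    """Interpolate (pan, tilt) for master pixel p from a triangle of sampled
    anchor pixels tri_xy whose motor coordinates tri_pt are known."""
    a, b, c = (np.asarray(v, dtype=np.float64) for v in tri_xy)
    v0, v1, v2 = b - a, c - a, np.asarray(p, dtype=np.float64) - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    pan, tilt = u * np.asarray(tri_pt[0]) + v * np.asarray(tri_pt[1]) + w * np.asarray(tri_pt[2])
    return pan, tilt

Under these assumptions, repeating the first two steps for every pose of the sweeping PTZ camera yields sparse pixel-to-pan/tilt correspondences in the master image; evaluating barycentric_pan_tilt inside each triangle of those anchors then fills a dense per-pixel LUT, so directing the PTZ camera at any master pixel at runtime reduces to a table lookup.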

URL: http://publica.fraunhofer.de/documents/N-264994.html