
Advanced time-of-flight range camera with novel real-time 3D image processing

König, B.; Hosticka, B.J.; Mengel, P.; Listl, L.


Awwal, A.A.S.; Society of Photo-Optical Instrumentation Engineers -SPIE-, Bellingham/Wash.:
Optics and photonics for information processing : 28 - 30 August 2007, San Diego, California, USA
Bellingham, WA: SPIE, 2007 (SPIE Proceedings Series 6695)
ISBN: 978-0-8194-6843-7
Paper 66950D
Conference on Optics and Photonics for Information Processing <2007, San Diego/Calif.>
Fraunhofer IMS
3D image processing; active safety application; time-of-flight; time-of-flight principle; real-time operation; 3D representation; three-dimensional representation; Kalman filter

We present a solid-state range camera covering measuring distances from 2 m to 25 m and novel real-time 3D image processing algorithms for object detection, tracking, and classification based on the three-dimensional features of the camera's output data.
The technology is based on a 64x8 pixel array CMOS image sensor [1] which is capable of capturing three-dimensional images by executing indirect time-of-flight (TOF) measurement of NIR laser pulses emitted by the camera and reflected by the objects in the camera's field of view. Here the so-called "multiple double short time integration" (MDSI) method [2] enables unprecedented reliability and robustness with respect to suppression of background irradiance and insensitivity to reflectivity variations in the object scene. Output data are conventional intensity values and distance values with accuracies in the centimeter range at image repetition rates up to 100 Hz. An evaluation of the camera's performance in typical road-safety-related test scenarios is the subject of this paper.
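The exact MDSI timing and background-suppression scheme are given in the cited reference [2] and are not reproduced in the abstract; the following is only a minimal sketch of the generic two-shutter indirect pulsed-TOF principle it builds on, with illustrative window and variable names (the charges `q1`, `q2`, `q_bg` and the timing are assumptions for illustration):

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def tof_distance(q1, q2, q_bg, t_pulse):
    """Indirect pulsed time-of-flight from two shutter windows.

    q1:      charge integrated in the window aligned with the emitted pulse
    q2:      charge integrated in the adjacent, delayed window
    q_bg:    background charge captured without a laser pulse
             (subtracted from both windows to suppress background irradiance)
    t_pulse: laser pulse width [s]
    """
    a = np.asarray(q1, dtype=float) - q_bg  # background-suppressed echo, window 1
    b = np.asarray(q2, dtype=float) - q_bg  # background-suppressed echo, window 2
    tau = t_pulse * b / (a + b)             # echo delay from the charge ratio
    return 0.5 * C * tau                    # out-and-back path -> divide by two
```

Because the delay is derived from the *ratio* of the two background-corrected charges, a uniform change in object reflectivity scales both charges equally and cancels out, which is the intuition behind the reflectivity insensitivity claimed above.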
Furthermore, we introduce real-time image processing of the output data stream of the camera aiming at the segmentation of objects located in the camera's surroundings and the derivation of reliable position, speed, and acceleration estimates. The segmentation algorithm utilizes the position information of all three spatial dimensions as well as the intensity values and thus yields significant segmentation improvement compared to segmentation in conventional 2D pictures. Position, velocity, and acceleration values of the segmented objects are estimated by means of Kalman filtering in 3D space. The filter adapts dynamically to the measurement conditions to account for changes in the scene data properties. Flow and performance of the whole processing chain are presented by means of example scenes.
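The abstract states that position, velocity, and acceleration are estimated by Kalman filtering in 3D space, but gives no filter equations. Below is a minimal sketch under stated assumptions: a constant-acceleration motion model per spatial axis, position-only measurements from the camera, and fixed noise parameters `q` and `r` (the dynamic adaptation mentioned above is not modeled here; all names are illustrative):

```python
import numpy as np

def ca_matrices(dt):
    """Constant-acceleration transition/measurement matrices for 3D tracking."""
    F1 = np.array([[1.0, dt, 0.5 * dt**2],   # position <- position, velocity, accel.
                   [0.0, 1.0, dt],           # velocity <- velocity, acceleration
                   [0.0, 0.0, 1.0]])         # acceleration assumed locally constant
    F = np.kron(np.eye(3), F1)               # block-diagonal over x, y, z -> 9x9
    H = np.kron(np.eye(3), np.array([[1.0, 0.0, 0.0]]))  # measure 3D position only
    return F, H

class Kalman3D:
    """Tracks [pos, vel, acc] per axis from 3D position measurements."""

    def __init__(self, dt, q=1.0, r=0.05):
        self.F, self.H = ca_matrices(dt)
        self.Q = q * np.eye(9)   # process noise; a real filter could adapt this online
        self.R = r * np.eye(3)   # measurement noise, ~cm-level ranging accuracy
        self.x = np.zeros(9)     # state: (px, vx, ax, py, vy, ay, pz, vz, az)
        self.P = 10.0 * np.eye(9)

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with a 3D position measurement z
        y = z - self.H @ self.x                          # innovation
        S = self.H @ self.P @ self.H.T + self.R          # innovation covariance
        K = self.P @ self.H.T @ np.linalg.solve(S, np.eye(3))  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(9) - K @ self.H) @ self.P
        return self.x
```

Feeding the filter a segmented object's centroid position at the camera's frame rate (up to 100 Hz, per the abstract) yields smoothed velocity and acceleration estimates alongside the filtered position.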