
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

Hild, Jutta; Krüger, Wolfgang; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen


Pellechia, M.F.; Society of Photo-Optical Instrumentation Engineers -SPIE-, Bellingham/Wash.:
Geospatial Informatics, Fusion, and Motion Video Analytics VI : 19-21 April 2016, Baltimore, Maryland, United States
Bellingham, WA: SPIE, 2016 (Proceedings of SPIE 9841)
ISBN: 978-1-5106-0082-9
Paper 98410K, 9 pp.
Conference "Geospatial Informatics, Fusion, and Motion Video Analytics" <6, 2016, Baltimore/Md.>
Fraunhofer IOSB
computer-human interaction; gaze-based interaction; human observer; image exploitation system; pilot study; real-time motion video analysis; target tracking

Motion video analysis is a challenging task, particularly if real-time analysis is required. An important issue is therefore how to provide suitable assistance to the human operator. Given that the use of customized video analysis systems is increasingly established, one supporting measure is to provide system functions that perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface that strives to reduce the perceptual, cognitive, and motor load on the human operator, for example by incorporating the operator's visual focus of attention; a gaze-enhanced user interface can help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design that aims to combine the qualities of the human observer's perception with those of the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in gaze-based interaction with target tracking algorithms. The first extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch tracking after track loss. The second addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
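The two interaction issues named in the abstract — triggering/relaunching a tracker from the operator's gaze, and initializing a tracker from a gaze-supplied image region — can be illustrated with a minimal sketch. All names below (`GazeTriggeredTracker`, `roi_from_gaze`, the tracking backend interface with `init`/`update`, the ROI size, and the confidence threshold) are hypothetical assumptions for illustration, not the system described in the paper:

```python
# Hypothetical sketch: a tracker that is (re)initialized from the operator's
# gaze fixation, relaunching whenever the backend reports a likely track loss.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned image region (pixel coordinates)."""
    x: int
    y: int
    w: int
    h: int

def roi_from_gaze(gaze_x: int, gaze_y: int, size: int = 48) -> Box:
    """Center a fixed-size region of interest on the current gaze fixation.
    The fixed size is an assumption; a real system might size the ROI from
    a motion segmentation result or from fixation dispersion."""
    half = size // 2
    return Box(gaze_x - half, gaze_y - half, size, size)

class GazeTriggeredTracker:
    """Wraps a tracking backend (any object with init(frame, box) and
    update(frame) -> (box, confidence)); relaunches from gaze on loss."""

    def __init__(self, backend, loss_threshold: float = 0.3):
        self.backend = backend
        self.loss_threshold = loss_threshold  # assumed confidence cutoff
        self.box = None

    def start(self, frame, gaze):
        """Gaze-based initialization: the fixation supplies the object region."""
        self.box = roi_from_gaze(*gaze)
        self.backend.init(frame, self.box)

    def step(self, frame, gaze):
        """One tracking step; on track loss, relaunch at the current fixation.
        Returns (box, relaunched)."""
        box, confidence = self.backend.update(frame)
        if confidence < self.loss_threshold:
            self.start(frame, gaze)  # relaunch from the operator's gaze
            return self.box, True
        self.box = box
        return box, False
```

A real backend here could be any visual tracker exposing an init/update interface; the point of the sketch is only the control flow in which gaze replaces manual mouse-based region selection for both initialization and relaunch.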