Fraunhofer-Gesellschaft

Publica


Robust visual object tracking with interleaved segmentation

 
Authors: Abel, Peter; Kieritz, Hilke; Becker, Stefan; Arens, Michael

In:

Bouma, H. ; Society of Photo-Optical Instrumentation Engineers -SPIE-, Bellingham/Wash.:
Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies : 11-14 September 2017, Warsaw, Poland
Bellingham, WA: SPIE, 2017 (Proceedings of SPIE 10441)
Paper 104410B, 12 pp.
Conference "Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies" <2017, Warsaw>
English
Conference paper
Fraunhofer IOSB
visual object tracking; confidence map fusion; image segmentation

Abstract
In this paper we present a new approach for tracking non-rigid, deformable objects by merging an on-line boosting-based tracker with a fast foreground-background segmentation. We extend an on-line boosting-based tracker that uses axis-aligned bounding boxes with a fixed aspect ratio as tracking states. By constructing a confidence map from the on-line boosting-based tracker and unifying it with a confidence map obtained from a foreground-background segmentation algorithm, we build a superior confidence map. To construct a rough confidence map for a new frame based on on-line boosting, we employ the responses of the strong classifier as well as the responses of the individual weak classifiers built during the preceding update step. This confidence map provides a rough estimate of the object's position and dimensions. To refine it, we build a fine, pixel-wise segmented confidence map and merge the two maps. Our segmentation method is color-histogram-based and provides a fine and fast image segmentation: by means of back-projection and Bayes' rule, we obtain a confidence value for every pixel. The rough and the fine confidence maps are merged by forming an adaptively weighted sum of both maps, with the weights derived from the variances of the two maps. Further, we apply morphological operators to the merged confidence map to reduce noise. In the resulting map we estimate the object's location and dimensions via continuous adaptive mean shift. Our approach yields a rotated rectangle as tracking state, which describes non-rigid, deformable objects more precisely than axis-aligned bounding boxes. We evaluate our tracker on the visual object tracking (VOT) benchmark dataset 2016.
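The two core steps of the abstract, per-pixel foreground confidence via back-projection with Bayes' rule and the variance-weighted fusion of the rough and fine maps, can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, the histogram binning, the equal-prior assumption in Bayes' rule, and the exact form of the variance-based weights are all assumptions the abstract leaves open.

```python
import numpy as np

def backprojection_confidence(bin_idx, fg_hist, bg_hist):
    """Per-pixel foreground confidence via Bayes' rule.

    bin_idx: 2-D array of color-histogram bin indices, one per pixel.
    fg_hist, bg_hist: 1-D color histograms of the foreground object and
    the background (hypothetical inputs; the paper's binning is not given).
    """
    # Normalize histograms to likelihoods P(color | fg), P(color | bg).
    fg = fg_hist / max(fg_hist.sum(), 1e-12)
    bg = bg_hist / max(bg_hist.sum(), 1e-12)
    # Back-project: look up each pixel's bin in both histograms.
    p_fg = fg[bin_idx]
    p_bg = bg[bin_idx]
    # Bayes' rule with equal priors (an assumption for this sketch):
    # P(fg | color) = P(color|fg) / (P(color|fg) + P(color|bg))
    return p_fg / np.maximum(p_fg + p_bg, 1e-12)

def fuse_confidence_maps(rough, fine):
    """Adaptively weighted sum of the rough (boosting) and fine
    (segmentation) confidence maps. The weights come from the maps'
    variances; treating higher variance as more discriminative is an
    assumption about the exact weighting rule."""
    v_rough, v_fine = rough.var(), fine.var()
    w = v_rough / max(v_rough + v_fine, 1e-12)
    return w * rough + (1.0 - w) * fine
```

On the fused map, the paper then applies morphological noise removal and continuous adaptive mean shift to recover the rotated tracking rectangle; those steps are omitted here.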

URL: http://publica.fraunhofer.de/dokumente/N-470153.html