
Visual-inertial model target tracking for consumer hardware

Author: Heilmann, Johannes
Supervisors: Kuijper, Arjan; Wuest, Harald

Darmstadt, 2018, 57 pp.
Darmstadt, TU, Bachelor Thesis, 2018
Fraunhofer IGD
Augmented reality (AR); mobile augmented reality (AR); Guiding Theme: Digitized Work; Research Area: Computer vision (CV)

This thesis explores visual-inertial tracking for application in augmented reality. Combining vision-based tracking with data from inertial sensors such as an accelerometer and a gyroscope can yield a faster and more robust tracking system, since the two types of data complement each other well. Tracking results in the form of 2D/3D correspondences from either a poster tracker or a model tracker are combined with inertial data from the sensors of a Surface Pro 2 and fused in an Extended Kalman Filter (EKF). Different configurations are available. On the vision side there is a poster tracker that uses FAST and BRIEF for matching and the KLT for tracking; alternatively, a model tracker can be used, which builds a line model from a CAD model and tracks points on the lines in the image. On the inertial side there are also two options: either only gyroscope measurements are used, or data from both the gyroscope and the accelerometer are included. The system is built for consumer hardware; for this thesis a Microsoft Surface Pro 2 was used. This means that the data is not synchronised to a common clock and that the inertial sensors are less accurate than those found in specialised hardware. The system is evaluated on a recorded image sequence and corresponding sensor data. The parameters of the EKF models are tuned by experimentally minimising the RMSE between the EKF estimates and accurate baseline results. It is shown that combining visual and inertial data allows the vision system to track fewer features, which reduces computational cost. However, the inertial sensors of the Surface Pro 2 are not accurate enough to allow camera pose estimation from inertial data alone in case the camera tracking fails.
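The fusion scheme described in the abstract follows the standard Kalman predict/update pattern: the gyroscope rate drives the prediction, and the vision tracker's measurement corrects it. The following is a minimal one-dimensional sketch of that pattern, not the thesis implementation; the function name `ekf_step` and the noise values `q_gyro` and `r_vision` are illustrative assumptions.

```python
def ekf_step(angle, P, gyro_rate, dt, vision_angle,
             q_gyro=1e-3, r_vision=1e-2):
    """One predict/update cycle of a scalar Kalman filter.

    angle:        current orientation estimate (rad)
    P:            current estimate variance
    gyro_rate:    measured angular velocity (rad/s)
    vision_angle: orientation measured by the vision tracker (rad)
    """
    # Predict: integrate the gyroscope rate over the time step.
    angle_pred = angle + gyro_rate * dt
    P_pred = P + q_gyro * dt

    # Update: correct the prediction with the vision measurement.
    K = P_pred / (P_pred + r_vision)              # Kalman gain
    angle_new = angle_pred + K * (vision_angle - angle_pred)
    P_new = (1.0 - K) * P_pred
    return angle_new, P_new

# Fuse a short synthetic sequence of gyro and vision readings.
angle, P = 0.0, 1.0
for k in range(5):
    angle, P = ekf_step(angle, P, gyro_rate=0.1, dt=0.05,
                        vision_angle=0.005 * (k + 1))
```

In the full system the state is the 6-DoF camera pose rather than a scalar angle, and the vision measurement enters as 2D/3D correspondences instead of a direct pose reading, but the predict/correct structure is the same.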