Fraunhofer-Gesellschaft

Publica


Single frame based video geo-localisation using structure projection

 
Authors: Bodensteiner, Christoph; Bullinger, Sebastian; Lemaire, Simon; Arens, Michael

Postprint urn:nbn:de:0011-n-3722240 (9.7 MByte PDF)
MD5 fingerprint: bb7c5854c5b8d114f212b481c77e97e6
© IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Created on: 20.1.2016
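The MD5 fingerprint above lets a downloaded copy of the postprint be checked for integrity. A minimal sketch of such a check in Python (the local filename `N-372224.pdf` is an assumption, not a name given by the record):

```python
import hashlib

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "bb7c5854c5b8d114f212b481c77e97e6"  # fingerprint from the record
# Hypothetical local filename for the downloaded postprint:
# assert md5_of("N-372224.pdf") == EXPECTED
```

Reading in fixed-size chunks keeps memory use flat even for large PDFs such as this 9.7 MByte file.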


IEEE International Conference on Computer Vision Workshop (ICCVW 2015), Santiago, Chile, 7-13 December 2015
Piscataway, NJ: IEEE, 2015
ISBN: 978-1-4673-9712-4 (print)
ISBN: 978-1-4673-9711-7 (electronic)
ISBN: 978-1-4673-8390-5
pp. 1036-1043
International Conference on Computer Vision Workshop (ICCVW) <2015, Santiago/Chile>
English
Conference paper, electronic publication
Fraunhofer IOSB

Abstract
Community image and video platforms like Flickr and YouTube offer large image collections from different perspectives. However, the majority of publicly available imagery from online communities lacks reasonably exact location and orientation information, which is important for many geo-spatial applications such as object geo-referencing, knowledge transfer or augmented reality. In this work we exploit publicly available drone videos in order to bridge the gap between ground and aerial imagery. We propose a framework for the fast determination of full 6-D geo-referenced motion trajectories of online community drone video footage using geo-localized map data. Our method requires the registration of a single video frame from a video sequence in order to exactly geo-reference complete motion trajectories w.r.t. existing geo-referenced map data. The method relies on SfM and SLAM techniques in combination with a simple yet efficient appearance and structure matching based on rendered map data (e.g. LiDAR) in order to generate geo-registered 3D feature maps. These maps enable a simple and fast global appearance-based geo-registration of visually overlapping community videos and images. We evaluate our method on a large set of community drone videos. Our method produces drift-free geo-data overlays at an average speed of 29.7 frames per second with an average positional error of 0.4 m. In addition, we release a large-scale processed LiDAR dataset and geo-registered feature maps as an extension to the Converging Perspectives dataset. This data may provide visual links from ground-based sensors to aerial imagery. Possible applications are numerous and include autonomous navigation, map updating/extension, image and video dehazing, object localisation and augmented reality.
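The abstract's key idea is that registering a single frame against geo-referenced map data is enough to geo-reference the entire locally reconstructed SfM/SLAM trajectory, because the whole trajectory then differs from geo-coordinates only by one similarity transform. The sketch below is not the paper's implementation; it illustrates the principle with a standard Umeyama-style closed-form alignment from 3D-3D correspondences of the registered frame, and all function names are placeholders:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate (s, R, t) with dst_i ≈ s * R @ src_i + t from corresponding
    3D points (rows), via the Umeyama closed-form solution."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                        # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def georeference_trajectory(local_positions, s, R, t):
    """Map every camera position of the local trajectory into geo-coordinates
    using the similarity transform estimated from the single registered frame."""
    return s * (np.asarray(local_positions, float) @ R.T) + t
```

Because the transform is rigid up to scale, the drift-free local trajectory stays internally consistent; only its global placement, orientation and scale are fixed by the one registered frame.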

URL: http://publica.fraunhofer.de/dokumente/N-372224.html