
c-Space: Time-evolving 3D Models (4D) from Heterogeneous Distributed Video Sources

Ritz, Martin; Knuth, Martin; Domajnko, Matevz; Posniak, Oliver; Santos, Pedro; Fellner, Dieter W.


Catalano, Chiara Eva (Ed.); Luca, Livio de (Ed.); Falcidieno, Bianca (Event Co-Chair); Fellner, Dieter W. (Event Co-Chair); European Association for Computer Graphics -EUROGRAPHICS-; TU Graz, Institut für ComputerGraphik und WissensVisualisierung -CGV-; Fraunhofer-Institut für Graphische Datenverarbeitung -IGD-, Darmstadt:
GCH 2016, Eurographics Workshop on Graphics and Cultural Heritage : Genova, Italy, 5-7 October 2016
Goslar: Eurographics Association, 2016
DOI: 10.2312/gch.20162025
ISBN: 978-3-03868-011-6
Symposium on Graphics and Cultural Heritage (GCH) <14, 2016, Genova>
Conference Paper
Fraunhofer IGD
3D computer graphics; 3D data acquisition; 3D data processing; 3D data representation; 3D model acquisition; 3D Model Reconstruction; 3D computer vision; 3D digitization; 3D rendering; 3D Modeling; emotion detection; emotion expression; emotion recognition; learning platforms; storytelling platforms; Augmented reality (AR); augmented reality platforms; multimedia database systems; Guiding Theme: Digitized Work; Guiding Theme: Smart City; Guiding Theme: Visual Computing as a Service; Research Area: Computer graphics (CG); Research Area: Computer vision (CV); Research Area: Human computer interaction (HCI); Research Area: Modeling (MOD)

We introduce c-Space, an approach to automated 4D reconstruction of dynamic real-world scenes, represented as time-evolving 3D geometry streams, available to everyone. Our novel technique solves the problem of fusing all sources asynchronously captured from multiple heterogeneous mobile devices around a dynamic scene at a real-world location. To this end, all captured input is broken down into a massive unordered frame set; the frames are sorted along a common time axis, and the ordered frame set is finally discretized into a time sequence of frame subsets, each subject to photogrammetric 3D reconstruction.
The result is a timeline of 3D models, each representing a snapshot of the scene's evolution in 3D at a specific point in time. Just as a movie is a concatenation of time-discrete frames representing the evolution of a scene in 2D, the 4D frames reconstructed by c-Space line up to form the captured, dynamically changing 3D geometry of an event over time, enabling the user to interact with it in the very same way as with a static 3D model. We apply image analysis to automatically maximize result quality in the presence of challenging, heterogeneous, and asynchronous input sources exhibiting a wide quality spectrum. In addition, we show how this technique can be integrated as a 4D reconstruction web service module available to mobile end users.
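The pipeline stage described in the abstract (pooling frames from all devices, sorting them along a common time axis, and discretizing the ordered set into time windows for per-window photogrammetric reconstruction) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the `Frame` type, field names, and fixed-width windowing are assumptions for the sketch.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    source_id: str    # which mobile device captured this frame (hypothetical field)
    timestamp: float  # capture time on a common time axis, in seconds
    image: object     # placeholder for pixel data

def bucket_frames(frames, window=0.5):
    """Pool frames from all sources, sort them along a common time axis,
    and discretize the ordered set into fixed-width time windows.
    Each resulting bucket would then be handed to a photogrammetric
    3D reconstruction step, yielding one 3D model per time slice."""
    ordered = sorted(frames, key=lambda f: f.timestamp)
    buckets, current, window_end = [], [], None
    for f in ordered:
        # Start a new time window when the current one is exhausted.
        if window_end is None or f.timestamp >= window_end:
            if current:
                buckets.append(current)
            current = []
            window_end = f.timestamp + window
        current.append(f)
    if current:
        buckets.append(current)
    return buckets
```

The sequence of buckets corresponds to the "timeline of 3D models" in the abstract: reconstructing each bucket independently yields one 3D snapshot per discretized point in time.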