2018
Conference Paper
Title
Animatable 3D model generation from 2D monocular visual data
Abstract
In this paper, we present an approach for creating animatable 3D models from monocular image sequences of non-rigid objects. During deformation, the object of interest is captured with only a single camera under full perspective projection. The aim of the presented framework is to obtain a shape deformation model in terms of joints and skinning weights that can finally be used to animate the model vertices. First, the monocular rigid shape estimation problem is solved by computing a template model of the object in rest pose from an image sequence. Next, the unknown external camera parameters and the per-vertex deformation are estimated alternately in a sequential approach. The resulting consistent non-rigid shape geometries are used to compute a kinematic skeleton control structure, including skinning weights and an optimized shape. For that, a completely data-driven optimization scheme is used, which iterates over three steps: (a) optimization of the pose for each frame as well as joint parameters consistent over the entire sequence, (b) optimization of the rest pose vertices to refine the shape and (c) optimization of the skinning weights for improved deformation characteristics. With experimental results on publicly available synthetic as well as real-world datasets, we demonstrate the quality of the proposed approach. The resulting models, which have fixed topology and are rigged with a skeleton and skinning weights, can be animated in existing render engines.
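The abstract describes rigging a fixed-topology mesh with joints and skinning weights so that render engines can animate the vertices. The standard mechanism behind such a rig is linear blend skinning (LBS), where each deformed vertex is a weight-blended sum of the rest-pose vertex transformed by each joint. The sketch below illustrates this mechanism only; the function name, array shapes, and the use of NumPy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, joint_transforms, skinning_weights):
    """Deform rest-pose vertices with linear blend skinning (illustrative sketch).

    rest_vertices:    (V, 3) rest-pose vertex positions
    joint_transforms: (J, 4, 4) rigid transform of each joint for one frame
    skinning_weights: (V, J) per-vertex weights; each row sums to 1
    """
    num_vertices = rest_vertices.shape[0]
    # Homogeneous coordinates: (V, 4)
    vh = np.hstack([rest_vertices, np.ones((num_vertices, 1))])
    # Transform every vertex by every joint: (J, V, 4)
    per_joint = np.einsum('jab,vb->jva', joint_transforms, vh)
    # Blend the per-joint results with the skinning weights: (V, 4)
    blended = np.einsum('vj,jva->va', skinning_weights, per_joint)
    return blended[:, :3]
```

With identity transforms for all joints, the deformed mesh equals the rest pose; animating the model amounts to supplying new joint transforms per frame while the weights and topology stay fixed, which is why such rigged models load directly into existing render engines.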