Fraunhofer-Gesellschaft

Publica


Realistic texture extraction for 3D face models robust to self-occlusion

 
Author: Qu, C.

Full text: urn:nbn:de:0011-n-3667506 (3.4 MByte PDF)
MD5 fingerprint: ee101c65503af2cdb4ee323b3c67cd94
Created on: 19.11.2015


Beyerer, Jürgen (Ed.); Pak, Alexey (Ed.):
Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory 2014. Proceedings: July 20 to 26, 2014; Triberg-Nussbach in Germany
Karlsruhe: KIT Scientific Publishing, 2015 (Karlsruher Schriften zur Anthropomatik 20)
ISBN: 978-3-7315-0401-6
DOI: 10.5445/KSP/1000047712
pp. 89-101
Fraunhofer Institute of Optronics, System Technologies and Image Exploitation and Institute for Anthropomatics, Vision and Fusion Laboratory (Joint Workshop) <2014, Triberg-Nussbach>
English
Conference paper, electronic publication
Fraunhofer IOSB

Abstract
In the context of face modeling, probably the best-known approach to representing 3D faces is the 3D Morphable Model (3DMM). When a 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are estimated simultaneously. However, if the real facial texture is needed, it must be extracted from the 2D image. This paper addresses the problems in texture extraction from a single image caused by self-occlusion. Common approaches exploit the symmetry of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination. In contrast, this work first generates a virtual texture map for the skin area iteratively by averaging the colors of neighboring vertices. Although this step creates unrealistic, overly smooth texture, illumination stays consistent between the real and virtual texture. In a second pass, the mirrored texture is gradually blended with the real or generated texture according to visibility. This scheme handles illumination gently and still yields realistic texture. Because the blending area covers only non-informative regions, the main facial features retain a unique appearance in each face half. Evaluation results show realistic rendering in novel poses that is robust to challenging illumination conditions and small registration errors.
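The two-pass scheme from the abstract can be sketched in NumPy. This is a minimal illustration under simplifying assumptions, not the paper's implementation: it works on a 2D texture map with a per-texel visibility mask instead of mesh vertices, uses 4-neighbor averaging with wrap-around at the borders, and mirrors the texture horizontally. All function names (`fill_occluded`, `blend_with_mirror`) and parameters are hypothetical.

```python
import numpy as np

def fill_occluded(texture, visible, n_iters=50):
    """First pass (sketch): iteratively fill invisible texels with the
    average color of already-filled 4-neighbors, producing the smooth
    'virtual' texture whose illumination matches the real one.
    texture: (H, W, 3) float array; visible: (H, W) bool mask."""
    tex = texture.astype(float).copy()
    filled = visible.copy()
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # 4-neighborhood
    for _ in range(n_iters):
        acc = np.zeros_like(tex)                 # color sum of filled neighbors
        cnt = np.zeros(filled.shape, dtype=float)  # number of filled neighbors
        for dy, dx in shifts:
            # np.roll wraps around the borders; acceptable for this sketch
            acc += np.roll(tex * filled[..., None], (dy, dx), axis=(0, 1))
            cnt += np.roll(filled.astype(float), (dy, dx), axis=(0, 1))
        new = ~filled & (cnt > 0)                # texels fillable this iteration
        tex[new] = acc[new] / cnt[new][:, None]
        filled |= new
        if filled.all():
            break
    return tex

def blend_with_mirror(tex, visibility):
    """Second pass (sketch): blend the horizontally mirrored texture with
    the real/virtual texture, weighted by per-texel visibility in [0, 1],
    so fully visible texels keep their own color."""
    mirrored = tex[:, ::-1]
    w = visibility[..., None]
    return w * tex + (1.0 - w) * mirrored
```

Blending by a continuous visibility weight, rather than hard-copying the mirrored half, is what keeps the transition gradual and avoids visible illumination seams between the face halves.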

URL: http://publica.fraunhofer.de/dokumente/N-366750.html