Fraunhofer-Gesellschaft

Publica

Here you will find scientific publications from the Fraunhofer Institutes.

Realistic texture extraction for 3D face models robust to self-occlusion

Authors: Qu, Chengchao; Monari, Eduardo; Schuchert, Tobias; Beyerer, Jürgen

Postprint urn:nbn:de:0011-n-3436343 (9.0 MByte PDF)
MD5 Fingerprint: 81e8d579b5c2fd5cbd4f1529c5024418
Copyright Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
Created on: 17.11.2015


Lam, E.Y. ; Society of Photo-Optical Instrumentation Engineers -SPIE-, Bellingham/Wash.:
Image Processing: Machine Vision Applications VIII : 10-11 February 2015, San Francisco, California
Bellingham, WA: SPIE, 2015 (Proceedings of SPIE 9405)
ISBN: 978-1-62841-495-0
Paper 94050P, 9 pp.
Conference "Image Processing - Machine Vision Applications" <8, 2015, San Francisco/Calif.>
English
Conference paper, electronic publication
Fraunhofer IOSB
3D face model; texture extraction; self-occlusion

Abstract
In the context of face modeling, probably the best-known approach to representing 3D faces is the 3D Morphable Model (3DMM). When a 3DMM is fitted to a 2D image, the shape as well as the texture and illumination parameters are estimated simultaneously. However, if the real facial texture is needed, it must be extracted from the 2D image. This paper addresses the problems that self-occlusion causes when extracting texture from a single image. Unlike common approaches that exploit the symmetry of the face by mirroring the visible facial part, which is sensitive to inhomogeneous illumination, this work first generates a virtual texture map for the skin area iteratively by averaging the colors of neighboring vertices. Although this step creates an unrealistic, overly smoothed texture, the illumination stays consistent between the real and virtual texture. In a second pass, the mirrored texture is gradually blended with the real or generated texture according to visibility. This scheme handles illumination gently and yet yields realistic texture. Because the blending affects only non-informative regions, the main facial features retain a unique appearance in each face half. Evaluation results show realistic rendering in novel poses that is robust to challenging illumination conditions and small registration errors.
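The two-pass scheme described in the abstract can be sketched roughly as follows. This is a minimal per-vertex illustration, not the paper's implementation: the function names, the adjacency-list neighbor format, and the scalar per-vertex visibility weights are assumptions made for the sketch.

```python
import numpy as np

def fill_occluded(colors, visible, neighbors, iters=50):
    """Pass 1 (sketch): iteratively fill occluded vertex colors with
    the mean color of their neighbors; visible colors stay fixed, so
    illumination remains consistent with the real texture."""
    filled = colors.astype(float).copy()
    # seed occluded vertices with the mean of the visible colors
    filled[~visible] = filled[visible].mean(axis=0)
    for _ in range(iters):
        new = filled.copy()
        for v in np.where(~visible)[0]:
            new[v] = filled[neighbors[v]].mean(axis=0)
        filled = new
    return filled

def blend_mirrored(filled, mirrored, visibility):
    """Pass 2 (sketch): blend the mirrored texture into the real or
    virtual texture; the lower the visibility, the more the mirrored
    color dominates."""
    w = np.clip(visibility, 0.0, 1.0)[:, None]  # blending weight per vertex
    return w * filled + (1.0 - w) * mirrored

# Toy example: a chain of 4 vertices, ends visible, middle occluded.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colors = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 1]], float)
visible = np.array([True, False, False, True])
filled = fill_occluded(colors, visible, neighbors)
out = blend_mirrored(filled, filled[::-1].copy(),
                     np.array([1.0, 0.5, 0.5, 1.0]))
```

With the two visible endpoints held fixed, the iteration converges to a smooth interpolation between them, which matches the "overly smoothed but illumination-consistent" behavior the abstract describes for the first pass.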

URL: http://publica.fraunhofer.de/dokumente/N-343634.html