1997
Conference Paper
Title
Model-Based Synthetic View Generation from a Monocular Video Sequence
Abstract
In this paper, a model-based multi-view image generation system for video conferencing is presented. The system assumes that a 3-D model of the person in front of the camera is available. During the videoconference session, it extracts texture from the image sequence of the speaking person and maps it onto the static 3-D model. Since only incrementally updated texture information is transmitted throughout the session, the bandwidth requirement is very small. The experimental results indicate that the proposed system is very promising for practical applications.
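The low-bandwidth idea in the abstract can be illustrated with a minimal sketch: instead of sending whole frames, the sender compares the new texture against the previously transmitted one and sends only the blocks that changed. The block size, the mean-absolute-difference criterion, and the threshold below are hypothetical choices for illustration; the paper's actual update scheme is not described in the abstract.

```python
import numpy as np

def incremental_texture_update(prev_texture, new_texture, block=16, threshold=5.0):
    """Sketch of incremental texture transmission (assumed criterion, not the
    paper's actual method): a block is retransmitted when its mean absolute
    difference from the previously sent texture exceeds `threshold`."""
    h, w = prev_texture.shape[:2]
    updates = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            old = prev_texture[y:y + block, x:x + block].astype(np.float32)
            new = new_texture[y:y + block, x:x + block].astype(np.float32)
            if np.abs(new - old).mean() > threshold:
                # Only this block's pixels would be sent over the channel.
                updates.append((y, x, new_texture[y:y + block, x:x + block]))
    return updates

def apply_updates(texture, updates):
    """Receiver side: patch the cached texture with the transmitted blocks,
    then map the result onto the static 3-D head model for rendering."""
    out = texture.copy()
    for y, x, data in updates:
        out[y:y + data.shape[0], x:x + data.shape[1]] = data
    return out
```

Since unchanged regions of the face texture are never resent, the per-frame payload shrinks to the few blocks affected by speech and expression changes, which is the source of the bandwidth saving the abstract claims.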