Ohm, J.-R.; Muller, K.
1998 (record created 2022-03-09)
https://publica.fraunhofer.de/handle/publica/331780

Incomplete 3D for multiview representation and synthesis of video objects
Conference paper

Abstract:
We introduce a new form of representation for three-dimensional video objects. We have developed a technique to extract disparity and texture data from video objects that are captured simultaneously with multiple-camera configurations. As a result, we obtain the video object plane as an unwrapped surface of a 3D object, containing all texture data visible from any of the cameras. This texture surface can be encoded like any 2D video object plane, while the 3D information is contained in the associated disparity map. Different viewpoints can then be reconstructed from the texture surface by simple disparity-based projection. The merits of the technique are efficient multiview encoding of single video objects and support for viewpoint adaptation, a functionality that is desirable when mixing natural and synthetic images. We have performed experiments with the MPEG-4 video verification model, where the disparity map is encoded using the tools provided for grayscale alpha data encoding. Due to its simplicity, the technique is suitable for applications that require real-time viewpoint adaptation towards video objects.

Keywords: encoding standards; image representation; image texture; real-time systems; telecommunication standards; video cameras; video coding; multiview representation; 3D video object synthesis; three-dimensional video objects; texture data; multiple-camera configuration; texture surface; 2D video object plane; disparity map; disparity-based projection; multiview encoding; real-time viewpoint adaptation; MPEG-4; video verification model; grayscale alpha data encoding

621400
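The disparity-based projection mentioned in the abstract can be illustrated with a minimal sketch: given a texture image and a per-pixel horizontal disparity map, an intermediate viewpoint is synthesized by shifting each pixel by a fraction of its disparity. The function name, the linear interpolation factor `alpha`, and the nearest-pixel forward warping are assumptions for illustration; the paper's actual projection from the unwrapped texture surface is more involved.

```python
import numpy as np

def synthesize_view(texture, disparity, alpha):
    """Forward-warp a texture image toward a virtual viewpoint.

    Each pixel (y, x) is shifted horizontally by alpha * disparity[y, x],
    a simplified sketch of disparity-based projection between two cameras:
    alpha = 0 reproduces the reference view, alpha = 1 approximates the
    second camera, intermediate values give in-between viewpoints.
    Returns the warped image and a mask of pixels that received a value
    (unfilled pixels are disocclusions that would need hole filling).
    """
    h, w = texture.shape[:2]
    out = np.zeros_like(texture)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = int(round(x + alpha * disparity[y, x]))
            if 0 <= xt < w:
                out[y, xt] = texture[y, x]
                filled[y, xt] = True
    return out, filled

# Tiny example: a constant disparity of 2 pixels shifts every row right by 2.
tex = np.arange(16, dtype=np.uint8).reshape(4, 4)
disp = np.full((4, 4), 2.0)
view, mask = synthesize_view(tex, disp, alpha=1.0)
```

In this toy case the leftmost two columns of the output stay unfilled, which is exactly the disocclusion problem that motivates storing the complete unwrapped texture surface: texture seen by any camera is available for filling such regions.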