
Towards a new camera model for X3D

Jung, Yvonne; Behr, Johannes


Fellner, D.W.; Sourin, A.; Behr, J.; Walczak, K.; Spencer, S.N.; Association for Computing Machinery -ACM-, Special Interest Group on Graphics -SIGGRAPH-; European Association for Computer Graphics -EUROGRAPHICS-; Fraunhofer-Institut für Graphische Datenverarbeitung -IGD-, Darmstadt; Gesellschaft für Informatik -GI-, Fachbereich Graphische Datenverarbeitung:
Web3D 2009, 14th International Conference on 3D Web Technology. Proceedings : June 16-17, 2009 at Fraunhofer Institute for Computer Graphics, Darmstadt, Germany
New York: ACM Press, 2009
ISBN: 978-1-60558-432-4
International Conference on 3D Web Technology (WEB3D) <14, 2009, Darmstadt>
Conference Paper
Fraunhofer IGD
virtual reality (VR); extensible 3D (X3D); camera model; visual effects

Creating and setting the right parameters for the virtual camera is crucial for any content creation process. However, this is not easy, since most current camera models, including the X3D Viewpoint, define the final visualized image through an explicit position and orientation in 3D space. Authors use authoring tools or simple interactive navigation methods (e.g. "lookAt" or "showAll") to ease the process, but in the end they still maneuver a camera with six degrees of freedom (translation and rotation) to obtain the final image.
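For illustration, a conventional X3D Viewpoint is authored by supplying exactly such a pose, an explicit position plus an axis-angle orientation (the concrete values below are arbitrary examples, not taken from the paper):

```xml
<!-- Conventional X3D camera: the author specifies the full 6D pose by hand. -->
<Viewpoint description='Over-the-shoulder shot'
           position='2 1.6 8'
           orientation='0 1 0 -0.4'
           fieldOfView='0.785'/>
```

Framing a particular object with this model means iteratively adjusting position and orientation until the desired image appears, which is precisely the workflow the paper aims to replace.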

We thus propose a new X3D camera model, the CinematographicViewpoint node, which does not force the content creator to move the camera but allows the author to directly define what objects he would like to see on the screen. We borrow established techniques from the film area (e.g. rule of thirds and line of action) that allow defining objects and object-relations, which the camera model will use to automatically calculate the final transformation in space. The new camera model includes additionally a model for global visual effects (e.g. motion blur and depth of field), which allows incorporating classical film effects to real-time scenes. Both approaches combined allow content creators building visual results and camera movements that are closer to traditional filming much easier. The proposed approach also supports automatic camera movements that are bound to interactive content, which has not been possible before.
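The abstract does not reproduce the node's interface, so the following X3D fragment is only a hypothetical sketch of how such a declarative camera might read; all field names here are invented for illustration and are not taken from the paper:

```xml
<!-- Hypothetical sketch (invented field names): instead of a 6D pose,
     the author names the objects to frame and a compositional constraint;
     the camera transformation is computed automatically. -->
<CinematographicViewpoint description='Dialogue shot'
                          targets='"Hero" "Villain"'
                          composition='ruleOfThirds'/>

<!-- A global-effects node in the same spirit might declare film effects
     such as depth of field (again, names invented for illustration): -->
<DepthOfFieldEffect focusDistance='5.0' blurStrength='0.5'/>
```

The design intent, as described in the abstract, is that such declarations stay valid under interaction: because targets are scene objects rather than fixed coordinates, the camera can follow interactive content automatically.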