Rudiments for a 3D Freehand Sketch-Based Human-Computer Interface for Large-Scale Virtual Environments
This paper investigates the applicability of 3D freehand sketching as a human-computer interface for virtual environments, and the possibility of using context-free grammars to interpret the sketches. Dynamic gestures form the foundation both for outlining virtual objects in a sketch-like manner to create them and for interacting with those objects in three-dimensional space. One point of particular interest is whether supporting the third dimension improves the user's possibilities of articulation, in contrast to sketching in two dimensions only. We focus on the application of standard LALR parser generators, commonly used as compiler-compilers, to create parsers that recognize sequences of multiple gestures and automatically reconstruct the sketched objects with their indicated properties. By combining this with a gesture-based interaction metaphor and the consideration of different modalities, we aim to maximize the information flow between human and computer. In addition, we propose a complementary navigation method for virtual planes that makes use of physical reflective props.
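The idea of treating recognized gestures as a token stream interpreted by a grammar can be sketched as follows. This is a hypothetical toy illustration, not the paper's grammar or an actual LALR parser: the token names (`RECT_STROKE`, `DEPTH_STROKE`, `CIRCLE_STROKE`) and object rules are invented, and the parser uses simple greedy longest-rule matching rather than generated LALR tables.

```python
# Toy grammar-driven interpretation of a gesture token stream.
# Each rule maps an object name to the sequence of gesture tokens
# that outlines it; all names here are invented for illustration.
GRAMMAR = {
    "BOX": ["RECT_STROKE", "DEPTH_STROKE"],   # rectangle plus depth stroke
    "SPHERE": ["CIRCLE_STROKE"],              # single circular stroke
}

def parse_gestures(tokens):
    """Greedily match the longest rule at each position in the stream."""
    objects = []
    i = 0
    while i < len(tokens):
        for name, rhs in sorted(GRAMMAR.items(), key=lambda kv: -len(kv[1])):
            if tokens[i:i + len(rhs)] == rhs:
                objects.append(name)
                i += len(rhs)
                break
        else:
            raise ValueError(f"unrecognized gesture at position {i}: {tokens[i]}")
    return objects

print(parse_gestures(["RECT_STROKE", "DEPTH_STROKE", "CIRCLE_STROKE"]))
# → ['BOX', 'SPHERE']
```

A real LALR-based implementation would instead feed such tokens to a parser generated by a compiler-compiler (e.g. yacc/bison), gaining conflict detection and support for nested productions.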