A generic UPnP architecture for ambient intelligence meeting rooms and a control point allowing for integrated 2D and 3D interaction

 
Author: Nazari Shirehjini, A.A.
Editor: Bailly, G.
In: Smart Objects and Ambient Intelligence. SOC-EUSAI 2005. Proceedings : A Joint Conference SOC-EUSAI. Innovative Context-Aware Services : Usages and Technologies
Grenoble, 2005
pp. 207-212
Smart Objects Conference (SOC) <3, 2005, Grenoble, France>
European Symposium on Ambient Intelligence (EUSAI) <3, 2005, Grenoble, France>
Language: English
Document type: Conference paper
Institute: Fraunhofer IGD
Keywords: ambient intelligence; 3d visualization; direct manipulation; multimedia appliance; personal media management

Abstract
In this paper we present a generic UPnP Presentation Architecture for AmI meeting rooms. It allows the development of applications based on standardized access mechanisms. In addition to standard lighting devices, the architecture also introduces a UPnP design for complex projection settings, analog audio/video devices, shutter blinds, and media repositories. Using this architecture, AmI developers benefit from UPnP device discovery as well as standardized access to devices and media repositories. This enables interaction solutions that work in various AmI environments composed of different device infrastructures. Based on this architecture, we present the PECo system, a novel Control Point that provides integrated and intuitive access to the user's surroundings and media repositories, allowing the user to control and manage intelligent environments. PECo uses an automatically created 3D visualization of the environment. On entering a room, PECo discovers the infrastructure and available devices and builds the integrated user interface. The 3D visualization creates a logical link between physical devices and their virtual representations on the user's PDA. In this way, the user can easily identify a device within the environment by its position, orientation, and shape. The user can then access the identified devices through the 3D interface and manipulate them directly within the scene, for example clicking on a 3D object to turn on a light. The 3D interface thus allows the user to access the infrastructure without requiring knowledge of specific device names, IP addresses, URLs, etc.
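
The standardized device discovery the abstract relies on is ordinary UPnP discovery: a control point such as PECo finds the room's devices at runtime instead of being preconfigured with device names or IP addresses. The sketch below illustrates only that generic first step, an SSDP M-SEARCH request over multicast UDP. It is a minimal standard-library Python example written for illustration, not code from the paper; the search target "ssdp:all" and the 2-second timeout are arbitrary choices.

    import socket

    # Minimal SSDP M-SEARCH sketch: the multicast search that starts UPnP
    # device discovery. Illustrative only; the paper's control point ran on
    # a PDA and is not reproduced here.
    SSDP_ADDR = ("239.255.255.250", 1900)
    MSEARCH = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",         # devices may delay their response by up to 2 s
        "ST: ssdp:all",  # search target: ask every UPnP device to answer
        "", "",
    ]).encode("ascii")

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(MSEARCH, SSDP_ADDR)

    # Each response carries a LOCATION header pointing to the device
    # description XML; a control point would fetch it next to learn the
    # device type (lamp, projector, media repository) and its services.
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            lines = data.decode("utf-8", errors="replace").splitlines()
            print(addr[0], lines[0])
            for line in lines:
                if line.lower().startswith("location:"):
                    print("   ", line.strip())
    except socket.timeout:
        pass
    finally:
        sock.close()

After discovery, a UPnP control point retrieves each device's description document and invokes SOAP actions on the advertised services (for example, switching a light); that action layer is the standardized access mechanism the abstract refers to.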

URL: http://publica.fraunhofer.de/dokumente/N-33459.html