
Prototyping Smart Eyewear with Capacitive Sensing for Facial and Head Gesture Detection

 
Author(s): Matthies, Denys J.C.; Woodall, Alex; Urban, Bodo

Published in:

Association for Computing Machinery -ACM-; Association for Computing Machinery -ACM-, Special Interest Group on Mobility of Systems, Users, Data and Computing -SIGMOBILE-; Association for Computing Machinery -ACM-, Special Interest Group on Computer and Human Interaction -SIGCHI-; Association for Computing Machinery -ACM-, Special Interest Group on Spatial Information -SIGSPATIAL-:
ACM International Joint Conference on Pervasive and Ubiquitous Computing and the ACM International Symposium on Wearable Computers, UbiComp/ISWC 2021 : September 21-25, 2021, Virtual Event
New York: ACM, 2021
ISBN: 978-1-4503-8461-2
pp. 476-480
International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp) <2021, Online>
International Symposium on Wearable Computers (ISWC) <2021, Online>
Bundesministerium für Wirtschaft und Energie BMWi (Germany)
16KN049102
Bundesministerium für Wirtschaft und Energie BMWi (Germany)
16KN049131
English
Conference paper
Fraunhofer IGD, Institutsteil Rostock
electric field sensing; prototyping; machine learning; data mining; Lead Topic: Digitized Work; Research Line: Human computer interaction (HCI); user interaction; input; Human-computer interaction (HCI)

Abstract
The prevalence of smart eyewear, particularly in the form of eyeglasses, has continuously increased. In this paper, we explore instrumenting a pair of glasses with Capacitive Sensing (CapSense) technology. We conducted three studies in which we (1) explore a suitable CapSense setup for rapid prototyping; (2) reveal the potential of detecting facial and head gestures with a CapSense glasses prototype; and (3) investigate the technical feasibility of passive electric field sensing. Instead of training a low-dimensional gesture set that works across different users (e.g., eye winks), we selected a set of 12 facial and head-related gestures, which we classify using user-dependent machine learning models.

URL: http://publica.fraunhofer.de/dokumente/N-641096.html