
EarFieldSensing: A novel in-ear electric field sensing to enrich wearable gesture input through facial expressions

Matthies, Denys J.C.; Strecker, Bernhard A.; Urban, Bodo

Full text: urn:nbn:de:0011-n-4590244 (2.9 MByte PDF)
MD5 Fingerprint: 38af307da9e46e8eb5bc356394bc0bb6
© ACM This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution.
Created on: 20.9.2017

Mark, Gloria (Ed.); Association for Computing Machinery -ACM-, Special Interest Group on Computer and Human Interaction -SIGCHI-:
CHI 2017, CHI Conference on Human Factors in Computing Systems. Proceedings : Denver, Colorado, USA, May 06 - 11, 2017
New York: ACM, 2017
ISBN: 978-1-4503-4655-9
Conference on Human Factors in Computing Systems (CHI) <35, 2017, Denver/Colo.>
Conference paper, electronic publication
Fraunhofer IGD
Fraunhofer IGD-R
user interaction; input; Human-computer interaction (HCI); wearable computing; electric field sensing; Guiding Theme: Digitized Work; Research Area: Human computer interaction (HCI)

EarFieldSensing (EarFS) is a novel input method for mobile and wearable computing using facial expressions. Facial muscle movements induce both electric field changes and physical deformations, which are detectable with electrodes placed inside the ear canal. The chosen ear-plug form factor is rather unobtrusive and allows for facial gesture recognition while utilizing the close proximity to the face. We collected 25 facial-related gestures and used them to compare the performance levels of several electric sensing technologies (EMG, CS, EFS, EarFS) with varying electrode setups.
Our wearable, fine-tuned electric field sensing prototype employs differential amplification to effectively cancel out environmental noise while remaining sensitive to small facial-movement-related electric field changes and artifacts from ear canal deformations. By comparing a mobile with a stationary scenario, we found that EarFS continues to perform better in a mobile scenario. Quantitative results show EarFS to be capable of detecting a set of 5 facial gestures with a precision of 90% while sitting and 85.2% while walking. We provide detailed instructions to enable replication of our low-cost sensing device. Applying it to different positions on the body will also allow sensing a variety of other gestures and activities.
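The core idea behind the differential amplification described above can be sketched in a few lines: a signal electrode in the ear canal and a reference electrode both pick up the same environmental interference (e.g., mains hum), so subtracting the reference cancels this common-mode noise while the small facial-movement signal, present only on the in-ear electrode, survives and can be amplified. The function names, gain value, and simulated signals below are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of differential amplification for common-mode noise
# rejection. All names and numbers are hypothetical, not taken from the paper.

def differential_signal(ear_channel, reference_channel, gain=1.0):
    """Amplify the difference between the in-ear and reference electrodes.

    Common-mode components (identical on both channels) cancel out;
    the residual difference is scaled by the amplifier gain.
    """
    return [gain * (s - r) for s, r in zip(ear_channel, reference_channel)]

# Simulated example: an alternating hum appears on both channels
# (common mode), while only the in-ear electrode sees the small
# facial-gesture artifact.
noise = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5]
gesture = [0.0, 0.0, 0.1, 0.2, 0.1, 0.0]
ear = [n + g for n, g in zip(noise, gesture)]   # signal electrode
ref = noise                                     # reference electrode

clean = differential_signal(ear, ref, gain=10.0)
print(clean)  # hum cancelled; only the amplified gesture artifact remains
```

In the simulated run, the ±0.5 hum vanishes entirely and the 0.1–0.2 gesture artifact emerges amplified tenfold, which mirrors why a differential front end can stay sensitive to tiny in-ear field changes in electrically noisy mobile settings.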