Unsupervised feature learning for EEG-based emotion recognition

Lan, Zirui; Sourina, Olga; Wang, Lipo; Scherer, Reinhold; Müller-Putz, Gernot


European Association for Computer Graphics -EUROGRAPHICS-; International Federation for Information Processing -IFIP-; Association for Computing Machinery -ACM-, Special Interest Group on Computer Graphics and Interactive Techniques -SIGGRAPH-:
International Conference on Cyberworlds, CW 2017 : Chester, United Kingdom, 20-22 September 2017
Piscataway, NJ: IEEE, 2017
ISBN: 978-1-5386-2089-2
ISBN: 978-1-5386-2090-8
International Conference on Cyberworlds (CW) <2017, Chester>
Conference Paper
Fraunhofer Singapore
Guiding Theme: Digitized Work; Research Area: Human-computer interaction (HCI); emotion classification; Electroencephalography (EEG); brain-computer interfaces (BCI)

Spectral band power features are among the most widely used features in studies of electroencephalogram (EEG)-based emotion recognition. The power spectral density of EEG signals is partitioned into bands such as the delta, theta, alpha, and beta bands. Though based on neuroscientific findings, this partition of frequency bands is somewhat ad hoc, and the definition of the frequency ranges of the bands of interest varies between studies. Moreover, it is questionable whether any one definition of power bands performs equally well on all subjects. In this paper, we propose to use an autoencoder to automatically learn, for each subject, the salient frequency components of the power spectral density, estimated as a periodogram via the Fast Fourier Transform (FFT). We propose a network architecture designed specifically for EEG feature extraction, one that adopts hidden-unit clustering with an added pooling neuron per cluster. The classification accuracy with features extracted by our proposed method is benchmarked against that with standard band power features. Experimental results show that our proposed feature extraction method achieves accuracies ranging from 44% to 59% for three-emotion classification, a 4-20% improvement over standard band power features.
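The pipeline the abstract describes can be sketched as follows: a periodogram is computed from an EEG segment via the FFT, fed through an encoder whose hidden units are grouped into clusters, and each cluster is summarized by one pooling neuron. This is a minimal, untrained forward-pass sketch; the sampling rate, layer sizes, activation, and pooling operation are all assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Periodogram via FFT (assumed: one EEG channel at fs = 128 Hz) ---
fs = 128                              # sampling rate (assumption)
eeg = rng.standard_normal(fs * 4)     # 4 s of synthetic EEG-like data
spectrum = np.fft.rfft(eeg)
periodogram = (np.abs(spectrum) ** 2) / len(eeg)  # PSD estimate

# --- Encoder with clustered hidden units, one pooling neuron per cluster ---
n_in = periodogram.size
n_clusters, units_per_cluster = 8, 4  # hypothetical sizes
n_hidden = n_clusters * units_per_cluster

W = rng.standard_normal((n_hidden, n_in)) * 0.01  # untrained encoder weights
b = np.zeros(n_hidden)

hidden = np.tanh(W @ periodogram + b)  # hidden-unit activations

# Each cluster of hidden units is collapsed to one pooling neuron
# (mean pooling here; the paper's exact pooling operation is an assumption).
features = hidden.reshape(n_clusters, units_per_cluster).mean(axis=1)

print(features.shape)  # one learned feature per cluster
```

In the full method, the encoder weights would be learned per subject by training the autoencoder to reconstruct the periodogram, and the pooled cluster outputs would replace hand-defined band power values as classifier inputs.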