Title: Modeling cognitive processes from multimodal signals
Authors: Putze, F.; Hild, J.; Sano, A.; Kasneci, E.; Solovey, E.; Schultz, T.
Type: Conference paper
Issued: 2018
Record available: 2022-03-14
Handle: https://publica.fraunhofer.de/handle/publica/407506
DOI: 10.1145/3242969.3265861
Language: en
Classification (DDC): 004; 670

Abstract: Multimodal signals allow us to gain insights into the internal cognitive processes of a person. For example, speech and gesture analysis yields cues about hesitations, knowledgeability, or alertness; eye tracking yields information about a person's focus of attention, task, or cognitive state; and EEG yields information about a person's cognitive load or information appraisal. Capturing cognitive processes is an important research tool for understanding human behavior, as well as a crucial part of a user model for an adaptive interactive system such as a robot or a tutoring system. Because cognitive processes are often multifaceted, a comprehensive model requires the combination of multiple complementary signals. In this workshop at the ACM International Conference on Multimodal Interaction (ICMI) in Boulder, Colorado, USA, we discussed the state of the art in monitoring and modeling cognitive processes from multimodal signals.