  • Publication
    Generating Proactive Suggestions based on the Context: User Evaluation of Large Language Model Outputs for In-Vehicle Voice Assistants
    (2024); Günes, Can; Entz, Kathleen; Lerch, David
    Large Language Models (LLMs) have recently been explored for a variety of tasks, most prominently for dialogue-based interactions with users. The future in-car voice assistant (VA) is envisioned as a proactive companion that makes suggestions to the user during the ride. We investigate the use of selected LLMs to generate proactive suggestions for a VA in different context situations using a basic prompt design. An online study with users was conducted to evaluate the generated suggestions. We demonstrate the feasibility of generating context-based proactive suggestions with different off-the-shelf LLMs. Results of the user survey show that suggestions generated by the LLMs GPT4.0 and Bison received an overall positive evaluation of user experience in terms of response quality and response behavior across different context situations. This work can serve as a starting point for implementing proactive interaction for VAs with LLMs based on the recognized context situation in the car.
  • Publication
    Activities that Correlate with Motion Sickness in Driving Cars – An International Online Survey
    (2024); Herrmanns, Amina; Lerch, David; Zhong, Zeyun; Piechnik, Daniela; Xian, Boyu; Vaupel, Nicklas Jakob Elia; Vijayakumar, Ajona; Cabaroglu, Canmert; Rausch, Jessica
    Up to two out of three passengers suffer from motion sickness caused by non-driving related activities. Occupant monitoring systems detect such activities via cameras in the vehicle interior and can therefore be used to warn or assist passengers. An international online survey was conducted in Germany, the USA, China, India, Turkey, and Mexico to identify activities that correlate with motion sickness. The results identify reading, using a device, watching a movie, and turning in the seat as the most relevant activities for occupant monitoring systems to detect and hence for motion sickness assistance systems to address.
  • Publication
    Artificial Intelligence for Adaptive, Responsive, and Level-Compliant Interaction in the Vehicle of the Future (KARLI)
    (2022); Wannemacher, Christoph; Faller, Fabian; Schmidt, Eike; Engelhardt, Doreen; Mikolajewski, Martin; Rittger, Lena; Hashemi, Vahid; Sahakyan, Manya; Romanelli, Massimo; Kiefer, Bernd; Fäßler, Victor; Rößler, Tobias; Großerüschkamp, Marc; Kurbos, Andreas; Bottesch, Miriam; Immoor, Pia; Engeln, Arnd; Fleischmann, Marlis; Schweiker, Miriam; Pagenkopf, Anne; Piechnik, Daniela
    The KARLI project consortium investigates and develops monitoring systems for drivers and other occupants using new artificial intelligence approaches, based on high-quality labeled data collected in real vehicles. The project's target applications are integrated in vehicles that enable various levels of automation and transitions of control. Level-compliant occupant behavior is assessed with AI algorithms and modulated with responsive and adaptive human-machine interface (HMI) solutions. The project also targets the prediction and prevention of motion sickness in order to improve the user experience, enable productivity, and maintain an adequate driver state. The user-centered approach is represented by five KARLI User Roles, which specify the driving-related behavior requirements for all levels of automation. At the end of the project, the KARLI applications will be evaluated regarding user experience benefits and AI performance measures. The KARLI project addresses two ambitious, high-potential challenges: first, raising and investigating the potential of AI for driver monitoring and driver-vehicle interaction, and second, accelerating the transfer from research to series production applications.
  • Publication
    Improving Driver Performance and Experience in Assisted and Automated Driving with Visual Cues in the Steering Wheel
    (2022); Muthumani, Arun; Feierle, Alexander; Galle, Melanie; Bengler, Klaus
    In automated driving it is important to ensure drivers' awareness of the currently active level of automation and to support transitions between those levels. This is possible with a suitable human-machine interface (HMI). In this driving simulator study, two visual HMI concepts (Concepts A and B) were compared with a baseline for informing drivers about three modes: manual driving, assisted driving, and automated driving. The HMIs, consisting of LED strips on the steering wheel that differed in luminance, color, and pattern, provided continuous information about the active mode and announced transitions. The assisted mode was conveyed in Concept A using a combination of amber and blue LEDs, while in Concept B only amber LEDs were used. During automated driving, Concept A displayed blue LEDs and Concept B turquoise. Both concepts were compared to a baseline HMI with no LEDs. Thirty-eight licensed drivers were trained and participated. Objective measures (hands-on-wheel time, takeover time, and visual attention) are reported. Self-reported measures (mode awareness, trust, user experience, and user acceptance) from a previous publication (Muthumani et al.) are briefly repeated in this context. Concept A showed 200 ms faster hands-on-wheel times than the baseline, while for Concept B several outliers prevented statistical significance. The visual HMIs with LEDs did not influence eyes-on-road time at any automation level. Participants preferred Concept B, with its more prominent differentiation between the automation levels, over Concept A.