  • Publication
    Generating Proactive Suggestions based on the Context: User Evaluation of Large Language Model Outputs for In-Vehicle Voice Assistants
    (2024)
    Günes, Can; Entz, Kathleen; Lerch, David
    Large Language Models (LLMs) have recently been explored for a variety of tasks, most prominently for dialogue-based interactions with users. The future in-car voice assistant (VA) is envisioned as a proactive companion making suggestions to the user during the ride. We investigate the use of selected LLMs to generate proactive suggestions for a VA in different context situations using a basic prompt design. An online study with users was conducted to evaluate the generated suggestions. We demonstrate the feasibility of generating context-based proactive suggestions with different off-the-shelf LLMs. Results of the user survey show that suggestions generated by the LLMs GPT4.0 and Bison received an overall positive user experience evaluation for response quality and response behavior across different context situations. This work can serve as a starting point for implementing proactive interaction for VAs with LLMs based on the context situation recognized in the car.
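    A minimal sketch in Python of how such context-based prompting of an off-the-shelf LLM could look; the context fields, prompt wording, and the OpenAI chat-completions client are illustrative assumptions, not the prompt design or model setup evaluated in the study:

    # Minimal illustrative sketch: ask an off-the-shelf chat LLM for a proactive
    # suggestion given a context situation. Context fields and prompt wording are
    # assumptions, not the study's prompt design.
    from openai import OpenAI

    client = OpenAI()  # assumes an API key is configured in the environment

    def proactive_suggestion(context: dict) -> str:
        """Request one short proactive suggestion for the driver from a chat LLM."""
        prompt = (
            "You are a proactive in-vehicle voice assistant. "
            "Given the current context, make one short, helpful suggestion to the driver.\n"
            f"Context: {context}"
        )
        response = client.chat.completions.create(
            model="gpt-4",  # any off-the-shelf chat model could be substituted
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Hypothetical context situation
    print(proactive_suggestion({"time": "12:30", "fuel_level": "low", "route": "highway, 200 km remaining"}))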
  • Publication
    Are Drivers Allowed to Sleep?
    (2023)
    Schwarze, Doreen; Weiser, Lukas; Verhoeven, Rolf; Rötting, Matthias
    Higher levels of automated driving may offer the possibility to sleep in the driver’s seat in the car, and it is foreseeable that drivers will voluntarily or involuntarily fall asleep when they do not need to drive. Post-sleep performance impairment due to sleep inertia, a brief period of reduced cognitive performance after waking up, is a potential safety issue when drivers need to take over and drive manually. The present study assessed whether sleep inertia has an effect on driving and cognitive performance after different sleep durations. A driving simulator study with n = 13 participants was conducted. Driving and cognitive performance were analyzed after waking up from a 10-20 min sleep, a 30-60 min sleep, and after resting without sleep. The study’s results indicate that a short sleep duration does not reliably prevent sleep inertia. After the 10-20 min sleep, cognitive performance upon waking up was decreased, but the sleep inertia impairment faded within 15 min. Although the driving parameters showed no significant difference between the conditions, participants subjectively felt more tired after both sleep durations compared to resting. The small sample size of 13 participants, tested in a within-subjects design, may have prevented medium and small effects from becoming significant. In our study, take-over was offered without time pressure, and take-over times ranged from 3.15 min to 4.09 min after the alarm bell, with a mean value of 3.56 min in both sleeping conditions. The results suggest that daytime naps without previous sleep deprivation result in mild and short-term impairments. Further research is recommended to understand the severity of impairments caused by different intensities of sleep inertia.
  • Publication
    Artificial Intelligence for Adaptive, Responsive, and Level-Compliant Interaction in the Vehicle of the Future (KARLI)
    (2022)
    Wannemacher, Christoph; Faller, Fabian; Schmidt, Eike; Engelhardt, Doreen; Mikolajewski, Martin; Rittger, Lena; Hashemi, Vahid; Sahakyan, Manya; Romanelli, Massimo; Kiefer, Bernd; Fäßler, Victor; Rößler, Tobias; Großerüschkamp, Marc; Kurbos, Andreas; Bottesch, Miriam; Immoor, Pia; Engeln, Arnd; Fleischmann, Marlis; Schweiker, Miriam; Pagenkopf, Anne; Piechnik, Daniela
    The KARLI project consortium investigates and develops monitoring systems for drivers and other occupants with new artificial intelligence approaches, based on high-quality labeled data collected in real vehicles. The project’s target applications are integrated in vehicles that enable various levels of automation and transitions of control. Level-compliant occupant behavior is assessed with AI algorithms and modulated with responsive and adaptive human-machine interface (HMI) solutions. The project also targets the prediction and prevention of motion sickness in order to improve the user experience, enabling productivity and maintaining an adequate driver state. The user-centered approach is represented by defining five KARLI User Roles which specify the driving-related behavior requirements for all levels of automation. At the end of the project, the KARLI applications will be evaluated regarding user experience benefits and AI performance measures. The KARLI project addresses two ambitious challenges with high potential: first, raising and investigating the potential of AI for driver monitoring and driver-vehicle interaction, and second, accelerating the transfer from research to series-production applications.
  • Publication
    Eliciting potential for positive UX using psychological needs: Towards a user-centered method to identify technologies for UX in the car interior
    (2022)
    Bopp-Bertenbreiter, Valeria; Klein, Stefan; Engelhardt, Doreen; Rittger, Lena
    Positive user experiences (PUX) in the vehicle interior will be enabled by choosing the technologies with the potential to provide such experiences. Methods for designing for PUX in general exist, but methods to assess and compare technologies regarding their PUX potential are missing. Building on the insight that fulfillment of basic psychological needs may lead to PUX (Hassenzahl et al., 2010), this paper presents the first iteration of the user-centered method Tec4UXNeeds. Tec4UXNeeds combines VR representations of technologies and semi-structured interviews to identify the PUX potential of technologies: which basic psychological needs a technology may fulfill and in which use cases the technology could be used to enable need fulfillment. The method is applied to two display technologies in a standardized within-subjects study (n = 27). The study investigates whether the method Tec4UXNeeds enables participants to describe whether a technology has the potential to fulfill psychological needs for them, and whether the method is specific enough to find differences in the need fulfillment potential that participants describe between technologies. Preliminary results identified distinct levels of need fulfillment for the first and second display technology (Display on Demand & Holography). Data will be analyzed further using qualitative content analysis. The method will be optimized iteratively in the future.
  • Publication
    PersonalAIzation - Exploring concepts and guidelines for AI-driven personalization of in-car HMIs in fully automated vehicles
    (2022)
    Sundar, Shrivaas Madapusi; Bopp-Bertenbreiter, Valeria; Kosuru, Ravi Kanth; Pfleging, Bastian
    The role of the driver changes to that of a passenger in autonomous cars. Thus, the vehicle interior transforms from a cockpit into a multimedia station and workspace. This work explores concepts for Artificial Intelligence (AI) to provide a personalized user experience for the passengers in the form of Contextual Personalized Shortcuts and Personalized Services in the infotainment system. The two use cases were iteratively developed based on literature research and surveys. We evaluated the AI-personalized services and compared AI-generated shortcuts to manually configurable ones. AttrakDiff (Hassenzahl et al., 2003) and the Car Technology Acceptance Model (CTAM; Osswald et al., 2012) were used to evaluate UX and user acceptance. The AI-personalized interface received positive scores and reactions in user testing and shows potential. Based on the insights from the user studies and the literature review, we present human-AI interaction guidelines for building effective AI-personalized HMIs.
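    A minimal sketch of the idea behind contextual personalized shortcuts: rank infotainment functions by how often they were used in similar contexts. The context keys, function names, and the simple frequency-based ranking are illustrative assumptions, not the AI approach developed in the project:

    # Illustrative sketch only: frequency-based contextual shortcut suggestion.
    from collections import Counter

    def suggest_shortcuts(usage_log, current_context, top_n=3):
        """usage_log: list of (context, function) pairs observed in past rides."""
        counts = Counter(
            function
            for context, function in usage_log
            if context == current_context  # a real system would use a softer context similarity
        )
        return [function for function, _ in counts.most_common(top_n)]

    log = [
        (("morning", "commute"), "navigate_to_work"),
        (("morning", "commute"), "play_news_podcast"),
        (("evening", "commute"), "call_home"),
        (("morning", "commute"), "navigate_to_work"),
    ]
    print(suggest_shortcuts(log, ("morning", "commute")))  # ['navigate_to_work', 'play_news_podcast']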
  • Publication
    Improving Driver Performance and Experience in Assisted and Automated Driving with Visual Cues in the Steering Wheel
    (2022)
    Muthumani, Arun; Feierle, Alexander; Galle, Melanie; Bengler, Klaus
    In automated driving it is important to ensure drivers’ awareness of the currently active level of automation and to support transitions between those levels. This is possible with a suitable human-machine interface (HMI). In this driving simulator study, two visual HMI concepts (Concepts A and B) were compared with a baseline for informing drivers about three modes: manual driving, assisted driving, and automated driving. The HMIs, consisting of LED strips on the steering wheel that differed in luminance, color, and pattern, provided continuous information about the active mode and announced transitions. The assisted mode was conveyed in Concept A using a combination of amber and blue LEDs, while in Concept B only amber LEDs were used. During automated driving, Concept A displayed blue LEDs and Concept B turquoise. Both concepts were compared to a baseline HMI with no LEDs. Thirty-eight licensed drivers were trained and participated. Objective measures (hands-on-wheel time, takeover time, and visual attention) are reported. Self-reported measures (mode awareness, trust, user experience, and user acceptance) from a previous publication are briefly repeated in this context (Muthumani et al.). Concept A showed 200 ms faster hands-on-wheel times than the baseline, while for Concept B several outliers prevented the difference from reaching significance. The visual HMIs with LEDs did not influence the eyes-on-road time in any of the automation levels. Participants preferred Concept B, with its more prominent differentiation between the automation levels, over Concept A.
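    The mode-to-color mapping described in the abstract can be summarized as a small lookup, sketched below; the colors follow the abstract, while the representation of the manual-driving state and any patterns or luminance levels are placeholders, since the abstract does not specify them:

    # Sketch of the mode-to-LED mapping described in the abstract.
    # Colors are taken from the abstract; manual-driving display is a placeholder.
    LED_CONCEPTS = {
        "baseline": {"manual": None, "assisted": None, "automated": None},  # no LEDs
        "concept_a": {"manual": None, "assisted": ("amber", "blue"), "automated": ("blue",)},
        "concept_b": {"manual": None, "assisted": ("amber",), "automated": ("turquoise",)},
    }

    def steering_wheel_leds(concept: str, mode: str):
        """Return the LED colors shown on the steering wheel for a given concept and mode."""
        return LED_CONCEPTS[concept][mode]

    print(steering_wheel_leds("concept_b", "automated"))  # ('turquoise',)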
  • Publication
    Klassifikation von Fahrerzuständen und Nebentätigkeiten über Körperposen bei automatisierter Fahrt
    With the ongoing automation of vehicles, especially of the driving task itself, the role of the driver is shifting more and more toward that of a passenger. As a result, secondary tasks and non-driving-related activities are gaining importance. However, as long as handovers of the driving task back to the driver must be expected during the ride, the driver’s activities need to be captured for reasons of safety and comfort. One option for this is the optical capture and classification of body posture. In this contribution, we present a system for the manual analysis of body posture for simulator studies as well as an approach for the automatic capture of body posture in the vehicle.
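    A minimal sketch of classifying driver activities from body-pose keypoints; the keypoint format, activity labels, and the random-forest classifier are illustrative assumptions, since the abstract does not describe the classification pipeline at this level of detail:

    # Illustrative sketch: classify driver activity from flattened pose keypoints.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each sample: flattened (x, y) coordinates of, e.g., 17 upper-body keypoints.
    X_train = np.random.rand(200, 34)                                    # placeholder for captured poses
    y_train = np.random.choice(["driving", "phone_use", "reading"], size=200)  # placeholder labels

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    new_pose = np.random.rand(1, 34)   # one new pose from the in-vehicle camera
    print(clf.predict(new_pose)[0])    # predicted secondary activity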