  • Publication
    Toward Safe Human Machine Interface and Computer-Aided Diagnostic Systems
    (2023)
    Espinoza, Delfina
    ;
    Mata, Núria
    ;
    Doan, Nguyen Anh Vu
    Computer-Aided Diagnosis (CADx) systems are safety-critical systems that provide automated medical diagnoses based on their input data. They are Artificial Intelligence-based systems which use Machine Learning or Deep Learning techniques to differentiate between healthy and unhealthy medical images, as well as physiological signals acquired from patients. Although current CADx systems offer many advantages in diagnostics, validation is still a challenge, i.e., ensuring that no false negatives occur while limiting the occurrence of false positives. This is a major concern since such safety-critical systems have to be verified before deployment into a clinical environment. For that reason, this paper aims to improve the reliability of CADx systems by adding a Human Machine Interface (HMI) component to enhance the data acquisition process and by providing a safety-related framework which includes the HMI/CADx system life cycle to bridge the identified gaps.
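The validation goal stated in the abstract, zero false negatives with as few false positives as possible, can be illustrated with a minimal sketch. This is not the paper's method; the function names, scores, and labels below are hypothetical, assuming a binary classifier that outputs a higher score for unhealthy cases.

```python
# Illustrative sketch (not from the paper): tuning a hypothetical CADx
# decision threshold so that no positive case is missed (zero false
# negatives) while keeping false positives as low as possible.

def pick_threshold(scores, labels):
    """Return the highest threshold that still classifies every
    positive case (label == 1) as unhealthy."""
    positive_scores = [s for s, y in zip(scores, labels) if y == 1]
    # Any threshold at or below the lowest positive score yields zero
    # false negatives; choosing exactly that score keeps the
    # false-positive count minimal under the zero-FN constraint.
    return min(positive_scores)

def false_positives(scores, labels, threshold):
    """Count healthy cases (label == 0) flagged as unhealthy."""
    return sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)

# Hypothetical model scores (higher = more likely unhealthy).
scores = [0.95, 0.80, 0.40, 0.30, 0.85, 0.20]
labels = [1,    1,    0,    0,    1,    0]

t = pick_threshold(scores, labels)        # 0.80
fp = false_positives(scores, labels, t)   # 0 on this toy data
```

In practice such a threshold would be chosen on a validation set and would rarely achieve zero false positives; the sketch only makes the trade-off the abstract describes concrete.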
  • Publication
    Towards the Quantitative Verification of Deep Learning for Safe Perception
    Deep learning (DL) is seen as an inevitable building block for perceiving the environment with sufficient detail and accuracy as required by automated driving functions. Despite this, its black-box nature and the unpredictability intertwined with it still hinder its use in safety-critical systems. As such, this work addresses the problem of making this seemingly unpredictable nature measurable by providing a risk-based verification strategy, such as required by ISO 21448. In detail, a method is developed to break down acceptable risk into quantitative performance targets of individual DL-based components along the perception architecture. To verify these targets, the DL input space is split into areas according to the dimensions of a fine-grained operational design domain (μODD). As it is not feasible to reach full test coverage, the strategy suggests distributing test efforts across these areas according to the associated risk. Moreover, the testing approach provides answers with respect to how much test coverage and confidence in the test result are required and how these figures relate to safety integrity levels (SILs).
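The risk-based distribution of test effort across μODD areas can be sketched as follows. This is an assumed interpretation, not the paper's actual method; the area names and risk weights are hypothetical.

```python
# Illustrative sketch (assumed interpretation): since full test
# coverage of the DL input space is infeasible, split a fixed test
# budget across operational design domain areas in proportion to
# their associated risk.

def allocate_tests(risks, total_budget):
    """Split total_budget test cases across areas proportional to risk."""
    total_risk = sum(risks.values())
    return {area: round(total_budget * r / total_risk)
            for area, r in risks.items()}

# Hypothetical risk weights for three μODD areas.
risks = {"night_rain": 0.5, "day_clear": 0.1, "fog": 0.4}
budget = allocate_tests(risks, total_budget=1000)
# -> {"night_rain": 500, "day_clear": 100, "fog": 400}
```

A real strategy would also need to decide how much coverage and statistical confidence each area requires for its target SIL, which is the question the paper addresses; the proportional split only illustrates the core idea of weighting effort by risk.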
  • Publication
    Trustworthy AI for Intelligent Traffic Systems (ITS)
    (Fraunhofer IKS, 2021)
    Bortoli, Stefano
    ;
    Grossi, Margherita
    AI-enabled Intelligent Traffic Systems (ITS) offer the potential to greatly improve the efficiency of traffic flow in inner cities, resulting in shorter travel times, increased fuel efficiency and reduced harmful emissions. These systems make use of data collected in real time across different locations in order to adapt signaling infrastructure (such as traffic lights and lane signals) based on a set of optimized algorithms. Consequences of failures in such systems can range from increased congestion and the associated rise in traffic accidents to increased vehicle emissions over time. This white paper summarizes the results of consultations between safety, mobility and smart city experts to explore the consequences of the application of AI methods in Intelligent Traffic Systems. The consultations were held as a roundtable event on 1 July 2021, hosted by Fraunhofer IKS, and addressed the following questions: How does the use of AI fundamentally change our understanding of safety and risk related to such systems? Which challenges are introduced when using AI for decision-making functions in Smart Cities and Intelligent Traffic Systems? How should these challenges be addressed in the future? Based on these discussions, the white paper summarizes current and future challenges of introducing AI into Intelligent Traffic Systems in a trustworthy manner. Special focus is placed on the complex, heterogeneous, multi-disciplinary nature of ITS in Smart Cities. In doing so, we motivate a combined consideration of the emerging complexity and inherent uncertainty related to such systems and the need for collaboration and communication between a broad range of disciplines.