  • Publication
    Toward Safe Human Machine Interface and Computer-Aided Diagnostic Systems
    (2023); Espinoza, Delfina; Mata, Núria; Doan, Nguyen Anh Vu
    Computer-Aided Diagnosis (CADx) systems are safety-critical systems that provide automated medical diagnoses based on their input data. They are Artificial Intelligence based systems that use Machine Learning or Deep Learning techniques to differentiate between healthy and unhealthy medical images, as well as physiological signals acquired from patients. Although current CADx systems offer many advantages in diagnostics, validation remains a challenge, i.e., ensuring that no false negatives occur while limiting the occurrence of false positives. This is a major concern, since such safety-critical systems have to be verified before deployment into a clinical environment. For that reason, this paper aims to improve the reliability of CADx systems by adding a Human Machine Interface (HMI) component to enhance the data acquisition process and by providing a safety-related framework covering the HMI/CADx system life cycle to bridge the identified gaps.
  • Publication
    Safety Assessment: From Black-Box to White-Box
    Safety assurance for Machine-Learning (ML) based applications such as object detection is a challenging task due to the black-box nature of many ML methods and the associated uncertainties of their outputs. To increase evidence of the safe behavior of such ML algorithms, an explainable and/or interpretable introspective model can help to investigate the black-box prediction quality. For safety assessment, this explainable model should be of reduced complexity and humanly comprehensible, so that any decision regarding safety can be traced back to known and comprehensible factors. We present an approach to create an explainable, introspective model (i.e., white-box) for a deep neural network (i.e., black-box) to determine how safety-relevant input features influence the prediction performance, in particular for confidence and Bounding Box (BBox) regression. For this, Random Forest (RF) models are trained to predict a YOLOv5 object detector's output for specifically selected safety-relevant input features from the open-context environment. The RF predicts the YOLOv5 output reliability for three safety-related target variables, namely the softmax score, the BBox center shift, and the BBox size shift. The results indicate that the RF predictions for the softmax score are only reliable within certain constraints, while the RF predictions for the BBox center/size shift are only reliable for small offsets.
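    The white-box surrogate idea described above can be sketched as follows. This is an illustrative example, not the paper's code: the feature names, the synthetic data, and the stand-in for the detector's softmax score are all assumptions made for the sketch.

    ```python
    # Hedged sketch of the approach: train a Random Forest regressor as a
    # white-box surrogate that predicts a detector's confidence (softmax
    # score) from hand-picked, safety-relevant input features. All features
    # and the score model below are synthetic assumptions for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000

    # Assumed safety-relevant input features (hypothetical examples).
    distance = rng.uniform(5, 100, n)     # metres to the detected object
    occlusion = rng.uniform(0.0, 1.0, n)  # fraction of the object occluded
    contrast = rng.uniform(0.2, 1.0, n)   # local image contrast

    # Synthetic stand-in for the detector's softmax score: confidence
    # degrades with distance and occlusion, improves with contrast.
    score = np.clip(
        1.0 - 0.005 * distance - 0.4 * occlusion + 0.2 * contrast
        + rng.normal(0, 0.05, n),
        0.0, 1.0,
    )

    X = np.column_stack([distance, occlusion, contrast])
    X_tr, X_te, y_tr, y_te = train_test_split(X, score, random_state=0)

    # The trained RF is the introspective model: its held-out accuracy says
    # where the confidence prediction is reliable, and its feature
    # importances expose which safety-relevant factors drive it.
    rf = RandomForestRegressor(n_estimators=100, random_state=0)
    rf.fit(X_tr, y_tr)
    print("R^2 on held-out data:", round(rf.score(X_te, y_te), 3))
    print("feature importances:", dict(zip(
        ["distance", "occlusion", "contrast"],
        rf.feature_importances_.round(3))))
    ```

    In the paper's setting, the synthetic score would be replaced by the actual YOLOv5 outputs (softmax score, BBox center shift, BBox size shift), with one RF per target variable.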
  • Publication
    Safe adaptation for reliable and energy-efficient E/E architectures
    (2017); Ruiz, Alejandra; Radermacher, Ansgar
    Upcoming changes in mobility paradigms demand more and more services and features to be included in future cars. Electric mobility and highly automated driving lead to new requirements and demands on vehicle information and communication technology (ICT) architectures. For example, in the case of highly automated driving, future drivers no longer need to monitor and control the vehicle all the time. This calls for new fault-tolerant approaches to automotive E/E architectures. In addition, the electrification of vehicles requires a flexible underlying E/E architecture that facilitates enhanced energy management. Within the EU-funded SafeAdapt project, a new E/E architecture for future vehicles has been developed in which adaptive systems ensure safe, reliable, and cost-effective mobility. The holistic approach provides the necessary foundation for future in-vehicle systems, and its evaluation shows the great potential of such reliable and energy-efficient E/E architectures.