Publication Year
2023
Document Type
Conference Paper
Title

AI for Safety: How to use Explainable Machine Learning Approaches for Safety Analyses

Abstract
Current research in machine learning (ML) and safety focuses on the safety assurance of ML. We, however, show how to interpret the results of explainable ML approaches for safety. We investigate how individual data clusters in specific explainable, outside-model estimators can be evaluated to identify insufficiencies at different levels: (1) the input features, (2) the data, or (3) the ML model itself. Additionally, we link our findings to required safety artifacts within the automotive domain, such as unknown unknowns from ISO 21448 or equivalence classes as mentioned in ISO/TR 4804. In our case study we analyze and evaluate the results of an explainable, outside-model estimator (i.e., a white-box model) by performance evaluation, decision tree visualization, data distribution, and input feature correlation. As explainability is key for safety analyses, the utilized model is a random forest, extended via boosting and multi-output regression. The model is trained on an introspective data set optimized for reliable safety estimation. Our results show that technical limitations can be identified via homogeneous data clusters and assigned to a corresponding equivalence class. For unknown unknowns, each level of insufficiency (input, data, and model) must be analyzed separately and systematically narrowed down by a process of elimination. In our case study we identify "Fog density" as an unknown unknown input feature for the introspective model.
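
The following is a minimal sketch of the kind of analysis the abstract describes, assuming scikit-learn: a multi-output random forest is trained as an outside-model estimator and then inspected via per-output performance scores, input feature correlations, and the printed rules of a single tree. The data set, feature names, and targets are hypothetical placeholders, not the paper's introspective data set, and the boosting extension mentioned in the abstract is omitted.

```python
# Sketch only: explainable, outside-model estimator as a multi-output
# random forest, inspected via performance evaluation, feature
# correlation, and white-box tree rules. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
from sklearn.tree import export_text

# Hypothetical introspective data: perception inputs -> safety estimates.
rng = np.random.default_rng(0)
X = rng.uniform(size=(1000, 3))
feature_names = ["distance", "speed", "fog_density"]
# Two hypothetical safety-relevant targets (multi-output regression);
# the second depends on fog density, mimicking a hidden influence.
y = np.column_stack([
    2.0 * X[:, 0] - X[:, 1],
    np.where(X[:, 2] > 0.5, 0.1, 0.9),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Random forests handle multi-output regression natively in scikit-learn.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_tr, y_tr)

# (1) Performance evaluation, reported per output.
print("R^2 per target:",
      r2_score(y_te, model.predict(X_te), multioutput="raw_values"))

# (2) Input feature correlation with each target.
for i in range(y.shape[1]):
    corr = [np.corrcoef(X[:, j], y[:, i])[0, 1] for j in range(X.shape[1])]
    print(f"target {i} correlations:",
          dict(zip(feature_names, np.round(corr, 2))))

# (3) White-box inspection: print the decision rules of one tree,
# a text-based stand-in for the paper's decision tree visualization.
print(export_text(model.estimators_[0],
                  feature_names=feature_names, max_depth=2))
```

A feature such as fog_density that is absent from (or uninformative in) the training inputs yet strongly correlated with a target would, in this style of analysis, be flagged as a candidate unknown unknown.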
Author(s)
Kurzidem, Iwo  
Fraunhofer-Institut für Kognitive Systeme IKS  
Burton, Simon  
Fraunhofer-Institut für Kognitive Systeme IKS  
Schleiß, Philipp  
Fraunhofer-Institut für Kognitive Systeme IKS  
Mainwork
AISafety-SafeRL 2023, Artificial Intelligence Safety and Safe Reinforcement Learning  
Project(s)
IKS-Aufbauprojekt  
Funder
Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie  
Conference
Joint Workshop on Artificial Intelligence Safety and Safe Reinforcement Learning 2023  
International Joint Conferences on Artificial Intelligence 2023  
Open Access
DOI
10.24406/publica-2147
File(s)
Download (2.31 MB)
Rights
CC BY 4.0: Creative Commons Attribution
Language
English
Institute
Fraunhofer-Institut für Kognitive Systeme IKS
Fraunhofer Group
Fraunhofer-Verbund IUK-Technologie  
Keyword(s)
  • safety analysis
  • safety engineering
  • explainable machine learning
  • outside-model estimator
  • safety validation