2023
Conference Paper
Title

Assurance Cases as Foundation Stone for Auditing AI-Enabled and Autonomous Systems

Title Supplement
Workshop Results and Political Recommendations for Action from the ExamAI Project
Abstract
The European Machinery Directive and related harmonized standards consider that software is used to generate safety-relevant behavior of machinery, but they do not consider all kinds of software. In particular, software based on machine learning (ML) is not considered for the realization of safety-relevant behavior. This limits the introduction of suitable safety concepts for autonomous mobile robots and other autonomous machinery, which commonly depend on ML-based functions. We investigated this issue and the way safety standards define safety measures to be implemented against software faults. Functional safety standards use Safety Integrity Levels (SILs) to define which safety measures shall be implemented. They provide rules for determining the SIL and rules for selecting safety measures depending on the SIL. In this paper, we argue that this approach can hardly be adopted with respect to ML and other kinds of Artificial Intelligence (AI). Instead of simple rules for determining an SIL and applying related measures against faults, we propose the use of assurance cases to argue that the individually selected and applied measures are sufficient in the given case. To get a first assessment of the feasibility and usefulness of our proposal, we presented and discussed it in a workshop with experts from industry, German statutory accident insurance companies, work safety and standardization commissions, and representatives from various national, European, and international working groups dealing with safety and AI. In this paper, we summarize the proposal and the workshop discussion. Moreover, we examine to what extent our proposal is in line with the European AI Act proposal and current safety standardization initiatives addressing AI and Autonomous Systems.
Author(s)
Adler, Rasmus  
Fraunhofer-Institut für Experimentelles Software Engineering IESE  
Klaes, Michael
Fraunhofer-Institut für Experimentelles Software Engineering IESE  
Mainwork
HCI International 2022 - Late Breaking Papers. HCI for Today’s Community and Economy  
Conference
International Conference on Human-Computer Interaction (HCI International) 2022  
DOI
10.1007/978-3-031-18158-0_21
Language
English
Institute
Fraunhofer-Institut für Experimentelles Software Engineering IESE
Keyword(s)
  • AI
  • Assurance cases
  • Autonomous systems
  • Safety