
Using complementary risk acceptance criteria to structure assurance cases for safety-critical AI components

Authors: Kläs, Michael; Adler, Rasmus; Jöckel, Lisa; Groß, Janek; Reich, Jan


Espinoza, H. (Ed.):
Workshop on Artificial Intelligence Safety, AISafety 2021. Proceedings. Online resource: Co-located with the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI 2021), Virtual, August 2021
Online publication, 2021 (CEUR Workshop Proceedings 2916)
7 pages
Workshop on Artificial Intelligence Safety (AISafety) <2021, Online>
International Joint Conference on Artificial Intelligence (IJCAI) <30, 2021, Online>
Conference paper, electronic publication
Fraunhofer IESE
Keywords: automatic guided vehicles; civil defense; safety factor

Artificial Intelligence (AI), particularly current Machine Learning approaches, promises new and innovative solutions, including for realizing safety-critical functions. Assurance cases can support the potential certification of such AI applications by providing an assessable, structured argument explaining why safety is achieved. Existing proposals and patterns for structuring the safety argument help to organize safety measures, but guidance for explaining, in a concrete use case, why the safety measures are actually sufficient is limited. In this paper, we investigate this and other challenges and propose solutions. In particular, we propose considering two complementary types of risk acceptance criteria as assurance objectives and provide, for each objective, a structure for the supporting argument. We illustrate our proposal using an excerpt of an automated guided vehicle use case and close with questions triggering further discussions on how to best use assurance cases in the context of AI certification. © 2021 CEUR-WS. All rights reserved.
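To make the idea of an assurance case with complementary top-level objectives concrete, the following is a minimal, hypothetical sketch of a GSN-style claim tree in Python. The class names, the wording of the two objectives, and the evidence items are invented placeholders for illustration; they are not taken from the paper's actual argument structure.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal assurance-case node: a claim is supported either
# directly by evidence or by a set of sub-claims that are all supported.
@dataclass
class Claim:
    text: str
    children: list["Claim"] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim holds if it cites evidence or all sub-claims hold."""
        if self.evidence:
            return True
        return bool(self.children) and all(c.is_supported() for c in self.children)

# Two complementary risk acceptance criteria as top-level assurance
# objectives (placeholder formulations, not the paper's criteria).
case = Claim(
    "The safety-critical AI component is acceptably safe",
    children=[
        Claim(
            "Objective 1: residual risk is below an absolute acceptance threshold",
            evidence=["statistical evaluation report (placeholder)"],
        ),
        Claim(
            "Objective 2: risk is no higher than an accepted reference baseline",
            evidence=["comparative assessment (placeholder)"],
        ),
    ],
)

print(case.is_supported())  # True only if both objectives are supported
```

The point of the sketch is the shape of the argument: the top claim is decomposed into two independent objectives, and the case is only assessable as complete when each objective's supporting sub-argument bottoms out in evidence.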