
Uncertainty Wrappers for Data-Driven Models

Increase the Transparency of AI/ML-Based Models Through Enrichment with Dependable Situation-Aware Uncertainty Estimates
Kläs, Michael; Sembach, Lena


Romanovsky, A.:
Computer safety, reliability, and security. 38th International Conference, SAFECOMP 2019. Proceedings: 11-13 September 2019, Turku, Finland
Cham: Springer, 2019 (Lecture Notes in Computer Science 11698)
ISBN: 978-3-030-26600-4
ISBN: 3-030-26600-1
ISBN: 978-3-030-26601-1
International Conference on Computer Safety, Reliability, and Security (SAFECOMP) <38, 2019, Turku>
Bundesministerium für Bildung und Forschung BMBF (Deutschland)
01IS16043E; CrESt
Conference Paper
Fraunhofer IESE
Artificial intelligence; Machine learning; Dependability; Safety engineering; Data quality; Operational design domain; Model validation

In contrast to established safety-critical software components, we can neither prove nor assume that the outcomes of components containing models based on artificial intelligence (AI) or machine learning (ML) will be correct in every situation. Thus, uncertainty is an inherent part of decision-making when using the outcomes of data-driven models created by AI/ML algorithms. To deal with this, especially in the context of safety-related systems, we need to make uncertainty transparent via dependable statistical statements. This paper introduces both a conceptual model and the related mathematical foundation of an uncertainty wrapper solution for data-driven models. The wrapper enriches existing data-driven models, such as those provided by ML or other AI techniques, with case-individual and sound uncertainty estimates. The task of traffic sign recognition is used to illustrate the approach, which considers uncertainty not only in terms of model fit but also in terms of data quality and scope compliance.
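The abstract names three uncertainty sources: model fit, data quality, and scope compliance. The sketch below is a hypothetical illustration of this decomposition, not the paper's actual implementation; the class and check names, the error rates, and the decision rule (flag out-of-scope inputs as fully uncertain, fall back to a degraded error rate on poor-quality inputs) are all assumptions chosen for the traffic sign example.

```python
from dataclasses import dataclass
from typing import Any, Callable, List


@dataclass
class WrappedOutcome:
    prediction: Any
    uncertainty: float  # estimated probability that the prediction is wrong


class UncertaintyWrapper:
    """Illustrative sketch: enrich a data-driven model's outcome with a
    situation-aware uncertainty estimate combining three sources:
    model fit, data quality, and scope compliance."""

    def __init__(self,
                 model: Callable[[Any], Any],
                 fit_error: float,
                 degraded_fit_error: float,
                 quality_checks: List[Callable[[Any], bool]],
                 scope_checks: List[Callable[[Any], bool]]):
        self.model = model
        self.fit_error = fit_error                    # validated error rate on in-scope, good-quality data
        self.degraded_fit_error = degraded_fit_error  # higher error rate assumed when quality checks fail
        self.quality_checks = quality_checks
        self.scope_checks = scope_checks

    def predict(self, x: Any) -> WrappedOutcome:
        pred = self.model(x)
        # Scope compliance: outside the operational design domain,
        # no dependable statistical statement can be made.
        if not all(check(x) for check in self.scope_checks):
            return WrappedOutcome(pred, 1.0)
        # Data quality: use the validated error rate only if all
        # quality checks pass, otherwise the degraded rate.
        if all(check(x) for check in self.quality_checks):
            return WrappedOutcome(pred, self.fit_error)
        return WrappedOutcome(pred, self.degraded_fit_error)


# Hypothetical traffic sign recognition example: brightness as a
# quality factor, country as a scope (ODD) factor.
classifier = lambda image: "stop"  # stand-in for a trained ML model
wrapper = UncertaintyWrapper(
    model=classifier,
    fit_error=0.02,
    degraded_fit_error=0.15,
    quality_checks=[lambda img: img["brightness"] > 0.3],
    scope_checks=[lambda img: img["country"] == "DE"],
)
outcome = wrapper.predict({"brightness": 0.8, "country": "DE"})
```

Here the case-individual estimate comes from routing each input through explicit, auditable checks rather than from the model's own confidence scores, which mirrors the transparency goal stated in the abstract.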