
A Framework for Building Uncertainty Wrappers for AI/ML-Based Data-Driven Components

Authors: Kläs, Michael; Jöckel, Lisa


Casimiro, António (Ed.):
Computer Safety, Reliability, and Security. Proceedings : SAFECOMP 2020 Workshops, DECSoS 2020, DepDevOps 2020, USDAI 2020, and WAISE 2020, Lisbon, Portugal, September 15, 2020, virtual conference
Cham: Springer Nature, 2020 (Lecture Notes in Computer Science 12235)
ISBN: 978-3-030-55582-5 (Print)
ISBN: 978-3-030-55583-2 (Online)
International Conference on Computer Safety, Reliability and Security (SafeComp) <39, 2020, Online>
International Workshop on Artificial Intelligence Safety Engineering (WAISE) <3, 2020, Online>
Fraunhofer IESE
Keywords: artificial intelligence; machine learning; safety engineering; data quality; operational design domain; out-of-distribution; dependability

More and more software-intensive systems include components that are data-driven in the sense that they use models based on artificial intelligence (AI) or machine learning (ML). Since the outcomes of such models cannot be assumed to always be correct, the related uncertainties must be understood and taken into account when decisions are made based on these outcomes. This applies, in particular, if such decisions affect the safety of the system. To date, however, hardly any AI-/ML-based model provides dependable estimates of the uncertainty remaining in its outcomes. To address this limitation, we present a framework for encapsulating existing models used in data-driven components with an uncertainty wrapper that enriches the model outcome with a situation-aware and dependable uncertainty statement. The presented framework builds on existing work on the concept and mathematical foundation of uncertainty wrappers. Its application is illustrated using pedestrian detection as an example, a particularly safety-critical feature in the context of autonomous driving. The Brier score and its components are used to investigate how the key aspects of the framework (scoping, clustering, calibration, and confidence limits) influence the quality of uncertainty estimates.
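To make the idea concrete, the sketch below shows one *possible* reading of the wrapper concept described in the abstract: situations are clustered by binning a single quality factor, each cluster's error rate is calibrated on held-out data, and a one-sided upper confidence bound is reported as the uncertainty. All names (`UncertaintyWrapper`, `wilson_upper`, `brier_score`) are illustrative, and the Wilson score bound stands in for whatever confidence-limit method the paper actually uses; this is not the authors' implementation.

```python
import math

def wilson_upper(errors: int, n: int, z: float = 1.645) -> float:
    """One-sided upper confidence bound on an error probability
    (Wilson score interval; a stand-in for the confidence limits
    mentioned in the abstract, which may use another method)."""
    if n == 0:
        return 1.0  # no calibration data: maximally uncertain
    p = errors / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return min(1.0, (centre + margin) / denom)

class UncertaintyWrapper:
    """Minimal sketch: cluster situations by binning one quality
    factor in [0, 1), then report a situation-aware uncertainty
    (upper-bounded error rate) per cluster, without touching the
    wrapped model itself."""

    def __init__(self, model, n_bins: int = 4):
        self.model = model
        self.n_bins = n_bins
        self.stats = {}  # bin index -> (errors, total)

    def _bin(self, quality_factor: float) -> int:
        return min(self.n_bins - 1, int(quality_factor * self.n_bins))

    def calibrate(self, samples):
        """samples: iterable of (input, quality_factor, true_label);
        counts model errors per situation cluster."""
        for x, q, y in samples:
            b = self._bin(q)
            errs, total = self.stats.get(b, (0, 0))
            errs += int(self.model(x) != y)
            self.stats[b] = (errs, total + 1)

    def predict(self, x, quality_factor: float):
        """Return (model outcome, dependable uncertainty estimate)."""
        b = self._bin(quality_factor)
        errs, total = self.stats.get(b, (0, 0))
        return self.model(x), wilson_upper(errs, total)

def brier_score(pairs):
    """Mean squared error between estimated error probabilities and
    observed outcomes (1 = model was wrong, 0 = correct)."""
    return sum((u - y) ** 2 for u, y in pairs) / len(pairs)
```

A wrapper calibrated this way reports higher uncertainty both for clusters with high observed error rates and for clusters with little calibration data, since the confidence bound widens as the sample count shrinks; evaluating the reported uncertainties against actual correctness with `brier_score` then mirrors the kind of analysis the abstract describes.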