Uncertainty in Machine Learning Applications

A Practice-Driven Classification of Uncertainty

Authors: Kläs, Michael; Vollmer, Anna Maria

In: Gallina, B.; Skavhaug, A.; Schoitsch, E.; Bitsch, F. (Eds.):
Computer Safety, Reliability, and Security: SAFECOMP 2018 Workshops, ASSURE, DECSoS, SASSUR, STRIVE, and WAISE, Västerås, Sweden, September 18, 2018, Proceedings
Cham: Springer International Publishing, 2018 (Lecture Notes in Computer Science 11094)
ISBN: 978-3-319-99229-7
ISBN: 978-3-319-99228-0
ISBN: 978-3-319-99230-3
pp. 431-438
37th International Conference on Computer Safety, Reliability, and Security (SAFECOMP), 2018, Västerås
Funding: Bundesministerium für Bildung und Forschung (BMBF), grant 01IS16043E (project CrESt)
Language: English
Publication type: Conference paper
Institute: Fraunhofer IESE
Keywords: Artificial intelligence; Dependability; Safety engineering; Data quality; Model validation; Empirical modelling

Abstract
Software-intensive systems that rely on machine learning (ML) and artificial intelligence (AI) are increasingly becoming part of our daily life, e.g., in recommendation systems or semi-autonomous vehicles. However, the use of ML and AI is accompanied by uncertainties regarding their outcomes. Dealing with such uncertainties is particularly important when the actions of these systems can harm humans or the environment, such as in the case of a medical product or a self-driving car. To enable a system to make informed decisions when confronted with the uncertainty of embedded AI/ML models and possible safety-related consequences, these models must not only provide a defined functionality but also describe, as precisely as possible, the likelihood of their outcome being wrong or outside a given range of accuracy. Thus, this paper proposes a classification of major uncertainty sources that is usable and useful in practice: scope compliance, data quality, and model fit. In particular, we highlight the implications of these classes for the development and testing of ML and AI models by establishing links to specific activities during development and testing and to means for quantifying and dealing with these different sources of uncertainty.
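
To make the three uncertainty classes named in the abstract concrete, here is a minimal, illustrative Python sketch. It is not taken from the paper; the mapping of one naive check per class and names such as predict_with_uncertainty are hypothetical choices for illustration only.

# Illustrative sketch (hypothetical, not from the paper): one naive
# check per uncertainty class named in the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 1-D inputs; the true label is 1 iff the input is positive.
X_train = rng.normal(loc=0.0, scale=1.0, size=500)

# "Model": a fixed threshold classifier (stands in for any trained ML model).
threshold = float(np.median(X_train))

def predict(x):
    return int(x > threshold)

# (1) Model fit: estimate the residual error rate on held-out data.
X_val = rng.normal(loc=0.0, scale=1.0, size=200)
val_error = float(np.mean([predict(x) != int(x > 0) for x in X_val]))

# (2) Scope compliance: flag inputs outside the region seen in training.
lo, hi = float(X_train.min()), float(X_train.max())

def in_scope(x):
    return lo <= x <= hi

# (3) Data quality: flag inputs that are missing or non-finite.
def is_valid(x):
    return x is not None and np.isfinite(x)

def predict_with_uncertainty(x):
    """Return (prediction, uncertainty note) for a single input."""
    if not is_valid(x):
        return None, "data quality: invalid input, prediction withheld"
    if not in_scope(x):
        return predict(x), "scope compliance: input outside training range"
    return predict(x), f"model fit: ~{val_error:.1%} estimated error rate"

for x in (0.3, 7.5, float("nan")):
    print(x, predict_with_uncertainty(x))

In this sketch, the first two checks gate the third: a quantitative model-fit estimate is only reported when the input is valid and lies inside the region covered by the training data, so each prediction carries a note identifying which source of uncertainty applies.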

URL: http://publica.fraunhofer.de/dokumente/N-518409.html