
Are you sure? Prediction revision in automated decision-making

Authors: Burkart, Nadia; Robert, Sebastian; Huber, Marco

Fulltext urn:nbn:de:0011-n-5932243 (5.1 MByte PDF)
MD5 Fingerprint: 94b161c8a1ce24740eb76cb7473b9db7
(CC) by-nc
Created on: 24.6.2020

Expert Systems 38 (2021), No. 1, Art. e12577, 19 pp.
ISSN: 0266-4720
ISSN: 1468-0394
Journal Article, Electronic Publication
Fraunhofer IOSB
experiment; explainable ML; interpretability; prediction revision; automation; decision making; decision support systems; decision trees; deep learning; logistic regression; machine learning; forecasting methods; Explainable Artificial Intelligence (XAI); artificial intelligence

With the rapid improvements in machine learning and deep learning, the number of decisions made by automated decision support systems (DSS) will increase. Besides the accuracy of predictions, their explainability is becoming more important. These algorithms can construct complex mathematical prediction models, whose opacity creates uncertainty about the predictions and raises the need to equip the algorithms with explanations. To examine how users trust automated DSS, an experiment was conducted. Our research aim is to examine how participants supported by a DSS revise their initial prediction under four different approaches (treatments) in a between-subject design study. The four treatments differ in the degree of explainability available for understanding the system's predictions: first, an interpretable regression model; second, a Random Forest (considered a black box [BB]); third, the BB with a local explanation; and last, the BB with a global explanation. We observed that all participants improved their predictions after receiving advice, whether it came from a complete BB or from a BB with an explanation. The major finding was that interpretable models were not incorporated into the decision process more than BB models or BB models with explanations.
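The four treatments described in the abstract can be sketched in code. This is a minimal, illustrative example only, not the authors' experimental setup: the model, its weights, and the feature names below are hypothetical, and the "black box" is simulated by hiding a transparent model's internals. It shows what information each treatment exposes to a participant: the bare prediction, a local (per-input) additive explanation, or a global feature-importance explanation.

```python
# Illustrative sketch of the four explanation treatments (hypothetical model/data).

# Treatment 1: an interpretable (linear) model -- its weights ARE the explanation.
weights = {"age": 0.4, "income": -0.2, "tenure": 0.1}  # hypothetical coefficients
bias = 0.05

def linear_predict(x):
    """Transparent prediction: a weighted sum the participant can inspect."""
    return bias + sum(w * x[f] for f, w in weights.items())

# Treatment 2: a black box (BB) -- only the prediction is shown to the participant.
def black_box_predict(x):
    return linear_predict(x)  # internals are simply hidden in this treatment

# Treatment 3: BB plus a local explanation -- per-feature contribution for THIS
# input, in the spirit of additive attribution methods (e.g., LIME or SHAP).
def local_explanation(x):
    return {f: w * x[f] for f, w in weights.items()}

# Treatment 4: BB plus a global explanation -- model-wide feature importance,
# here ranked by the absolute magnitude of each coefficient.
def global_explanation():
    return dict(sorted(weights.items(), key=lambda kv: abs(kv[1]), reverse=True))

x = {"age": 1.0, "income": 2.0, "tenure": 3.0}  # hypothetical input
print(black_box_predict(x))   # the prediction, shown in every treatment
print(local_explanation(x))   # additionally shown only in treatment 3
print(global_explanation())   # additionally shown only in treatment 4
```

In the study's between-subject design, each participant would see only one of these information sets before revising their initial prediction.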