Date
2019
Type
Conference Paper
Title
A Study on Trust in Black Box Models and Post-hoc Explanations
Abstract
Machine learning algorithms that construct complex prediction models are increasingly used for decision-making because of their high accuracy, e.g., to decide whether a bank customer should receive a loan. Owing to this complexity, the models are perceived as black boxes. One approach to address this is to augment the models with post-hoc explanations. In this work, we evaluate three different explanation approaches in a within-subjects study, measuring the users' initial trust, their trust in the provided explanation, and the trust they establish in the black box model.
Open Access
Rights
Under Copyright
Language
English