Title: A Study on Trust in Black Box Models and Post-hoc Explanations
Authors: El Bekri, Nadia; Kling, J.; Huber, M.
Type: conference paper
Year: 2019
Record dates: 2022-03-14; 15.05.2020
DOI: 10.1007/978-3-030-20055-8_4
Repository DOI: 10.24406/publica-r-404548
Handle: https://publica.fraunhofer.de/handle/publica/404548
Language: en
Keywords: machine learning; black box; explainability; interpretability; trust
DDC: 004; 670
Rights: Under Copyright

Abstract: Machine learning algorithms that construct complex prediction models are increasingly used for decision-making due to their high accuracy, e.g., to decide whether a bank customer should receive a loan. Because of this complexity, the models are perceived as black boxes. One approach is to augment the models with post-hoc explanations. In this work, we evaluate three different explanation approaches in a within-subject design study, based on the users' initial trust, their trust in the provided explanation, and the trust established in the black box.