Fraunhofer-Gesellschaft
2019
Conference Paper
Title

A Study on Trust in Black Box Models and Post-hoc Explanations

Abstract
Machine learning algorithms that construct complex prediction models are increasingly used for decision-making because of their high accuracy, e.g., to decide whether a bank customer should receive a loan. Due to this complexity, the models are perceived as black boxes. One approach is to augment the models with post-hoc explanations. In this work, we evaluate three different explanation approaches in a within-subject design study, based on the users' initial trust, their trust in the provided explanation, and the trust established in the black box model.
Author(s)
El Bekri, Nadia
Kling, J.
Huber, M.
Published in
14th International Conference on Soft Computing Models in Industrial and Environmental Applications, SOCO 2019. Proceedings
Conference
International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO) 2019
DOI
10.1007/978-3-030-20055-8_4
File(s)
N-543658.pdf (559.46 KB)
Language
English
Institute(s)
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB
Fraunhofer-Institut für Produktionstechnik und Automatisierung IPA
Tags
  • machine learning
  • black box
  • explainability
  • interpretability
  • trust