Fraunhofer-Gesellschaft
2026
Journal Article
Title

Counterfactual explanations with varying parameter control: Effects on mental-model formation and explanation satisfaction

Abstract
Algorithms increasingly inform consequential decisions, yet their workings are often opaque. Interactive explanation interfaces are often assumed to enhance comprehension, but prior comparisons to static baselines frequently confound user agency with unequal informational content. We test whether greater agency over explanation-generation parameters improves understanding when informational parity is ensured. In a between-subjects laboratory study (N = 64), non-experts worked with counterfactual explanations of a binary classifier using one of three interface configurations: a fixed condition with no parameter control, a system-randomized condition in which parameter settings varied automatically across generations, or a user-controlled condition in which participants adjusted feature weights and exclusions. All conditions provided equivalent informational content and allowed repeated re-generation of counterfactual sets; only the locus of control over generation parameters differed. Behavioral logs captured exploratory breadth, in-depth exploration, exclusion rate, and interaction time per data point. Outcome measures included objective understanding, self-reported understanding and confidence, explanation satisfaction, and cognitive workload. Parameter control increased exploratory breadth relative to the fixed condition. The system-randomized condition yielded the greatest in-depth exploration and the longest interaction time per data point, whereas the user-controlled condition produced higher exclusion rates. Despite these behavioral differences, objective understanding and explanation satisfaction did not differ between conditions; self-reported understanding was highest in the fixed condition. Mental demand and frustration were highest in the user-controlled condition. 
Overall, varying the locus of control over generation parameters primarily changed how and how much participants explored counterfactual explanations without improving objective understanding or satisfaction under informational parity, while increasing subjective workload when control was user-driven.
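The user-controlled condition described above let participants adjust feature weights and exclusions when generating counterfactuals. As a rough illustration of that idea (not the authors' implementation; the classifier, grid, and all names here are hypothetical), a weighted counterfactual search over a toy binary classifier might look like:

```python
from itertools import product

def predict(x):
    # Toy linear binary classifier standing in for the study's black box.
    return 1 if 0.6 * x[0] + 0.4 * x[1] - 0.2 * x[2] > 0.5 else 0

def counterfactual(x, weights, excluded=(), grid=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Exhaustive search over a small per-feature perturbation grid.

    Returns the label-flipping candidate with minimum weighted L1 cost,
    honoring user-chosen feature weights and excluded features, or None
    if no candidate on the grid flips the prediction.
    """
    target = 1 - predict(x)
    best, best_cost = None, float("inf")
    for deltas in product(grid, repeat=len(x)):
        # Excluded features must stay untouched.
        if any(d != 0 and i in excluded for i, d in enumerate(deltas)):
            continue
        cand = [xi + d for xi, d in zip(x, deltas)]
        if predict(cand) != target:
            continue
        cost = sum(w * abs(d) for w, d in zip(weights, deltas))
        if cost < best_cost:
            best, best_cost = cand, cost
    return best

# Raising a feature's weight makes changes to it costlier, so the search
# avoids it; excluding a feature forbids changing it outright.
cf = counterfactual([0.0, 0.0, 0.0], weights=[1.0, 1.0, 1.0])
cf_excl = counterfactual([0.0, 0.0, 0.0], weights=[1.0, 1.0, 1.0], excluded=(0,))
```

Re-running the search after each weight or exclusion change corresponds to the repeated re-generation of counterfactual sets that all three study conditions allowed.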
Author(s)
Kölmel, Lena
Becker, Maximilian  
Karlsruhe Institute of Technology
Schwall, Finn
Simula
Hild, Jutta
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB  
Birnstill, Pascal  
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB  
Deml, Barbara
Beyerer, Jürgen  
Karlsruhe Institute of Technology
Journal
Computers in Human Behavior Reports
Open Access
File(s)
Download (2.08 MB)
Rights
CC BY 4.0: Creative Commons Attribution
DOI
10.1016/j.chbr.2026.101050
10.24406/publica-8173
Language
English
Institute
Fraunhofer-Institut für Optronik, Systemtechnik und Bildauswertung IOSB
Keyword(s)
  • Human-AI-Interaction
  • Explainable artificial intelligence
  • Interactive explanations
  • Counterfactuals
  • User study
