December 29, 2025
Journal Article
Title
Trust Me, I’m Transparent
Title Supplement
Describing AI Systems Using Global Explanations
Abstract
As AI systems enter critical domains and regulatory frameworks demand meaningful transparency, this study examines how structured transparency interfaces affect users' trust, perceived competence, and acceptance of various AI systems (such as machine learning or neural network systems) across different roles and risk contexts. A vignette-based online study (N = 335) employed a 2 × 2 × 2 mixed factorial design, with user role as a between-subjects factor, and time and system risk level as within-subjects factors. Data were analyzed using robust linear mixed models and path analyses. The transparency interface significantly enhanced perceived competence across all conditions. Trust increased exclusively among operators, while end users showed no change in trust. Acceptance effects were minimal. Path analysis revealed complete mediation through perceived competence for operators and partial mediation for end users. Our findings challenge one-size-fits-all transparency approaches, demonstrating a mediation effect of perceived competence and moderation effects of user role and risk level. The results inform the design of role-adaptive transparency systems that support effective human-AI collaboration.
Author(s)
Open Access
Rights
CC BY-NC-ND 4.0: Creative Commons Attribution-NonCommercial-NoDerivatives
Language
English