Global Properties from Local Explanations with Concept Explanation Clusters
Conference paper
Authors: Haedecke, Elena Gina; Akila, Maram; Rüden, Laura von
Date: 2025-11-11 · Year: 2026
Handle: https://publica.fraunhofer.de/handle/publica/499093
DOI: 10.1007/978-3-032-08317-3_1
Scopus ID: 2-s2.0-105020239587
Language: en
Keywords: AI Assessment; Explainability; Human-Centered AI; Trustworthy AI

Abstract: The complexity of AI systems raises concerns about their trustworthiness. This strongly motivates effective AI assessments to appropriately evaluate and manage potential risks; yet this evaluation process is complicated by the black-box nature of these models. In particular, current explainable AI methods provide local and global insights into model behavior, but face limitations: local methods often lack context, leading to misinterpretation, while global methods oversimplify, sacrificing critical detail. To bridge this gap, we propose the Concept Explanation Clusters (CEC) method. Our methodology connects local explanations to a broader understanding of model behavior by identifying regional clusters of similar cases, where similarities are based on patterns of significant features and input data. This approach allows efficient recognition of such patterns or sub-concepts across the entire dataset. CEC thereby derives global explanations, in terms of human-understandable feature combinations, from the individual local explanations. In this paper, we present our methodology and experimental results, demonstrating the application of CEC to tabular and textual data. We show that CEC enables efficient identification of both frequent and rare decision patterns and thus supports a deeper understanding of model behavior.
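
The abstract describes deriving global explanations by clustering local explanations according to their patterns of significant features. The following is a minimal sketch of that general idea only, not the paper's actual CEC procedure: it assumes simple linear attributions as a stand-in for any local explanation method (e.g., LIME or SHAP) and k-means as a stand-in for the clustering step.

```python
# Sketch: cluster local explanations into "concept clusters" and summarize each
# cluster by its dominant feature combination. All modeling choices below are
# illustrative assumptions, not the method described in the paper.
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Fit a simple model on tabular data.
data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target
model = LogisticRegression(max_iter=1000).fit(X, y)

# Local explanations: per-sample feature attributions.
# For a linear model, contribution_ij = coef_j * x_ij (stand-in for SHAP/LIME).
attributions = X * model.coef_[0]  # shape: (n_samples, n_features)

# Keep only the "significant" features per sample: the top-k by |attribution|,
# encoded as a signed signature vector (+1 / -1 for direction, 0 otherwise).
k = 3
top_idx = np.argsort(-np.abs(attributions), axis=1)[:, :k]
signatures = np.zeros_like(attributions)
rows = np.arange(X.shape[0])[:, None]
signatures[rows, top_idx] = np.sign(attributions[rows, top_idx])

# Cluster samples by their significant-feature patterns.
n_clusters = 5
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(signatures)

# Summarize each cluster as a human-readable feature combination and its size.
for c in range(n_clusters):
    members = signatures[labels == c]
    counts = Counter()
    for row in members:
        for j in np.nonzero(row)[0]:
            direction = "+" if row[j] > 0 else "-"
            counts[f"{data.feature_names[j]} ({direction})"] += 1
    common = ", ".join(name for name, _ in counts.most_common(3))
    print(f"cluster {c}: {len(members)} samples, dominant features: {common}")
```

In such a sketch, small clusters (or signatures occurring only a few times) would correspond to the rare decision patterns mentioned in the abstract, while large clusters capture frequent ones; the paper itself should be consulted for how CEC actually constructs and interprets the clusters.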