November 8, 2024
Conference Paper
Title
Towards Linking Local and Global Explanations for AI Assessments with Concept Explanation Clusters
Abstract
Understanding the inner workings of artificial intelligence (AI) systems is important both in light of regulation (e.g., the EU AI Act) and to uncover hidden weaknesses. Although local and global explanation methods can support this, a scalable and human-centered approach is required that joins the detail of the former with the efficiency of the latter. We therefore present our method, concept explanation clusters, as a step towards explaining a model's (sub-)strategies through human-understandable concepts: clusters are identified in the input data while model predictions are accounted for via local explanations. In this way, all the benefits of local explanations are retained while allowing contextualization on a larger (i.e., data-global) scale.
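As a rough illustration of the general idea of grouping local explanations into a data-global view, the sketch below clusters per-sample SHAP attributions of a scikit-learn classifier with k-means. The dataset, model, and the SHAP/k-means choices are assumptions made here for demonstration; this is a minimal sketch, not the paper's concept explanation clusters pipeline.

```python
# Illustrative sketch only: dataset, model, SHAP attributions, and
# k-means are assumptions, not the paper's exact method.
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Local explanations: one feature-attribution vector per sample.
sv = shap.TreeExplainer(model).shap_values(X)
# shap returns a per-class list (older versions) or a 3D array (newer
# versions); either way, keep the attributions for the positive class.
attributions = sv[1] if isinstance(sv, list) else sv[..., 1]

# Data-global view: samples with similar local explanations are grouped,
# so each cluster approximates one (sub-)strategy of the model.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(attributions)

for c in range(5):
    member = attributions[labels == c]
    top = np.abs(member).mean(axis=0).argsort()[::-1][:3]
    print(f"cluster {c}: {len(member)} samples, dominant features {top.tolist()}")
```

Inspecting the dominant features per cluster is one way such clusters could be mapped to human-understandable concepts and then contextualized across the whole dataset.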