Title: Effect of superpixel aggregation on explanations in LIME - A case study with biological data
Authors: Schallner, L.; Rabold, J.; Scholz, O.; Schmid, U.
Type: conference paper
Date issued: 2020
Record added: 2022-03-14
Handle: https://publica.fraunhofer.de/handle/publica/408585
DOI: 10.1007/978-3-030-43823-4_13
Language: en

Abstract: End-to-end learning with deep neural networks, such as convolutional neural networks (CNNs), has proven very successful for a range of image classification tasks. Different solutions have been proposed to make the decisions of such black-box approaches transparent. LIME is an approach to explainable AI that relies on segmenting images into superpixels, by default using the Quick-Shift algorithm. In this paper, we present an exploratory study of how different superpixel methods, namely Felzenszwalb, SLIC, and Compact-Watershed, affect the generated visual explanations. We compare the resulting relevance areas with the image parts marked by a human reference. The results show that the image parts selected as relevant vary strongly depending on the applied method: Quick-Shift yielded the lowest and Compact-Watershed the highest correspondence with the reference relevance areas.
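The kind of comparison the abstract describes can be reproduced in outline with standard tooling: the lime package's LimeImageExplainer accepts a custom segmentation_fn, and scikit-image ships implementations of Felzenszwalb, SLIC, Quick-Shift, and compact watershed. The following is a minimal sketch, not the authors' code; all parameter values and the placeholder classifier_fn are assumptions for illustration.

```python
# Minimal sketch: swapping LIME's default Quick-Shift segmentation
# for alternative superpixel methods. Parameter values and the
# placeholder classifier are illustrative, not those from the paper.
import numpy as np
from skimage import data
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import felzenszwalb, slic, quickshift, watershed
from lime import lime_image

def felzenszwalb_fn(image):
    return felzenszwalb(image, scale=100, sigma=0.5, min_size=50)

def slic_fn(image):
    return slic(image, n_segments=100, compactness=10)

def quickshift_fn(image):
    # Roughly LIME's built-in defaults for its Quick-Shift segmenter.
    return quickshift(image, kernel_size=4, max_dist=200, ratio=0.2)

def compact_watershed_fn(image):
    # Compact watershed = watershed on the gradient image with compactness > 0.
    gradient = sobel(rgb2gray(image))
    return watershed(gradient, markers=100, compactness=0.001)

def classifier_fn(images):
    # Placeholder: a real CNN would return class probabilities here.
    rng = np.random.default_rng(0)
    return rng.dirichlet(np.ones(2), size=len(images))

image = data.astronaut()
explainer = lime_image.LimeImageExplainer()
for name, seg_fn in [("felzenszwalb", felzenszwalb_fn),
                     ("slic", slic_fn),
                     ("quickshift", quickshift_fn),
                     ("compact_watershed", compact_watershed_fn)]:
    explanation = explainer.explain_instance(
        image, classifier_fn, top_labels=1,
        num_samples=100, segmentation_fn=seg_fn)
    # Mask of the superpixels LIME marks as positively relevant.
    mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5)[1]
    print(name, "relevant area:", int(mask.sum()), "pixels")
```

Because LIME perturbs the image per superpixel, the choice of segmentation directly determines the shape and granularity of the resulting relevance areas, which is the effect the study quantifies against a human reference.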