July 10, 2024
Conference Paper
Title

Quantitative Evaluation of xAI Methods for Multivariate Time Series - A Case Study for a CNN-Based MI Detection Model

Abstract
This paper presents an evaluation framework for xAI methods tailored to multivariate time series data. The framework comprises three evaluation approaches: a stability analysis, a consistency analysis, and a truthfulness analysis. The stability analysis investigates how consistent the explanations of a single xAI method are for similar inputs. The consistency analysis assesses the similarity of explanations generated by different xAI methods. The truthfulness analysis examines whether the explanations provided by an xAI method are meaningful. We demonstrate the application of these evaluation techniques in a medical use case involving electrocardiogram (ECG) data. Specifically, we evaluate the explanations of two popular xAI methods, LRP and SHAP, for a convolutional neural network (CNN) that detects myocardial infarctions (MI). We show that LRP and SHAP both provide meaningful explanations for this model, with SHAP being slightly more truthful. Our stability analysis, on the other hand, reveals that LRP is more stable than SHAP in the investigated use case. Finally, the consistency analysis shows that LRP and SHAP partly disagree about which ECG leads and time intervals are most relevant for the MI detection model's classification.
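To make the stability and consistency analyses more concrete, below is a minimal sketch of how such scores could be computed for attribution-based explanations like LRP or SHAP. All names (stability_score, consistency_score, explain_fn, the noise level, and the dummy ECG-shaped input) are illustrative assumptions and not taken from the paper's implementation.

```python
# Illustrative sketch only: assumed metric definitions, not the authors' code.
import numpy as np

def stability_score(explain_fn, x, noise_std=0.01, n_perturbations=20, seed=0):
    """Mean cosine similarity between the explanation of x and explanations of
    slightly perturbed copies of x; higher values indicate a more stable method."""
    rng = np.random.default_rng(seed)
    ref = explain_fn(x).ravel()
    sims = []
    for _ in range(n_perturbations):
        x_pert = x + rng.normal(0.0, noise_std, size=x.shape)
        expl = explain_fn(x_pert).ravel()
        denom = np.linalg.norm(ref) * np.linalg.norm(expl)
        sims.append(float(ref @ expl / denom) if denom > 0 else 0.0)
    return float(np.mean(sims))

def consistency_score(explain_fn_a, explain_fn_b, x):
    """Cosine similarity between attributions of two different xAI methods
    (e.g. LRP vs. SHAP) for the same multivariate time series input."""
    a, b = explain_fn_a(x).ravel(), explain_fn_b(x).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

if __name__ == "__main__":
    # Toy usage: a 12-lead ECG-shaped array and a dummy attribution function
    # standing in for a real LRP or SHAP explainer of the CNN.
    x = np.random.default_rng(1).normal(size=(12, 1000))  # leads x time steps
    dummy_explain = lambda inp: inp ** 2
    print("stability:", stability_score(dummy_explain, x))
    print("consistency:", consistency_score(dummy_explain, dummy_explain, x))
```

A truthfulness analysis would additionally require the MI detection model itself, for example by perturbing the segments with the highest attributions and measuring the resulting change in the model's output; that step is omitted in this sketch.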
Author(s)
Knof, Helene
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Boerger, Michell  
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Tcholtchev, Nikolay Vassilev
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Mainwork
Explainable Artificial Intelligence. Second World Conference, xAI 2024. Proceedings. Pt. IV  
Project(s)
Security and Privacy Accountable Technology Innovations, Algorithms, and machine Learning  
Funder
European Commission  
Conference
World Conference on eXplainable Artificial Intelligence 2024  
DOI
10.1007/978-3-031-63803-9_9
Language
English
Institute
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS