Year
2023
Document Type
Conference Paper
Title
Post-hoc Saliency Methods Fail to Capture Latent Feature Importance in Time Series Data
Abstract
Saliency methods provide visual explainability for deep image-processing models by highlighting informative regions in the input images based on feature-wise (pixel) importance scores. These methods have been adapted to the time series domain, aiming to highlight important temporal regions in a sequence. This paper identifies, for the first time, the systematic failure of such methods in the time series domain when the underlying patterns (e.g., dominant frequency or trend) are based on latent information rather than temporal regions. The postulation of latent feature importance is highly relevant to the medical domain, as many medical signals, such as EEG signals or sensor data for gait analysis, are commonly assumed to be related to the frequency domain. To the best of our knowledge, no existing post-hoc explainability method can highlight influential latent information for a classification problem. Hence, in this paper, we frame and analyze the problem of latent feature saliency detection. We assess the explainability quality of multiple state-of-the-art saliency methods (Integrated Gradients, DeepLift, Kernel SHAP, LIME) on top of various classification methods (LSTM, CNN, and LSTM and CNN trained via saliency-guided training) using simulated time series data with underlying temporal or latent-space patterns. We conclude that Integrated Gradients and DeepLift, if redesigned, could be potential candidates for latent saliency scores.
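
The following is a minimal, illustrative sketch, not the authors' code: it simulates two-class time series whose label depends on a latent property (the dominant frequency), trains a small 1D-CNN classifier in PyTorch, and applies Integrated Gradients from Captum, one of the post-hoc saliency methods named in the abstract. The model architecture, signal parameters, and training settings are assumptions chosen only to illustrate the setup.

```python
# Minimal sketch (assumptions throughout): latent-feature classification task
# plus a post-hoc saliency method, loosely mirroring the experiment described
# in the abstract. Not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

torch.manual_seed(0)
np.random.seed(0)

def simulate(n=512, length=128):
    """Two-class data whose label depends on the dominant frequency,
    a latent property spread over the whole sequence rather than a local region."""
    t = np.linspace(0, 1, length)
    freqs = np.where(np.random.rand(n) < 0.5, 3.0, 7.0)   # latent feature
    X = np.sin(2 * np.pi * freqs[:, None] * t) + 0.1 * np.random.randn(n, length)
    y = (freqs == 7.0).astype(np.int64)
    return torch.tensor(X, dtype=torch.float32).unsqueeze(1), torch.tensor(y)

class SmallCNN(nn.Module):
    """Tiny 1D-CNN classifier (architecture is an assumption for illustration)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2),
        )
    def forward(self, x):
        return self.net(x)

X, y = simulate()
model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(50):                      # brief full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Post-hoc saliency: attribute the predicted class back to individual time steps.
ig = IntegratedGradients(model)
sample = X[:1].clone().requires_grad_(True)
attr = ig.attribute(sample, target=int(y[0]))
print("per-time-step attribution shape:", attr.shape)  # (1, 1, 128)
# When the label is driven by frequency, these per-time-step scores have no
# single informative temporal region to highlight -- the failure mode the paper studies.
```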
Author(s)
Schröder, Maresa
Fraunhofer-Institut für Kognitive Systeme IKS  
Zamanian, Alireza
Fraunhofer-Institut für Kognitive Systeme IKS  
Ahmidi, Narges
Fraunhofer-Institut für Kognitive Systeme IKS  
Mainwork
Trustworthy Machine Learning for Healthcare. First International Workshop, TML4H 2023. Proceedings  
Project(s)
IKS-Ausbauprojekt  
Funder
Bayerisches Staatsministerium für Wirtschaft, Landesentwicklung und Energie  
Conference
International Workshop on Trustworthy Machine Learning for Healthcare 2023  
DOI
10.1007/978-3-031-39539-0_10
Language
English
Institute(s)
Fraunhofer-Institut für Kognitive Systeme IKS
Keyword(s)
  • explainability
  • XAI
  • time series classification
  • saliency methods
  • latent feature importance
  • deep learning