Fraunhofer-Gesellschaft
2025
Journal Article
Title

Don’t get me wrong: How to apply deep visual interpretations to time series

Abstract
The correct interpretation of convolutional models is a hard problem for time series data. While saliency methods promise visual validation of predictions for image and language processing, they fall short when applied to time series, which are less intuitive to read and highly diverse, as in the tool-use time series dataset. Furthermore, saliency methods often generate varied, conflicting explanations, which undermines their reliability. A rigorous objective assessment is therefore necessary to establish trust in them. This paper investigates saliency methods on time series data to formulate recommendations for interpreting convolutional models and applies these recommendations to the tool-use time series problem. To this end, we first employ nine gradient-, propagation-, or perturbation-based post-hoc saliency methods across six varied and complex real-world datasets. Next, we evaluate these methods using five independent metrics to derive our recommendations. Finally, we implement a case study on tool-use time series using convolutional classification models. Our results support our recommendations: none of the saliency methods consistently outperforms the others on all metrics, although some lead on individual metrics. Our insights and step-by-step guidelines allow experts to choose a suitable saliency method for a given model and dataset.
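To illustrate the class of methods the abstract studies, the following is a minimal NumPy sketch of one perturbation-based post-hoc saliency method (occlusion) applied to a toy 1D convolutional scorer. It is not the paper's implementation: the model, function names, and window size are hypothetical, and a real study would use trained convolutional classifiers as described above.

```python
import numpy as np

def conv1d_logit(x, w, b):
    """Toy 1D convolution + ReLU + global average pooling -> single logit.
    Stands in for a trained convolutional classifier (hypothetical model)."""
    k = len(w)
    feats = np.array([np.dot(x[i:i + k], w) + b for i in range(len(x) - k + 1)])
    return np.maximum(feats, 0.0).mean()

def occlusion_saliency(x, predict, window=5):
    """Perturbation-based saliency: score each time step by the average drop
    in the model output when a zeroed window covers that step."""
    base = predict(x)
    scores = np.zeros_like(x, dtype=float)
    counts = np.zeros_like(x, dtype=float)
    for start in range(len(x) - window + 1):
        x_occ = x.copy()
        x_occ[start:start + window] = 0.0      # occlude one window
        drop = base - predict(x_occ)           # output change attributable to it
        scores[start:start + window] += drop
        counts[start:start + window] += 1
    return scores / np.maximum(counts, 1)      # average over covering windows

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 6 * np.pi, 100))     # toy time series
w, b = rng.normal(size=7), 0.1                 # random (untrained) filter
sal = occlusion_saliency(x, lambda s: conv1d_logit(s, w, b))
```

The resulting `sal` array has one importance score per time step; gradient- and propagation-based methods assign such scores analytically instead of by repeated perturbation, which is why the paper compares all three families under common metrics.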
Author(s)
Löffler, Christoffer
Pontificia Universidad Católica de Valparaíso
Lai, Weicheng
Hasso-Plattner-Institut für Softwaresystemtechnik GmbH
Zanca, Dario
Friedrich-Alexander-Universität Erlangen-Nürnberg
Schmidt, Lukas
Fraunhofer-Institut für Integrierte Schaltungen IIS  
Eskofier, Bjoern M.
Friedrich-Alexander-Universität Erlangen-Nürnberg
Mutschler, Christopher  
Fraunhofer-Institut für Integrierte Schaltungen IIS  
Journal
Applied Intelligence
DOI
10.1007/s10489-025-06798-3
Language
English
Institute(s)
Fraunhofer-Institut für Integrierte Schaltungen IIS
Keyword(s)
  • Explainable artificial intelligence
  • Machine learning
  • Saliency methods
  • Time series data
  • Visual interpretation