2021
Conference Paper
Title
Interpreting Decision-Making Process for Multivariate Time Series Classification
Abstract
In recent years, several time series classification (TSC) algorithms have been proposed in both traditional machine learning and deep learning, showing remarkable improvements over previously published state-of-the-art methods. However, their decision-making processes generally remain black boxes to the user. Model-agnostic (post-hoc) explainers, such as the state-of-the-art SHAP, have been proposed to make the predictions of machine learning models explainable in the presence of well-designed domain mappings. In this paper, we first apply univariate classifiers individually to the dimensions of multivariate time series data, a straightforward technique for multivariate time series classification (MTSC). We then use the state-of-the-art timeXplain framework to interpret the decision-making process of the univariate classifiers on the multivariate time series data. With a careful choice of interpretability parameters, we demonstrate that explainability can be obtained for such time series data.
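The per-dimension strategy described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration only: it uses a toy nearest-centroid classifier per channel with a majority vote, not the paper's actual univariate classifiers, and it does not reproduce the SHAP/timeXplain explanation step. All names and the synthetic data are assumptions for demonstration.

```python
import numpy as np

def fit_centroids(X, y):
    # X: (n_samples, n_timesteps) univariate series for one channel; y: integer labels.
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict_channel(X, classes, centroids):
    # Nearest-centroid prediction by Euclidean distance on the raw series.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def predict_mtsc(X_multi, models):
    # X_multi: (n_samples, n_channels, n_timesteps).
    # One univariate classifier per channel, combined by majority vote.
    votes = np.stack([predict_channel(X_multi[:, ch, :], *m)
                      for ch, m in enumerate(models)])
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Toy 3-channel data: channel 0 is pure noise, channels 1 and 2 carry
# an upward trend for class 1 only.
rng = np.random.default_rng(0)
n, t = 40, 30
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 3, t))
X[y == 1, 1, :] += np.linspace(0, 3, t)
X[y == 1, 2, :] += np.linspace(0, 3, t)

models = [fit_centroids(X[:, ch, :], y) for ch in range(3)]
pred = predict_mtsc(X, models)
print((pred == y).mean())
```

Because two of the three channels are cleanly separable, the majority vote recovers the labels even though channel 0 contributes noise votes; a post-hoc explainer such as SHAP would then attribute each channel classifier's prediction to regions of its input series.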