  • Publication
    Validation of XAI Explanations for Multivariate Time Series Classification in the Maritime Domain
    Due to the lack of insight into their internal mechanisms, state-of-the-art deep learning-based classifiers are often considered black-box models. In the maritime domain, for instance, models that classify ship types based on their trajectories and other features perform well but give no further explanation for their predictions. To gain the trust of the human operators responsible for critical decisions, the reasoning behind a classification is crucial. In this paper, we apply explainable artificial intelligence (XAI) approaches to the task of ship-type classification. This supports decision-making by providing explanations in terms of the features that contribute most to a prediction, along with their corresponding time intervals. For the LIME explainer, we adapt the time-slice mapping technique (LimeforTime), while for Shapley additive explanations (SHAP) and path integrated gradients (PIG) we represent the relevance of each input variable as a heatmap. To validate the XAI results, existing perturbation and sequence analyses for classifiers of univariate time series are employed to test and evaluate the explanations on multivariate time series. Furthermore, we introduce a novel evaluation technique to assess the quality of the explanations yielded by the chosen XAI methods.
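    The perturbation analysis mentioned in the abstract can be illustrated with a minimal sketch: mask the cells of the input series that an explainer marks as most relevant and measure how much the classifier's confidence drops. Everything below (function names, shapes, the mean-replacement strategy) is an assumption for illustration, not the paper's implementation.

    ```python
    # Minimal sketch of perturbation-based validation (assumed names and shapes,
    # not the authors' code): mask the most relevant cells, observe the drop.
    import numpy as np

    def perturbation_score(predict_fn, x, relevance, top_k=10):
        """predict_fn maps a (T, F) series to a class probability (float);
        relevance is an attribution map with the same shape as x.
        Returns the confidence drop after masking the top_k most relevant cells."""
        baseline = predict_fn(x)
        flat = np.argsort(relevance, axis=None)[::-1][:top_k]  # most relevant first
        idx = np.unravel_index(flat, x.shape)                  # (time, feature) indices
        x_pert = x.copy()
        x_pert[idx] = x.mean(axis=0)[idx[1]]                   # per-feature mean replacement
        return baseline - predict_fn(x_pert)                   # large drop => faithful map

    # Toy usage: random series, random attribution map, toy "classifier".
    rng = np.random.default_rng(0)
    x = rng.normal(size=(50, 4))                               # 50 time steps, 4 features
    rel = rng.random(x.shape)
    toy_predict = lambda s: 1.0 / (1.0 + np.exp(-s[:, 0].sum()))
    print(perturbation_score(toy_predict, x, rel))
    ```

    Replacing masked cells with the per-feature mean rather than zeros is one common design choice; under this scheme, a faithful attribution map should produce a clearly larger confidence drop than a random one.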
  • Publication
    Supported Decision-Making by Explainable Predictions of Ship Trajectories
    Machine learning and deep learning models make accurate predictions on the tasks they are trained for, for instance, classifying ship types based on trajectory and other features. This can support human experts who need information about ships, e.g., to combat illegal fishing. Beyond predicting a certain ship type, there is a need to explain the decision-making behind the classification, for example, which features contributed most to the predicted ship type. This paper applies existing explanation approaches to the task of ship classification. The underlying model is a Residual Neural Network trained on an AIS data set. Further, we illustrate the explainability approaches by means of an explanatory case study and conduct a first experiment with a human expert.
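    As a companion to the explanation approaches surveyed in both papers, the sketch below shows the generic integrated-gradients recipe applied to a stand-in trajectory classifier. The model, tensor shapes, and step count are assumptions for illustration; they do not reproduce the paper's Residual Neural Network or its AIS features.

    ```python
    # Hedged sketch of integrated gradients for a trajectory classifier
    # (generic recipe; model, shapes, and step count are assumptions).
    import torch

    def integrated_gradients(model, x, target, baseline=None, steps=50):
        """x: one trajectory of shape (T, F); returns attributions of the same shape."""
        if baseline is None:
            baseline = torch.zeros_like(x)               # all-zero reference input
        grads = torch.zeros_like(x)
        for alpha in torch.linspace(0.0, 1.0, steps):
            point = (baseline + alpha * (x - baseline)).requires_grad_(True)
            score = model(point.unsqueeze(0))[0, target]  # class score at this point
            score.backward()
            grads += point.grad
        return (x - baseline) * grads / steps             # Riemann-sum approximation

    # Toy usage: linear stand-in for the classifier, one 50-step, 4-feature track.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(50 * 4, 5))
    x = torch.randn(50, 4)
    attr = integrated_gradients(model, x, target=2)
    print(attr.shape)                                     # torch.Size([50, 4])
    ```

    Summing the absolute attributions over the feature axis yields a per-time-step relevance profile, the kind of heatmap an operator could inspect alongside the predicted ship type.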