Title: Feature Relevance Evaluation using Grad-CAM, LIME and SHAP for Deep Learning SAR Data Classification
Authors: Panati, Chandana; Wagner, Simon; Brüggenwirth, Stefan
Type: Conference paper
Year: 2022
Record available: 2023-07-11
Handle: https://publica.fraunhofer.de/handle/publica/445407
DOI: 10.23919/IRS54158.2022.9904989
Scopus ID: 2-s2.0-85140452309
Language: English

Abstract: Deep Neural Networks (DNNs) are investigated and visualized for predictive analysis and automatic classification. The DNNs used for Automatic Target Recognition (ATR) have built-in feature extraction and classification abilities, but as the networks grow deeper and more complex, their inner workings become more opaque, rendering them black boxes. The main goal of this paper is to gain insight into what the network perceives in order to classify Moving and Stationary Target Acquisition and Recognition (MSTAR) targets. Past works have shown that classification of targets was performed solely on the basis of clutter within the MSTAR data. Here we show that a DNN trained on the MSTAR dataset classifies based only on target information, and that the clutter plays no role. To demonstrate this, heatmaps are generated with the Gradient-weighted Class Activation Mapping (Grad-CAM) method to highlight the areas of attention in each input Synthetic Aperture Radar (SAR) image. To probe the interpretability of the classifiers further, reliable post hoc explanation techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) are used to approximate the behaviour of a black box by extracting relationships between feature values and predictions.

Keywords: Automatic Target Recognition; Convolutional Neural Network; Deep Neural Networks; Gradient-weighted Class Activation Mapping; Local Interpretable Model-Agnostic Explanations; Moving and Stationary Target Acquisition and Recognition; SHapley Additive exPlanations; Synthetic Aperture Radar
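The Grad-CAM heatmap generation described in the abstract can be sketched generically: feature maps of the last convolutional layer are weighted by the gradient of the target-class score and passed through a ReLU. This is a minimal illustrative sketch in PyTorch, not the paper's implementation; the `TinyCNN` stand-in model and the random input "SAR chip" are assumptions for demonstration only.

```python
# Minimal Grad-CAM sketch (illustrative only; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Stand-in classifier; the paper's actual DNNs are not reproduced here."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)            # last conv-layer feature maps A^k
        pooled = fmap.mean(dim=(2, 3))     # global average pooling
        return self.head(pooled), fmap

def grad_cam(model, image, target_class):
    """Weight feature maps by the gradient of the class score (Grad-CAM)."""
    model.eval()
    logits, fmap = model(image)
    fmap.retain_grad()                     # keep grad on a non-leaf tensor
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # alpha_k weights
    cam = F.relu((weights * fmap).sum(dim=1))           # ReLU(sum_k a_k A^k)
    cam = cam / (cam.max() + 1e-8)                      # normalise to [0, 1]
    return cam.detach()

torch.manual_seed(0)
img = torch.randn(1, 1, 32, 32)            # stand-in for one SAR image chip
heat = grad_cam(TinyCNN(), img, target_class=3)
print(heat.shape)                          # one heatmap per input image
```

In the paper's setting, the resulting heatmap would be upsampled to the SAR image size and overlaid on it, so that high-intensity regions indicate whether attention falls on the target or on surrounding clutter.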