2024
Bachelor Thesis
Title
Visualization of SHAP Values and Error Reconstructions for Autoencoder Models
Abstract
The goal of this thesis is to enhance interpretability in machine learning, focusing on autoencoders used in anomaly detection. Autoencoders are a powerful tool for identifying anomalies in an unsupervised manner, but they often lack transparency in their decision-making process. To address this, a visualization tool was created that combines SHapley Additive exPlanations (SHAP) values with the reconstruction errors of the autoencoder. SHAP values quantify each feature's contribution to the model's output, promising greater interpretability and understanding of the decision-making process, while the reconstruction errors measure the autoencoder's ability to reconstruct a feature. Through a detailed literature review, the current state of the art in anomaly detection with autoencoders and SHAP values is presented, particularly regarding interpretability. A conceptual framework for visualizing SHAP values and reconstruction errors is developed, aiming to improve the comprehensibility of the model's output. The implementation of this visualization tool is then explored, highlighting the technical challenges and their solutions. The effectiveness of the tool is evaluated in a user study, which shows that it has the potential to make the process of detecting anomalies more transparent. In conclusion, this work improves the interpretability of machine learning models in anomaly detection and provides insights into the decision-making processes of autoencoders. It suggests further research to integrate this visualization approach into other models. This work contributes to making complex machine learning models more accessible and interpretable, facilitating their wider application and understanding.
Thesis Note
Darmstadt, TU, Bachelor Thesis, 2024
Language
English