2024
Conference Paper
Title
Assessment of Lime’s Performance in Sentiment Analysis of Sentences with Varying Lengths
Abstract
Local Interpretable Model-agnostic Explanations (LIME) is a widely used technique in explainable Artificial Intelligence (XAI) for natural language processing (NLP) models, offering valuable insights into black-box AI systems. However, the effectiveness of LIME in providing accurate explanations for short sentences remains a challenge, which potentially limits its usefulness in many NLP applications. The objective of this study is to evaluate how sentence length affects the accuracy of LIME's explanations in social media sentiment analysis, a domain where short sentences are prevalent. Using a set of semantically similar sentences of varying lengths and a state-of-the-art sentiment model, the study assesses the performance of LIME. The results demonstrate that, despite being semantically similar, shorter sentences yield less reliable explanations because they provide limited input for LIME's perturbation-based sampler. This research draws attention to the limitations of LIME when dealing with short sentences and encourages further work to improve the interpretability of AI models designed for analyzing short texts. Ultimately, this study contributes to promoting transparency and trustworthiness in AI systems across different domains.
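The abstract describes evaluating LIME with a sentiment model on semantically similar sentences of different lengths. The sketch below is only an illustration of that kind of setup, not the paper's actual code: it assumes the `lime` package's `LimeTextExplainer` together with a public Hugging Face sentiment pipeline, and the example sentences, model choice, and sampling parameters are all assumptions.

```python
# Hypothetical sketch: inspecting LIME explanations for semantically similar
# sentences of increasing length. Model and sentences are illustrative only.
import numpy as np
from lime.lime_text import LimeTextExplainer
from transformers import pipeline

# Any binary sentiment classifier works; this public model is an assumption.
clf = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,
)

def predict_proba(texts):
    # Return class probabilities in a fixed [NEGATIVE, POSITIVE] order,
    # as LIME expects a (n_samples, n_classes) array from the classifier.
    results = clf(list(texts))
    order = ["NEGATIVE", "POSITIVE"]
    return np.array(
        [[next(d["score"] for d in r if d["label"] == lab) for lab in order]
         for r in results]
    )

explainer = LimeTextExplainer(class_names=["NEGATIVE", "POSITIVE"])

# Semantically similar sentences of increasing length (illustrative).
sentences = [
    "Great movie.",
    "This was a great movie overall.",
    "I thought this was a great movie overall, well worth watching again.",
]

for text in sentences:
    # Short sentences give LIME's sampler very few distinct perturbations,
    # which is the source of instability the abstract points to.
    exp = explainer.explain_instance(
        text, predict_proba, num_features=5, num_samples=500
    )
    print(f"{len(text.split()):>2} tokens:", exp.as_list())
```

Running such a comparison repeatedly and measuring how much the token weights vary between runs is one simple way to quantify the explanation instability on short inputs that the study reports.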
Author(s)
Conference