2026
Conference Paper
Title
How Prompting Shapes Decisions: Analyzing LLM Behavior in XAI-Augmented Decision Support Systems
Abstract
Large Language Models (LLMs) are becoming increasingly prevalent in downstream tasks and user-facing applications. As explainable AI (XAI) strives to become more end-user-friendly, the use of LLMs in XAI is growing. However, the consequences of this trend are unclear: do LLMs really improve the user experience, or are there hidden problems that may limit their applicability? In this paper, we present the results of experiments on the decision-making process of LLMs, with the goal of evaluating their usefulness for such applications. By providing the LLM with different information and applying different metrics to evaluate its decisions, we present findings that should inform the applicability of LLMs. Analyzing nearly 300,000 prompts, we found, first, that the LLM’s decisions are only minimally influenced by XAI data and, second, that the LLM’s behavior can be changed significantly through prompting. This suggests that LLM behavior is more sensitive to presentation than to underlying model reliability, raising concerns about its role as a rational arbiter. If our results hold, we advise caution when deploying LLMs, especially in applications facing laypeople.