Title: Exploring the Impact of Large Language Models on Safety-Critical Processes
Authors: Zafar, Shanza Ali; Carella, Francesco; Mata, Núria
Date: 2025-07-02
Type: conference paper
Language: en
URI: https://publica.fraunhofer.de/handle/publica/489069
Keywords: large language model; LLM; safety-critical; safety-critical systems; safety assurance; tool qualification; system-theoretic process analysis

Abstract: This paper explores the integration of Large Language Models (LLMs) in safety-critical systems, focusing on risks and mitigation strategies. While LLMs offer potential benefits in information analysis and decision support, their nondeterministic nature requires careful human oversight. Using System-Theoretic Process Analysis (STPA), we identify key risks related to tool behavior and human-LLM interaction. We propose strategies such as validation practices and engineer training to manage these risks. Further empirical research is needed to assess the effectiveness of these approaches in real-world contexts, providing a foundation for the responsible use of LLMs in safety engineering.