Title: Applying Concept-Based Models for Enhanced Safety Argumentation
Authors: João Paulo Costa de Araujo, Balahari Balu, Eik Reichmann, Jessica Kelly, Stefan Kugele, Núria Mata, Lars Grunske
Type: conference paper
Year: 2024
Language: English
Rights: under copyright
Handle: https://publica.fraunhofer.de/handle/publica/479741
DOI: 10.1109/ISSRE62328.2024.00034; 10.24406/h-479741 (https://doi.org/10.24406/h-479741)
Keywords: concept bottleneck model; semantic gap; safety; safety assurance; interpretability

Abstract: We consider the use of concept bottleneck models (CBMs) to enhance safety argumentation for classification tasks in safety-critical systems. When constructing a safety argument for Machine Learning (ML) models, a semantic gap exists between the specified behaviour, given through class labels at training time, and the learnt behaviour, measured through performance metrics. We address this gap with CBMs, a class of interpretable ML models whose predictions rely on a set of human-defined concepts. We construct a Goal Structuring Notation (GSN)-based safety assurance case that incorporates these concepts, enabling traceability between the system specification and the behaviour of the model. The result is a line of safety argumentation grounded in an interpretable model trained to satisfy the specified safety requirements.
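For readers unfamiliar with the mechanism the abstract relies on, the sketch below illustrates the two-stage CBM structure: a concept predictor maps inputs to scores for human-defined concepts, and a label predictor maps only those concepts to class labels, which is what makes each prediction traceable to the specification-level concepts. This is a minimal sketch assuming a PyTorch implementation; the module names, layer sizes, concept/class counts, and the jointly weighted loss are illustrative assumptions, not the model or training setup used in the paper.

```python
# Minimal CBM sketch (illustrative assumptions throughout; not the
# paper's actual architecture or training procedure).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneckModel(nn.Module):
    def __init__(self, input_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # g: maps raw inputs to scores for human-defined concepts
        self.concept_predictor = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(), nn.Linear(128, n_concepts)
        )
        # f: maps predicted concepts (not raw features) to class labels,
        # so every class prediction is traceable to interpretable concepts
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_predictor(x)
        concepts = torch.sigmoid(concept_logits)  # each concept score in [0, 1]
        class_logits = self.label_predictor(concepts)
        return concept_logits, class_logits

# Joint training step: supervise the bottleneck with concept annotations
# and the head with class labels; lambda_c weights the concept loss.
model = ConceptBottleneckModel(input_dim=64, n_concepts=10, n_classes=3)
x = torch.randn(32, 64)                          # dummy input batch
c_true = torch.randint(0, 2, (32, 10)).float()   # concept annotations
y_true = torch.randint(0, 3, (32,))              # class labels
lambda_c = 1.0

concept_logits, class_logits = model(x)
loss = (lambda_c * F.binary_cross_entropy_with_logits(concept_logits, c_true)
        + F.cross_entropy(class_logits, y_true))
loss.backward()
```

Because the label predictor sees only the concept layer, an assessor can inspect (or even manually correct) the predicted concepts at inference time, which is the property the GSN-based assurance case exploits for traceability.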