2024
Conference Paper
Title
Applying Concept-Based Models for Enhanced Safety Argumentation
Abstract
We consider the use of concept bottleneck models (CBMs) to enhance safety argumentation for classification tasks in safety-critical systems. When constructing a safety argument for Machine Learning (ML) models, there is a semantic gap between the specified behaviour, given through class labels at training time, and the learnt behaviour, measured through performance metrics. We address this gap by using CBMs, a class of interpretable ML models in which predictions rely on a set of human-defined concepts. A Goal Structuring Notation (GSN)-based safety assurance case is constructed that incorporates these concepts, enabling traceability between the system specification and the behaviour of the model. As a result, a line of safety argumentation is provided that relies on an interpretable model trained to satisfy the specified safety requirements.
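The architectural idea behind a CBM, as described in the abstract, can be sketched as a two-stage predictor: the input is first mapped to scores for human-defined concepts, and the class prediction is then computed from those concept scores alone. The sketch below is illustrative only; all names, dimensions, and the untrained linear layers are assumptions for exposition, not the model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: input features, human-defined concepts, output classes.
N_FEATURES, N_CONCEPTS, N_CLASSES = 8, 3, 2

# Concept predictor g: x -> c (one sigmoid unit per concept).
W_g = rng.normal(size=(N_FEATURES, N_CONCEPTS))

# Label predictor f: c -> y; it sees ONLY the concept scores,
# which is what makes the bottleneck interpretable.
W_f = rng.normal(size=(N_CONCEPTS, N_CLASSES))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Return (concept_scores, class_probs) for one input vector."""
    concepts = sigmoid(x @ W_g)           # interpretable bottleneck layer
    logits = concepts @ W_f               # label depends only on concepts
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over classes
    return concepts, probs

x = rng.normal(size=N_FEATURES)
concepts, probs = predict(x)
print(concepts.shape, probs.shape)
```

Because the label predictor consumes only the concept vector, each concept score can be inspected, and traced to a specification-level requirement, which is the property the GSN-based assurance case exploits.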
Author(s)
Rights
Use according to copyright law
Language
English