2024
Conference Paper
Title
Uncertainty-Aware Evaluation of Quantitative ML Safety Requirements
Abstract
Various quantitative methods and associated metrics for evaluating safety-related properties of ML functions have been proposed. However, it is often not clear how these metrics relate to safety requirements, how suitable target values can be selected to demonstrate that the safety requirements are met, and with what confidence the results can be used to reason about safety. This paper presents an uncertainty-aware method for using quantitative evidence to evaluate safety requirements of an ML-based function. To achieve this, we make use of Subjective Logic to describe opinions related to properties of the ML function and its associated evidence. We then show how combining these opinions allows us to reason about our confidence in the statements we make based on this evidence. The approach is illustrated with a practical example and leads to some general observations related to the confidence that can be achieved in safety arguments for ML-based systems based on such evidence.
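To give a flavour of the Subjective Logic machinery the abstract refers to, the following is a minimal, illustrative sketch (not the paper's actual method): a binomial opinion is a tuple of belief, disbelief, and uncertainty masses that sum to one, plus a base rate, and two independent opinions about the same statement can be combined with Jøsang's cumulative fusion operator. The opinion values used in the example are invented for illustration; the sketch assumes both opinions share the same base rate.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Binomial Subjective Logic opinion: b + d + u == 1, base rate a."""
    b: float  # belief mass
    d: float  # disbelief mass
    u: float  # uncertainty mass
    a: float  # base rate (prior probability)

    def projected_probability(self) -> float:
        # Expected probability: belief plus the base-rate share of uncertainty.
        return self.b + self.a * self.u

def cumulative_fusion(A: Opinion, B: Opinion) -> Opinion:
    """Jøsang's cumulative fusion of two independent opinions.

    Assumes the non-degenerate case (not both opinions fully certain)
    and, for simplicity, identical base rates.
    """
    assert abs(A.a - B.a) < 1e-12, "sketch assumes equal base rates"
    denom = A.u + B.u - A.u * B.u
    b = (A.b * B.u + B.b * A.u) / denom
    d = (A.d * B.u + B.d * A.u) / denom
    u = (A.u * B.u) / denom
    return Opinion(b, d, u, A.a)

# Two hypothetical pieces of evidence about the same safety claim:
ev1 = Opinion(b=0.7, d=0.1, u=0.2, a=0.5)
ev2 = Opinion(b=0.6, d=0.2, u=0.2, a=0.5)
fused = cumulative_fusion(ev1, ev2)
# Fusing independent evidence reduces the remaining uncertainty mass.
```

Fusing the two opinions yields a combined opinion whose uncertainty mass is lower than that of either input, capturing the intuition that accumulating independent evidence increases the confidence with which a safety statement can be made.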
Conference