Safety Assurance with Ensemble-based Uncertainty Estimation and overlapping alternative Predictions in Reinforcement Learning

Type: conference paper
Authors: Dirk Eilers; Simon Burton; Felippe Schmoeller da Roza; Karsten Roscher
Published: 2023 (record created 2023-05-02)
License: CC BY 4.0
Handle: https://publica.fraunhofer.de/handle/publica/441241
DOI: 10.24406/publica-1291 (https://doi.org/10.24406/publica-1291)
Scopus ID: 2-s2.0-85159259475
Language: en

Abstract
A number of challenges are associated with the use of machine learning technologies in safety-related applications. These include the difficulty of specifying adequately safe behaviour in complex environments (specification uncertainty), ensuring predictably safe behaviour under all operating conditions (technical uncertainty), and arguing that the safety goals of the system have been met with sufficient confidence (assurance uncertainty). An assurance argument is therefore required that demonstrates that the effects of these uncertainties do not lead to an unacceptable level of risk during operation. A reinforcement learning model will predict an action in whatever state it finds itself, even in previously unseen states for which a valid (safe) outcome cannot be determined due to a lack of training. Uncertainty estimation is a well-understood approach in machine learning for identifying states with a high probability of an invalid action caused by a lack of training experience, and thus addresses technical uncertainty. However, the impact on uncertainty estimation in reinforcement learning of alternative possible predictions that may be equally valid (and represent a safe state) is less clear and, to our knowledge, not well documented in the current literature. In this paper we build on previous work in which we investigated uncertainty estimation in simplified scenarios in a gridworld environment. Using model-ensemble-based uncertainty estimation, we proposed an algorithm based on the variance of action counts to deal with discrete action spaces, whilst considering an in-distribution action variance calculation to handle the overlap with alternative predictions. The method indicates potentially unsafe states when the agent is near out-of-distribution elements and can distinguish these from overlapping, but equally valid, alternative predictions. Here, we present these results within the context of a safety assurance framework and highlight the activities and evidence required to build a convincing safety argument. We show that our previous approach is able to act as an external observer and can fulfil the requirements of an assurance argumentation for systems based on machine learning with ontological uncertainty.

Keywords: reinforcement learning; RL; safe reinforcement learning; safe RL; safety; safety assurance; safety assurance argumentation; distributional shift; uncertainty; uncertainty estimation; ensemble-based uncertainty estimation; out-of-distribution; OOD; out-of-distribution detection
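To illustrate the kind of ensemble disagreement measure the abstract refers to, the following is a minimal sketch and not the authors' implementation: it assumes an ensemble of independently trained policies over a small discrete action space, tallies each member's greedy action for a given state, and uses the variance of those vote counts as a proxy for agreement. The function names, the threshold convention, and the example values are illustrative assumptions, and the paper's additional in-distribution handling of overlapping alternative predictions is not reproduced here.

# Illustrative sketch only (not the paper's algorithm): ensemble disagreement
# for a discrete action space via the variance of per-action vote counts.
import numpy as np

def action_count_variance(member_actions, n_actions):
    """Variance of the per-action vote counts across the ensemble.

    member_actions: integer action chosen by each ensemble member for one state.
    n_actions: size of the discrete action space.
    If all members agree, the counts are maximally concentrated and the
    variance is high; if the votes are spread over many actions, it drops.
    """
    counts = np.bincount(np.asarray(member_actions), minlength=n_actions)
    return counts.var()

def flag_potentially_unsafe(member_actions, n_actions, threshold):
    """Assumed convention: low count variance means high ensemble disagreement,
    so the state is flagged as potentially out-of-distribution (unsafe)."""
    return action_count_variance(member_actions, n_actions) < threshold

# Example: five ensemble members vote over four actions.
votes_agree = [2, 2, 2, 2, 2]    # full agreement -> high count variance
votes_spread = [0, 1, 2, 3, 0]   # spread votes   -> low count variance
print(action_count_variance(votes_agree, n_actions=4))   # 4.6875
print(action_count_variance(votes_spread, n_actions=4))  # 0.1875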