Title: Towards Safety Assurance of Uncertainty-Aware Reinforcement Learning Agents
Authors: Schmoeller da Roza, Felippe; Hadwiger, Simon; Thorn, Ingo; Roscher, Karsten
Document type: Conference paper
Year: 2023
Date available: 2023-05-02
Language: English
License: CC BY 4.0
Handle: https://publica.fraunhofer.de/handle/publica/441238
DOI: 10.24406/publica-1290 (https://doi.org/10.24406/publica-1290)
Scopus ID: 2-s2.0-85159331048
Keywords: uncertainty estimation; distributional shift; reinforcement learning; RL; functional safety; safety; safety assurance

Abstract:
The necessity of demonstrating that Machine Learning (ML) systems can be safe escalates with the ever-increasing expectation of deploying such systems to solve real-world tasks. While recent advancements in Deep Learning have reignited the conviction that ML can perform at a human level of reasoning, the dimensionality and complexity added by Deep Neural Networks pose a challenge to the use of classical safety verification methods. While some progress has been made towards making verification and validation possible in the supervised learning landscape, work focusing on sequential decision-making tasks is still sparse. A particularly popular approach consists of building uncertainty-aware models that can identify situations where their predictions might be unreliable. In this paper, we provide evidence obtained in simulation to support that uncertainty estimation can also help to identify scenarios where Reinforcement Learning (RL) agents can cause accidents when facing obstacles semantically different from the ones experienced during learning, focusing on industrial-grade applications. We also discuss the aspects we consider necessary for building a safety assurance case for uncertainty-aware RL models.
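
Illustration: the record does not specify how uncertainty-aware RL models are implemented; the following Python sketch is only a generic example of the idea the abstract describes, using ensemble disagreement over Q-value estimates as an epistemic-uncertainty signal to flag observations (e.g., semantically unfamiliar obstacles) where the agent's prediction may be unreliable. All names here (QEnsemble, act_with_uncertainty, uncertainty_threshold) are hypothetical placeholders, not the paper's method.

# Hypothetical sketch of ensemble-based uncertainty estimation for a
# discrete-action RL agent. Class and parameter names are illustrative
# placeholders, not taken from the paper.
import torch
import torch.nn as nn


class QEnsemble(nn.Module):
    """Ensemble of independently initialized Q-networks.

    Disagreement between the members' Q-value estimates serves as an
    epistemic-uncertainty signal that tends to grow on observations
    semantically different from the training distribution.
    """

    def __init__(self, obs_dim: int, n_actions: int, n_members: int = 5):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )
            for _ in range(n_members)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Returns Q-values with shape (n_members, batch, n_actions).
        return torch.stack([member(obs) for member in self.members])


def act_with_uncertainty(ensemble: QEnsemble, obs: torch.Tensor,
                         uncertainty_threshold: float = 0.5):
    """Pick a greedy action and flag it if the ensemble disagrees too much.

    `uncertainty_threshold` is an assumed, task-specific value; in practice
    it would be calibrated on held-out in-distribution data.
    """
    with torch.no_grad():
        q_values = ensemble(obs)            # (n_members, batch, n_actions)
    mean_q = q_values.mean(dim=0)           # average over ensemble members
    action = mean_q.argmax(dim=-1)          # greedy action per observation
    # Standard deviation of the chosen action's Q-value across members.
    chosen_q = q_values.gather(
        -1, action.view(1, -1, 1).expand(q_values.shape[0], -1, 1)
    ).squeeze(-1)                           # (n_members, batch)
    uncertainty = chosen_q.std(dim=0)
    # A safety layer could switch to a fallback policy when this flag is set.
    flag_unreliable = uncertainty > uncertainty_threshold
    return action, uncertainty, flag_unreliable


if __name__ == "__main__":
    ensemble = QEnsemble(obs_dim=8, n_actions=4)
    obs = torch.randn(2, 8)                 # dummy observations
    action, unc, unreliable = act_with_uncertainty(ensemble, obs)
    print(action, unc, unreliable)

In such a setup, the uncertainty flag would not make the agent safe by itself; it would feed a monitoring or fallback mechanism, which is the kind of component a safety assurance case for uncertainty-aware RL models would need to argue over.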