Authors: Camilli, M.; Felderer, M.; Giusti, A.; Matt, D.T.; Perini, A.; Russo, B.; Susi, A.
Date issued: 2021 (available online: 2022-03-15)
URI: https://publica.fraunhofer.de/handle/publica/413133
DOI: 10.1109/WAIN52551.2021.00014
Title: Towards risk modeling for collaborative AI
Type: conference paper
Language: en

Abstract: Collaborative AI systems aim at working together with humans in a shared space to achieve a common goal. This setting imposes potentially hazardous circumstances due to contacts that could harm human beings. Thus, building such systems with strong assurances of compliance with requirements, domain-specific standards, and regulations is of the greatest importance. The challenges associated with achieving this goal become even more severe when such systems rely on machine learning components rather than top-down, rule-based AI. In this paper, we introduce a risk modeling approach tailored to collaborative AI systems. The risk model includes goals, risk events, and domain-specific indicators that potentially expose humans to hazards. The risk model is then leveraged to drive assurance methods that, in turn, feed the risk model through insights extracted from run-time evidence. Our envisioned approach is described by means of a running example in the domain of Industry 4.0, where a robotic arm endowed with a visual perception component, implemented with machine learning, collaborates with a human operator on a production-relevant task.