2026
Conference Paper
Title
Safe Adversarial Control Through Interaction
Abstract
A significant challenge for safe robot navigation is managing the interaction with humans. Especially in highly co-located environments, preventing collisions in constrained spaces while mitigating path deviations and velocity reductions remains difficult. By coordinating with humans and communicating its intent early, the robot can increase its room to maneuver through bottlenecks. Human-robot interaction relies heavily on machine learning (ML) to interface with humans. Because ML functions are difficult to certify, previous work on resilience architectures has delineated safety and utility concerns into separate subsystems, thereby removing utility-specific subsystems from the safety-critical path. However, conventional Safety Envelopes overconstrain these subsystems, substantially reducing the flexibility and performance gains realized by ML functions. To this end, and expanding on our previous work, we propose an architecture in which a utility-specific subsystem learns via reinforcement learning to shape the interaction with humans so as to actively evade interventions of the safety system, using the safety system's feedback as a learning signal. By proactively shaping interactions through early coordination with humans, the time scale and the constrained state space in which the utility-driven subsystems operate are effectively extended. To evaluate the proposed architecture's potential, a preliminary simulation experiment is conducted.
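As an illustrative sketch only (not the paper's implementation), the core idea of learning to evade safety-system interventions from its feedback can be shown in a minimal tabular Q-learning toy: a gridworld agent receives an explicit reward penalty whenever a simplified Safety Envelope vetoes its action, and over training learns a path that avoids triggering the envelope at all. All names, rewards, and hyperparameters here are hypothetical.

```python
import random

random.seed(0)
SIZE, GOAL, OBSTACLE = 5, (4, 4), (2, 2)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def safety_envelope(state, action):
    """Veto any move that would enter the obstacle or leave the grid."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if nxt == OBSTACLE or not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
        return state, True   # intervention: agent is held in place
    return nxt, False

# Tabular Q-values for every (state, action) pair.
Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in range(4)}

def step(state, a):
    nxt, intervened = safety_envelope(state, ACTIONS[a])
    # Reward: goal bonus, small step cost, and an explicit penalty when
    # the safety system intervenes -- the feedback the agent learns to evade.
    reward = 10.0 if nxt == GOAL else -0.1
    if intervened:
        reward -= 1.0
    return nxt, reward, nxt == GOAL

for episode in range(3000):
    state, eps = (0, 0), max(0.05, 1.0 - episode / 2000)  # decaying exploration
    for _ in range(50):
        if random.random() < eps:
            a = random.randrange(4)
        else:
            a = max(range(4), key=lambda x: Q[(state, x)])
        nxt, r, done = step(state, a)
        best_next = max(Q[(nxt, x)] for x in range(4))
        Q[(state, a)] += 0.2 * (r + 0.95 * best_next - Q[(state, a)])
        state = nxt
        if done:
            break

# Greedy rollout: count safety interventions along the learned path.
state, interventions = (0, 0), 0
for _ in range(20):
    a = max(range(4), key=lambda x: Q[(state, x)])
    state, hit = safety_envelope(state, ACTIONS[a])
    interventions += hit
    if state == GOAL:
        break
```

After training, the greedy policy reaches the goal without triggering the envelope, illustrating how intervention feedback alone can shape behavior away from the safety boundary.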