Publication

Potential-based Credit Assignment for Cooperative RL-based Testing of Autonomous Vehicles

2023, Ayvaz, Utku; Cheng, Chih-Hong; Hao, Shen

While autonomous vehicles (AVs) may perform remarkably well in generic real-life situations, their irrational behavior in some unforeseen situations raises critical safety concerns. This paper introduces the concept of collaborative reinforcement learning (RL) to generate challenging test cases for the AV planning and decision-making module. A critical challenge for collaborative RL is the credit assignment problem: properly distributing rewards among the multiple agents interacting in a traffic scenario, considering all parameters and timing, turns out to be non-trivial. To address this challenge, we propose a novel potential-based reward-shaping approach inspired by counterfactual analysis. Evaluation in a simulated environment demonstrates the superiority of our approach over methods using local and global rewards.
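To illustrate the underlying idea, here is a minimal Python sketch of classic potential-based reward shaping, where F(s, s') = gamma * Phi(s') - Phi(s) is added to the reward. This is not the paper's implementation; the potential values and the per-agent split are hypothetical stand-ins for its counterfactual-inspired design.

```python
def shaped_reward(reward, phi_s, phi_s_next, gamma=0.99):
    """Potential-based shaping: F(s, s') = gamma * Phi(s') - Phi(s).

    Adding F to the environment reward preserves the optimal policy
    (Ng et al., 1999) while providing a denser learning signal.
    """
    return reward + gamma * phi_s_next - phi_s

def per_agent_rewards(global_reward, potentials, next_potentials, gamma=0.99):
    """Hypothetical per-agent credit: each agent receives the global
    reward shaped by its own potential, e.g. a counterfactual-style
    estimate of that agent's contribution to challenging the AV."""
    return [shaped_reward(global_reward, p, p_next, gamma)
            for p, p_next in zip(potentials, next_potentials)]

# Example: three adversarial traffic agents sharing one global reward.
print(per_agent_rewards(1.0, [0.2, 0.5, 0.1], [0.4, 0.5, 0.0]))
```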

Publication

Formal Specification for Learning-Enabled Autonomous Systems

2022, Bensalem, Saddek; Cheng, Chih-Hong; Huang, Xiaowei; Katsaros, Panagiotis; Molin, Adam; Nickovic, Dejan; Peled, Doron

A formal specification provides a uniquely readable description of various aspects of a system, including its temporal behavior. This facilitates testing and, in some cases, automatic verification of the system against the given specification. We present a logic-based formalism for specifying learning-enabled autonomous systems, i.e., systems that include components based on neural networks. The formalism is based on first-order past-time temporal logic and uses predicates to denote events. We have successfully applied the formalism to two complex use cases.
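To convey the flavor of past-time temporal operators, the sketch below (an assumed illustration, not the paper's formalism, which is first-order) monitors propositional operators over a finite trace of event sets:

```python
# Minimal past-time temporal-logic monitor over a finite trace.
# once(p):        p held at some step j <= i.
# historically(p): p held at every step j <= i.
# since(p, q):    q held at some j <= i, and p has held ever since.

def once(trace, p):
    seen, out = False, []
    for events in trace:
        seen = seen or p(events)
        out.append(seen)
    return out

def historically(trace, p):
    always, out = True, []
    for events in trace:
        always = always and p(events)
        out.append(always)
    return out

def since(trace, p, q):
    holds, out = False, []
    for events in trace:
        # Recursive definition: (p S q)_i = q_i or (p_i and (p S q)_{i-1}).
        holds = q(events) or (holds and p(events))
        out.append(holds)
    return out

# Example: "a brake event has occurred at least once up to now".
trace = [{"accelerate"}, {"brake"}, {"cruise"}]
print(once(trace, lambda ev: "brake" in ev))  # [False, True, True]
```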

Publication

Are Transformers More Robust? Towards Exact Robustness Verification for Transformers

2023, Liao, Brian Hsuan-Cheng; Cheng, Chih-Hong; Esen, Hasan; Knoll, Alois

As an emerging type of Neural Network (NN), Transformers are used in many domains, ranging from Natural Language Processing to Autonomous Driving. In this paper, we study the robustness of Transformers, a key characteristic, as low robustness may cause safety concerns. Specifically, we focus on Sparsemax-based Transformers and reduce finding their maximum robustness to a Mixed Integer Quadratically Constrained Programming (MIQCP) problem. We also design two pre-processing heuristics that can be embedded in the MIQCP encoding and substantially accelerate its solving. We then conduct experiments using a Lane Departure Warning application to compare the robustness of Sparsemax-based Transformers with that of the more conventional Multi-Layer Perceptron (MLP) NNs. To our surprise, Transformers are not necessarily more robust, which calls for careful consideration when selecting NN architectures for safety-critical applications.
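For readers unfamiliar with Sparsemax, the sketch below computes the activation using the closed-form simplex projection of Martins and Astudillo (2016); it is an illustration of the activation only, not the paper's MIQCP encoding. Its piecewise-linear form, in contrast to softmax's smooth exponentials, is what makes a mixed-integer encoding of the attention layer possible.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: Euclidean projection of z onto the probability simplex.

    Unlike softmax, it can output exact zeros, and it is piecewise
    linear, which admits exact mixed-integer constraint encodings.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]          # sort scores in descending order
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, len(z) + 1)
    k = ks[1 + ks * z_sorted > cumsum][-1]  # size of the support set
    tau = (cumsum[k - 1] - 1.0) / k         # threshold for the projection
    return np.maximum(z - tau, 0.0)

print(sparsemax([1.0, 0.8, 0.1]))  # [0.6, 0.4, 0.0]: last entry exactly 0
```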

Publication

Selected Challenges in ML Safety for Railway

2022-09, Cheng, Chih-Hong

Neural networks (NNs) have been introduced in safety-critical applications ranging from autonomous driving to train inspection. I argue that, to close the demo-to-product gap, we need scientifically rooted engineering methods that can efficiently improve the quality of NNs. In particular, I consider a structural approach (via Goal Structuring Notation, GSN) to arguing the quality of neural networks with NN-specific dependability metrics. A systematic analysis covering the quality of data collection, training, testing, and operation reveals many unsolved research questions: (1) solving the denominator/edge-case problem with synthetic data, backed by quantifiable argumentation; (2) reaching the performance target by combining classical and data-based vision methods; and (3) setting detection thresholds (for OoD or any other kind) based on the risk appetite (the societally accepted risk).
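As a toy illustration of point (3), one could derive an out-of-distribution (OoD) detection threshold directly from an accepted miss rate. This is a hedged sketch under assumed inputs (detector scores on known OoD samples, higher meaning more OoD); the function and its parameters are hypothetical, not from the talk.

```python
import numpy as np

def threshold_for_risk(ood_scores, accepted_miss_rate=0.001):
    """Pick an OoD-detection threshold from a risk appetite.

    Hypothetical sketch: given detector scores on known OoD samples,
    choose the threshold such that at most `accepted_miss_rate` of OoD
    samples fall below it, i.e. the fraction of missed OoD inputs stays
    within the societally accepted risk.
    """
    scores = np.sort(np.asarray(ood_scores))
    idx = int(np.floor(accepted_miss_rate * len(scores)))
    return scores[idx]

# Example with synthetic scores: miss at most 0.1% of OoD inputs.
rng = np.random.default_rng(0)
tau = threshold_for_risk(rng.normal(5.0, 1.0, size=10_000), 0.001)
print(f"flag inputs with score >= {tau:.3f} as out-of-distribution")
```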