Publication

Safeguarding Learning-based Control for Smart Energy Systems with Sampling Specifications

2023, Cheng, Chih-Hong; Gupta, Pragya Kirti; Venkataramanan, Venkatesh Prasad; Hsu, Yun-Fei; Burton, Simon

We study the challenges of using reinforcement learning to control energy systems, where, apart from performance requirements, one has additional safety requirements such as avoiding blackouts. We detail how these safety requirements, stated in real-time temporal logic, can be strengthened via discretization into linear temporal logic (LTL), such that satisfaction of the LTL formulae implies satisfaction of the original safety requirements. The discretization enables advanced engineering methods such as synthesizing shields for safe reinforcement learning as well as formal verification, where, for statistical model checking, the probabilistic guarantee acquired by LTL model checking forms a lower bound on the satisfaction of the original real-time safety requirements.
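
As a rough illustration of the discretization idea (the predicate names, the bound tau, and the sampling period Delta below are assumptions, not the paper's formulae), a bounded-response requirement in real-time temporal logic can be strengthened into plain LTL over sampled steps:

```latex
% Illustrative only: predicate names and the bound tau are assumptions.
% Real-time requirement: whenever the supply-demand imbalance exceeds its
% threshold, it must return below the threshold within tau seconds.
\varphi_{\mathrm{rt}} \;=\; \Box\bigl(\mathit{imbalance} \rightarrow \Diamond_{[0,\tau]}\,\neg\mathit{imbalance}\bigr)

% Strengthened LTL version over traces sampled with period Delta, where
% \bigcirc^{i} denotes i nested next operators and k = floor(tau / Delta);
% recovery within k steps happens within k * Delta <= tau seconds, so
% satisfying the LTL formula implies satisfying the real-time one on the
% sampled trace.
\varphi_{\mathrm{LTL}} \;=\; \Box\Bigl(\mathit{imbalance} \rightarrow \bigvee_{i=0}^{k} \bigcirc^{i}\,\neg\mathit{imbalance}\Bigr)
```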

Publication

Butterfly Effect Attack: Tiny and Seemingly Unrelated Perturbations for Object Detection

2023, Doan, Nguyen Anh Vu; Yüksel, Arda; Cheng, Chih-Hong

This work aims to explore and identify tiny and seemingly unrelated perturbations of images in object detection that lead to performance degradation. While tininess can naturally be defined using Lp norms, we characterize the degree of "unrelatedness" of a perturbation to an object by the pixel distance between the perturbation and that object. Triggering prediction errors while satisfying these two objectives can be formulated as a multi-objective optimization problem, for which we use genetic algorithms to guide the search. The results demonstrate that (invisible) perturbations on the right part of an image can drastically change the outcome of object detection on the left. An extensive evaluation reaffirms our conjecture that transformer-based object detection networks are more susceptible to butterfly effects than single-stage object detection networks such as YOLOv5.
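
A minimal sketch of the two objectives, assuming images as NumPy arrays and axis-aligned boxes (x1, y1, x2, y2); the detector interface and the search loop itself are omitted and are not the paper's code:

```python
# Minimal sketch of the two attack objectives; the search (a multi-objective
# genetic algorithm over perturbations that flip the detector's output) is
# omitted. Names, shapes, and the box convention are assumptions.
import numpy as np

def tininess(perturbation: np.ndarray, p: float = np.inf) -> float:
    """Objective 1: Lp norm of the perturbation (smaller = less visible)."""
    return float(np.linalg.norm(perturbation.ravel(), ord=p))

def unrelatedness(perturbation: np.ndarray, obj_bbox: tuple) -> float:
    """Objective 2: pixel distance from the perturbed region to the attacked
    object's box (x1, y1, x2, y2); larger = more 'seemingly unrelated'."""
    ys, xs = np.nonzero(np.abs(perturbation).sum(axis=-1))  # perturbed pixels
    if len(xs) == 0:
        return 0.0
    x1, y1, x2, y2 = obj_bbox
    dx = np.maximum(0, np.maximum(x1 - xs, xs - x2))  # horizontal gap to box
    dy = np.maximum(0, np.maximum(y1 - ys, ys - y2))  # vertical gap to box
    return float(np.min(np.hypot(dx, dy)))  # closest perturbed pixel to box
```

The genetic search would then minimize tininess and maximize unrelatedness, restricted to candidate perturbations that actually change the detection outcome.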

Publication

Logically Sound Arguments for the Effectiveness of ML Safety Measures

2022, Cheng, Chih-Hong; Schuster, Tobias; Burton, Simon

We investigate the issue of achieving sufficient rigor in arguments for the safety of machine learning functions. By considering the known weaknesses of DNN-based 2D bounding-box detection algorithms, we sharpen the metric of imprecise pedestrian localization by associating it with the safety goal. This sharpening leads to the introduction of a conservative post-processor after the standard non-maximum suppression as a counter-measure. We then propose a semi-formal assurance case for arguing the effectiveness of the post-processor, which is further translated into formal proof obligations for demonstrating the soundness of the arguments. Applying theorem proving not only uncovers the need to introduce missing claims and mathematical concepts but also reveals limitations of the Dempster-Shafer rules used in semi-formal argumentation.
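
A minimal sketch of such a conservative post-processor; the enlargement factor below is a placeholder, not the value derived in the paper:

```python
# Sketch of a conservative post-processor applied after standard
# non-maximum suppression; the factor 1.1 is a placeholder, not the
# enlargement derived in the paper.
def conservative_postprocess(boxes, factor=1.1):
    """Enlarge each (x1, y1, x2, y2) box around its center so that an
    imprecisely localized pedestrian is still covered by the output box."""
    enlarged = []
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        w, h = (x2 - x1) * factor, (y2 - y1) * factor
        enlarged.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return enlarged
```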

Publication

Potential-based Credit Assignment for Cooperative RL-based Testing of Autonomous Vehicles

2023, Ayvaz, Utku; Cheng, Chih-Hong; Shen, Hao

While autonomous vehicles (AVs) may perform remarkably well in generic real-life cases, their irrational actions in unforeseen cases lead to critical safety concerns. This paper introduces the concept of collaborative reinforcement learning (RL) to generate challenging test cases for the AV planning and decision-making module. One of the critical challenges for collaborative RL is the credit-assignment problem: properly assigning rewards to the multiple agents interacting in a traffic scenario, considering all parameters and timing, turns out to be non-trivial. To address this challenge, we propose a novel potential-based reward-shaping approach, inspired by counterfactual analysis, for solving the credit-assignment problem. The evaluation in a simulated environment demonstrates the superiority of our proposed approach over methods using local and global rewards.
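
For context, the shaping scheme builds on the standard potential-based form (Ng et al.); the paper's counterfactual-inspired potential Phi is not reproduced here, only the generic shape of the shaping term:

```latex
% Standard potential-based reward shaping; \Phi is the potential function,
% instantiated in the paper via counterfactual analysis per agent (only the
% generic form is shown here).
F(s, a, s') \;=\; \gamma\,\Phi(s') - \Phi(s), \qquad
r'(s, a, s') \;=\; r(s, a, s') + F(s, a, s')
```

In the single-agent setting, shaping of this form is known not to alter the optimal policy (Ng et al., 1999); here it serves to redistribute credit among the cooperating agents.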

Publication

Prioritizing Corners in OoD Detectors via Symbolic String Manipulation

2022-10, Cheng, Chih-Hong; Wu, Changshun; Seferis, Emmanouil; Bensalem, Saddek

For the safety assurance of deep neural networks (DNNs), out-of-distribution (OoD) monitoring techniques are essential, as they filter spurious inputs that are distant from the training dataset. This paper studies the problem of systematically testing OoD monitors to avoid cases where an input data point is deemed in-distribution by the monitor, yet the DNN produces spurious output predictions. We consider a definition of "in-distribution" characterized in the feature space by a union of hyperrectangles learned from the training dataset. Testing thus reduces to finding corners of hyperrectangles that are distant from the available training data in the feature space. Concretely, we encode the abstract location of every data point as a finite-length binary string, and the union of all binary strings is stored compactly using binary decision diagrams (BDDs). We demonstrate how to use BDDs to symbolically extract corners distant from all data points within the training set. Apart from test-case generation, we explain how to use the proposed corners to fine-tune the DNN so that it does not predict overly confidently. The result is evaluated on examples such as number and traffic sign recognition.
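
A much-simplified sketch of the string abstraction, with a plain Python set standing in for the BDD and exhaustive enumeration standing in for the symbolic corner extraction (bucket counts and helper names are assumptions):

```python
# Simplified stand-in: the paper stores the binary strings in a BDD and
# extracts distant corners symbolically; here a set plus exhaustive
# enumeration illustrates the same idea on a small number of bits.
from itertools import product

def encode(point, lows, highs, bits_per_dim=2):
    """Map a feature vector to a finite-length binary string by bucketing
    each dimension of its bounding interval into 2**bits_per_dim bins."""
    bits = ""
    for v, lo, hi in zip(point, lows, highs):
        n_bins = 2 ** bits_per_dim
        idx = min(n_bins - 1, max(0, int((v - lo) / (hi - lo) * n_bins)))
        bits += format(idx, f"0{bits_per_dim}b")
    return bits

def distant_cells(train_strings, n_bits, top_k=3):
    """Rank abstract cells by Hamming distance to the nearest training string."""
    def hamming(a, b):
        return sum(c1 != c2 for c1, c2 in zip(a, b))
    cells = ("".join(c) for c in product("01", repeat=n_bits))
    scored = [(min(hamming(cell, t) for t in train_strings), cell) for cell in cells]
    return sorted(scored, reverse=True)[:top_k]
```

In the paper the search is additionally restricted to corners lying inside the monitored hyperrectangles (i.e., points the monitor accepts as in-distribution); this sketch omits that restriction for brevity.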

Publication

Formal Specification for Learning-Enabled Autonomous Systems

2022, Bensalem, Saddek; Cheng, Chih-Hong; Huang, Xiaowei; Katsaros, Panagiotis; Molin, Adam; Nickovic, Dejan; Peled, Doron

A formal specification provides a uniquely readable description of various aspects of a system, including its temporal behavior. This facilitates testing and sometimes automatic verification of the system against the given specification. We present a logic-based formalism for specifying learning-enabled autonomous systems, which involve components based on neural networks. The formalism is based on first-order past-time temporal logic that uses predicates to denote events. We have applied the formalism successfully to two complex use cases.
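
An illustrative formula in this style (the predicates and the property are assumptions, not taken from the paper's use cases): every object the planner acts upon must have been detected at some earlier point.

```latex
% Illustrative first-order past-time temporal formula; act/detect are
% assumed event predicates, \blacklozenge is the past-time "once" operator.
\forall o.\; \Box\bigl(\mathit{act}(o) \rightarrow \blacklozenge\,\mathit{detect}(o)\bigr)
```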

Publication

Are Transformers More Robust? Towards Exact Robustness Verification for Transformers

2023, Liao, Brian Hsuan-Cheng; Cheng, Chih-Hong; Esen, Hasan; Knoll, Alois

As an emerging type of Neural Network (NN), Transformers are used in many domains, ranging from Natural Language Processing to Autonomous Driving. In this paper, we study the robustness of Transformers, a key characteristic, as low robustness may cause safety concerns. Specifically, we focus on Sparsemax-based Transformers and reduce finding their maximum robustness to a Mixed Integer Quadratically Constrained Programming (MIQCP) problem. We also design two pre-processing heuristics that can be embedded in the MIQCP encoding and substantially accelerate its solving. We then conduct experiments using a Lane Departure Warning application to compare the robustness of Sparsemax-based Transformers against that of the more conventional Multi-Layer Perceptron (MLP) NNs. To our surprise, Transformers are not necessarily more robust, which raises important considerations when selecting appropriate NN architectures for safety-critical applications.
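
Schematically, one common way to state exact (maximum) robustness as an optimization problem is shown below; the actual MIQCP encoding of the Sparsemax attention layers is the paper's contribution and is not reproduced here.

```latex
% Schematic formulation only; f denotes the Sparsemax-based Transformer.
% The maximum robustness radius at x_0 equals the smallest perturbation
% that changes the prediction:
\delta^{*}(x_0) \;=\; \min_{x}\ \lVert x - x_0 \rVert_2
\quad \text{s.t.}\quad \arg\max f(x) \;\neq\; \arg\max f(x_0)
% The bilinear attention products contribute quadratic constraints and the
% piecewise-linear Sparsemax / argmax conditions contribute integer
% variables, yielding an MIQCP.
```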

Publication

Formally Compensating Performance Limitations for Imprecise 2D Object Detection

2022-08-25, Schuster, Tobias; Seferis, Emmanouil; Burton, Simon; Cheng, Chih-Hong

In this paper, we consider the imperfection of machine learning-based 2D object detection and its impact on safety. We address a particular sub-type of performance limitation related to the misalignment of bounding-box predictions with the ground truth: the predicted bounding box cannot be perfectly aligned with the ground truth. We formally derive the minimum bounding-box enlargement factor required to cover the ground truth. We then demonstrate that this factor can be mathematically reduced to a smaller value, provided that the motion planner uses a fixed-length buffer in making its decisions. Finally, observing the difference between an empirically measured enlargement factor and our formally derived worst-case enlargement factor offers an interesting connection between quantitative evidence (demonstrated by statistics) and qualitative evidence (demonstrated by worst-case analysis) when arguing safety-relevant properties of machine learning functions.
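
As an illustration of the worst-case reasoning (the deviation bound d and planner buffer b below are assumptions; the paper derives its exact enlargement factor from its performance-limitation model):

```latex
% If every edge of the predicted box [x_1, x_2] x [y_1, y_2] deviates from
% the ground truth by at most d pixels, enlarging the prediction by d on
% every side is guaranteed to cover the ground truth:
[x_1 - d,\; x_2 + d] \times [y_1 - d,\; y_2 + d] \;\supseteq\; \text{ground-truth box}
% If the motion planner already keeps a fixed buffer of b around every
% detection when deciding, the enlargement can shrink to \max(0,\, d - b).
```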