Publication

Are Transformers More Robust? Towards Exact Robustness Verification for Transformers

2023; Liao, Brian Hsuan-Cheng; Cheng, Chih-Hong; Esen, Hasan; Knoll, Alois

As an emerging type of Neural Network (NN), Transformers are used in many domains ranging from Natural Language Processing to Autonomous Driving. In this paper, we study the robustness problem of Transformers, a key characteristic, since low robustness may cause safety concerns. Specifically, we focus on Sparsemax-based Transformers and reduce finding their maximum robustness to a Mixed Integer Quadratically Constrained Programming (MIQCP) problem. We also design two pre-processing heuristics that can be embedded in the MIQCP encoding and substantially accelerate its solving. We then conduct experiments on a Lane Departure Warning application to compare the robustness of Sparsemax-based Transformers against that of more conventional Multi-Layer Perceptron (MLP) NNs. To our surprise, Transformers are not necessarily more robust, which calls for careful consideration when selecting NN architectures for safety-critical applications.
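For readers unfamiliar with the activation the abstract refers to, the following is a minimal sketch (assuming NumPy; not the paper's MIQCP encoding) of Sparsemax, the Euclidean projection of the logits onto the probability simplex. Its piecewise-linear, exactly sparse output is what makes an exact mixed-integer encoding of the attention layer tractable.

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Project the logit vector z onto the probability simplex (Martins & Astudillo, 2016)."""
    z_sorted = np.sort(z)[::-1]              # logits in descending order
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum      # coordinates kept in the support
    k_z = k[support][-1]                     # size of the support set
    tau = (cumsum[support][-1] - 1) / k_z    # threshold shared by the support
    return np.maximum(z - tau, 0.0)

# Unlike Softmax, most coordinates are driven exactly to zero.
print(sparsemax(np.array([2.0, 1.5, 0.1, -1.0])))  # [0.75, 0.25, 0.0, 0.0]
```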

Publication

Formal Specification for Learning-Enabled Autonomous Systems

2022; Bensalem, Saddek; Cheng, Chih-Hong; Huang, Xiaowei; Katsaros, Panagiotis; Molin, Adam; Nickovic, Dejan; Peled, Doron

A formal specification provides a uniquely readable description of various aspects of a system, including its temporal behavior. This facilitates testing and, in some cases, automatic verification of the system against the given specification. We present a logic-based formalism for specifying learning-enabled autonomous systems, which include components based on neural networks. The formalism is based on first-order past-time temporal logic with predicates for denoting events. We have applied the formalism successfully to two complex use cases.
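As an illustration of the past-time operators such a logic builds on (a hypothetical propositional sketch, not the paper's first-order formalism or its use cases), the summaries "Once p" and "Historically p" can be updated incrementally over a trace, which is what makes runtime monitoring of such specifications cheap:

```python
from dataclasses import dataclass

@dataclass
class State:
    once_braked: bool = False   # "Once braked": braked held at some step so far
    hist_safe: bool = True      # "Historically safe": safe held at every step so far

def step(state: State, braked: bool, safe: bool) -> State:
    """Fold one new observation into the past-time summaries."""
    return State(
        once_braked=state.once_braked or braked,  # Once p  =  p or (previously Once p)
        hist_safe=state.hist_safe and safe,       # Historically p  =  p and (previously Historically p)
    )

# Example trace of (braked, safe) observations: "Historically safe" fails
# as soon as a single unsafe step is observed.
state = State()
for braked, safe in [(False, True), (True, True), (False, False)]:
    state = step(state, braked, safe)
print(state)  # State(once_braked=True, hist_safe=False)
```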