Publication

Sensing and Machine Learning for Automotive Perception: A Review

2023 , Pandharipande, Ashish , Cheng, Chih-Hong , Dauwels, Justin , Gurbuz, Sevgi Z. , Ibanez-Guzman, Javier , Li, Guofa , Piazzoni, Andrea , Wang, Pu , Santra, Avik

Automotive perception involves understanding the external driving environment as well as the internal state of the vehicle cabin and occupants using sensor data. It is critical to achieving high levels of safety and autonomy in driving. This paper provides an overview of the sensor modalities commonly used for perception, such as cameras, radars, and LiDARs, along with the associated data processing techniques. Critical aspects of perception are considered, including architectures for processing data from single or multiple sensor modalities, sensor data processing algorithms and the role of machine learning techniques, methodologies for validating the performance of perception systems, and safety. The technical challenges for each aspect are analyzed, with an emphasis on machine learning approaches given their potential impact on improving perception. Finally, future research opportunities toward the wider deployment of automotive perception systems are outlined.
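One architectural choice the review discusses is whether data from multiple sensor modalities is fused early (at the raw-data level) or late (at the decision level). The sketch below is a minimal, hypothetical illustration of late fusion of camera and radar detections, not a method from the paper; the matching rule and the fixed confidence weights are illustrative assumptions.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (area(a) + area(b) - inter + 1e-9)

def late_fusion(cam_dets, radar_dets, iou_thr=0.5, w_cam=0.6, w_rad=0.4):
    """Decision-level fusion: each modality runs its own detector and the
    resulting (box, confidence) pairs are merged afterwards.
    The weights are illustrative placeholders, not tuned values."""
    fused, unmatched = [], list(range(len(radar_dets)))
    for box_c, conf_c in cam_dets:
        match = next((i for i in unmatched
                      if iou(box_c, radar_dets[i][0]) >= iou_thr), None)
        if match is not None:
            unmatched.remove(match)
            fused.append((box_c, w_cam * conf_c + w_rad * radar_dets[match][1]))
        else:
            fused.append((box_c, w_cam * conf_c))  # camera-only detection
    # Keep radar-only detections with their down-weighted confidence.
    fused += [(radar_dets[i][0], w_rad * radar_dets[i][1]) for i in unmatched]
    return fused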

Publication

Intelligent Testing for Autonomous Vehicles - Methods and Tools

2022-09 , Cheng, Chih-Hong

In this talk, I first give a tutorial on fundamental AI testing methods, including their strengths and weaknesses. For testing complex autonomous driving systems, an intelligent combination of these basic techniques makes it possible to generate highly diversified test cases while enabling efficient bug hunting.

Publication

Statistical Guarantees for Safe 2D Object Detection Post-processing

2023 , Seferis, Emmanouil , Burton, Simon , Kollias, Stefanos , Cheng, Chih-Hong

Safe and reliable object detection is essential for safety-critical applications of machine learning, such as autonomous driving. However, standard object detection methods cannot guarantee their performance during operation. In this work, we leverage conformal prediction to provide statistical guarantees for black-box object detection models. Extending prior work, we present a post-processing methodology that can cover the entire object detection problem (localization, classification, false negatives, detection in videos, etc.), while offering sound safety guarantees on its error rates. We apply our method to state-of-the-art 2D object detection models and measure its efficacy in practice. Moreover, we investigate what happens as the acceptable error rates are pushed towards high safety levels. Overall, the presented methodology offers a practical approach towards safety-aware object detection, and we hope it can pave the way for further research in this area.
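The paper's construction covers the full detection problem (classification, false negatives, video); as a minimal sketch of the underlying split-conformal idea, restricted here to box localization only, the following assumes a held-out calibration set of matched predicted and ground-truth boxes. The function names and the choice of nonconformity score are illustrative, not the paper's exact construction.

import numpy as np

def calibrate_box_margin(pred_boxes, true_boxes, alpha=0.1):
    """Split conformal calibration for 2D boxes in (x1, y1, x2, y2) format.

    Nonconformity score per box: the largest coordinate deviation needed
    for the predicted box to cover the ground-truth box."""
    pred_boxes = np.asarray(pred_boxes, dtype=float)
    true_boxes = np.asarray(true_boxes, dtype=float)
    scores = np.abs(pred_boxes - true_boxes).max(axis=1)
    n = len(scores)
    # Finite-sample-corrected (1 - alpha) quantile of the scores.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level, method="higher")

def inflate_box(box, margin):
    """Grow a predicted box by the calibrated margin. Under exchangeability
    of calibration and test data, the inflated box covers the true box with
    probability at least 1 - alpha."""
    x1, y1, x2, y2 = box
    return (x1 - margin, y1 - margin, x2 + margin, y2 + margin)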

Publication

Are Transformers More Robust? Towards Exact Robustness Verification for Transformers

2023 , Liao, Brian Hsuan-Cheng , Cheng, Chih-Hong , Esen, Hasan , Knoll, Alois

As an emerging class of Neural Networks (NNs), Transformers are used in many domains ranging from Natural Language Processing to Autonomous Driving. In this paper, we study the robustness of Transformers, a key characteristic, as low robustness may cause safety concerns. Specifically, we focus on Sparsemax-based Transformers and reduce finding their maximum robustness to a Mixed Integer Quadratically Constrained Programming (MIQCP) problem. We also design two pre-processing heuristics that can be embedded in the MIQCP encoding and substantially accelerate its solving. We then conduct experiments using a Lane Departure Warning application to compare the robustness of Sparsemax-based Transformers against that of the more conventional Multi-Layer Perceptron (MLP) NNs. To our surprise, Transformers are not necessarily more robust, which prompts careful consideration when selecting NN architectures for safety-critical applications.
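A property that plausibly makes an exact encoding tractable is that Sparsemax, unlike softmax, is a piecewise-linear projection of the attention scores onto the probability simplex. Below is a minimal NumPy sketch of Sparsemax following Martins and Astudillo (2016); the MIQCP encoding itself, which additionally requires a mixed-integer solver, is not reproduced here.

import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex.

    Piecewise linear in z, which is the structural property that admits
    an exact mixed-integer encoding of the attention layer."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    # Support size: the largest k with 1 + k * z_(k) > sum of the top-k entries.
    k_max = k[1 + k * z_sorted > cumsum][-1]
    tau = (cumsum[k_max - 1] - 1.0) / k_max
    return np.maximum(z - tau, 0.0)

# Example: sparsemax([0.1, 1.1, 0.2]) -> [0., 0.95, 0.05]
# (a sparse output that still sums to 1, unlike softmax).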