  • Publication
    Sensing and Machine Learning for Automotive Perception: A Review
    (2023)
    Pandharipande, Ashish; Dauwels, Justin; Gurbuz, Sevgi Z.; Ibanez-Guzman, Javier; Li, Guofa; Piazzoni, Andrea; Wang, Pu; Santra, Avik
    Automotive perception involves understanding the external driving environment as well as the internal state of the vehicle cabin and occupants using sensor data. It is critical to achieving high levels of safety and autonomy in driving. This paper provides an overview of the sensor modalities commonly used for perception, such as cameras, radars, and LiDARs, along with the associated data processing techniques. Critical aspects of perception are considered: architectures for processing data from single or multiple sensor modalities, sensor data processing algorithms and the role of machine learning techniques, methodologies for validating the performance of perception systems, and safety. The technical challenges for each aspect are analyzed, with an emphasis on machine learning approaches given their potential impact on improving perception. Finally, future research opportunities for the wider deployment of automotive perception systems are outlined.
  • Publication
    Guest Editorial Special Issue on Sensing and Machine Learning for Automotive Perception
    (2023)
    Santra, Avik; Pandharipande, Ashish; Wang, Pu Perry; Gurbuz, Sevgi Zubeyde; Ibáñez-Guzmán, Javier; Dauwels, Justin; Li, Guofa
    There has been tremendous interest in self-driving and advanced driver assistance systems for automobiles in recent years. According to market predictions, however, advanced levels of autonomous driving are still far from large-scale commercial deployment. One of the challenges is obtaining reliable environmental perception from onboard automotive sensors, and possibly external sensors, to support safety-critical driving. Automotive perception includes processed and learned information from multimodal sensors such as lidar, camera, ultrasonic, and radar. Conventionally, this sensor information has supported functions such as emergency braking, adaptive cruise control, and self-parking. This Special Issue explores advances in sensors, sensor system architectures, data processing, and machine learning for automotive perception. It also aims to bridge the traditional model-based automotive sensing field with the rapidly emerging data-driven field that uses machine learning methods and focuses on feature representation for high-level semantic understanding. Driven by efforts on automotive sensor hardware platforms and open datasets, vision-inspired deep learning has shown great potential to achieve state-of-the-art performance and yield better results than traditional signal processing methods in multi-object detection and tracking, simultaneous localization and mapping, multimodal sensor fusion, scene understanding, and interference mitigation. This Special Issue highlights advances in machine learning architectures and methods for automotive perception, alongside performance evaluation methodologies and field test results.
  • Publication
    Are Transformers More Robust? Towards Exact Robustness Verification for Transformers
    (2023)
    Liao, Brian Hsuan-Cheng; Esen, Hasan; Knoll, Alois
    As an emerging type of Neural Network (NN), Transformers are used in many domains ranging from Natural Language Processing to Autonomous Driving. In this paper, we study the robustness of Transformers, a key characteristic, since low robustness may cause safety concerns. Specifically, we focus on Sparsemax-based Transformers and reduce finding their maximum robustness to a Mixed Integer Quadratically Constrained Programming (MIQCP) problem. We also design two pre-processing heuristics that can be embedded in the MIQCP encoding and substantially accelerate its solving. We then conduct experiments using a Lane Departure Warning application to compare the robustness of Sparsemax-based Transformers against that of the more conventional Multi-Layer Perceptron (MLP) NNs. To our surprise, Transformers are not necessarily more robust, which raises important considerations for selecting appropriate NN architectures in safety-critical domains.
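    As a rough illustration only (not the paper's MIQCP encoding): the sketch below implements the Sparsemax projection, the piecewise-linear replacement for softmax that makes an exact mixed-integer encoding of attention tractable, together with a naive sampling probe for robustness under an assumed L-infinity threat model. The function names and the classifier interface f are hypothetical.

    ```python
    import numpy as np

    def sparsemax(z):
        # Euclidean projection of the score vector z onto the probability simplex
        # (Martins & Astudillo, 2016). Unlike softmax, the mapping is piecewise
        # linear, which is what makes an exact mixed-integer encoding of the
        # attention layer feasible in the first place.
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]
        cumsum = np.cumsum(z_sorted)
        k = np.arange(1, z.size + 1)
        support = 1.0 + k * z_sorted > cumsum      # coordinates kept non-zero
        k_z = k[support][-1]                       # size of the support set
        tau = (cumsum[support][-1] - 1.0) / k_z    # threshold subtracted from all scores
        return np.maximum(z - tau, 0.0)

    def sampled_robustness_check(f, x, eps, n_samples=1000, seed=0):
        # Naive probe: sample random L-infinity perturbations of radius eps and see
        # whether the predicted class of classifier f ever flips. This only gives
        # anecdotal evidence; the paper instead computes the exact maximum robust
        # radius by solving an MIQCP.
        rng = np.random.default_rng(seed)
        y0 = int(np.argmax(f(x)))
        for _ in range(n_samples):
            delta = rng.uniform(-eps, eps, size=x.shape)
            if int(np.argmax(f(x + delta))) != y0:
                return False                       # counterexample within radius eps
        return True                                # no flip observed (not a proof)
    ```

    Because Sparsemax is piecewise linear, each of its branches can be captured with binary variables and linear constraints, which is what an exact mixed-integer formulation exploits; the sampling probe above can only refute robustness, never certify it.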
  • Publication
    Prioritizing Corners in OoD Detectors via Symbolic String Manipulation
    (2022-10)
    Wu, Changshun; Seferis, Emmanouil; Bensalem, Saddek
    For safety assurance of deep neural networks (DNNs), out-of-distribution (OoD) monitoring techniques are essential as they filter out spurious inputs that are distant from the training dataset. This paper studies the problem of systematically testing OoD monitors to avoid cases where an input data point is judged in-distribution by the monitor, yet the DNN produces a spurious output prediction. We consider the definition of "in-distribution" characterized in the feature space by a union of hyperrectangles learned from the training dataset. Testing thus reduces to finding corners of these hyperrectangles that are distant from the available training data in the feature space. Concretely, we encode the abstract location of every data point as a finite-length binary string, and the union of all binary strings is stored compactly using binary decision diagrams (BDDs). We demonstrate how to use BDDs to symbolically extract corners distant from all data points within the training set. Apart from test case generation, we explain how to use the proposed corners to fine-tune the DNN so that it does not make overly confident predictions. The approach is evaluated on examples such as number and traffic sign recognition.
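    A minimal sketch of the underlying idea, with an explicit Python set standing in for the BDD and all names chosen for illustration: feature-space points are abstracted into finite binary strings, and candidate test "corners" are strings far (in Hamming distance) from everything seen during training.

    ```python
    import itertools
    import numpy as np

    def encode(point, lows, highs, bits_per_dim=2):
        # Abstract a feature-space point into a finite binary string: each dimension
        # is split into 2**bits_per_dim equal-width buckets and the bucket indices
        # are concatenated as bits. The paper stores the set of strings seen during
        # training compactly in a BDD; a plain Python set stands in for it here.
        bits = ""
        n_buckets = 2 ** bits_per_dim
        for x, lo, hi in zip(point, lows, highs):
            bucket = int(np.clip((x - lo) / (hi - lo) * n_buckets, 0, n_buckets - 1))
            bits += format(bucket, f"0{bits_per_dim}b")
        return bits

    def hamming(a, b):
        return sum(c1 != c2 for c1, c2 in zip(a, b))

    def distant_corners(seen, length, min_hamming=2):
        # Enumerate abstract cells whose Hamming distance to every visited cell is
        # at least min_hamming -- candidate "corners" to prioritise when testing the
        # OoD monitor. Exhaustive enumeration only works for toy string lengths.
        for cand in ("".join(t) for t in itertools.product("01", repeat=length)):
            if all(hamming(cand, s) >= min_hamming for s in seen):
                yield cand

    # Hypothetical usage with 2D features in [0, 1] x [0, 1]:
    seen = {encode(p, lows=(0, 0), highs=(1, 1)) for p in [(0.1, 0.2), (0.15, 0.25)]}
    corners = list(distant_corners(seen, length=4))
    ```

    The brute-force enumeration above blows up exponentially in the string length; representing the visited strings as a BDD, as the paper does, is what makes symbolic extraction of distant corners feasible.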
  • Publication
    Formally Compensating Performance Limitations for Imprecise 2D Object Detection
    (2022-08-25)
    Seferis, Emmanouil
    In this paper, we consider the imperfection of machine learning-based 2D object detection and its impact on safety. We address a particular sub-type of performance limitation: the predicted bounding box cannot be perfectly aligned with the ground truth. We formally derive the minimum bounding-box enlargement factor required to cover the ground truth. We then demonstrate that this factor can be mathematically reduced, provided that the motion planner uses a fixed-length buffer in making its decisions. Finally, observing the difference between an empirically measured enlargement factor and our formally derived worst-case enlargement factor offers an interesting connection between quantitative evidence (demonstrated by statistics) and qualitative evidence (demonstrated by worst-case analysis) when arguing safety-relevant properties of machine learning functions.
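    A minimal sketch of the quantities involved, assuming axis-aligned boxes in (x_min, y_min, x_max, y_max) format; it deliberately does not reproduce the worst-case enlargement factor derived in the paper, only how an enlargement factor and the coverage condition interact.

    ```python
    def enlarge(box, factor):
        # Scale an axis-aligned box (x_min, y_min, x_max, y_max) about its centre by
        # the given factor. `factor` plays the role of the enlargement factor in the
        # abstract; the concrete worst-case value derived in the paper is not used here.
        x_min, y_min, x_max, y_max = box
        cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
        hw, hh = (x_max - x_min) / 2.0 * factor, (y_max - y_min) / 2.0 * factor
        return (cx - hw, cy - hh, cx + hw, cy + hh)

    def covers(outer, inner):
        # True iff `outer` fully contains `inner` -- the safety-relevant check that
        # the (enlarged) prediction covers the ground-truth object.
        return (outer[0] <= inner[0] and outer[1] <= inner[1]
                and outer[2] >= inner[2] and outer[3] >= inner[3])

    def empirical_factor(pred, gt, step=0.01, max_factor=5.0):
        # Smallest enlargement factor (to `step` resolution) that makes a concrete
        # prediction cover its ground truth -- the "empirically measured" quantity
        # the abstract contrasts with the formally derived worst case.
        f = 1.0
        while f <= max_factor:
            if covers(enlarge(pred, f), gt):
                return f
            f += step
        return None
    ```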
  • Publication
    Logically Sound Arguments for the Effectiveness of ML Safety Measures
    We investigate the issue of achieving sufficient rigor in arguments for the safety of machine learning functions. By considering the known weaknesses of DNN-based 2D bounding box detection algorithms, we sharpen the metric of imprecise pedestrian localization by associating it with the safety goal. This sharpening leads to introducing a conservative post-processor after the standard non-max suppression as a counter-measure. We then propose a semi-formal assurance case for arguing the effectiveness of the post-processor, which is further translated into formal proof obligations for demonstrating the soundness of the arguments. Applying theorem proving not only reveals the need to introduce missing claims and mathematical concepts but also exposes the limitations of Dempster-Shafer’s rules as used in semi-formal argumentation.
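    To make "formal proof obligation" concrete, one hypothetical obligation of the kind such an argument could be discharged into is shown below; the IoU bound alpha, the enlargement operation, and the factor gamma(alpha) are illustrative placeholders rather than the paper's actual claims.

    ```latex
    \forall b_{\mathrm{pred}}, b_{\mathrm{gt}}.\quad
      \mathrm{IoU}(b_{\mathrm{pred}}, b_{\mathrm{gt}}) \ge \alpha
      \;\Longrightarrow\;
      b_{\mathrm{gt}} \subseteq \mathrm{enlarge}\bigl(b_{\mathrm{pred}}, \gamma(\alpha)\bigr)
    ```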
  • Publication
    Formal Specification for Learning-Enabled Autonomous Systems
    (2022)
    Bensalem, Saddek; Huang, Xiaowei; Katsaros, Panagiotis; Molin, Adam; Nickovic, Dejan; Peled, Doron
    A formal specification provides a uniquely readable description of various aspects of a system, including its temporal behavior. This facilitates testing and, in some cases, automatic verification of the system against the given specification. We present a logic-based formalism for specifying learning-enabled autonomous systems, which involve components based on neural networks. The formalism is based on first-order past-time temporal logic and uses predicates for denoting events. We have applied the formalism successfully to two complex use cases.
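    Purely as an illustration of the kind of property such a formalism can express (the predicate names and the time bound are assumptions, not taken from the paper's use cases), a first-order, past-time temporal requirement might read:

    ```latex
    \mathbf{G}\;\forall o.\;\bigl(\mathit{brakes\_for}(o) \;\rightarrow\;
      \mathbf{P}_{[0,\,2\,\mathrm{s}]}\;\mathit{detected}(o)\bigr)
    ```

    Read: globally, whenever the system brakes for an object o, that object was detected by the perception component at some point during the preceding two seconds; G is the "always" operator and P the bounded "sometime in the past" operator.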