  • Publication
    Sensing and Machine Learning for Automotive Perception: A Review
    (2023)
    Pandharipande, Ashish; Dauwels, Justin; Gurbuz, Sevgi Z.; Ibanez-Guzman, Javier; Li, Guofa; Piazzoni, Andrea; Wang, Pu; Santra, Avik
    Automotive perception involves understanding the external driving environment as well as the internal state of the vehicle cabin and occupants using sensor data. It is critical to achieving high levels of safety and autonomy in driving. This paper provides an overview of different sensor modalities, such as cameras, radars, and LiDARs, commonly used for perception, along with the associated data processing techniques. Critical aspects of perception are considered, such as architectures for processing data from single or multiple sensor modalities, sensor data processing algorithms, the role of machine learning techniques, methodologies for validating the performance of perception systems, and safety. The technical challenges for each aspect are analyzed, with an emphasis on machine learning approaches given their potential impact on improving perception. Finally, future research opportunities for the wider deployment of automotive perception are outlined.
  • Publication
    Safeguarding Learning-based Control for Smart Energy Systems with Sampling Specifications
    (2023)
    Gupta, Pragya Kirti; Venkataramanan, Venkatesh Prasad; Hsu, Yun-Fei
    We study challenges in using reinforcement learning to control energy systems, where, apart from performance requirements, one has additional safety requirements such as avoiding blackouts. We detail how these safety requirements, expressed in real-time temporal logic, can be strengthened via discretization into linear temporal logic (LTL), such that satisfaction of the LTL formulae implies satisfaction of the original safety requirements. The discretization enables advanced engineering methods such as synthesizing shields for safe reinforcement learning, as well as formal verification, where, for statistical model checking, the probabilistic guarantee acquired by LTL model checking forms a lower bound for the satisfaction of the original real-time safety requirements.
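    To illustrate the kind of strengthening the abstract describes, consider a hypothetical real-time requirement that the system must return to a safe operating band within T seconds of leaving it. A minimal sketch, assuming a fixed sampling period Δ and that a violation is first observed only at the next sampling instant (so one period of detection latency is budgeted for); the formulas are illustrative, not the paper's exact construction:

```latex
% Dense-time requirement (illustrative): whenever the system leaves
% the safe band, it returns within T seconds.
\Box \left( \neg \mathit{safe} \;\rightarrow\; \Diamond_{[0,T]} \, \mathit{safe} \right)

% Sampled LTL strengthening with period \Delta and step budget
% k = \lfloor T / \Delta \rfloor - 1 (one period reserved for the
% latency between a dense-time violation and its first sample):
G \left( \neg \mathit{safe} \;\rightarrow\; \bigvee_{i=1}^{k} X^{i} \, \mathit{safe} \right)
```

    Because any sampled trace satisfying the LTL formula recovers within k·Δ ≤ T − Δ seconds of the observing sample, satisfaction of the LTL formula implies the dense-time deadline is met, which is why its model-checking guarantee lower-bounds the original requirement.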
  • Publication
    Statistical Property Testing for Generative Models
    (2023)
    Seferis, Emmanouil
    Generative models that produce images, text, or other types of data have recently been equipped with ever more powerful capabilities. Nevertheless, in some use cases of the generated data (e.g., using it for model training), one must ensure that the synthetic data points satisfy the properties that make them suitable for the intended use. Towards this goal, we present a simple framework to statistically check whether the data produced by a generative model satisfy some property with a given confidence level. We apply our methodology to standard image and text-to-image generative models.
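    As a rough illustration of such a statistical check (not necessarily the paper's actual framework), one can sample generations, run a property checker on each, and certify a lower bound on the true satisfaction rate via a one-sided Hoeffding bound; the checker and the counts below are hypothetical:

```python
import math

def certified_lower_bound(satisfied: int, n: int, delta: float) -> float:
    """One-sided Hoeffding bound: with probability at least 1 - delta
    over the n i.i.d. samples, the true probability that a generated
    sample satisfies the property is at least the returned value."""
    p_hat = satisfied / n
    return p_hat - math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# Hypothetical example: 980 of 1000 generated images pass the checker;
# at 99% confidence (delta = 0.01) the certified rate is about 0.932.
print(certified_lower_bound(980, 1000, 0.01))
```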
  • Publication
    Potential-based Credit Assignment for Cooperative RL-based Testing of Autonomous Vehicles
    (2023)
    Ayvaz, Utku; Hao, Shen
    While autonomous vehicles (AVs) may perform remarkably well in generic real-life cases, their irrational actions in some unforeseen cases lead to critical safety concerns. This paper introduces the concept of collaborative reinforcement learning (RL) to generate challenging test cases for the AV planning and decision-making module. One of the critical challenges for collaborative RL is the credit assignment problem: properly assigning rewards to the multiple agents interacting in the traffic scenario, considering all parameters and timing, turns out to be non-trivial. To address this challenge, we propose a novel potential-based reward-shaping approach inspired by counterfactual analysis for solving the credit assignment problem. The evaluation in a simulated environment demonstrates the superiority of our proposed approach over methods using local and global rewards.
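    For background, potential-based reward shaping (Ng et al., 1999) adds F(s, s') = γΦ(s') − Φ(s) to the environment reward without changing the optimal policy. A minimal sketch of how per-agent shaped rewards might look; the counterfactual per-agent potential here is a hypothetical stand-in for the paper's construction:

```python
def shaped_reward(r_env: float, phi_s: float, phi_s_next: float,
                  gamma: float = 0.99) -> float:
    """Potential-based shaping: adding gamma * Phi(s') - Phi(s) to the
    reward preserves the optimal policy (Ng et al., 1999)."""
    return r_env + gamma * phi_s_next - phi_s

# Hypothetical per-agent credit: each traffic agent i gets its own
# potential phi_i, e.g. derived from counterfactual analysis
# ("how critical would the scenario be without agent i's action?").
def agent_rewards(r_global, phis, phis_next, gamma=0.99):
    return [shaped_reward(r_global, p, pn, gamma)
            for p, pn in zip(phis, phis_next)]
```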
  • Publication
    Guest Editorial Special Issue on Sensing and Machine Learning for Automotive Perception
    (2023)
    Santra, Avik; Pandharipande, Ashish; Wang, Pu Perry; Gurbuz, Sevgi Zubeyde; Ibáñez-Guzmán, Javier; Dauwels, Justin; Li, Guofa
    There has been tremendous interest in self-driving and advanced driver assistance systems for automobiles in recent years. According to market predictions, advanced levels of autonomous driving are still significantly far from large-scale commercial deployment. One of the challenges is obtaining reliable environmental perception from onboard automotive sensors, and possibly external sensors, to support safety-critical driving. Automotive perception includes processed and learned information from multimodal sensors like lidar, camera, ultrasonic, and radar. Conventionally, this sensor information has supported functions like emergency braking, adaptive cruise control, and self-parking. This Special Issue explores advances in sensors, sensor system architectures, data processing, and machine learning for automotive perception. It also aims to bridge the traditional model-based automotive sensing field with the rapidly emerging data-driven field that uses machine learning methods and focuses on feature representation for high-level semantic understanding. Driven by efforts on automotive sensor hardware platforms and open datasets, vision-inspired deep learning has shown great potential to achieve state-of-the-art performance and yield better results than traditional signal processing methods in multi-object detection and tracking, simultaneous localization and mapping, multimodal sensor fusion, scene understanding, and interference mitigation. This Special Issue highlights advances in machine learning architectures and methods for automotive perception, alongside performance evaluation methodologies and field test results.
  • Publication
    Statistical Guarantees for Safe 2D Object Detection Post-processing
    (2023)
    Seferis, Emmanouil; Kollias, Stefanos
    Safe and reliable object detection is essential for safety-critical applications of machine learning, such as autonomous driving. However, standard object detection methods cannot guarantee their performance during operation. In this work, we leverage conformal prediction to provide statistical guarantees for black-box object detection models. Extending prior work, we present a post-processing methodology that can cover the entire object detection problem (localization, classification, false negatives, detection in videos, etc.) while offering sound safety guarantees on its error rates. We apply our method to state-of-the-art 2D object detection models and measure its efficacy in practice. Moreover, we investigate what happens as the acceptable error rates are pushed towards high safety levels. Overall, the presented methodology offers a practical approach towards safety-aware object detection, and we hope it can pave the way for further research in this area.
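    A minimal sketch of the split-conformal step that such a post-processing scheme typically builds on, assuming a held-out calibration set and a nonconformity score equal to the largest gap between predicted and ground-truth box edges; the score choice and numbers are illustrative, not the paper's exact method:

```python
import math
import numpy as np

def conformal_margin(cal_scores: np.ndarray, alpha: float) -> float:
    """Split-conformal quantile: inflating every predicted box edge by
    this margin covers the ground-truth box with probability >= 1 - alpha
    on exchangeable data (marginally over calibration and test draws)."""
    n = len(cal_scores)
    level = min(math.ceil((n + 1) * (1.0 - alpha)) / n, 1.0)
    return float(np.quantile(cal_scores, level, method="higher"))

# Hypothetical usage: per-image max gap (pixels) between predicted and
# true box edges on a calibration set, targeting 90% coverage.
scores = np.array([2.1, 3.5, 1.2, 4.8, 2.9, 3.1, 0.7, 5.2, 2.4, 3.8])
print(conformal_margin(scores, alpha=0.1))
```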
  • Publication
    Are Transformers More Robust? Towards Exact Robustness Verification for Transformers
    (2023)
    Liao, Brian Hsuan-Cheng; Esen, Hasan; Knoll, Alois
    As an emerging type of Neural Network (NN), Transformers are used in many domains, ranging from Natural Language Processing to Autonomous Driving. In this paper, we study the robustness problem of Transformers, a key characteristic, as low robustness may cause safety concerns. Specifically, we focus on Sparsemax-based Transformers and reduce the finding of their maximum robustness to a Mixed Integer Quadratically Constrained Programming (MIQCP) problem. We also design two pre-processing heuristics that can be embedded in the MIQCP encoding and substantially accelerate its solving. We then conduct experiments using the application of Lane Departure Warning to compare the robustness of Sparsemax-based Transformers against that of the more conventional Multi-Layer-Perceptron (MLP) NNs. To our surprise, Transformers are not necessarily more robust, which raises important considerations for selecting appropriate NN architectures in safety-critical applications.
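    The maximum robustness the abstract refers to can be stated, in a generic form, as the largest perturbation radius under which the classification stays invariant; this is a standard formulation and not necessarily the paper's exact encoding:

```latex
% Exact robustness radius of network f at input x (illustrative form):
\varepsilon^{*}(x) \;=\; \max \Bigl\{ \varepsilon \ge 0 \;\Bigm|\;
  \forall x' : \lVert x' - x \rVert_{\infty} \le \varepsilon
  \;\Rightarrow\; \arg\max_i f_i(x') = \arg\max_i f_i(x) \Bigr\}
```

    Per the abstract, computing this radius for Sparsemax-based Transformers is reduced to an MIQCP, which an off-the-shelf solver can then handle.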
  • Publication
    Butterfly Effect Attack: Tiny and Seemingly Unrelated Perturbations for Object Detection
    (2023)
    Doan, Nguyen Anh Vu; Yüksel, Arda
    This work aims to explore and identify tiny and seemingly unrelated perturbations of images in object detection that lead to performance degradation. While tininess can naturally be defined using Lp norms, we characterize the degree of "unrelatedness" of a perturbation by the pixel distance between the perturbation and the object. Triggering prediction errors while satisfying these two objectives can be formulated as a multi-objective optimization problem, where we utilize genetic algorithms to guide the search. The results demonstrate that (invisible) perturbations on the right-hand part of an image can drastically change the outcome of object detection on the left. An extensive evaluation reaffirms our conjecture that transformer-based object detection networks are more susceptible to such butterfly effects than single-stage object detection networks such as YOLOv5.
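    A sketch of the two objectives that could drive such a multi-objective search; the distance measure and the perturbation representation are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def attack_objectives(delta: np.ndarray, obj_box: tuple) -> tuple:
    """Objectives for a genetic search (e.g., NSGA-II): minimize the
    perturbation's Lp norm (tininess) and maximize the pixel distance
    between perturbed pixels and the target object (unrelatedness),
    subject to the perturbed image triggering a detection error.

    delta: H x W perturbation-magnitude map; obj_box: (x0, y0, x1, y1).
    """
    tininess = float(np.linalg.norm(delta.ravel(), ord=2))
    ys, xs = np.nonzero(delta)
    x0, y0, x1, y1 = obj_box
    # Distance from each perturbed pixel to the object's bounding box.
    dx = np.maximum(np.maximum(x0 - xs, xs - x1), 0)
    dy = np.maximum(np.maximum(y0 - ys, ys - y1), 0)
    unrelatedness = float(np.min(np.hypot(dx, dy))) if len(xs) else 0.0
    return tininess, -unrelatedness  # both minimized by the GA
```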
  • Publication
    Can Conformal Prediction Obtain Meaningful Safety Guarantees for ML Models?
    (2023)
    Seferis, Emmanouil
    Conformal Prediction (CP) has recently been proposed as a methodology to calibrate the predictions of Machine Learning (ML) models so that they output a rigorous quantification of their uncertainty. For example, one can calibrate the predictions of an ML model into prediction sets that are guaranteed to cover the ground-truth class with a probability larger than a specified threshold. In this paper, we study whether CP can provide the strong statistical guarantees that would be required in safety-critical applications. Our evaluation on ImageNet demonstrates that using CP over state-of-the-art models fails to deliver the required guarantees. We corroborate our results by deriving a simple connection between CP prediction sets and top-k accuracy.
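    The flavor of the connection can be seen with fixed-size prediction sets: if CP outputs the top-k classes, its coverage is exactly the model's top-k accuracy, so a coverage target of 1 − α is unattainable at that set size whenever top-k accuracy falls below it. A small sketch (the paper's derivation may be more general):

```python
import numpy as np

def topk_coverage(probs: np.ndarray, labels: np.ndarray, k: int) -> float:
    """Empirical coverage of fixed-size top-k prediction sets, which
    coincides with top-k accuracy: sets of size k cover the true class
    exactly as often as it ranks among the k highest scores."""
    topk = np.argsort(probs, axis=1)[:, -k:]
    return float(np.mean([labels[i] in topk[i] for i in range(len(labels))]))

# Hypothetical example with 3 samples and 4 classes:
probs = np.array([[0.6, 0.2, 0.1, 0.1],
                  [0.1, 0.5, 0.3, 0.1],
                  [0.3, 0.3, 0.2, 0.2]])
labels = np.array([0, 2, 3])
print(topk_coverage(probs, labels, k=2))  # 2 of 3 covered -> ~0.667
```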
  • Publication
    Deutsche Normungsroadmap Künstliche Intelligenz (German Standardization Roadmap on Artificial Intelligence)
    On behalf of the German Federal Ministry for Economic Affairs and Climate Action, DIN and DKE began work on the second edition of the Deutsche Normungsroadmap Künstliche Intelligenz in January 2022. In a broad participation process involving more than 570 experts from industry, academia, the public sector, and civil society, the strategic roadmap for AI standardization was further developed. This work was coordinated and supported by a high-level coordination group for AI standardization and conformity. The standardization roadmap implements a measure of the German government's AI strategy and thus makes a substantial contribution to "AI - Made in Germany". Standardization is part of the AI strategy and a strategic instrument for strengthening the innovativeness and competitiveness of the German and European economy. Not least for this reason, it plays a special role in the planned European legal framework for AI, the Artificial Intelligence Act. This AI standardization roadmap identifies the standardization needs, formulates concrete recommendations, and thus creates the basis for initiating standardization work at an early stage at the national and, in particular, the European and international levels. In this way, it contributes significantly to the European Commission's Artificial Intelligence Act and supports its implementation.