  • Publication
    Safeguarding Learning-based Control for Smart Energy Systems with Sampling Specifications
    (2023); Gupta, Pragya Kirti; Venkataramanan, Venkatesh Prasad; Hsu, Yun-Fei
    We study the challenges of using reinforcement learning to control energy systems, where, apart from performance requirements, one has additional safety requirements such as avoiding blackouts. We detail how these safety requirements, stated in real-time temporal logic, can be strengthened via discretization into linear temporal logic (LTL), such that the satisfaction of the LTL formulae implies the satisfaction of the original safety requirements. The discretization enables advanced engineering methods such as synthesizing shields for safe reinforcement learning as well as formal verification; for statistical model checking, the probabilistic guarantee acquired by LTL model checking forms a lower bound on the satisfaction probability of the original real-time safety requirements.
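
    A minimal Python sketch of the shielding idea, assuming a toy energy-control setting: the state representation, the allowed load band, and the fallback actions below are invented for illustration, and the safety predicate merely stands in for a shield synthesized from a discretized LTL specification rather than the construction used in the paper.

      import random

      def is_safe(state, action):
          # Hypothetical one-step safety check standing in for a synthesized
          # shield: reject actions that would push the (normalized) grid load
          # outside an allowed band.
          predicted_load = state["load"] + action
          return 0.2 <= predicted_load <= 0.9

      def shielded_step(state, proposed_action, fallback_actions):
          # Let the learned policy act freely while its action is safe;
          # otherwise substitute the first safe fallback action.
          if is_safe(state, proposed_action):
              return proposed_action
          for action in fallback_actions:
              if is_safe(state, action):
                  return action
          raise RuntimeError("no safe action available in this state")

      # Toy usage: a random policy shielded against leaving the safe load band.
      state = {"load": 0.85}
      policy_action = random.uniform(-0.3, 0.3)
      executed = shielded_step(state, policy_action, fallback_actions=[0.0, -0.1, -0.2])
      print(f"proposed={policy_action:+.2f}, executed={executed:+.2f}")
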
  • Publication
    Formally Compensating Performance Limitations for Imprecise 2D Object Detection
    (2022-08-25); Seferis, Emmanouil
    In this paper, we consider the imperfection within machine-learning-based 2D object detection and its impact on safety. We address a special sub-type of performance limitation related to the misalignment of bounding-box predictions with the ground truth: the predicted bounding box cannot be perfectly aligned with the ground truth. We formally derive the minimum bounding-box enlargement factor required to cover the ground truth. We then demonstrate that this factor can be mathematically reduced, provided that the motion planner uses a fixed-length buffer in making its decisions. Finally, observing the difference between an empirically measured enlargement factor and our formally derived worst-case enlargement factor offers an interesting connection between quantitative evidence (demonstrated by statistics) and qualitative evidence (demonstrated by worst-case analysis) when arguing safety-relevant properties of machine learning functions.
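
    A small Python sketch of the coverage idea, assuming axis-aligned boxes that are scaled around their centers; the fixed factor of 1.2 used below is an arbitrary illustrative value, not the worst-case enlargement factor derived in the paper.

      from dataclasses import dataclass

      @dataclass
      class Box:
          x_min: float
          y_min: float
          x_max: float
          y_max: float

          def enlarge(self, factor: float) -> "Box":
              # Scale the box around its center by the given factor per axis.
              cx, cy = (self.x_min + self.x_max) / 2, (self.y_min + self.y_max) / 2
              hw = (self.x_max - self.x_min) / 2 * factor
              hh = (self.y_max - self.y_min) / 2 * factor
              return Box(cx - hw, cy - hh, cx + hw, cy + hh)

          def contains(self, other: "Box") -> bool:
              # True if this box fully covers the other box.
              return (self.x_min <= other.x_min and self.y_min <= other.y_min
                      and self.x_max >= other.x_max and self.y_max >= other.y_max)

      # Toy usage: a slightly misaligned prediction covers the ground truth
      # only after enlargement.
      ground_truth = Box(10, 10, 50, 50)
      prediction = Box(12, 11, 52, 49)
      print(prediction.contains(ground_truth))               # False
      print(prediction.enlarge(1.2).contains(ground_truth))  # True for this example
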
  • Publication
    Logically Sound Arguments for the Effectiveness of ML Safety Measures
    We investigate the issue of achieving sufficient rigor in the arguments for the safety of machine learning functions. By considering the known weaknesses of DNN-based 2D bounding-box detection algorithms, we sharpen the metric of imprecise pedestrian localization by associating it with the safety goal. The sharpening leads to introducing a conservative post-processor after the standard non-max suppression as a counter-measure. We then propose a semi-formal assurance case for arguing the effectiveness of the post-processor, which is further translated into formal proof obligations for demonstrating the soundness of the arguments. Applying theorem proving not only reveals the need to introduce missing claims and mathematical concepts but also exposes the limitations of Dempster-Shafer's rules as used in semi-formal argumentation.
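
    A minimal Python sketch of where such a conservative post-processor sits in the detection pipeline, assuming detections given as ((x_min, y_min, x_max, y_max), score) pairs; the NMS stub, the enlargement factor, and the score threshold below are illustrative assumptions, not the post-processor argued for in the paper.

      def nms(detections, iou_threshold=0.5):
          # Stub standing in for a standard non-max suppression step; the toy
          # input below is assumed to contain no overlapping duplicates.
          return detections

      def conservative_postprocess(detections, factor=1.1, min_score=0.3):
          # Enlarge every box that survives the score threshold by a fixed
          # factor, so that residual localization error of the detector is
          # covered by the enlarged box handed to the planner.
          processed = []
          for (x_min, y_min, x_max, y_max), score in detections:
              if score < min_score:
                  continue
              cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
              hw, hh = (x_max - x_min) / 2 * factor, (y_max - y_min) / 2 * factor
              processed.append(((cx - hw, cy - hh, cx + hw, cy + hh), score))
          return processed

      # Toy usage: raw detections -> NMS -> conservative enlargement.
      raw = [((100, 80, 140, 200), 0.90), ((300, 90, 330, 190), 0.35)]
      print(conservative_postprocess(nms(raw)))
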