  • Publication
    Can Conformal Prediction Obtain Meaningful Safety Guarantees for ML Models?
    (2023)
    Seferis, Emmanouil
    Conformal Prediction (CP) has recently been proposed as a methodology to calibrate the predictions of Machine Learning (ML) models so that they output a rigorous quantification of their uncertainty. For example, the predictions of an ML model can be calibrated into prediction sets that are guaranteed to cover the ground-truth class with a probability larger than a specified threshold. In this paper, we study whether CP can provide the strong statistical guarantees required in safety-critical applications. Our evaluation on ImageNet demonstrates that applying CP to state-of-the-art models fails to deliver the required guarantees. We corroborate our results by deriving a simple connection between CP prediction sets and top-k accuracy.
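    The abstract describes calibrating a model's outputs into prediction sets with a target coverage level. The sketch below illustrates the standard split conformal recipe for classification, not the paper's exact procedure: the function name, the 1 - softmax nonconformity score, and the toy data are illustrative assumptions.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    cal_probs:  (n_cal, n_classes) softmax scores on a held-out calibration set
    cal_labels: (n_cal,) ground-truth labels for the calibration set
    test_probs: (n_test, n_classes) softmax scores on test inputs
    alpha:      target miscoverage, e.g. 0.1 for 90% coverage
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - softmax probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # Prediction set: every class whose score falls within the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]

# Toy usage with random scores (in practice: softmax outputs of a trained model).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10), size=500)
cal_labels = rng.integers(0, 10, size=500)
test_probs = rng.dirichlet(np.ones(10), size=5)
sets = conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1)
print([s.tolist() for s in sets])
```

    With this particular score, each prediction set consists of the classes ranked above the calibrated cutoff, which is the intuition behind relating set size and coverage to top-k accuracy.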
  • Publication
    Selected Challenges in ML Safety for Railway
    Neural networks (NNs) have been introduced in safety-critical applications ranging from autonomous driving to train inspection. I argue that, to close the demo-to-product gap, we need scientifically rooted engineering methods that can efficiently improve the quality of NNs. In particular, I consider a structural approach (via the Goal Structuring Notation, GSN) to argue the quality of neural networks with NN-specific dependability metrics. A systematic analysis covering the quality of data collection, training, testing, and operation allows us to identify many unsolved research questions: (1) solve the denominator/edge-case problem with synthetic data, backed by quantifiable argumentation; (2) reach the performance target by combining classical and data-based methods in vision; and (3) decide the threshold (for OoD detection or otherwise) based on the risk appetite (the societally accepted risk).
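    Point (3) above ties the decision threshold to a quantified risk appetite. A minimal sketch of one way to operationalize that, assuming the risk appetite has already been translated into a tolerated rejection rate on in-distribution traffic: the function name, the quantile-based rule, and the synthetic scores are illustrative assumptions, not the author's proposed method.

```python
import numpy as np

def ood_threshold_from_risk_budget(id_scores, accepted_rejection_rate):
    """Derive an OoD-score threshold from an accepted rejection budget.

    id_scores: OoD scores on an in-distribution validation set
               (higher score = "more out-of-distribution").
    accepted_rejection_rate: fraction of in-distribution inputs we are
               willing to reject as OoD (the operational budget derived
               from the risk appetite).
    """
    # Reject an input as OoD when its score exceeds tau; tau is the
    # (1 - budget) quantile of the in-distribution validation scores.
    return np.quantile(id_scores, 1.0 - accepted_rejection_rate)

# Toy usage with synthetic scores standing in for a real detector.
rng = np.random.default_rng(0)
id_scores = rng.normal(loc=0.0, scale=1.0, size=10_000)
tau = ood_threshold_from_risk_budget(id_scores, accepted_rejection_rate=0.01)
print(f"flag inputs with OoD score > {tau:.2f}")
```

    If the risk budget instead constrains the rate of missed OoD inputs, the quantile would be taken over scores of a proxy OoD validation set rather than the in-distribution one.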