  • Publication
    Statistical Property Testing for Generative Models
    (2023)
    Seferis, Emmanouil
    Generative models that produce images, text, or other types of data have recently been equipped with increasingly powerful capabilities. Nevertheless, in some use cases of the generated data (e.g., using it for model training), one must ensure that the synthetic data points satisfy certain properties that make them suitable for the intended use. Towards this goal, we present a simple framework to statistically check whether the data produced by a generative model satisfies a given property with a specified confidence level. We apply our methodology to standard image and text-to-image generative models.
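    The paper's exact framework is not reproduced here, but the recipe the abstract describes (sample n outputs, evaluate a binary property on each, and lower-bound the true satisfaction rate with a concentration inequality) can be sketched as follows. The `generate` and `check_property` callables and the choice of a Hoeffding bound are illustrative assumptions, not the authors' construction.

    ```python
    import math
    import random

    def property_lower_bound(generate, check_property, n_samples=1000, confidence=0.95):
        """Sample outputs from a generative model, test a binary property on
        each, and return a one-sided lower confidence bound on the true rate
        at which the model satisfies the property."""
        hits = sum(bool(check_property(generate())) for _ in range(n_samples))
        p_hat = hits / n_samples
        # Hoeffding's inequality: P(true_rate < p_hat - eps) <= exp(-2 * n * eps^2),
        # so with probability >= confidence the true rate is at least p_hat - eps.
        eps = math.sqrt(math.log(1.0 / (1.0 - confidence)) / (2.0 * n_samples))
        return p_hat, max(0.0, p_hat - eps)

    # Toy usage: a stand-in "generator" whose outputs pass the check 90% of the time.
    p_hat, lower = property_lower_bound(
        generate=lambda: random.random(),
        check_property=lambda x: x < 0.9,
    )
    print(f"observed rate {p_hat:.3f}, 95%-confidence lower bound {lower:.3f}")
    ```

    The bound is distribution-free: it needs only i.i.d. samples and a binary property check, which matches the abstract's claim of a simple, model-agnostic statistical test.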
  • Publication
    Can Conformal Prediction Obtain Meaningful Safety Guarantees for ML Models?
    (2023)
    Seferis, Emmanouil
    Conformal Prediction (CP) has recently been proposed as a methodology to calibrate the predictions of Machine Learning (ML) models so that they output a rigorous quantification of their uncertainty. For example, one can calibrate the predictions of an ML classifier into prediction sets that are guaranteed to cover the ground-truth class with a probability larger than a specified threshold. In this paper, we study whether CP can provide the strong statistical guarantees required in safety-critical applications. Our evaluation on ImageNet demonstrates that applying CP to state-of-the-art models fails to deliver the required guarantees. We corroborate our results by deriving a simple connection between CP prediction sets and top-k accuracy.
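    For background, the standard split-conformal construction the abstract refers to turns softmax outputs plus a held-out calibration set into prediction sets with the stated coverage guarantee. Below is a minimal sketch assuming the common nonconformity score s(x, y) = 1 - p_y(x); the variable names are illustrative, and this is not necessarily the exact configuration evaluated in the paper.

    ```python
    import numpy as np

    def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
        """Split conformal prediction for classification with the score
        s(x, y) = 1 - p_y(x).  Returns a boolean matrix where entry [i, k]
        is True iff class k belongs to the prediction set of test point i;
        under exchangeability the sets cover the true label with
        probability >= 1 - alpha."""
        n = len(cal_labels)
        # Nonconformity score of the true class on each calibration point
        scores = 1.0 - cal_probs[np.arange(n), cal_labels]
        # Finite-sample-corrected (1 - alpha) empirical quantile
        level = min(np.ceil((n + 1) * (1.0 - alpha)) / n, 1.0)
        q_hat = np.quantile(scores, level, method="higher")
        # A class enters the set whenever its score falls below the threshold
        return (1.0 - test_probs) <= q_hat

    # Toy usage with synthetic, perfectly calibrated "softmax" outputs over 10 classes.
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(10), size=2000)
    labels = np.array([rng.choice(10, p=p) for p in probs])
    sets = conformal_sets(probs[:1000], labels[:1000], probs[1000:])
    coverage = sets[np.arange(1000), labels[1000:]].mean()
    print(f"empirical coverage: {coverage:.3f}, mean set size: {sets.sum(1).mean():.2f}")
    ```

    Note that the calibrated threshold q_hat effectively fixes a probability cutoff for set membership, so set sizes track how many top-ranked classes the model needs to capture the truth; this is plausibly the kind of link to top-k accuracy that the paper formalizes.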