2020
Conference Paper
Title
SafeML: Safety Monitoring of Machine Learning Classifiers Through Statistical Difference Measures
Abstract
Ensuring safety and explainability of machine learning (ML) is a topic of increasing relevance as data-driven applications venture into safety-critical application domains, traditionally committed to high safety standards that are not satisfied with an exclusive testing approach of otherwise inaccessible black-box systems. Especially the interaction between safety and security is a central challenge, as security violations can lead to compromised safety. The contribution of this paper to addressing both safety and security within a single concept of protection applicable during the operation of ML systems is active monitoring of the behavior and the operational context of the data-driven system based on distance measures of the Empirical Cumulative Distribution Function (ECDF). We investigate abstract datasets (XOR, Spiral, Circle) and current security-specific datasets for intrusion detection (CICIDS2017) of simulated network traffic, using distributional shift detection measures including the Kolmogorov-Smirnov, Kuiper, Anderson-Darling, Wasserstein and mixed Wasserstein-Anderson-Darling measures. Our preliminary findings indicate that there is a meaningful correlation between ML decisions and the ECDF-based distance measures of the input features. Thus, they can provide a confidence level that can be used for a) analyzing the applicability of the ML system in a given field (safety/security) and b) analyzing if the field data was maliciously manipulated. (Our preliminary code and results are available at https://github.com/ISorokos/SafeML.)
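The core idea from the abstract — comparing the ECDF of training-time feature values against operational (field) data with statistical distance measures — can be illustrated with a minimal sketch. This is not the authors' implementation (that is in the linked SafeML repository); it only shows, under the assumption of a single synthetic feature with a simulated distributional shift, how two of the measures named in the abstract (Kolmogorov-Smirnov and Wasserstein) would flag the shift:

```python
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical one-dimensional feature: values seen during training
# vs. operational ("field") data whose distribution has drifted.
train = rng.normal(loc=0.0, scale=1.0, size=1000)
field = rng.normal(loc=0.5, scale=1.0, size=1000)

# Kolmogorov-Smirnov: maximum vertical gap between the two ECDFs.
ks_stat, ks_p = ks_2samp(train, field)

# Wasserstein-1: area between the two ECDFs ("earth mover's" distance).
w_dist = wasserstein_distance(train, field)

print(f"KS statistic:         {ks_stat:.3f} (p = {ks_p:.3g})")
print(f"Wasserstein distance: {w_dist:.3f}")
```

A monitor in the spirit of the paper would compute such distances per input feature at runtime and treat large values as reduced confidence in the classifier's decisions, or as a sign of possibly manipulated field data.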
Author(s)
Aslansefat, Koorosh
Sorokos, Ioannis  
Fraunhofer-Institut für Experimentelles Software Engineering IESE  
Whiting, Declan
Tavakoli Kolagari, Ramin
Papadopoulos, Yiannis
Mainwork
Model-Based Safety and Assessment. 7th International Symposium, IMBSA 2020. Proceedings  
Project(s)
DEIS  
Funder
European Commission  
Conference
International Symposium on Model-Based Safety and Assessment (IMBSA) 2020  
DOI
10.1007/978-3-030-58920-2_13
Language
English
Keyword(s)
  • Safety
  • SafeML
  • Machine Learning
  • Deep Learning
  • Artificial Intelligence
  • Statistical difference
  • Domain adaptation