Fraunhofer-Gesellschaft
July 30, 2024
Conference Paper
Title

RMF: A Risk Measurement Framework for Machine Learning Models

Abstract
Machine learning (ML) models are now used in many safety- and security-critical applications. It is therefore important to measure the security of a system that uses ML as a component. This paper focuses on ML security, particularly for autonomous vehicles. To this end, a technical framework is described, implemented, and evaluated in a case study. Based on ISO/IEC 27004:2016, risk indicators are utilized to measure and evaluate the extent of damage and the effort required by an attacker. It is not possible, however, to determine a single risk value that represents the attacker's effort; instead, four different values must be interpreted individually.
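The abstract's central point, that attacker effort cannot be collapsed into one scalar and four values must be read individually, can be illustrated with a minimal sketch. The indicator names below are hypothetical placeholders, not the paper's actual metrics, which would need to be taken from the full text:

```python
from dataclasses import dataclass


@dataclass
class RiskIndicators:
    """Four illustrative risk-indicator values (names are assumptions,
    not taken from the paper). Per the abstract, no single scalar
    captures attacker effort, so each value is reported separately."""
    damage_extent: float        # severity of impact if an attack succeeds
    attack_success_rate: float  # fraction of attempted attacks that succeed
    attacker_knowledge: float   # required knowledge (0 = black-box, 1 = white-box)
    attack_cost: float          # normalized resource cost to the attacker


def summarize(ind: RiskIndicators) -> dict:
    # Deliberately return all four values instead of aggregating them:
    # the framework's premise is that each must be interpreted on its own.
    return {
        "damage_extent": ind.damage_extent,
        "attack_success_rate": ind.attack_success_rate,
        "attacker_knowledge": ind.attacker_knowledge,
        "attack_cost": ind.attack_cost,
    }
```

A consumer of such a report would weigh, say, a high `damage_extent` against a high `attack_cost` case by case rather than comparing single scores.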
Author(s)
Schröder, Jan
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Breier, Jakub
Mainwork
ARES 2024, 19th International Conference on Availability, Reliability & Security. Proceedings  
Conference
International Conference on Availability, Reliability and Security 2024  
Open Access
DOI
10.1145/3664476.3670867
Language
English
Keyword(s)
  • Adversarial Machine Learning
  • Backdoor Attacks
  • ISO/IEC 27004:2016
  • Machine Learning Security
  • Risk Measurement
