Title: RMF: A Risk Measurement Framework for Machine Learning Models
Authors: Schröder, Jan; Breier, Jakub
Type: Conference paper
Language: English
Dates: 2024-08-19; 2024-08-19; 2024-07-30
Handle: https://publica.fraunhofer.de/handle/publica/473855
DOI: 10.1145/3664476.3670867
Keywords: Adversarial Machine Learning; Backdoor Attacks; ISO/IEC 27004:2016; Machine Learning Security; Risk Measurement

Abstract: Machine learning (ML) models are used in many safety- and security-critical applications nowadays. It is therefore important to measure the security of a system that uses ML as a component. This paper focuses on the field of ML, particularly the security of autonomous vehicles. For this purpose, a technical framework is described, implemented, and evaluated in a case study. Based on ISO/IEC 27004:2016, risk indicators are used to measure and evaluate the extent of damage and the effort required by an attacker. It is not possible, however, to determine a single risk value that represents the attacker's effort. Therefore, four different values must be interpreted individually.