Adversarial Vulnerability of Active Transfer Learning

Müller, N.M.; Böttinger, K.


Abreu, P.H.:
Advances in Intelligent Data Analysis XIX. 19th International Symposium on Intelligent Data Analysis, IDA 2021. Proceedings : Porto, Portugal, April 26-28, 2021, Online Event
Cham: Springer Nature, 2021 (Lecture Notes in Computer Science 12695)
ISBN: 978-3-030-74250-8 (Print)
ISBN: 978-3-030-74251-5 (Online)
ISBN: 978-3-030-74252-2
International Symposium on Intelligent Data Analysis (IDA) <19, 2021, Online>
Fraunhofer AISEC

Two widely used techniques for training supervised machine learning models on small datasets are Active Learning and Transfer Learning. The former helps to spend a limited labeling budget optimally; the latter uses large pre-trained models as feature extractors and enables the design of complex, non-linear models even on tiny datasets. Combining these two approaches is an effective, state-of-the-art method when dealing with small datasets. In this paper, we share an intriguing observation: the combination of these techniques is particularly susceptible to a new kind of data poisoning attack. By adding small adversarial noise to the input, it is possible to create a collision in the output space of the transfer learner. As a result, Active Learning algorithms no longer select the most informative instances but almost exclusively the ones injected by the attacker. This allows an attacker to manipulate the active learner into selecting and including arbitrary images in the dataset, even against an overwhelming majority of unpoisoned samples. We show that a model trained on such a poisoned dataset performs significantly worse, dropping from 86% to 34% test accuracy. We evaluate this attack on both audio and image datasets and support our findings empirically. To the best of our knowledge, this weakness has not been described in the literature before.
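The core mechanism described in the abstract — crafting input noise so that a poisoned sample collides with a target in the feature space of a frozen transfer-learning backbone — can be illustrated with a toy sketch. This is not the paper's actual method: it replaces the pre-trained network with a fixed random linear map and uses plain gradient descent on the squared feature gap, purely to show what a feature-space collision means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a frozen pre-trained feature extractor:
# a fixed random linear map from 32-dim inputs to 8-dim features.
W = rng.normal(size=(8, 32))

def feats(x):
    return W @ x

x_attack = rng.normal(size=32)   # attacker-chosen input (e.g. an arbitrary image)
x_target = rng.normal(size=32)   # benign sample whose features the attacker mimics

# Craft additive noise delta so that feats(x_attack + delta) collides with
# feats(x_target), by gradient descent on 0.5 * ||feature gap||^2.
delta = np.zeros(32)
for _ in range(500):
    gap = feats(x_attack + delta) - feats(x_target)  # current feature-space error
    grad = W.T @ gap                                 # gradient w.r.t. delta
    delta -= 0.01 * grad

# Starting from zero, gradient descent stays in the row space of W,
# so delta converges to the least-norm perturbation closing the gap.
collision = np.linalg.norm(feats(x_attack + delta) - feats(x_target))
print(collision)  # approximately 0: the two inputs collide in feature space
```

In the attack scenario, such collisions confuse the acquisition function of the active learner, which then preferentially selects the poisoned samples instead of genuinely informative ones.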