2021
Conference Paper
Title

Adversarial Vulnerability of Active Transfer Learning

Abstract
Two widely used techniques for training supervised machine learning models on small datasets are Active Learning and Transfer Learning. The former helps to optimally use a limited budget to label new data. The latter uses large pre-trained models as feature extractors and enables the design of complex, non-linear models even on tiny datasets. Combining these two approaches is an effective, state-of-the-art method when dealing with small datasets. In this paper, we share an intriguing observation: the combination of these techniques is particularly susceptible to a new kind of data poisoning attack. By adding small adversarial noise to the input, it is possible to create a collision in the output space of the transfer learner. As a result, Active Learning algorithms no longer select the optimal instances, but almost exclusively the ones injected by the attacker. This allows an attacker to manipulate the active learner into selecting and including arbitrary images in the dataset, even against an overwhelming majority of unpoisoned samples. We show that a model trained on such a poisoned dataset performs significantly worse, with test accuracy dropping from 86% to 34%. We evaluate this attack on both audio and image datasets and support our findings empirically. To the best of our knowledge, this weakness has not been described before in the literature.
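The attack described in the abstract hinges on crafting inputs whose representations collide in the output space of the frozen feature extractor used for transfer learning. The sketch below illustrates one way such a feature-collision perturbation could be computed with PyTorch; it is a generic illustration under assumed names (craft_collision, eps, steps) and is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def craft_collision(x_poison, x_target, feature_extractor,
                    steps=200, eps=8 / 255, lr=0.01):
    """Perturb x_poison so its features collide with those of x_target
    in the frozen feature extractor's output space (generic sketch of a
    feature-collision style poisoning perturbation, not the paper's code)."""
    feature_extractor.eval()
    with torch.no_grad():
        target_feat = feature_extractor(x_target)      # representation to collide with

    delta = torch.zeros_like(x_poison, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        feat = feature_extractor(torch.clamp(x_poison + delta, 0.0, 1.0))
        loss = F.mse_loss(feat, target_feat)           # distance in feature space
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                    # keep the adversarial noise small

    return torch.clamp(x_poison + delta.detach(), 0.0, 1.0)
```

An active learner whose acquisition function scores instances in this feature space would then rate the poisoned samples much like the collision target, which is the selection bias the abstract describes.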
Author(s)
Müller, N.M.
Böttinger, K.
Mainwork
Advances in Intelligent Data Analysis XIX. 19th International Symposium on Intelligent Data Analysis, IDA 2021. Proceedings  
Conference
International Symposium on Intelligent Data Analysis (IDA) 2021  
Open Access
DOI
10.1007/978-3-030-74251-5_10
Additional link
Full text
Language
English
Fraunhofer-Institut für Angewandte und Integrierte Sicherheit AISEC  