Fraunhofer-Gesellschaft
September 2023
Conference Paper
Title

Exploring Adversarial Transferability in Real-World Scenarios

Title Supplement
Understanding and Mitigating Security Risk
Abstract
Deep Neural Networks (DNNs) are known to be vulnerable to artificially generated inputs known as adversarial examples. Such adversarial samples aim to induce misclassifications by optimizing a perturbation specifically matched to a given input. Interestingly, these adversarial examples are transferable from the source network on which they were created to a black-box target network. This transferability property means that attackers no longer require white-box access to a model, nor must they query the target model repeatedly to craft an effective attack. Given the rising popularity of DNNs across various domains, it is crucial to understand the vulnerability of these networks to such attacks. On this premise, the thesis studies transferability under a more realistic scenario, where source and target models can differ in accuracy, capacity, bitwidth, architecture, and other aspects. Furthermore, it also investigates defensive strategies that can minimize the effectiveness of these attacks.
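The transfer setting described above can be illustrated with a minimal toy sketch (not the thesis's actual experimental setup): an FGSM-style perturbation is crafted in white-box fashion against a simple source classifier and then applied, unchanged, to a different target classifier. All model weights and inputs below are hypothetical.

```python
import numpy as np

def predict(w, x):
    """Toy linear classifier: sign of the dot product."""
    return 1 if w @ x > 0 else -1

# Two similar but distinct linear models for the same task.
w_source = np.array([1.0, 0.8])   # white-box source model
w_target = np.array([0.9, 1.1])   # black-box target model

x = np.array([0.5, 0.4])          # clean input, true label +1
assert predict(w_source, x) == 1 and predict(w_target, x) == 1

# FGSM-style step: perturb the input against the sign of the source
# model's gradient (here just w_source) to flip the source's prediction.
eps = 1.0
x_adv = x - eps * np.sign(w_source)

# The perturbation crafted on the source also fools the target.
print(predict(w_source, x_adv), predict(w_target, x_adv))  # -1 -1
```

Because the two models learn similar decision boundaries, the gradient direction of the source is a good attack direction for the target as well, which is the intuition behind transferability.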
Author(s)
Shrestha, Abhishek
Fraunhofer-Institut für Offene Kommunikationssysteme FOKUS  
Mainwork
DC@KI2023: Proceedings of Doctoral Consortium at KI 2023  
Conference
German Conference on Artificial Intelligence 2023  
DOI
10.18420/ki2023-dc-11
Language
English