
Exploring Adversarial Examples: Patterns of One-Pixel Attacks

 
: Kügler, David; Distergoft, Alexander; Kuijper, Arjan; Mukhopadhyay, Anirban

In:

Stoyanov, Danail (Ed.):
Understanding and Interpreting Machine Learning in Medical Image Computing Applications : First International Workshops, MLCN 2018, DLF 2018, and iMIMIC 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16-20, 2018, Proceedings
Cham: Springer International Publishing, 2018 (Lecture Notes in Computer Science 11038)
ISBN: 978-3-030-02627-1 (Print)
ISBN: 978-3-030-02628-8 (Online)
ISBN: 978-3-030-02629-5
pp. 70-78
International Workshop on Machine Learning in Clinical Neuroimaging (MLCN) <1, 2018, Granada>
International Workshop on Deep Learning Fails (DLF) <1, 2018, Granada>
International Workshop on Interpretability of Machine Intelligence in Medical Image Computing (IMIMIC) <1, 2018, Granada>
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) <21, 2018, Granada>
English
Conference paper
Fraunhofer IGD
Keywords: Convolutional Neural Networks (CNN); deep learning; pattern recognition; feature recognition; attack mechanism; Guiding Theme: Digitized Work; Research Area: Computer vision (CV)

Abstract
Failure cases of black-box deep learning, e.g. adversarial examples, might have severe consequences in healthcare. Yet such failures are mostly studied in the context of real-world images with calibrated attacks. Demystifying adversarial examples requires rigorously designed studies. Unfortunately, the complexity of medical images hinders designing such studies directly on medical data. We hypothesize that adversarial examples might result from the incorrect mapping of image space to the low-dimensional generation manifold by deep networks. To test this hypothesis, we simplify a complex medical problem, namely pose estimation of surgical tools, into its barest form. An analytical decision boundary and an exhaustive search of the one-pixel attack across multiple image dimensions allow us to localize the regions of image space where one-pixel attacks frequently succeed.
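The exhaustive one-pixel search described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): every pixel location is perturbed with a small set of candidate intensities, and the locations where the prediction flips are recorded. The toy_predict classifier below is a hypothetical stand-in for the paper's analytical decision boundary, and the image and candidate values are assumptions for illustration only.

import numpy as np

def one_pixel_attack_map(image, predict, values=(0.0, 1.0)):
    # Exhaustively try single-pixel perturbations and record where they
    # flip the classifier's decision. `image` is a 2-D float array in [0, 1];
    # `predict` maps an image to a class label. Returns a boolean map of
    # locations where at least one candidate value changed the label.
    base_label = predict(image)
    success = np.zeros(image.shape, dtype=bool)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            original = image[y, x]
            for v in values:
                image[y, x] = v
                if predict(image) != base_label:
                    success[y, x] = True
                    break
            image[y, x] = original  # restore the pixel before moving on
    return success

# Hypothetical stand-in for an analytical decision boundary:
# classify by whether the mean intensity exceeds 0.5.
def toy_predict(img):
    return int(img.mean() > 0.5)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.uniform(0.45, 0.55, size=(16, 16))  # image close to the boundary
    attack_map = one_pixel_attack_map(img, toy_predict)
    print("pixels where a one-pixel attack succeeds:", int(attack_map.sum()))

Aggregating such maps over many sampled images would localize the regions of image space where one-pixel attacks succeed most often, which is the kind of analysis the abstract refers to.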

URL: http://publica.fraunhofer.de/dokumente/N-518425.html