Year
2020
Type
Conference Paper
Title
Comparing Optimization Methods for Deep Learning at the Example of Artistic Style Transfer
Abstract
Artistic style transfer is an application of deep learning using convolutional neural networks (CNNs). It combines the content of one image with the style of another using so-called perceptual loss functions. More precisely, training the network amounts to choosing the weights such that the perceptual loss is minimized. Here, we study the impact of the choice of optimization method on the final transformation result. Training an artistic style transfer network with several optimization methods commonly used in deep learning, we obtain significantly differing models. In a default parameter setting, we show that Adam, AdaMax, Adam AMSGrad, Nadam, and RMSProp yield better results than AdaDelta, AdaGrad, or RProp, as measured both by the perceptual loss function and by visual inspection. The results of the last three methods depend strongly on the chosen parameters; with a suitable selection, AdaGrad and AdaDelta can achieve results similar to those of the Adam variants or RMSProp.
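The experimental protocol described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' code: it trains the same toy network from the same initialization with each of the optimizers named above, using default hyperparameters, and reports the final loss. The tiny network, the pixel-wise stand-in for the perceptual loss, and the random data are all assumptions for illustration; the paper trains a full style transfer network, and perceptual losses are typically computed from CNN feature activations rather than raw pixels.

    # Minimal sketch (not the authors' setup): comparing optimizers on a
    # toy objective, mirroring the paper's protocol of training the same
    # network with each method under default hyperparameters.
    import torch
    import torch.nn as nn

    def make_model():
        # Hypothetical stand-in for a style transfer network.
        return nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(8, 3, 3, padding=1))

    def perceptual_loss(output, target):
        # Placeholder: a real perceptual loss compares deep feature maps
        # (e.g., CNN activations), not pixel values as done here.
        return nn.functional.mse_loss(output, target)

    optimizers = {
        "Adam":     lambda p: torch.optim.Adam(p),
        "AdaMax":   lambda p: torch.optim.Adamax(p),
        "AMSGrad":  lambda p: torch.optim.Adam(p, amsgrad=True),
        "Nadam":    lambda p: torch.optim.NAdam(p),
        "RMSProp":  lambda p: torch.optim.RMSprop(p),
        "AdaDelta": lambda p: torch.optim.Adadelta(p),
        "AdaGrad":  lambda p: torch.optim.Adagrad(p),
        "RProp":    lambda p: torch.optim.Rprop(p),
    }

    x = torch.rand(4, 3, 32, 32)       # dummy content batch
    target = torch.rand(4, 3, 32, 32)  # dummy stylization target

    for name, make_opt in optimizers.items():
        torch.manual_seed(0)           # identical init for a fair comparison
        model = make_model()
        opt = make_opt(model.parameters())
        for _ in range(100):           # default hyperparameters throughout
            opt.zero_grad()
            loss = perceptual_loss(model(x), target)
            loss.backward()
            opt.step()
        print(f"{name:8s} final loss: {loss.item():.4f}")

Such a sketch only reproduces the comparison procedure, not the paper's models or results; the reported ranking of the methods comes from the full style transfer experiments.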
Author(s)
Geng, Alexander (Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau)
Moghiseh, Ali (Fraunhofer Institute for Industrial Mathematics ITWM)
Schladitz, Katja (Fraunhofer Institute for Industrial Mathematics ITWM)
Redenbach, Claudia (Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau)
Mainwork
Forum Bildverarbeitung
Conference
Forum Bildverarbeitung - Image Processing Forum, 2020
Language
English
Institute
Fraunhofer Institute for Industrial Mathematics ITWM
Keyword(s)
  • Convolutional neural network
  • perceptual loss
  • stochastic gradient descent