Scientific publications from the Fraunhofer Institutes.

On the regularization of Wasserstein GANs

Petzka, H.; Fischer, A.; Lukovnikov, D.


ICLR 2018 Conference Track. 6th International Conference on Learning Representations. Poster papers. Online resource : Vancouver Convention Center, Vancouver, BC, Canada, April 30 - May 3, 2018
Published online, 2018
24 pp.
International Conference on Learning Representations (ICLR) <6, 2018, Vancouver>
Conference paper, electronic publication
Fraunhofer IAIS

Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs, which minimize the distance between the model and the empirical distribution in terms of a different metric, but thereby introduce a Lipschitz constraint into the optimization problem. A simple way to enforce the Lipschitz constraint on the class of functions that can be modeled by the neural network is weight clipping. Augmenting the loss with a regularization term that penalizes the deviation of the critic's gradient norm (as a function of the network's input) from one was proposed as an alternative that improves training. We present theoretical arguments why using a weaker regularization term enforcing the Lipschitz constraint is preferable. These arguments are supported by experimental results on several data sets.
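The contrast the abstract draws can be made concrete. The established regularizer penalizes any deviation of the critic's gradient norm from one (a two-sided penalty), while a weaker term enforcing only the Lipschitz constraint penalizes gradient norms only when they exceed one (a one-sided penalty). The following NumPy sketch illustrates the difference on precomputed gradient norms; the function names, the penalty weight `lam`, and the use of plain arrays instead of a real critic network are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def two_sided_penalty(grad_norms, lam=10.0):
    # Penalizes any deviation of the critic's gradient norm from 1,
    # including norms below 1: lam * mean((||grad|| - 1)^2).
    return lam * np.mean((grad_norms - 1.0) ** 2)

def one_sided_penalty(grad_norms, lam=10.0):
    # Weaker term enforcing only the Lipschitz constraint: gradient
    # norms at or below 1 incur no cost, lam * mean(max(0, ||grad|| - 1)^2).
    return lam * np.mean(np.maximum(grad_norms - 1.0, 0.0) ** 2)

# Hypothetical gradient norms of a critic at sampled input points.
norms = np.array([0.5, 0.9, 1.0, 1.5])

# The two-sided penalty is nonzero because of the norms below 1;
# the one-sided penalty charges only the point with norm 1.5.
print(two_sided_penalty(norms), one_sided_penalty(norms))
```

In a full training loop, `grad_norms` would be the norms of the critic's gradients with respect to its inputs (e.g. at points interpolated between real and generated samples), and the chosen penalty would be added to the critic loss.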