The recognition performance, i.e., the degree of consistency of the presented Gestalt operations with human perception, can be improved by introducing some parameters. To this end, ground truth is needed on a set of representative images; that is, the Gestalten that should be found must be marked on each image. Not only must the data be representative, but also the set of observers who do the labeling work, and they need to be instructed properly. Given such labeled data, the parameter setting can be improved by supervised machine learning. Examples of parameters are the weights corresponding to the different laws in the fusion, the preferred distance of the proximity law, tolerance parameters, etc.

The operations defined in the previous chapters use continuous assessment functions, most of which are differentiable. Here this property is utilized to perform gradient descent optimization on the parameter setting, i.e., learning a little with each labeled example. Another possibility is to collect statistics over the features, or feature deviations, of positive and negative example Gestalten that were found. Setting the parameters then amounts to estimating parametrized distributions from these empirical statistics.
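The gradient descent idea can be sketched as follows. This is a minimal illustration, not the book's actual assessment functions: it assumes a hypothetical Gaussian-shaped proximity assessment with a preferred distance and a tolerance as its two learnable parameters, and invented labeled examples consisting of an observed distance and a human judgement (1 for a marked Gestalt, 0 for a rejected candidate). Each example triggers one small stochastic gradient step on the squared error between assessment and label.

```python
import math

def assess(d, d0, sigma):
    """Hypothetical differentiable proximity assessment:
    exp(-((d - d0)/sigma)^2), maximal when the observed
    distance d equals the preferred distance d0."""
    z = (d - d0) / sigma
    return math.exp(-z * z)

def grads(d, d0, sigma):
    """Partial derivatives of assess w.r.t. d0 and sigma."""
    z = (d - d0) / sigma
    a = math.exp(-z * z)
    da_dd0 = 2.0 * a * z / sigma          # d a / d d0
    da_dsigma = 2.0 * a * z * z / sigma   # d a / d sigma
    return da_dd0, da_dsigma

def sgd_step(example, d0, sigma, lr=0.1):
    """One gradient step on (assess - label)^2,
    i.e., learning a little with one labeled example."""
    d, label = example
    err = assess(d, d0, sigma) - label
    g0, gs = grads(d, d0, sigma)
    d0 -= lr * 2.0 * err * g0
    sigma -= lr * 2.0 * err * gs
    return d0, sigma

# Invented labeled data: (observed distance, human judgement).
data = [(10.0, 1.0), (11.0, 1.0), (9.5, 1.0), (30.0, 0.0), (2.0, 0.0)]

d0, sigma = 15.0, 8.0   # initial parameter setting
for epoch in range(2000):
    for example in data:
        d0, sigma = sgd_step(example, d0, sigma)
```

After training, the preferred distance has moved toward the distances of the positively labeled examples, and the tolerance has adapted so that the negative examples receive low assessments.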
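The statistics-based alternative can be even simpler. The sketch below, again with invented numbers, fits an assumed Gaussian model to the feature values of the positively labeled examples: the preferred distance of the proximity law is set to the sample mean and the tolerance to the sample standard deviation, i.e., the parametrized distribution is estimated directly from the empirical statistics.

```python
import statistics

# Invented distances of Gestalten that the observers marked as positive.
positive_distances = [9.5, 10.0, 10.5, 11.0, 9.0, 10.2]

# Estimate the parameters of an assumed Gaussian model from the data:
# preferred distance = sample mean, tolerance = sample standard deviation.
preferred_distance = statistics.mean(positive_distances)
tolerance = statistics.stdev(positive_distances)
```

In contrast to the iterative gradient descent scheme, these estimates are obtained in one pass over the labeled data, but they presuppose that the chosen distribution family fits the empirical deviations.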