
Evaluation of fusion methods for gamma-divergence-based neural network ensembles

Knauer, U.; Backhaus, A.; Seiffert, U.


Dorigo, M.; Institute of Electrical and Electronics Engineers -IEEE-; IEEE Computational Intelligence Society:
IEEE Symposium Series on Computational Intelligence, SSCI 2015. Proceedings : 7-10 December 2015, Cape Town, South Africa
Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2015
ISBN: 978-1-4799-7560-0 (Print)
ISBN: 978-1-4799-7561-7
Symposium Series on Computational Intelligence (SSCI) <2015, Cape Town>
Symposium on Computational Intelligence and Data Mining (CIDM) <6, 2015, Cape Town>
Conference Paper
Fraunhofer IFF
10-fold cross-validation; adaptive boosting; ensemble classification; Euclidean distance; fusion methods; fuzzy templates; hyperspectral image classification; hyperspectral imaging; image classification; image fusion; learning (artificial intelligence); majority voting classifier; neurons; prototypes; radial basis function networks; random forest; SCANN algorithm; training; γ-divergence distance metric; γ-divergence-based neural network ensembles

A significant increase in the accuracy of hyperspectral image classification has been achieved by using ensembles of radial basis function networks trained with different numbers of neurons and different distance metrics. The best results have been obtained with γ-divergence distance metrics. In this paper, previous work is extended by evaluating different approaches for fusing the multiple real-valued classifier outputs into a crisp ensemble classification result. The evaluation is done by 10-fold cross-validation. The results show, first, that an additional gain in classification accuracy can be achieved by selecting an appropriate fusion algorithm. Second, the SCANN algorithm and Fuzzy Templates are identified as the best-performing fusion methods with respect to the complete ensemble of base classifiers. For several subsets of classifiers, Majority Voting yields similar results, while other simple combiners perform worse. Trainable combiners based on Adaptive Boosting and Random Forest rank among the top methods.
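The fusion step the abstract compares can be illustrated with the simplest of the listed combiners, Majority Voting: each base classifier's real-valued class scores are first hardened to a crisp label, and the label chosen by most classifiers wins. A minimal sketch, assuming score matrices of shape (samples × classes); the function names and toy scores are illustrative, not taken from the paper:

```python
from collections import Counter

def crisp_labels(scores):
    """Harden one classifier's real-valued class scores to a crisp
    label per sample by taking the arg-max class index."""
    return [max(range(len(row)), key=row.__getitem__) for row in scores]

def majority_vote(ensemble_scores):
    """Fuse the real-valued outputs of several base classifiers into a
    single crisp ensemble decision per sample by majority voting.

    ensemble_scores: list of per-classifier score matrices, each a
    nested list of shape (n_samples, n_classes).
    """
    per_clf_labels = [crisp_labels(s) for s in ensemble_scores]
    fused = []
    for sample_votes in zip(*per_clf_labels):
        # Counter.most_common picks the most frequent label;
        # ties are resolved by first-encounter order of the votes.
        label, _ = Counter(sample_votes).most_common(1)[0]
        fused.append(label)
    return fused

# Hypothetical outputs of three base classifiers for two samples,
# two classes (purely illustrative numbers):
scores_a = [[0.9, 0.1], [0.2, 0.8]]
scores_b = [[0.6, 0.4], [0.7, 0.3]]
scores_c = [[0.3, 0.7], [0.1, 0.9]]
print(majority_vote([scores_a, scores_b, scores_c]))  # → [0, 1]
```

The trainable combiners evaluated in the paper (e.g. Adaptive Boosting or Random Forest) would instead learn the fusion from the stacked real-valued outputs rather than discarding them via arg-max.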