2023
Conference Paper
Title
Deep Self-Supervised Hyperspectral-Lidar Fusion for Land Cover Classification
Abstract
Land Cover Classification (LCC) is a central task because its maps provide essential input for decision-making processes. In recent years, efforts have been made to fuse Hyperspectral (HS) and Light Detection and Ranging (LiDAR) data to build accurate classifiers, since the combined modalities enable high-resolution classification of scenes with spectrally similar categories. Most existing approaches address the task by combining image-level features during fusion and then classifying the fused representation. An alternative is to train single-modal classifiers on self-supervised features and then fuse their individual decisions. The proposed method follows this alternative: it solves several self-supervised pretext tasks, transfers the learned weights to individual classifiers, and combines their outputs to perform LCC. It first trains a Siamese network that applies implicit contrastive learning to augmented views of HS data to learn similarity semantics. It then trains two Denoising Autoencoders (DAEs) separately to remove noise from artificially corrupted HS and LiDAR patches; denoising drives the networks to extract the most relevant image-level features. Subsequently, three instances of a ResNet50-based classifier are initialized with the individually learned self-supervised features and trained with a fraction of the available labels. A Decision Fusion Module (DFM) takes each learned classifier’s weights and fuses their individual decisions to compute the final classification. The method is validated on two benchmark datasets, and experiments show that the learned self-supervised representations help it achieve accurate classification results.
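The abstract describes a decision-level fusion of three ResNet50-based single-modal branches. The following Python sketch is not the authors' code; it only illustrates the general idea under stated assumptions: the helper names (make_classifier, DecisionFusion), the channel counts and patch sizes, and the learnable weighted-average fusion rule are all hypothetical, and the self-supervised pretraining (Siamese contrastive learning, DAEs) is assumed to have already produced the backbone weights.

# Minimal sketch (assumptions noted above), PyTorch/torchvision.
import torch
import torch.nn as nn
from torchvision.models import resnet50

def make_classifier(in_channels: int, num_classes: int) -> nn.Module:
    # ResNet50-based classifier; the first convolution is adapted to the
    # modality's channel count and the head to the number of land-cover classes.
    net = resnet50(weights=None)
    net.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

class DecisionFusion(nn.Module):
    # Fuses per-branch class probabilities with learnable convex weights
    # (an assumed fusion rule, used here for illustration only).
    def __init__(self, num_branches: int = 3):
        super().__init__()
        self.logits_w = nn.Parameter(torch.zeros(num_branches))

    def forward(self, probs_list):
        w = torch.softmax(self.logits_w, dim=0)            # convex weights
        stacked = torch.stack(probs_list, dim=0)           # (branches, batch, classes)
        return (w[:, None, None] * stacked).sum(dim=0)     # weighted average of decisions

# Example with assumed band counts / patch sizes.
num_classes = 15
hs_contrastive = make_classifier(in_channels=144, num_classes=num_classes)  # HS branch, contrastive-pretrained
hs_dae         = make_classifier(in_channels=144, num_classes=num_classes)  # HS branch, DAE-pretrained
lidar_dae      = make_classifier(in_channels=1,   num_classes=num_classes)  # LiDAR branch, DAE-pretrained
fusion = DecisionFusion(num_branches=3)

hs_patch    = torch.randn(8, 144, 32, 32)   # hyperspectral patches
lidar_patch = torch.randn(8, 1, 32, 32)     # LiDAR-derived patches

probs = [torch.softmax(m(x), dim=1)
         for m, x in [(hs_contrastive, hs_patch), (hs_dae, hs_patch), (lidar_dae, lidar_patch)]]
fused = fusion(probs)          # (8, num_classes) fused class probabilities
pred = fused.argmax(dim=1)     # final land-cover prediction per patch

Because the branch weights are passed through a softmax, the fused output remains a valid probability distribution over classes, which is one simple way a decision-fusion stage can combine independently trained single-modal classifiers.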
Author(s)