2024
Conference Paper
Title
Evaluation of Self-Supervised Learning Techniques for Non-Parametric Few-Shot Hyperspectral-Lidar Classification
Abstract
Diverse engineering disciplines rely on highly detailed, up-to-date thematic maps for daily decision-making. Over the last decade, researchers have approached land cover classification with supervised deep learning, which requires many labels per category. Labeling is costly, error-prone, and difficult to scale to ever-growing remote sensing data. Self-supervised learning emerged to learn feature representations from unlabeled datasets, facilitating, for instance, few-shot downstream tasks through transfer of previously acquired knowledge. Since highly detailed maps often rely on hyperspectral and LiDAR data, it is necessary to quantify the potential of recent self-supervised learning techniques to learn multimodal representations that enable accurate few-shot hyperspectral-LiDAR classification. The present work fills that gap by comparing the representation learning ability of four modern self-supervised learning strategies. It first implements modality-specific encoders to handle hyperspectral and rasterized LiDAR data individually. It then couples each method's architecture on top of the encoders, building pseudo-Siamese networks whose objectives are specific to each learning strategy. It further implements multi-level feature fusion to combine learned features at different depth levels. Ultimately, it performs non-parametric classification using k-nearest neighbors and support vector machine classifiers to assign categories to the joint features at test time. Experiments show that the SimSiam-based method learned the most discriminative features across the studied datasets, achieving consistent classifications at four labeling levels.
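The final stage described above, non-parametric few-shot classification of fused multimodal features, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the fused hyperspectral-LiDAR embeddings are simulated here with synthetic Gaussian clusters, and only the k-nearest-neighbors branch is shown, with classes assigned by majority vote over the k closest labeled "shots".

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for fused hyperspectral+LiDAR embeddings:
# three land cover classes, each a Gaussian cluster in a 16-D feature space.
n_classes, shots, dim = 3, 5, 16
centers = rng.normal(0.0, 5.0, size=(n_classes, dim))

# Few labeled support samples per class (the "shots") and unlabeled queries.
support = np.concatenate([c + rng.normal(0.0, 0.5, (shots, dim)) for c in centers])
support_y = np.repeat(np.arange(n_classes), shots)
query = np.concatenate([c + rng.normal(0.0, 0.5, (10, dim)) for c in centers])
query_y = np.repeat(np.arange(n_classes), 10)

def knn_predict(support, support_y, query, k=3):
    """Non-parametric classification: majority vote among the k nearest shots."""
    # Euclidean distance from every query to every support embedding.
    d = np.linalg.norm(query[:, None, :] - support[None, :, :], axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]   # indices of the k closest shots
    votes = support_y[nearest]               # their class labels
    return np.array([np.bincount(v, minlength=n_classes).argmax() for v in votes])

pred = knn_predict(support, support_y, query)
accuracy = (pred == query_y).mean()
```

Because k-NN stores the labeled shots directly and needs no training, adding a new category only requires adding its labeled embeddings to the support set, which is what makes the scheme attractive for few-shot settings.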