Title: Semantic Clustering of Image Retrieval Databases used for Visual Localization
Authors: Hölzemann, Henry; Fiolka, Torsten
Type: Conference paper
Date issued: 2025-05-19
Language: English
Handle: https://publica.fraunhofer.de/handle/publica/487688
DOI: 10.1109/WACV61041.2025.00680
Scopus ID: 2-s2.0-105003626910

Abstract: Accurate self-localization of unmanned aerial systems (UAS) is needed to reduce their dependency on global navigation satellite systems (GNSS). Image retrieval techniques that compare aerial images with a reference database can be used for visual localization (VL), but the search space may be vast, and a full search is not feasible on a small UAS. In this work, we propose a novel solution that divides the reference database into smaller clusters based on the semantic content of the images. To this end, we generate and use a dataset for semantic segmentation of aerial image captures. By characterizing scenes and objects in images semantically, retrieval-based systems can differentiate images and scenes efficiently. Using a divide-and-conquer approach, images with similar semantics are matched within smaller partial databases. This technique reduces search times and makes VL a feasible solution for UAS localization in large-scale outdoor environments.
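
The abstract describes partitioning a retrieval database by semantic content and searching only within the matching partition. The snippet below is a minimal, self-contained sketch of that divide-and-conquer idea under illustrative assumptions, not the authors' implementation: the semantic signature (a class histogram over a segmentation map), the plain k-means clustering, and all names (`semantic_signature`, `build_clusters`, `retrieve`, `NUM_CLASSES`, `SIG_CLUSTERS`) are hypothetical stand-ins for whatever the paper actually uses.

```python
import numpy as np

NUM_CLASSES = 8   # hypothetical number of semantic classes in the segmentation maps
SIG_CLUSTERS = 4  # hypothetical number of partial databases


def semantic_signature(seg_map: np.ndarray) -> np.ndarray:
    """Normalized class histogram of a semantic segmentation map."""
    hist = np.bincount(seg_map.ravel(), minlength=NUM_CLASSES).astype(np.float64)
    return hist / hist.sum()


def build_clusters(signatures: np.ndarray, k: int, iters: int = 20, seed: int = 0):
    """Plain k-means over semantic signatures; returns centroids and cluster labels."""
    rng = np.random.default_rng(seed)
    centroids = signatures[rng.choice(len(signatures), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(signatures[:, None] - centroids[None], axis=2).argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = signatures[labels == c].mean(axis=0)
    # Recompute labels once so they are consistent with the final centroids.
    labels = np.linalg.norm(signatures[:, None] - centroids[None], axis=2).argmin(axis=1)
    return centroids, labels


def retrieve(query_desc, query_sig, centroids, labels, ref_descs):
    """Search only the partial database whose semantic centroid is closest to the query."""
    cluster = np.linalg.norm(centroids - query_sig, axis=1).argmin()
    idx = np.flatnonzero(labels == cluster)
    dists = np.linalg.norm(ref_descs[idx] - query_desc, axis=1)
    return idx[dists.argmin()]  # index of the best match in the full database


# Toy usage with random stand-ins for segmentation maps and image descriptors.
rng = np.random.default_rng(1)
ref_segs = rng.integers(0, NUM_CLASSES, size=(200, 32, 32))
ref_descs = rng.normal(size=(200, 128))
sigs = np.stack([semantic_signature(s) for s in ref_segs])
centroids, labels = build_clusters(sigs, SIG_CLUSTERS)

q = 42  # pretend reference image 42 is the query
match = retrieve(ref_descs[q], sigs[q], centroids, labels, ref_descs)
print("best match index:", match)  # recovers 42, searching only one partial database
```

The point of the sketch is the reduced search cost: the descriptor comparison in `retrieve` runs over roughly 1/`SIG_CLUSTERS` of the reference images instead of the whole database, which is the effect the abstract attributes to matching within smaller partial databases.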