SIUNet: Sparsity Invariant U-Net for Edge-Aware Depth Completion
Conference paper

Authors: Avinash Nittur Ramesh, Fabio Giovanneschi, Maria Antonia Gonzalez Huici
Published: 2023 (record dated 2023-08-22)
Handle: https://publica.fraunhofer.de/handle/publica/448518
DOI: 10.1109/WACV56688.2023.00577
Scopus ID: 2-s2.0-85149049933
Language: English

Abstract: Depth completion is the task of generating dense depth images from sparse depth measurements, e.g., from LiDAR. Existing unguided approaches fail to recover dense depth images with sharp object boundaries due to depth bleeding, especially from extremely sparse measurements. State-of-the-art guided approaches require additional processing for spatial and temporal alignment of multi-modal inputs, and sophisticated architectures for data fusion, making them non-trivial to adapt to customized sensor setups. To address these limitations, we propose an unguided approach based on U-Net that is invariant to the sparsity of its inputs. Boundary consistency in the reconstruction is explicitly enforced through auxiliary learning on a synthetic dataset with dense depth and depth contour images as targets, followed by fine-tuning on a real-world dataset. With our network architecture and simple implementation approach, we achieve competitive results among unguided approaches on the KITTI benchmark and show that the reconstructed image has sharp boundaries and is robust even to extremely sparse LiDAR measurements.

Keywords: 3D computer vision; machine learning architectures, formulations, and algorithms (including transfer); image recognition and understanding (object detection, categorization, segmentation, scene modeling, visual reasoning)
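The abstract does not spell out how sparsity invariance is achieved. A common building block for sparsity-invariant depth networks is the normalized ("sparsity-invariant") convolution of Uhrig et al. (2017), which averages only over valid pixels and propagates a validity mask; whether SIUNet uses this exact operator is an assumption, and the function below is an illustrative NumPy sketch, not code from the paper:

```python
import numpy as np

def sparsity_invariant_conv(depth, mask, kernel):
    """Normalized convolution over a sparse depth map.

    depth:  (H, W) depth values (undefined where mask == 0)
    mask:   (H, W) validity mask in {0, 1}
    kernel: (kh, kw) non-negative weights

    Each output pixel is the kernel-weighted mean of the *valid*
    inputs in its window, renormalized by the kernel mass on valid
    pixels, so the result does not depend on how sparse the input is.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    d = np.pad(depth * mask, ((ph, ph), (pw, pw)))
    m = np.pad(mask.astype(float), ((ph, ph), (pw, pw)))
    H, W = depth.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch_d = d[i:i + kh, j:j + kw]
            patch_m = m[i:i + kh, j:j + kw]
            den = float(np.sum(kernel * patch_m))
            if den > 1e-8:
                out[i, j] = float(np.sum(kernel * patch_d)) / den
                new_mask[i, j] = 1.0
    return out, new_mask
```

On a constant depth surface, the output equals the input value wherever at least one measurement falls in the window, regardless of how many pixels were dropped, which is exactly the invariance property the title refers to.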