  • Publication
    Real-time dense 3D reconstruction from monocular video data captured by low-cost UAVs
    (2021)
    Weinmann, Martin
    Real-time 3D reconstruction enables fast dense mapping of the environment which benefits numerous applications, such as navigation or live evaluation of an emergency. In contrast to most real-time capable approaches, our method does not need an explicit depth sensor. Instead, we only rely on a video stream from a camera and its intrinsic calibration. By exploiting the self-motion of the unmanned aerial vehicle (UAV) flying with oblique view around buildings, we estimate both camera trajectory and depth for selected images with enough novel content. To create a 3D model of the scene, we rely on a three-stage processing chain. First, we estimate the rough camera trajectory using a simultaneous localization and mapping (SLAM) algorithm. Once a suitable constellation is found, we estimate depth for local bundles of images using a Multi-View Stereo (MVS) approach and then fuse this depth into a global surfel-based model. For our evaluation, we use 55 video sequences with diverse settings, consisting of both synthetic and real scenes. We evaluate not only the generated reconstruction but also the intermediate products and achieve competitive results both qualitatively and quantitatively. At the same time, our method can keep up with a 30 fps video for a resolution of 768 × 448 pixels.
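The pipeline above only triggers depth estimation for images "with enough novel content". A minimal sketch of one such keyframe-selection rule, a baseline-to-depth heuristic chosen here for illustration and not taken from the paper:

```python
import numpy as np

def select_keyframes(poses, mean_depth, baseline_ratio=0.1):
    """Pick frames whose baseline to the last keyframe is large enough
    relative to the mean scene depth, i.e. frames with novel content.

    poses: list of (3,) camera positions along the trajectory.
    """
    keyframes = [0]
    for i, p in enumerate(poses[1:], start=1):
        baseline = np.linalg.norm(p - poses[keyframes[-1]])
        if baseline / mean_depth >= baseline_ratio:
            keyframes.append(i)
    return keyframes

# A camera moving 0.5 m per frame at 10 m mean scene depth:
poses = [np.array([0.5 * i, 0.0, 0.0]) for i in range(10)]
print(select_keyframes(poses, mean_depth=10.0))  # [0, 2, 4, 6, 8]
```

In a real system the selected keyframes would then be grouped into the local bundles handed to the MVS stage.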
  • Publication
    Incorporating interferometric coherence into LULC classification of airborne PolSAR-images using fully convolutional networks
    (2020)
    Weinmann, Martin
    Inspired by the application of state-of-the-art Fully Convolutional Networks (FCNs) for the semantic segmentation of high-resolution optical imagery, recent works have successfully transferred this methodology to pixel-wise land use and land cover (LULC) classification of PolSAR data. So far, mainly single PolSAR images have been included in FCN-based classification processes. To further increase classification accuracy, this paper presents an approach for integrating interferometric coherence derived from co-registered image pairs into an FCN-based classification framework. A network based on an encoder-decoder structure with two separate encoder branches is presented for this task. It extracts features from polarimetric backscattering intensities on the one hand and interferometric coherence on the other hand. Based on a joint representation of the complementary features, pixel-wise classification is performed. To overcome the scarcity of labelled SAR data for training and testing, annotations are generated automatically by fusing available LULC products. Experimental evaluation is performed on high-resolution airborne SAR data captured over the German Wadden Sea. The results demonstrate that the proposed model produces smooth and accurate classification maps. A comparison with a single-branch FCN model indicates that the appropriate integration of interferometric coherence improves classification performance.
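The coherence fed to the second encoder branch can be estimated from a co-registered complex image pair with the standard sample-coherence formula; a minimal NumPy sketch (window size and variable names are illustrative, not the paper's processing chain):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def interferometric_coherence(s1, s2, win=5):
    """Estimate the coherence magnitude |gamma| of two co-registered
    complex SAR images over a sliding win x win window."""
    num = sliding_window_view(s1 * np.conj(s2), (win, win)).sum(axis=(-2, -1))
    p1 = sliding_window_view(np.abs(s1) ** 2, (win, win)).sum(axis=(-2, -1))
    p2 = sliding_window_view(np.abs(s2) ** 2, (win, win)).sum(axis=(-2, -1))
    return np.abs(num) / np.sqrt(p1 * p2)

rng = np.random.default_rng(0)
s = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
print(interferometric_coherence(s, s).max())  # identical images -> 1.0
```

Identical inputs yield coherence 1 everywhere; temporal or volume decorrelation between the acquisitions lowers it, which is exactly the signal the two-branch network exploits.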
  • Publication
    Automatic Generation of Training Data for Land Use and Land Cover Classification by Fusing Heterogeneous Data Sets
    (2020)
    Weinmann, Martin
    Weidner, Uwe
    Nowadays, automatic classification of remote sensing data can efficiently produce maps of land use and land cover, which provide an essential source of information in the field of environmental sciences. Most state-of-the-art algorithms use supervised learning methods that require a large amount of annotated training data. In order to avoid time-consuming manual labelling, we propose a method for the automatic annotation of remote sensing data that relies on available land use and land cover information. Using the example of automatic labelling of SAR data, we show how the Dempster-Shafer evidence theory can be used to fuse information from different land use and land cover products into one training data set. Our results confirm that the combination of information from OpenStreetMap, CORINE Land Cover 2018, Global Surface Water and the SAR data itself leads to reliable class assignments, and that this combination outperforms each considered single land use and land cover product.
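Dempster's rule of combination, on which the fusion of the LULC products relies, fits in a few lines; the focal elements and mass values below are invented for illustration and are not taken from the paper:

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame of discernment
    with Dempster's rule; focal elements are frozensets of class labels."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    norm = 1.0 - conflict                    # renormalize over non-conflicting mass
    return {k: v / norm for k, v in combined.items()}

# Two hypothetical sources: one favours 'water', one hedges between classes
m_src1 = {frozenset({'water'}): 0.7, frozenset({'water', 'land'}): 0.3}
m_src2 = {frozenset({'water'}): 0.6, frozenset({'water', 'land'}): 0.4}
print(dempster_combine(m_src1, m_src2))  # {'water'}: 0.88, {'water','land'}: 0.12
```

Agreement between sources concentrates mass on the singleton class, which is how combining several LULC products can produce more reliable labels than any single product.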
  • Publication
    Self-Supervised Learning for Monocular Depth Estimation from Aerial Imagery
    (2020)
    Weinmann, Martin
    Hinz, Stefan
    Supervised learning based methods for monocular depth estimation usually require large amounts of extensively annotated training data. In the case of aerial imagery, this ground truth is particularly difficult to acquire. Therefore, in this paper, we present a method for self-supervised learning for monocular depth estimation from aerial imagery that does not require annotated training data. For this, we only use an image sequence from a single moving camera and learn to simultaneously estimate depth and pose information. By sharing the weights between pose and depth estimation, we achieve a relatively small model, which favors real-time application. We evaluate our approach on three diverse datasets and compare the results to conventional methods that estimate depth maps based on multi-view geometry. We achieve an accuracy δ < 1.25 of up to 93.5 %. In addition, we have paid particular attention to the generalization of a trained model to unknown data and the self-improving capabilities of our approach. We conclude that, even though the results of monocular depth estimation are inferior to those achieved by conventional methods, they are well suited to provide a good initialization for methods that rely on image matching or to provide estimates in regions where image matching fails, e.g. in occluded or texture-less regions.
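The reported accuracy of up to 93.5 % refers to the standard threshold metric δ < 1.25 for depth estimation; a minimal sketch of its computation (the depth values are invented for the demo):

```python
import numpy as np

def delta_accuracy(pred, gt, thresh=1.25):
    """Fraction of pixels whose ratio max(pred/gt, gt/pred) stays below
    thresh -- the delta < 1.25 metric reported for depth estimation."""
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < thresh))

gt = np.array([10.0, 20.0, 30.0, 40.0])
pred = np.array([11.0, 26.0, 29.0, 80.0])  # 26/20 = 1.3 and 80/40 = 2.0 fail
print(delta_accuracy(pred, gt))            # 0.5
```

Higher powers of the threshold (δ < 1.25², δ < 1.25³) are commonly reported alongside it for a coarser view of the error distribution.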
  • Publication
    Superpoints in RANSAC planes: A new approach for ground surface extraction exemplified on point classification and context-aware reconstruction
    (2020)
    Lucks, Lukas
    Weinmann, Martin
    In point clouds obtained from airborne data, the ground points have traditionally been identified as local minima of the altitude. Subsequently, 2.5D digital terrain models have been computed by approximating a smooth surface from the ground points. But how can we handle purely 3D surfaces of cultural heritage monuments covered by vegetation or Alpine overhangs, where trees are not necessarily growing in bottom-to-top direction? We suggest a new approach based on a combination of superpoints and RANSAC implemented as a filtering procedure, which allows efficient handling of large, challenging point clouds without the necessity of training data. If training data is available, covariance-based features, point histogram features, and dataset-dependent features as well as combinations thereof are applied to classify points. Results achieved with a Random Forest classifier and non-local optimization using Markov Random Fields are analyzed for two challenging datasets: an airborne laser scan and a photogrammetrically reconstructed point cloud. As an application, surface reconstruction from the thus cleaned point sets is demonstrated.
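The RANSAC step at the core of the filtering procedure can be sketched as follows; the thresholds and the synthetic point cloud are illustrative and not the paper's setup (which additionally operates on superpoints rather than raw points):

```python
import numpy as np

def ransac_plane(points, iters=200, dist_thresh=0.05, rng=None):
    """Fit a plane n . x = d to a point cloud with RANSAC; return the
    inlier mask of the best hypothesis (e.g. candidate ground points)."""
    rng = np.random.default_rng(rng)
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        n /= norm
        dist = np.abs((points - p0) @ n)     # point-to-plane distances
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

# Noisy ground plane z ~ 0 plus scattered off-ground clutter:
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(0, 0.01, 200)]
clutter = np.c_[rng.uniform(0, 10, (50, 2)), rng.uniform(1, 5, 50)]
mask = ransac_plane(np.vstack([ground, clutter]), rng=1)
print(mask[:200].mean())  # most ground points recovered as inliers
```

Because consensus is counted rather than learned, the procedure needs no training data, which is the property the abstract emphasizes.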
  • Publication
    Deep Cross-Domain Building Extraction for Selective Depth Estimation from Oblique Aerial Imagery
    (2018)
    Thiel, Laurenz
    Weinmann, Martin
    With the technological advancements of aerial imagery and accurate 3D reconstruction of urban environments, more and more attention has been paid to the automated analysis of urban areas. In our work, we examine two important aspects that allow online analysis of building structures in city models given oblique aerial image sequences, namely automatic building extraction with convolutional neural networks (CNNs) and selective real-time depth estimation from aerial imagery. We use transfer learning to train the Faster R-CNN method for real-time deep object detection, by combining a large ground-based dataset for urban scene understanding with a smaller number of images from an aerial dataset. We achieve an average precision (AP) of about 80% for the task of building extraction on a selected evaluation dataset. Our evaluation focuses on both dataset-specific learning and transfer learning. Furthermore, we present an algorithm that allows for multi-view depth estimation from aerial image sequences in real-time. We adopt the semi-global matching (SGM) optimization strategy to preserve sharp edges at object boundaries. In combination with the Faster R-CNN, it allows a selective reconstruction of buildings, identified with regions of interest (RoIs), from oblique aerial imagery.
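The SGM optimization strategy aggregates per-pixel matching costs along scanlines, penalizing small disparity jumps lightly and large ones heavily, which preserves sharp depth edges. A minimal one-directional sketch (penalty values and the toy cost volume are illustrative, not the paper's parameters):

```python
import numpy as np

def sgm_aggregate_1d(cost, p1=1.0, p2=8.0):
    """Aggregate a matching-cost volume along one scanline direction with
    the SGM penalties: p1 for +-1 disparity jumps, p2 for larger jumps.

    cost: (W, D) per-pixel matching cost along one image row.
    """
    W, D = cost.shape
    agg = np.empty_like(cost)
    agg[0] = cost[0]
    for x in range(1, W):
        prev = agg[x - 1]
        same = prev                                      # keep disparity
        small = np.minimum(np.r_[prev[1:], np.inf],      # +-1 disparity step
                           np.r_[np.inf, prev[:-1]]) + p1
        large = prev.min() + p2                          # arbitrary jump
        agg[x] = cost[x] + np.minimum(np.minimum(same, small), large) - prev.min()
    return agg

# A scanline where the true disparity jumps from 2 to 5 halfway through:
cost = np.full((8, 8), 10.0)
cost[:4, 2] = 0.0
cost[4:, 5] = 0.0
print(sgm_aggregate_1d(cost).argmin(axis=1))  # [2 2 2 2 5 5 5 5]
```

Full SGM sums such aggregations over several directions before taking the per-pixel minimum; restricting the computation to Faster R-CNN RoIs is what makes the reconstruction selective.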
  • Publication
    Improved UAV-borne 3D mapping by fusing optical and laserscanner data
    (2013)
    Jutzi, Boris
    Weinmann, Martin
    In this paper, a new method for fusing optical and laserscanner data is presented for improved UAV-borne 3D mapping. We propose to equip an unmanned aerial vehicle (UAV) with a small platform which includes two sensors: a standard low-cost digital camera and a lightweight Hokuyo UTM-30LX-EW laserscanning device (210 g without cable). Initially, a calibration is carried out for the utilized devices. This involves a geometric camera calibration and the estimation of the position and orientation offset between the two sensors by lever-arm and bore-sight calibration. Subsequently, feature tracking is performed through the image sequence by considering extracted interest points as well as the projected 3D laser points. These 2D results are fused with the measured laser distances and fed into a bundle adjustment in order to perform Simultaneous Localization and Mapping (SLAM). It is demonstrated that an improvement in pose estimation precision is achieved by fusing optical and laserscanner data.
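Projecting the 3D laser points into the image, as needed for the joint feature tracking, combines the lever-arm/bore-sight calibration with the camera intrinsics. A minimal pinhole-projection sketch; the calibration numbers below are invented for the demo, not the values from the paper:

```python
import numpy as np

def project_laser_points(points_l, R_cl, t_cl, K):
    """Project 3D laser points into the camera image, given the
    camera-from-laser rotation R_cl and translation t_cl (from lever-arm
    and bore-sight calibration) and the camera intrinsic matrix K."""
    pts_c = points_l @ R_cl.T + t_cl   # laser frame -> camera frame
    uv = pts_c @ K.T                   # homogeneous pinhole projection
    return uv[:, :2] / uv[:, 2:3]      # normalize to pixel coordinates

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # assume aligned axes for the demo
t = np.array([0.1, 0.0, 0.0])          # hypothetical 10 cm lever arm along x
pt = np.array([[0.0, 0.0, 5.0]])       # point 5 m in front of the sensor
print(project_laser_points(pt, R, t, K))  # [[336. 240.]]
```

The projected pixel positions can then be tracked alongside the extracted interest points, and the known laser range constrains the scale of the bundle adjustment.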