Instance segmentation of deadwood objects in combined optical and elevation data using convolutional neural networks
Due to the increased frequency of drought events and pest infestations, large amounts of deadwood are accumulating in temperate forests. Accurate monitoring of deadwood and analysis of its spatial and temporal distribution are therefore more important than ever, as they enable faster responses to pest outbreaks and to an elevated risk of forest fires. As previous studies have highlighted, state-of-the-art remote sensing platforms, such as UAVs, offer great synergies for deadwood monitoring when combined with machine- and deep-learning approaches from computer vision. Key remaining challenges are acquiring sufficient amounts of labeled data for model training and identifying deadwood at the single-tree level, which is required to estimate the deadwood volume in an area. The presented work demonstrates that accurate instance segmentation can be achieved in the combined RGB and elevation domain with limited training data. A Mask R-CNN model was trained to map standing and lying deadwood instances in German forests, achieving an overall accuracy of 92.4% and a mean average precision of 43.4%. To compensate for a possibly insufficient amount of annotated images, we experimented with a semi-supervised active learning pipeline: in each iteration, after the model predicted a batch of new data, only the instances that achieved a high prediction score were added to the pool of training data, and the model was retrained for the next iteration. Although the fully supervised approach led to superior results overall, this study shows that the proposed method can reliably map individual deadwood objects. The approach not only represents an end-to-end framework for image annotation, model training, and large-scale mapping of deadwood, but can also be adapted with reasonable effort to similar problems in the future.
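The semi-supervised pipeline described above can be illustrated with a minimal sketch of one confidence-filtered pseudo-labeling iteration. All names here (`pseudo_label_iteration`, `stub_predict`, `SCORE_THRESHOLD`) are illustrative assumptions, not the authors' actual implementation:

```python
# Assumed cut-off for what counts as a "high prediction score";
# the abstract does not state the threshold used in the study.
SCORE_THRESHOLD = 0.9

def pseudo_label_iteration(model_predict, unlabeled_batch, training_pool):
    """Keep only high-confidence predicted instances and add them to the
    training pool, which would then be used to retrain the model."""
    for image in unlabeled_batch:
        for instance, score in model_predict(image):
            if score >= SCORE_THRESHOLD:
                training_pool.append((image, instance))
    return training_pool

# Toy usage with a stubbed predictor: each image yields two candidate
# instances, and only the confident one is retained.
def stub_predict(image):
    return [("deadwood_mask_a", 0.95), ("deadwood_mask_b", 0.40)]

pool = pseudo_label_iteration(stub_predict, ["img_001", "img_002"], [])
print(len(pool))  # 2 accepted instances, one per image
```

In a full run, this filtering step would alternate with retraining the Mask R-CNN on the enlarged pool until the unlabeled data is exhausted or performance plateaus.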