Böge, Melanie; Bulatov, Dimitri; Lucks, Lukas
Date deposited: 2022-03-14
Date issued: 2020
Handle: https://publica.fraunhofer.de/handle/publica/407208
DOI: 10.1007/978-3-030-41590-7_21

Abstract: According to the United States National Centers for Environmental Information (NCEI), 2017 was one of the most expensive years in terms of losses due to numerous weather and climate disaster events. To reduce the expenditure of handling insurance claims and the interactive adjustment of losses, automatic methods analyzing post-disaster images of large areas are increasingly being employed. In our work, roof damage analysis was carried out from high-resolution aerial images captured after a devastating hurricane. We compared the performance of a conventional (Random Forest) classifier, which operates on superpixels and relies on sophisticated, hand-crafted features, with two Convolutional Neural Networks (CNN) for semantic image segmentation, namely SegNet and DeepLabV3+. The results vary greatly, depending on the complexity of the roof shapes. In the case of homogeneous shapes, the results of all three methods are comparable and promising. For complex roof structures, the results show that the CNN-based approaches perform slightly better than the conventional classifier; the performance of the latter is, however, the most predictable with respect to the amount of training data and the most successful when this amount is low. At the building level, all three classifiers perform comparably well. However, an important prerequisite for accurate damage grading of each roof is its correct delineation. To achieve this, a procedure for multi-modal registration has been developed and is summarized in this work. It aligns freely available GIS data with the current image data and showed robust performance even in the case of severely destroyed buildings.

Language: en
Keywords: damage detection; superpixels; DeepLabV3+; feature extraction
Classification: 004; 670
Title: Localization and Grading of Building Roof Damages in High-Resolution Aerial Images
Type: conference paper
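
For illustration only, and not the authors' implementation, the following minimal Python sketch shows how a superpixel-based Random Forest classifier of the kind mentioned in the abstract could be set up: SLIC superpixels with simple per-superpixel colour statistics stand in for the paper's hand-crafted features, and the file names and label convention are hypothetical.

```python
# Hypothetical sketch of a superpixel-based Random Forest damage classifier;
# the feature set and labels are illustrative, not those of the paper.
import numpy as np
from skimage import io, segmentation, color
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(image, segments):
    """Mean and std of RGB and Lab values per superpixel (toy feature set)."""
    lab = color.rgb2lab(image)
    feats = []
    for sp_id in np.unique(segments):
        mask = segments == sp_id
        rgb_vals = image[mask].reshape(-1, 3)
        lab_vals = lab[mask].reshape(-1, 3)
        feats.append(np.concatenate([rgb_vals.mean(0), rgb_vals.std(0),
                                     lab_vals.mean(0), lab_vals.std(0)]))
    return np.vstack(feats)

# Assumed inputs: an aerial RGB tile and a per-pixel damage label map
image = io.imread("roof_tile.png")[:, :, :3]      # hypothetical file name
labels = io.imread("roof_tile_labels.png")        # e.g. 0 = intact, 1 = damaged

segments = segmentation.slic(image, n_segments=500, compactness=10, start_label=0)
X = superpixel_features(image, segments)
# Majority pixel label within each superpixel serves as the training target
y = np.array([np.bincount(labels[segments == s].ravel()).argmax()
              for s in np.unique(segments)])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
pred = clf.predict(X)                             # per-superpixel damage class
```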