  • Publication
    Automatic scoring of Rhizoctonia crown and root rot affected sugar beet fields from orthorectified UAV images using Machine Learning
    (2024)
    Ispizua Yamati, Facundo Ramón; Barreto Alcántara, Abel Andree; Bömer, Jonas; Laufer, Daniel; Mahlein, Anne-Katrin
    Rhizoctonia crown and root rot (RCRR), caused by Rhizoctonia solani, can cause severe yield and quality losses in sugar beet. The most common strategy to control the disease is the development of resistant varieties. In the breeding process, field experiments with artificial inoculation are carried out to evaluate the performance of genotypes and varieties. The phenotyping process in breeding trials requires constant monitoring and scoring by skilled experts. This work is time demanding and shows bias and heterogeneity according to the experience and capacity of each individual rater. Optical sensors and artificial intelligence have demonstrated great potential to achieve higher accuracy than human raters and to standardize phenotyping applications. A workflow combining red-green-blue (RGB) and multispectral imagery acquired with an unmanned aerial vehicle (UAV) and machine learning techniques was applied to score diseased plants and plots affected by RCRR. Georeferenced annotation of UAV orthorectified images was carried out. With the annotated images, five convolutional neural networks were trained to score individual plants. The training was carried out with different image analysis strategies and data augmentation. The custom convolutional neural network trained from scratch together with a pre-trained MobileNet showed the best precision in scoring RCRR (0.73 to 0.85). The average spectral information per plot was used to score plots, and the benefit of adding the information obtained from the scores of individual plants was evaluated. For this purpose, machine learning models were trained together with data management strategies, and the best-performing model was chosen. A combined pipeline of Random Forest and k-Nearest Neighbors showed the best weighted precision (0.67). This research provides a reliable workflow for detecting and scoring RCRR based on aerial imagery. RCRR is often distributed heterogeneously in trial plots; therefore, considering the information from individual plants of the plots significantly improved UAV-based automated monitoring routines.
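    The transfer-learning setup mentioned in the abstract (a pre-trained MobileNet fine-tuned to score individual plants) could look roughly like the following minimal Keras sketch. The number of severity classes, image size, and data layout are assumptions for illustration, not values from the paper.

```python
# A minimal sketch (not the authors' code) of fine-tuning a pre-trained MobileNet
# to score single-plant image crops, assuming an ordinal severity scale encoded
# as NUM_SCORE_CLASSES categories.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SCORE_CLASSES = 5          # assumed number of RCRR severity classes
IMG_SIZE = (224, 224)

base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False         # freeze the pre-trained backbone

model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),  # MobileNet expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_SCORE_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A training dataset of georeferenced single-plant crops could be built with, e.g.,
# tf.keras.utils.image_dataset_from_directory("plant_crops/", image_size=IMG_SIZE)
# and passed to model.fit(...) together with data augmentation layers.
```

    The plot-level step described in the abstract would then aggregate these per-plant scores together with mean spectral features before the Random Forest / k-Nearest Neighbors stage.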
  • Publication
    Bounding open space risk with decoupling autoencoders in open set recognition
    One-vs-Rest (OVR) classification aims to distinguish a single class of interest (COI) from other classes. The concept of novelty detection and robustness to dataset shift becomes crucial in OVR when the scope of the rest class is extended from the classes observed during training to unseen and possibly unrelated classes, a setting referred to as open set recognition (OSR). In this work, we propose a novel architecture, namely the decoupling autoencoder (DAE), which provides a proven upper bound on the open space risk and minimizes open space risk via a dedicated training routine. Our method is benchmarked within three different scenarios, each isolating different aspects of OSR, namely plain classification, outlier detection, and dataset shift. The results conclusively show that DAE achieves robust performance across all three tasks. This level of cross-task robustness is not observed for any of the seven potent baselines from the OSR, OVR, outlier detection, and ensembling domains, which, apart from ATA (Lübbering et al., From imbalanced classification to supervised outlier detection problems: adversarially trained auto encoders. In: Artificial neural networks and machine learning-ICANN 2020, 2020), tend to fail on at least one of the tasks. Similar to DAE, ATA is based on autoencoders and uses the reconstruction error to predict the inlierness of a sample. However, unlike DAE, it does not provide any uncertainty scores and therefore lacks rudimentary means of interpretation. Our adversarial robustness and local stability results further support DAE's superiority in the OSR setting, emphasizing its applicability in safety-critical systems.
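    As a rough illustration of the reconstruction-error mechanism shared by DAE and ATA at prediction time (not the paper's architecture, bound, or training routine), the sketch below shows how an autoencoder's per-sample reconstruction error can serve as an open-set / inlier score; the network shape, input size, and threshold are assumptions.

```python
# A minimal sketch of scoring samples by autoencoder reconstruction error, the
# mechanism shared by DAE and ATA at prediction time. Architecture, input size,
# and the outlier threshold are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleAutoencoder(nn.Module):
    def __init__(self, dim_in: int = 784, dim_latent: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 128), nn.ReLU(),
                                     nn.Linear(128, dim_latent))
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 128), nn.ReLU(),
                                     nn.Linear(128, dim_in))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def reconstruction_error(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Per-sample mean squared reconstruction error; higher suggests open-set input."""
    with torch.no_grad():
        x_hat = model(x)
    return ((x - x_hat) ** 2).mean(dim=1)

# After training on the class of interest only, samples whose error exceeds a
# threshold calibrated on held-out inliers would be rejected as open-set.
model = SimpleAutoencoder()
scores = reconstruction_error(model, torch.rand(8, 784))
is_open_set = scores > scores.mean() + 2 * scores.std()   # illustrative threshold
```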
  • Publication
    Agricultural plant cataloging and establishment of a data framework from UAV-based crop images by computer vision
    (2022)
    Ispizua Yamati, F.R.; Kierdorf, J.; Roscher, R.; Mahlein, A.K.
    Background: Unmanned aerial vehicle (UAV)-based image retrieval in modern agriculture enables gathering large amounts of spatially referenced crop image data. In large-scale experiments, however, UAV images contain a large number of crops in a complex canopy architecture. Especially for the observation of temporal effects, this considerably complicates the recognition of individual plants over several images and the extraction of relevant information. Results: In this work, we present a hands-on workflow for the automated temporal and spatial identification and individualization of crop images from UAVs, abbreviated as "cataloging", based on comprehensible computer vision methods. We evaluate the workflow on 2 real-world datasets. One dataset is recorded for observation of Cercospora leaf spot, a fungal disease, in sugar beet over an entire growing cycle. The other one deals with harvest prediction of cauliflower plants. The plant catalog is utilized for the extraction of single plant images seen over multiple time points. This yields a large-scale spatiotemporal image dataset that in turn can be applied to train further machine learning models including various data layers. Conclusion: The presented approach significantly improves the analysis and interpretation of UAV data in agriculture. By validation with reference data, our method shows an accuracy similar to that of more complex deep learning-based recognition techniques. Our workflow is able to automate plant cataloging and training image extraction, especially for large datasets.
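    The abstract emphasizes comprehensible computer vision methods for plant individualization. A minimal sketch of that kind of classical pipeline, vegetation segmentation with an excess-green index followed by connected components, is shown below; the thresholds, kernel size, and the matching strategy mentioned in the closing comment are illustrative assumptions, not the published workflow.

```python
# A minimal sketch of classical plant individualization on a UAV image tile:
# excess-green vegetation segmentation followed by connected components.
# The threshold, kernel size, and minimum blob area are illustrative assumptions.
import cv2
import numpy as np

def plant_centroids(rgb: np.ndarray, min_area: int = 200) -> np.ndarray:
    """Return (x, y) centroids of vegetation blobs in an RGB image tile."""
    img = rgb.astype(np.float32) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    exg = 2.0 * g - r - b                        # excess-green vegetation index
    mask = (exg > 0.1).astype(np.uint8)          # assumed segmentation threshold
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    _num, _labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    keep = stats[1:, cv2.CC_STAT_AREA] >= min_area   # label 0 is background
    return centroids[1:][keep]

# Centroids from successive flight dates can then be matched, e.g. by nearest
# neighbour in georeferenced coordinates, to track individual plants over time.
```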
  • Publication
    Towards Intelligent Food Waste Prevention: An Approach Using Scalable and Flexible Harvest Schedule Optimization with Evolutionary Algorithms
    In times of climate change, a growing world population, and the resulting scarcity of resources, efficient and economical usage of agricultural land is increasingly important and challenging at the same time. To avoid the disadvantages of monocropping for soil and environment, it is advisable to practice intercropping of various plant species whenever possible. However, intercropping is challenging as it requires a balanced planting schedule due to individual cultivation time frames. Maintaining a continuous harvest throughout the season is important as it reduces logistical costs and related greenhouse gas emissions, and can also help to reduce food waste. Motivated by the prevention of food waste, this work proposes a flexible optimization method for a full harvest season of large crop ensembles that complies with given economical and environmental constraints. Our approach applies evolutionary algorithms, and we further combine our evolution strategy with a sophisticated hierarchical loss function and an adaptive mutation rate. We thus transform the multi-objective problem into a pseudo-single-objective optimization problem, for which we obtain faster and better solutions than those of conventional approaches.
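    To make the evolution-strategy idea concrete, here is a minimal (1+1)-ES sketch with an adaptive mutation step size (in the spirit of the 1/5-success rule) on a toy harvest-evenness objective. The schedule encoding, objective, and constants are assumptions for illustration, not the paper's hierarchical loss or constraint handling.

```python
# A minimal (1+1)-evolution-strategy sketch with an adaptive mutation step size
# on a toy objective that rewards evenly spread harvest dates.
import numpy as np

rng = np.random.default_rng(0)
N_PLOTS = 20
SEASON_DAYS = 120
GROWTH_DAYS = 60                          # assumed uniform cultivation time

def harvest_spread_penalty(planting_days: np.ndarray) -> float:
    """Penalize bunched harvests via the variance of gaps between harvest dates."""
    harvest = np.sort(planting_days + GROWTH_DAYS)
    return float(np.var(np.diff(harvest)))

x = rng.uniform(0, SEASON_DAYS - GROWTH_DAYS, size=N_PLOTS)   # parent schedule
sigma = 5.0                                                   # mutation step size
best = harvest_spread_penalty(x)

for _ in range(500):
    child = np.clip(x + rng.normal(0.0, sigma, size=N_PLOTS),
                    0, SEASON_DAYS - GROWTH_DAYS)
    f = harvest_spread_penalty(child)
    if f < best:                          # (1+1)-ES selection
        x, best = child, f
        sigma *= 1.22                     # success: widen the search
    else:
        sigma *= 0.95                     # failure: narrow the search

print(f"harvest spread penalty after optimization: {best:.3f}")
```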
  • Publication
    Informed Machine Learning for Industry
    Deep neural networks have pushed the boundaries of artificial intelligence, but their training requires vast amounts of data and high-performance hardware. While truly digitised companies easily cope with these prerequisites, traditional industries still often lack the kind of data or infrastructure the current generation of end-to-end machine learning depends on. The Fraunhofer Center for Machine Learning therefore develops novel solutions that are informed by expert knowledge. These typically require less training data and are more transparent in their decision-making processes.
  • Publication
    Detecting and correcting spelling errors in high-quality Dutch Wikipedia text
    (2018)
    Beeksma, M.; Gompel, M. van; Kunneman, F.; Onrust, L.; Regnerus, B.; Vinke, D.; Brito, Eduardo
    For the CLIN28 shared task, we evaluated systems for spelling correction of high-quality text. The task focused on detecting and correcting spelling errors in Dutch Wikipedia pages. Three teams took part in the task. We compared the performance of their systems to that of a baseline system, the Dutch spelling corrector Valkuil. We evaluated the systems' performance in terms of F1 score. Although two of the three participating systems performed well in the task of correcting spelling errors, error detection proved to be a challenging task, and without exception resulted in a high false positive rate. Therefore, the F1 score of the baseline was not improved upon. This paper elaborates on each team's approach to the task, and discusses the overall challenges of correcting high-quality text.
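    For reference, detection performance of the kind discussed above can be scored with precision, recall, and F1 over flagged positions; the sketch below uses an assumed token-index representation rather than the actual CLIN28 data format.

```python
# A minimal sketch of F1 scoring for spelling-error detection, using an assumed
# token-position representation (not the CLIN28 data format).
def detection_f1(predicted: set[int], gold: set[int]) -> tuple[float, float, float]:
    """Precision, recall, and F1 over token positions flagged as spelling errors."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Many false positives (here: 3 of 5 flagged positions are wrong) pull precision
# and hence F1 down, mirroring the high false-positive rates reported in the task.
print(detection_f1(predicted={3, 7, 12, 15, 21}, gold={7, 21}))
```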
  • Publication
    Automated identification of sugar beet diseases using smartphones
    (2018)
    Hallau, L.; Klatt, B.; Kleinhenz, B.; Klein, T.; Kuhn, C.; Röhrig, M.; Mahlein, A.-K.; Steiner, U.; Oerke, E.-C.
    Cercospora leaf spot (CLS) poses a high economic risk to sugar beet production due to its potential to greatly reduce yield and quality. For successful integrated management of CLS, rapid and accurate identification of the disease is essential. Diagnosis on the basis of typical visual symptoms is often compromised by the inability to differentiate CLS symptoms from similar symptoms caused by other foliar pathogens of varying significance, or from abiotic stress. An automated detection and classification of CLS and other leaf diseases, enabling a reliable basis for decisions in disease control, would be an alternative to visual as well as molecular and serological methods. This paper presents an algorithm based on an RGB-image database captured with smartphone cameras for the identification of sugar beet leaf diseases. This tool combines image acquisition and segmentation on the smartphone with advanced image data processing on a server, based on texture features using colour, intensity and gradient values. The diseases are classified using a support vector machine with a radial basis function kernel. The algorithm is suitable for binary-class and multi-class classification approaches, i.e. the separation between diseased and non-diseased tissue, and the differentiation among leaf diseases and non-infected tissue. The classification accuracy for the differentiation of CLS, ramularia leaf spot, phoma leaf spot, beet rust and bacterial blight was 82%, better than that of sugar beet experts classifying the diseases from images. However, the technology has not been tested by practitioners. This tool can be adapted to other crops and their diseases and may contribute to improved decision-making in integrated disease control.
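    The classification stage described above, a support vector machine with an RBF kernel on texture features, can be sketched as follows; the feature matrix, class list, dimensions, and hyperparameters are placeholders, not the paper's.

```python
# A minimal sketch of the classification stage: an RBF-kernel support vector
# machine on precomputed texture features. Feature extraction is stubbed out with
# random placeholders; class list, dimensions, and hyperparameters are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

CLASSES = ["cercospora", "ramularia", "phoma", "beet_rust",
           "bacterial_blight", "non_infected"]

rng = np.random.default_rng(0)
X = rng.random((300, 64))                         # placeholder texture features
y = rng.integers(0, len(CLASSES), size=300)       # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)

# The same pipeline covers the binary case (diseased vs. non-diseased) simply by
# relabelling y before fitting.
print([CLASSES[i] for i in clf.predict(X[:3])])
```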
  • Publication
    Can Computers Learn from the Aesthetic Wisdom of the Crowd?
    The social media revolution has led to an abundance of image and video data on the Internet. Since this data is typically annotated, rated, or commented upon by large communities, it provides new opportunities and challenges for computer vision. Social networking and content sharing sites seem to hold the key to the integration of context and semantics into image analysis. In this paper, we explore the use of social media in this regard. We present empirical results obtained on a set of 127,593 images with 3,741,176 tag assignments that were harvested from Flickr, a photo sharing site. We report on how users tag and rate photos and present an approach towards automatically recognizing the aesthetic appeal of images using confidence-based classifiers to alleviate effects due to ambiguously labeled data. Our results indicate that user generated content allows for learning about aesthetic appeal. In particular, established low-level image features seem to enable the recognition of beauty. A reliable recognition of unseemliness, on the other hand, appears to require more elaborate high-level analysis.
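    One simple way to realize the confidence-based idea described above is to discard ambiguously rated images and train only on clearly high- or low-rated ones; the sketch below does exactly that with placeholder features, ratings, and thresholds that are assumptions, not the data or models used in the paper.

```python
# A minimal sketch of confidence-based label filtering: keep only images whose
# community ratings are clearly low or high, discard the ambiguous middle, and
# train a classifier on low-level features. All values are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.random((1000, 16))            # placeholder low-level image features
ratings = rng.uniform(0.0, 5.0, size=1000)   # placeholder mean community ratings

confident = (ratings <= 1.5) | (ratings >= 3.5)   # drop ambiguous mid-range ratings
X, y = features[confident], (ratings[confident] >= 3.5).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("share of images kept as confidently labelled:", round(confident.mean(), 3))
```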