Publication

Synthetic data generation for the continuous development and testing of autonomous construction machinery

2023 , Schuster, Alexander , Hagmanns, Raphael , Sonji, Iman , Löcklin, Andreas , Petereit, Janko , Ebert, Christof , Weyrich, Michael

The development and testing of autonomous systems require sufficient meaningful data. However, generating suitable scenario data is a challenging task. In particular, it raises the question of how to narrow down what kind of data should be considered meaningful. Autonomous systems are characterized by their ability to cope with uncertain situations, i.e., complex and unknown environmental conditions. Due to this openness, training and test scenarios cannot be easily specified. Not all relevant influences can be sufficiently captured by requirements in advance, especially for unknown scenarios and corner cases; the "right" data, balancing quality and efficiency, is therefore hard to generate. This article discusses the challenges of automated generation of 3D scenario data. We present a training and testing loop that generates synthetic camera and lidar data from 3D simulated environments. These environments can be automatically varied and modified, supporting a closed-loop system for deriving and generating datasets for the continuous development and testing of autonomous systems.
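The closed-loop idea lends itself to a compact illustration. The Python sketch below mimics the loop under heavily simplified assumptions: sample_params, render_scene, and perception_score are hypothetical stand-ins for a scenario variation strategy, a 3D simulator, and an evaluation of the autonomy stack, none of which are taken from the article.

```python
import random

def sample_params():
    # Vary environmental conditions and scene layout automatically.
    return {"fog": random.random(), "n_obstacles": random.randint(0, 10)}

def render_scene(params):
    # Stand-in for a 3D simulator producing synthetic camera/lidar frames.
    return {"params": params, "frames": None}

def perception_score(scene):
    # Stand-in for evaluating the autonomy stack on the scene; here the
    # fake score simply degrades with fog density.
    return 0.9 - 0.5 * scene["params"]["fog"] + random.uniform(-0.1, 0.1)

training_set, test_set = [], []
for _ in range(100):
    scene = render_scene(sample_params())
    if perception_score(scene) < 0.6:    # corner case found
        training_set.append(scene)       # feed back into further training
    else:
        test_set.append(scene)           # keep as a regression test
print(f"{len(training_set)} hard scenarios mined for training")
```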

Publication

PAISE® - process model for AI systems engineering

2022 , Hasterok, Constanze , Stompe, Janina

The application of artificial intelligence (AI)-based methods in the context of complex systems poses new challenges across the product life cycle. The process model for AI systems engineering, PAISE®, addresses these challenges by combining approaches from the disciplines of systems engineering, software development, and data science. The general approach builds on component-wise development of the overall system, including an AI component, which allows domain-specific development processes to be parallelized. At the same time, component dependencies are tested at interdisciplinary checkpoints, resulting in a stepwise refinement of component specifications.
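As a rough illustration only, the snippet below encodes component-wise development with interdisciplinary checkpoints as plain data structures; the class and field names are our own invention and not part of the PAISE® specification.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str                      # e.g. "AI component", "control software"
    spec_version: int = 0          # refined after each checkpoint
    depends_on: list = field(default_factory=list)

def checkpoint(components):
    """Test cross-component dependencies; refine specs where they diverge."""
    for c in components:
        for dep in c.depends_on:
            if dep.spec_version != c.spec_version:
                # Interface mismatch found: refine the lagging specification.
                lagging = min(c, dep, key=lambda comp: comp.spec_version)
                lagging.spec_version += 1

ai = Component("AI component")
ctrl = Component("control software", spec_version=1, depends_on=[ai])
checkpoint([ai, ctrl])
print(ai.spec_version)  # 1 -> specifications aligned after the checkpoint
```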

Publication

Optimal multispectral sensor configurations through machine learning for cognitive agriculture

2021 , Becker, Florian , Backhaus, Andreas , Johrden, Felix , Flitter, Merle

Hyperspectral sensor systems play a key role in the automation of work processes in the farming industry. Non-invasive measurements of plants allow for an assessment of vitality and health state and can also be used to classify weeds or infected parts of a plant. However, one major downside of hyperspectral cameras is that they are not very cost-effective. In this paper, we show that, for specific tasks, multispectral systems with only a fraction of the wavelength bands and cost of a hyperspectral system can achieve promising results on regression and classification tasks. We conclude that, for the ongoing automation efforts in the context of cognitive agriculture, reduced multispectral systems are a viable alternative.
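A minimal sketch of the band-reduction idea, not the paper's actual method: on fabricated "hyperspectral" data, a simple univariate selection keeps a handful of wavelength bands, and a linear model on those bands is compared against one using all bands. The data, band count, and selection criterion are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_bands = 300, 200              # 200 simulated wavelength bands
X = rng.normal(size=(n_samples, n_bands))
# Fake "vitality" target driven by two informative bands.
y = X[:, 50] + 0.5 * X[:, 120] + rng.normal(scale=0.1, size=n_samples)

selector = SelectKBest(f_regression, k=6).fit(X, y)   # keep only 6 bands
X_multi = selector.transform(X)

full = cross_val_score(Ridge(), X, y, cv=5).mean()
reduced = cross_val_score(Ridge(), X_multi, y, cv=5).mean()
print("bands kept:", np.flatnonzero(selector.get_support()))
print(f"R^2 all bands = {full:.3f} vs 6 bands = {reduced:.3f}")
```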

Publication

Explainable AI: Introducing trust and comprehensibility to AI engineering

2022 , Burkart, Nadia , Brajovic, Danilo , Huber, Marco F.

Machine learning (ML) is rapidly gaining interest due to continuous improvements in performance. ML is used in many different applications to support human users. The representational power of ML models allows them to solve difficult tasks, but also makes them incomprehensible to humans. This leaves room for undetected errors and limits the full potential of ML, since such models cannot be applied in critical environments. In this paper, we propose employing explainable AI (xAI) for both model and data set refinement in order to introduce trust and comprehensibility. Model refinement uses xAI to provide insights into the inner workings of an ML model, to identify limitations, and to derive potential improvements. Similarly, xAI is used in data set refinement to detect and resolve problems in the training data.
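To make the refinement loop concrete, here is a hedged toy example: permutation importance stands in for an xAI method and flags a fabricated label-leaking feature, which is then removed from the data set. The data and the "leak" are invented for illustration and are not from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X[:, 0] > 0).astype(int)
X[:, 4] = y + rng.normal(scale=0.01, size=400)   # leaked label: a data problem

clf = RandomForestClassifier(random_state=0).fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print(imp.importances_mean)  # feature 4 dominates: flag it for data refinement

X_refined = np.delete(X, 4, axis=1)              # resolve the data problem
clf_refined = RandomForestClassifier(random_state=0).fit(X_refined, y)
```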

Publication

ROBDEKON - competence center for decontamination robotics

2022 , Woock, Philipp , Petereit, Janko , Frey, Christian , Beyerer, Jürgen

There are still many hazardous tasks that humans perform in their daily work. This applies in particular to the remediation of contaminated sites, the dismantling of nuclear power plants, and the handling of hazardous materials. The competence center ROBDEKON was founded to concentrate expertise and coordinate research activities regarding decontamination robotics in Germany. It serves as a national technology hub for the decontamination needs of various stakeholders. A major scientific goal of ROBDEKON is the development of (semi-)autonomous robotic systems to remove humans from work environments that are potentially hazardous to health.

Publication

Validation of XAI Explanations for Multivariate Time Series Classification in the Maritime Domain

2022 , Veerappa, Manjunatha , Anneken, Mathias , Burkart, Nadia , Huber, Marco

Due to the lack of insight into their internal mechanisms, state-of-the-art deep learning-based classifiers are often considered black-box models. For instance, in the maritime domain, models that classify ship types based on their trajectories and other features perform well but give no further explanation for their predictions. To gain the trust of human operators responsible for critical decisions, the reason behind a classification is crucial. In this paper, we introduce explainable artificial intelligence (XAI) approaches to the task of classifying ship types. This supports decision-making by providing explanations in terms of the features contributing most towards the prediction, along with their corresponding time intervals. In the case of the LIME explainer, we adapt the time-slice mapping technique (LimeforTime), while for Shapley additive explanations (SHAP) and path integrated gradient (PIG), we represent the relevance of each input variable to generate a heatmap as an explanation. In order to validate the XAI results, existing perturbation and sequence analyses for classifiers of univariate time series data are employed to test and evaluate the XAI explanations on multivariate time series. Furthermore, we introduce a novel evaluation technique to assess the quality of explanations yielded by the chosen XAI method.
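The perturbation-based validation can be sketched in a few lines. The toy version below is our own simplification, not the paper's implementation: given a relevance heatmap over (variable, time step) cells, perturbing the most relevant cells should change a classifier's output more than perturbing random cells. The classifier and heatmap here are fabricated.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(x):
    # Toy "ship type" score for a (variables x time steps) window: it only
    # depends on variable 0 during the last 10 time steps.
    return x[0, -10:].mean()

x = rng.normal(size=(4, 50))              # 4 variables, 50 time steps
relevance = np.zeros_like(x)
relevance[0, -10:] = 1.0                  # heatmap from some XAI method

k = 10
top = np.unravel_index(np.argsort(relevance, axis=None)[-k:], x.shape)
x_top, x_rand = x.copy(), x.copy()
x_top[top] = 0.0                                             # most relevant cells
x_rand[rng.integers(0, 4, k), rng.integers(0, 50, k)] = 0.0  # random cells

drop_top = abs(classify(x) - classify(x_top))
drop_rand = abs(classify(x) - classify(x_rand))
print(drop_top, drop_rand)  # a faithful heatmap should yield drop_top >> drop_rand
```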

Publication

Are you sure? Prediction revision in automated decision-making

2021 , Burkart, Nadia , Robert, Sebastian , Huber, Marco

With the rapid improvements in machine learning and deep learning, the number of decisions made by automated decision support systems (DSS) will increase. Besides the accuracy of predictions, their explainability is becoming more important. The underlying algorithms can construct complex mathematical prediction models, which makes their predictions hard to understand and raises the need to equip the algorithms with explanations. To examine how users trust automated DSS, we conducted an experiment. Our research aim is to examine how participants supported by a DSS revise their initial prediction under four varying approaches (treatments) in a between-subject design study. The four treatments differ in the degree of explainability provided for the system's predictions. First, we used an interpretable regression model; second, a Random Forest (considered a black box, BB); third, the BB with a local explanation; and last, the BB with a global explanation. We observed that all participants improved their predictions after receiving advice, whether it came from a complete BB or a BB with an explanation. The major finding was that interpretable models were not incorporated into the decision process more than BB models or BB models with explanations.
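For concreteness, here is a hedged sketch of how the four treatment conditions could be built; this is not the study's code, and the data, the LIME-style local surrogate, and all parameter choices are our own illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

interpretable = LinearRegression().fit(X, y)         # treatment 1
black_box = RandomForestRegressor().fit(X, y)        # treatment 2

def local_explanation(model, x, eps=0.1, n=200):     # treatment 3 (LIME-like)
    # Fit a linear surrogate to the BB in a small neighbourhood of x.
    Xp = x + rng.normal(scale=eps, size=(n, x.size))
    return LinearRegression().fit(Xp, model.predict(Xp)).coef_

global_explanation = black_box.feature_importances_  # treatment 4

print(local_explanation(black_box, X[0]))
print(global_explanation)
```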