  • Publication
    KI-basierte Fehleranalyse für die Instandhaltung erneuerbarer Erzeugung
    (2023-12-21)
    Rajendran, Ajay
    With the decarbonization of the energy landscape, the structure of generation fleets is changing as well. The share of smaller, decentralized plants that are operated purely remotely is growing, which creates new challenges for the O&M strategy. A conventional plant with a few hundred MW of installed capacity has 2,000 to 3,000 measurement points that carry information about the condition of the plant and its components and can indicate wear and potential faults. A photovoltaic plant of 3 MWp often also has around 3,000 measurement points, so a PV portfolio of a few hundred MWp produces a flood of measurement values on the order of one million every 5 or 10 minutes. As a rule, such a portfolio has to be monitored remotely by a small team without a permanent on-site presence. Software solutions that monitor measurement values online 24/7 and detect changes in plant behavior early and reliably already increase efficiency and reduce maintenance costs in the operation of conventional plants. In a renewable, more decentralized, decarbonized generation landscape, the importance of these tools will grow further because of this flood of measurement data. Iqony and the Fraunhofer Institute IAIS have therefore developed tools that support the operation and maintenance of renewable plants with methods of artificial intelligence. In doing so, methods known from the context of conventional generation were adapted to the requirements of renewable generation, and new approaches were developed as well.
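    The abstract gives no implementation details of the Iqony / Fraunhofer IAIS tools. As a rough, purely illustrative sketch of online change detection on a stream of plant measurement values, the following rolling z-score check flags values that deviate strongly from their recent history; the window, threshold and signal are hypothetical choices, not taken from the publication.

```python
import numpy as np
import pandas as pd

def rolling_zscore_alarms(series: pd.Series, window: int = 288, threshold: float = 4.0) -> pd.Series:
    """Flag measurement values that deviate strongly from their recent history.

    window=288 corresponds to one day of 5-minute values; window and threshold
    are illustrative choices, not values from the publication.
    """
    mean = series.rolling(window, min_periods=window // 2).mean()
    std = series.rolling(window, min_periods=window // 2).std()
    z = (series - mean) / std.replace(0.0, np.nan)
    return z.abs() > threshold

# Toy usage: a single sensor signal with an injected drop standing in for degradation.
idx = pd.date_range("2023-06-01", periods=1000, freq="5min")
signal = pd.Series(8.0 + 0.1 * np.random.randn(1000), index=idx)
signal.iloc[700:] -= 2.0
alarms = rolling_zscore_alarms(signal)
print(int(alarms.sum()), "flagged values")
```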
  • Publication
    Model-driven Approach for integrated Design and Process Planning of Fiber Composite Aerostructures
    (2023-10)
    Holland, Maximilian; Paul, Nathalie; Elsafty, Hossam Fawzy Mohamed; Geinitz, Steffen
    Automation of design and planning activities is a key enabler for integrated product development and design optimization, especially for complex airframe structures. For this purpose, a model-driven approach for the automated generation of process models for the manufacturing of structural components is developed. The approach is implemented as a graph-based design language, where models are stored as semantic graphs, and model transformations are realized through graph transformation rules. Key elements of the implementation are discussed, including classes, object patterns and graph transformations for modeling the product and the process planning. The generated process models cover manufacturing tasks, requirements and resources. Moreover, the duration of manufacturing tasks is computed with individual task models based on the input design information. A scheduling algorithm is integrated to find an optimal manufacturing sequence considering limited production resources. Sequence optimization is formulated as a constraint satisfaction problem. All steps are integrated into a fully automated program, so that it is possible to estimate the production lead time of complex fiber composite structures considering constraints regarding technology, resources and sequence. It is shown that process planning models can be generated and analyzed based on variable design information and static manufacturing knowledge. Thereby, the presented research contributes to integrated product development and design optimization of fiber composite structures.
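    The paper formulates sequence optimization as a constraint satisfaction problem; the sketch below is not that formulation but a much simpler greedy list-scheduling baseline over the same kind of data (tasks with durations, precedence constraints and limited resources), with all task names, durations and resources invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    duration: float                        # in the paper, computed by individual task models
    predecessors: list = field(default_factory=list)
    resource: str = "layup_station"        # hypothetical resource name

def greedy_schedule(tasks, resource_counts):
    """Assign start times so that precedence holds and no resource pool is
    over-allocated. Greedy baseline only; the paper formulates sequence
    optimization as a constraint satisfaction problem."""
    finished, schedule = {}, {}
    busy_until = {r: [0.0] * n for r, n in resource_counts.items()}
    remaining = list(tasks)
    while remaining:
        # pick any task whose predecessors are all scheduled
        task = next(t for t in remaining if all(p in finished for p in t.predecessors))
        earliest = max((finished[p] for p in task.predecessors), default=0.0)
        pool = busy_until[task.resource]
        i = min(range(len(pool)), key=lambda k: pool[k])   # resource instance that frees up first
        start = max(earliest, pool[i])
        pool[i] = finished[task.name] = start + task.duration
        schedule[task.name] = (start, start + task.duration)
        remaining.remove(task)
    return schedule

tasks = [
    Task("cut_plies", 2.0),
    Task("layup", 5.0, ["cut_plies"]),
    Task("cure", 8.0, ["layup"], resource="autoclave"),
]
print(greedy_schedule(tasks, {"layup_station": 1, "autoclave": 1}))
```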
  • Publication
    Reinforcement Learning for Segmented Manufacturing
    (2023)
    Paul, Nathalie; Fetz, Maximilian Elias
    The manufacturing of large components is, compared to small components, cost-intensive. This is due to the sheer size of the components and the limited scalability in the number of produced items. To benefit from the scaling effects of small-component production, we segment the large components into smaller parts and schedule the production of these parts on regular-sized machine tools. We propose to apply and adapt recent developments in reinforcement learning, in combination with heuristics, to efficiently solve the resulting segmentation and assignment problem. In particular, we solve the assignment problem up to a factor of 8 faster and only a few percentage points less accurately than a classic solver from operations research.
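    As a point of reference for the assignment problem described above, the following sketch implements the classic longest-processing-time heuristic that assigns segments to machine tools so as to balance load; it is a simple baseline, not the reinforcement-learning approach or the operations-research solver from the paper, and all durations are hypothetical.

```python
import heapq

def lpt_assignment(segment_times, n_machines):
    """Longest-processing-time heuristic: place each segment on the machine
    tool with the smallest accumulated load, longest segments first.
    A simple baseline, not the learned policy or the OR solver from the paper."""
    loads = [(0.0, m) for m in range(n_machines)]      # (accumulated time, machine id)
    heapq.heapify(loads)
    assignment = {}
    for seg, t in sorted(enumerate(segment_times), key=lambda x: -x[1]):
        load, m = heapq.heappop(loads)
        assignment[seg] = m
        heapq.heappush(loads, (load + t, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

# Toy example: 8 segments scheduled on 3 regular-sized machine tools (hypothetical durations).
print(lpt_assignment([4.0, 3.5, 3.0, 2.5, 2.0, 2.0, 1.5, 1.0], 3))
```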
  • Publication
    A Quantitative Human-Grounded Evaluation Process for Explainable ML
    (2022)
    Müller, Sebastian
    Methods from explainable machine learning are increasingly applied. However, evaluation of these methods is often anecdotal and not systematic. Prior work has identified properties of explanation quality, and we argue that evaluation should be based on them. In this work, we provide an evaluation process that follows the idea of property testing. The process acknowledges the central role of the human, yet argues for a quantitative approach to evaluation. We find that properties can be divided into two groups, one to ensure trustworthiness, the other to assess comprehensibility. Options for quantitative property tests are discussed. Future research should focus on the standardization of testing procedures.
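    The abstract does not prescribe concrete tests; the sketch below shows one possible quantitative property test of the kind argued for, a deletion check of explanation faithfulness, with a toy linear model whose exact attributions are known. Function names and the choice of test are illustrative assumptions.

```python
import numpy as np

def deletion_test(model_predict, x, attribution, k=5, baseline=0.0):
    """One possible quantitative property test: delete the k features with the
    highest attributed relevance and measure the drop in the model score.
    A large drop supports faithfulness of the explanation; the concrete test
    and its parameters are illustrative, not prescribed by the paper."""
    top = np.argsort(-np.abs(attribution))[:k]
    x_masked = np.array(x, dtype=float).copy()
    x_masked[top] = baseline
    return model_predict(x) - model_predict(x_masked)

# Toy check with a linear model whose exact attribution (weights * input) is known.
rng = np.random.default_rng(0)
w, x = rng.normal(size=20), rng.normal(size=20)
print(f"score drop: {deletion_test(lambda v: float(w @ v), x, w * x):.3f}")
```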
  • Publication
    Relational Pattern Benchmarking on the Knowledge Graph Link Prediction Task
    Knowledge graphs (KGs) encode facts about the world in a graph data structure where entities, represented as nodes, are connected via relationships acting as edges. KGs are widely used in Machine Learning, e.g., to solve Natural Language Processing tasks. Despite all the advancements in KGs, they fall short when it comes to completeness. Link Prediction based on KG embeddings targets the sparsity and incompleteness of KGs. Available datasets for Link Prediction do not consider different graph patterns, making it difficult to measure the performance of link prediction models on different KG settings. This paper presents a diverse set of pragmatic datasets to facilitate flexible and problem-tailored research on Link Prediction and Knowledge Graph Embeddings. We define graph relational patterns, ranging from entirely inductive in one set to transductive in the other. For each dataset, we provide uniform evaluation metrics. We analyze state-of-the-art models on our datasets to compare the models' capabilities on each dataset type. Our analysis provides better insight into the suitable parameters for each situation, helping to optimize KG-embedding-based systems.
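    The abstract does not list the exact metric set; the minimal sketch below computes mean reciprocal rank and Hits@k from the ranks of the true entities, which are the usual uniform metrics for KG link prediction benchmarks and stand in for whatever the paper uses.

```python
import numpy as np

def mrr_and_hits(ranks, ks=(1, 3, 10)):
    """Mean reciprocal rank and Hits@k from the ranks of the true entities,
    the usual uniform metrics for KG link prediction benchmarks."""
    ranks = np.asarray(ranks, dtype=float)
    metrics = {"MRR": float(np.mean(1.0 / ranks))}
    for k in ks:
        metrics[f"Hits@{k}"] = float(np.mean(ranks <= k))
    return metrics

# Toy ranks of the correct tail entity for five test triples.
print(mrr_and_hits([1, 4, 2, 15, 7]))
```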
  • Publication
    Small Data in NLU: Proposals towards a Data-Centric Approach
    (2021)
    Zarcone, Alessandra
    Domain-specific voice assistants often suffer from data scarcity. Publicly available, annotated datasets are in short supply and rarely fit the domain and the language required by a specific use case. Insufficient attention to data quality can generally be problematic when it comes to training and evaluation. The Computational Linguistics (CL) community has gained expertise and developed best practices for high-quality data annotation and collection as well as for qualitative data analysis. However, the recent model-centric focus in AI and ML has not created ideal conditions for a fruitful collaboration with CL and the more data-centric fields of NLP to tackle data quality issues. We showcase principles and methods from CL / NLP research which can guide the development of data-centric NLU for domain-specific voice assistants but have typically been overlooked by common practices in ML / AI. Those principles can also help shape data-centric practices in other domains. We argue that paying more attention to data quality and domain specificity can go a long way in improving the NLU components of today's voice assistants.
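    One concrete example of the CL annotation practices the paper points to is measuring inter-annotator agreement as a data-quality check. The sketch below computes Cohen's kappa for two hypothetical intent annotators; the labels and the choice of kappa as the agreement measure are illustrative, not taken from the paper.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected inter-annotator agreement between two annotators,
    a standard data-quality check from the CL annotation tradition."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy intent annotations from two annotators for the same six utterances.
print(cohens_kappa(
    ["play_music", "set_alarm", "play_music", "weather", "weather", "set_alarm"],
    ["play_music", "set_alarm", "weather", "weather", "weather", "set_alarm"],
))
```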
  • Publication
    Validation of Simulation-Based Testing: Bypassing Domain Shift with Label-to-Image Synthesis
    (2021)
    Brito, Eduardo; Schmidt, Nico M.; Schlicht, Peter; Schneider, Jan David; Hüger, Fabian; Rottmann, Matthias
    Many machine learning applications can benefit from simulated data for systematic validation, in particular if real-life data is difficult to obtain or annotate. However, since simulations are prone to domain shift w.r.t. real-life data, it is crucial to verify the transferability of the obtained results. We propose a novel framework consisting of a generative label-to-image synthesis model together with different transferability measures to inspect to what extent we can transfer testing results of semantic segmentation models from synthetic data to equivalent real-life data. With slight modifications, our approach is extendable to, e.g., general multi-class classification tasks. Grounded in the transferability analysis, our approach additionally allows for extensive testing by incorporating controlled simulations. We validate our approach empirically on a semantic segmentation task on driving scenes. Transferability is tested using correlation analysis of IoU and a learned discriminator. Although the latter can distinguish between real-life and synthetic tests, in the former we observe surprisingly strong correlations of 0.7 for both cars and pedestrians.
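    A minimal sketch of the correlation-based transferability check described above: compute per-test IoU values on synthetic and corresponding real data and report their Pearson correlation. How the IoU pairs are formed (per image, per model, per scenario) is an assumption here; the abstract only reports the resulting correlations of about 0.7.

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over union of two boolean masks for one class."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter / union) if union else float("nan")

def transferability_correlation(ious_synthetic, ious_real):
    """Pearson correlation of paired IoU values from synthetic tests and the
    corresponding real-life tests (pairing scheme assumed, see lead-in)."""
    return float(np.corrcoef(ious_synthetic, ious_real)[0, 1])

# Toy example with random masks standing in for segmentation outputs.
rng = np.random.default_rng(1)
print(iou(rng.random((64, 64)) > 0.5, rng.random((64, 64)) > 0.5))
print(transferability_correlation([0.61, 0.72, 0.55, 0.80], [0.58, 0.70, 0.50, 0.77]))
```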
  • Publication
    Bayesian Optimization for Min Max Optimization
    A solution that is only reliable under favourable conditions is hardly a safe solution. Min Max Optimization is an approach that returns optima that are robust against worst-case conditions. We propose algorithms that perform Min Max Optimization in a setting where the function to be optimized is not known a priori and hence has to be learned by experiments. We therefore extend the Bayesian Optimization setting, which is tailored to maximization problems, to Min Max Optimization problems. While related work extends the two acquisition functions Expected Improvement and Gaussian Process Upper Confidence Bound, we extend the two acquisition functions Entropy Search and Knowledge Gradient. These acquisition functions are able to gain knowledge about the optimum instead of just looking for points that are supposed to be optimal. In our evaluation we show that these acquisition functions allow for better solutions, converging faster to the optimum than the benchmark settings.
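    For reference, the min-max target the proposed algorithms approximate can be written as argmax_x min_w f(x, w). The sketch below enumerates it by brute force for a known toy objective; in the paper's setting f is unknown and must be learned from experiments via Bayesian Optimization, so this is only a sanity-check baseline with a hypothetical objective.

```python
import numpy as np

def min_max_optimum(f, xs, ws):
    """Brute-force reference for the min-max target: the design x whose
    worst-case value over conditions w is largest. In the paper f is unknown
    and has to be learned by experiments; here it is given, so we enumerate."""
    worst_case = np.array([min(f(x, w) for w in ws) for x in xs])
    best = int(np.argmax(worst_case))
    return float(xs[best]), float(worst_case[best])

# Hypothetical objective: performance of design x under disturbance w.
f = lambda x, w: -(x - 0.3) ** 2 - 0.5 * np.sin(5 * w) * x
xs = np.linspace(0.0, 1.0, 101)
ws = np.linspace(0.0, 1.0, 101)
print(min_max_optimum(f, xs, ws))
```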
  • Publication
    Characteristics of Monte Carlo Dropout in Wide Neural Networks
    (2020)
    Sicking, Joachim; Fischer, Asja
    Monte Carlo (MC) dropout is one of the state-of-the-art approaches for uncertainty estimation in neural networks (NNs). It has been interpreted as approximately performing Bayesian inference. Based on previous work on the approximation of Gaussian processes by wide and deep neural networks with random weights, we study the limiting distribution of wide untrained NNs under dropout more rigorously and prove that they, too, converge to Gaussian processes for fixed sets of weights and biases. We sketch an argument that this property might also hold for infinitely wide feed-forward networks that are trained with (full-batch) gradient descent. The theory is contrasted with an empirical analysis in which we find correlations and non-Gaussian behavior for the pre-activations of finite-width NNs. We therefore investigate how (strongly) correlated pre-activations can induce non-Gaussian behavior in NNs with strongly correlated weights.
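    A minimal MC dropout sketch, assuming a small untrained feed-forward network (loosely echoing the wide untrained NNs studied in the paper): keep dropout stochastic at prediction time and aggregate repeated forward passes into a mean and a standard deviation as uncertainty estimate. Architecture, dropout rate and sample count are illustrative choices.

```python
import torch
import torch.nn as nn

# Untrained wide feed-forward net with dropout, loosely echoing the paper's setting.
model = nn.Sequential(
    nn.Linear(1, 256), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(256, 256), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(256, 1),
)

def mc_dropout_predict(model, x, n_samples=100):
    """Keep dropout stochastic at prediction time and aggregate repeated
    forward passes into a predictive mean and standard deviation."""
    model.train()                              # keeps nn.Dropout active
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.linspace(-2.0, 2.0, 5).unsqueeze(-1)
mean, std = mc_dropout_predict(model, x)
print(mean.squeeze(), std.squeeze())
```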
  • Publication
    Towards Map-Based Validation of Semantic Segmentation Masks
    (2020)
    Hueger, Fabian; Schneider, Jan David
    Artificial intelligence for autonomous driving must meet strict requirements on safety and robustness. We propose to validate machine learning models for self-driving vehicles not only with given ground truth labels, but also with additional a priori knowledge. In particular, we suggest validating the drivable area in semantic segmentation masks using given street map data. We present first results, which indicate that prediction errors can be uncovered by map-based validation.
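    A minimal sketch of the proposed map-based check, under the assumption that the street map has already been projected into the image frame as a binary road mask: compare it with the predicted drivable area via IoU and flag large disagreements. The class id, threshold and projection step are assumptions, not details from the paper.

```python
import numpy as np

def map_based_drivable_check(pred_labels, map_road_mask, road_id=0, min_iou=0.8):
    """Compare the predicted drivable area with a road mask derived from street
    map data (assumed already projected into the image frame). Returns the IoU
    and whether it falls below the threshold, flagging a potential error."""
    pred_road = (pred_labels == road_id)
    inter = np.logical_and(pred_road, map_road_mask).sum()
    union = np.logical_or(pred_road, map_road_mask).sum()
    iou = inter / union if union else float("nan")
    return float(iou), bool(iou < min_iou)

# Toy 4x6 image: according to the map, the bottom half is road.
map_mask = np.zeros((4, 6), dtype=bool)
map_mask[2:, :] = True
pred = np.full((4, 6), 1)          # class 1 = "not road"
pred[2:, 1:] = 0                   # predicted road, slightly misaligned
print(map_based_drivable_check(pred, map_mask))
```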