Fraunhofer-Gesellschaft
2025
Journal Article
Title

Evaluation of AI methods for increasing system reliability based on small data sets (Bewertung von KI-Methoden zur Erhöhung der Systemzuverlässigkeit auf Basis kleiner Datensätze)

Abstract
A high failure rate of products in the field can often be traced back to errors that occur during the production and assembly of components and systems. Product quality, and ultimately product and system reliability, therefore depends primarily on knowledge of these errors and on measures to reduce them in the production and assembly processes. To ensure high-quality products, the approaches presented here implement self-learning algorithms in the form of machine learning (ML) in the process, in particular to remove human fallibility from it. For example, common process defects can be quantified and process-relevant parameters predicted using AI ensemble methods. The aim of using AI is to learn more effectively and quickly, so that high rejection rates, such as those in additive manufacturing, tend towards zero. In principle, simple artificial neural networks (ANNs) can be trained via active learning. In addition, more complex AI ensemble methods can be used that ultimately provide predictions of state or error probabilities and can be used to evaluate process uncertainty. In many technical applications, however, the training data available for ANNs is limited to a small data set, so that deep learning (DL) methods are not applicable. The challenge in technical applications is therefore the selection and validation of machine learning methods that deliver good results on small data sets. To increase the information contained in a limited data set, existing knowledge from literature sources or small test data sets can be introduced via knowledge transfer (KT). In addition, k-fold cross-validation and approaches to synthetic data generation can be used in combination with ANNs to expand the data basis.
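The combination of k-fold cross-validation and synthetic data generation mentioned in the abstract can be sketched with plain NumPy. This is a minimal illustration, not the paper's method: the 20-sample data set, the degree-3 polynomial ridge model, and the jitter-based augmentation are all hypothetical choices made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small data set: 20 samples of one process parameter -> quality metric.
X = rng.uniform(0.0, 1.0, size=(20, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0.0, 0.1, size=20)

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression on polynomial features [1, x, x^2, x^3]."""
    Phi = np.hstack([X**d for d in range(4)])
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(4), Phi.T @ y)

def predict(w, X):
    Phi = np.hstack([X**d for d in range(4)])
    return Phi @ w

def kfold_mse(X, y, k=5):
    """k-fold cross-validation: every sample serves as validation data exactly once,
    so a small data set is used efficiently for both fitting and validation."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = fit_ridge(X[train], y[train])
        errors.append(np.mean((predict(w, X[val]) - y[val]) ** 2))
    return float(np.mean(errors))

# Simple synthetic augmentation: jitter existing samples with small noise
# to expand the data basis (one of many possible generation schemes).
X_aug = np.vstack([X, X + rng.normal(0.0, 0.02, X.shape)])
y_aug = np.concatenate([y, y + rng.normal(0.0, 0.02, y.shape)])

print(kfold_mse(X, y), kfold_mse(X_aug, y_aug))
```

The cross-validated error gives a less optimistic performance estimate than a single train/test split, which matters most when every sample counts.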
In our own work, various activation functions, weight initializations and gradient-based optimizers were compared and evaluated with the aim of minimizing computing time and maximizing effectiveness on small data sets. Various regression methods, such as Gaussian process regression (GPR), were tested and evaluated in terms of their applicability to small data sets. Batch normalization, regularization and dropout were implemented to prevent overfitting and to improve the robustness and generalization capability of the models. In a final step, combined models, so-called model ensemble methods, were used, including bagging, boosting, stacking and voting, which help to improve the performance of a prediction model. The results show that the loss value of a prediction model can be significantly reduced by using and combining these ML methods.
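A bagging ensemble with output averaging (the regression analogue of voting) can be sketched as follows. Again a hedged illustration under assumed conditions, not the paper's implementation: the 25-sample data set and the quadratic base model are hypothetical, and the member spread is used only as a rough uncertainty proxy in the spirit of the abstract's process-uncertainty evaluation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small data set (the paper's own data are not available here).
X = rng.uniform(-1.0, 1.0, size=(25, 1))
y = X[:, 0] ** 2 + rng.normal(0.0, 0.05, size=25)

def features(X):
    return np.hstack([np.ones((len(X), 1)), X, X**2])

def fit(X, y, lam=1e-4):
    """Closed-form ridge fit of a quadratic base model."""
    Phi = features(X)
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

def bagging_ensemble(X, y, n_models=10):
    """Bagging: each base model is fitted on a bootstrap resample of the data."""
    models = []
    for _ in range(n_models):
        i = rng.integers(0, len(X), len(X))  # sample with replacement
        models.append(fit(X[i], y[i]))
    return models

def ensemble_predict(models, X):
    """Average the members' outputs; their spread serves as an uncertainty proxy."""
    preds = np.stack([features(X) @ w for w in models])
    return preds.mean(axis=0), preds.std(axis=0)

models = bagging_ensemble(X, y)
mean, std = ensemble_predict(models, X)
print(np.mean((mean - y) ** 2))
```

Averaging over bootstrap-trained members reduces the variance of the prediction, which is exactly the loss-reduction effect the abstract attributes to combining ML methods; boosting and stacking pursue the same goal with sequential or learned combinations instead of simple averaging.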
Author(s)
Slomski-Vetter, Elena
Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF  
Wenzel, Sören
Technische Universität Darmstadt
Journal
VDI Berichte
Language
German