  • Publication
    Framework for Data and AI Lifecycle. Research Project REMORA
    (2022)
    Sawczuk da Silva, Alexandre
    There is considerable potential for Artificial Intelligence (AI) in the industrial domain to improve services and production. The Fraunhofer IKS is developing a framework for the data and AI life cycle that supports every life cycle stage, from AI development through data processing to data analysis. It aims to meet the challenges of deploying AI in the industrial domain and to enable continuous, automated, and dynamic AI applications.
  • Publication
    Framework for Data and AI Life Cycle. Research Project REMORA
    (2022)
    Sawczuk da Silva, Alexandre
    In Industrie 4.0, an increasing amount of data is generated through the intelligent interconnection of machines and processes. This data can be leveraged, through Artificial Intelligence (AI), to generate knowledge that improves production and services. However, it is not sufficient to simply integrate AI: a continuous data and AI life cycle needs to be ensured, and the individual life cycle stages (from data acquisition through AI development to data analysis) need to be executed flexibly and (semi-)automatically. The Fraunhofer IKS is developing a framework to enable and facilitate the flexible and continuous operation of AI in Industrie 4.0. The aim is to support and automate AI development, integration, and operation while reducing the effort for the user. (A minimal orchestration sketch follows below.)
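    One way to picture such a staged, (semi-)automatic life cycle is as a pipeline of pluggable stage functions. The sketch below is purely illustrative and is not part of the REMORA framework: the stage names mirror the abstract, while the function bodies, the shared context dictionary, and the runner are assumptions.

      from typing import Callable

      # Illustrative stage functions: names follow the life cycle stages
      # named in the abstract; the bodies are placeholders.
      def acquire_data(ctx: dict) -> dict:
          ctx["raw"] = [1.0, 2.0, 3.0]          # e.g. read sensor streams
          return ctx

      def process_data(ctx: dict) -> dict:
          ctx["features"] = [x * 2 for x in ctx["raw"]]
          return ctx

      def develop_model(ctx: dict) -> dict:
          ctx["model"] = lambda x: x + 1.0      # stand-in for model training
          return ctx

      def analyze_data(ctx: dict) -> dict:
          ctx["results"] = [ctx["model"](x) for x in ctx["features"]]
          return ctx

      PIPELINE: list[Callable[[dict], dict]] = [
          acquire_data, process_data, develop_model, analyze_data,
      ]

      def run(pipeline, ctx=None):
          # Execute the stages in order; a real framework would add
          # monitoring, retries, and triggers to re-run individual stages.
          ctx = ctx if ctx is not None else {}
          for stage in pipeline:
              ctx = stage(ctx)
          return ctx

      print(run(PIPELINE)["results"])   # [3.0, 5.0, 7.0]

    Because each stage only consumes and produces the shared context, individual stages can be swapped or re-executed automatically, which is the kind of flexibility the abstract calls for.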
  • Publication
    Towards a Self-Adaptive Architecture for Federated Learning of Industrial Automation Systems
    Emerging Industry 4.0 architectures deploy data-driven applications and artificial intelligence services across multiple locations under varying ownership, and require specific data protection and privacy considerations so that confidential data is not exposed to third parties. For this reason, federated learning provides a framework for optimizing machine learning models within individual manufacturing facilities without requiring central access to their training data. In this paper, we propose a self-adaptive architecture for federated learning of industrial automation systems. Our approach considers the involved entities at the different levels of abstraction of an industrial ecosystem. To achieve the goal of global model optimization while reducing communication cycles, each factory internally trains the model in a self-adaptive manner and sends it to the centralized cloud server for global aggregation. We model a multi-assignment optimization problem by dividing the dataset into a number of subsets equal to the number of devices; each device chooses the right subset for optimizing the model at each local iteration. Our initial analysis shows the convergence of the algorithm on a training dataset with different numbers of factories and devices. Moreover, these results demonstrate higher model accuracy with our self-adaptive architecture than with the federated averaging approach for the same number of communication cycles. (A sketch of the federated averaging baseline follows below.)
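    To make the baseline in this comparison concrete, the sketch below implements federated averaging (FedAvg) on a toy linear-regression task: each simulated factory trains locally on data that never leaves it, and a central server averages the resulting models weighted by local dataset size. This is only the baseline the abstract compares against; the paper's self-adaptive subset assignment is not reproduced here, and all names and the NumPy setup are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      def local_update(w, X, y, lr=0.1, epochs=5):
          # One client's local training: plain gradient descent on a
          # linear least-squares loss; the raw data stays on the device.
          w = w.copy()
          for _ in range(epochs):
              grad = 2.0 * X.T @ (X @ w - y) / len(y)
              w -= lr * grad
          return w

      def federated_averaging(clients, w, rounds=20):
          # Each round, every client trains locally and the server
          # aggregates the models weighted by local dataset size.
          sizes = np.array([len(y) for _, y in clients])
          for _ in range(rounds):
              local_ws = [local_update(w, X, y) for X, y in clients]
              w = sum(n * lw for n, lw in zip(sizes, local_ws)) / sizes.sum()
          return w

      # Three simulated factories, each with private local data.
      true_w = np.array([1.5, -2.0])
      clients = []
      for _ in range(3):
          X = rng.normal(size=(50, 2))
          y = X @ true_w + 0.05 * rng.normal(size=50)
          clients.append((X, y))

      w = federated_averaging(clients, w=np.zeros(2))
      print("recovered weights:", w)   # should approach true_w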
  • Publication
    Benchmarking Uncertainty Estimation Methods for Deep Learning with Safety-Related Metrics
    Deep neural networks generally perform very well at giving accurate predictions, but they often fail to recognize when these predictions may be wrong. This lack of awareness of the reliability of their outputs is a major obstacle to deploying such models in safety-critical applications. Certain approaches try to address this problem by designing the models to give more reliable values for their uncertainty. However, even though the performance of these models is compared in various ways, there is no thorough evaluation comparing them in a safety-critical context using metrics designed to describe the trade-offs between performance and safe system behavior. In this paper we attempt to fill this gap by evaluating and comparing several state-of-the-art methods for estimating uncertainty for image classification with respect to safety-related requirements and metrics that are suitable to describe the models' performance in safety-critical domains. We show the relationship between the remaining error for predictions with high confidence and its impact on performance for three common datasets. In particular, Deep Ensembles and Learned Confidence show high potential to significantly reduce the remaining error with only moderate performance penalties. (A sketch of the remaining-error metric follows below.)
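    The abstract does not spell out how the remaining error is computed; a plausible reading, used in the sketch below, is the error rate among predictions whose confidence clears a threshold, reported together with the coverage (the fraction of predictions retained). The function name, the threshold, and the toy data are assumptions.

      import numpy as np

      def remaining_error(confidences, correct, threshold=0.9):
          # Fraction of high-confidence predictions that are wrong, plus
          # the coverage (how many predictions clear the threshold at all).
          mask = confidences >= threshold
          coverage = mask.mean()
          if not mask.any():
              return 0.0, 0.0
          error = (~correct[mask]).mean()
          return error, coverage

      # Toy example: six predictions with softmax-style confidences.
      conf = np.array([0.99, 0.95, 0.80, 0.97, 0.60, 0.92])
      correct = np.array([True, True, False, False, True, True])

      err, cov = remaining_error(conf, correct)
      print(f"remaining error {err:.2f} at coverage {cov:.2f}")
      # -> remaining error 0.25 at coverage 0.67

    Sweeping the threshold traces the trade-off the paper studies: a method with well-calibrated uncertainty keeps the remaining error low without sacrificing too much coverage.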