Now showing 1 - 10 of 60
  • Publication
    Towards a Self-Adaptive Architecture for Federated Learning of Industrial Automation Systems
    Emerging Industry 4.0 architectures deploy data-driven applications and artificial intelligence services across multiple locations under varying ownership and therefore require specific data protection and privacy considerations so that confidential data is not exposed to third parties. Federated learning provides a framework for optimizing machine learning models trained within individual manufacturing facilities without requiring access to the training data itself. In this paper, we propose a self-adaptive architecture for federated learning of industrial automation systems. Our approach considers the involved entities on the different levels of abstraction of an industrial ecosystem. To optimize the global model while reducing communication cycles, each factory internally trains the model in a self-adaptive manner and sends it to the centralized cloud server for global aggregation. We model a multi-assignment optimization problem by dividing the dataset into a number of subsets equal to the number of devices. Each device chooses the right subset to optimize the model at each local iteration. Our initial analysis shows the convergence of the algorithm on a training dataset with different numbers of factories and devices. Moreover, these results demonstrate higher model accuracy with our self-adaptive architecture than with the federated averaging approach for the same number of communication cycles.
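    The global aggregation step this abstract compares against is federated averaging. A minimal sketch of that baseline, assuming weights arrive as per-layer arrays and are weighted by local sample counts (function name and weighting scheme are illustrative, not the paper's implementation):

```python
import numpy as np

def federated_average(local_models, sample_counts):
    """Aggregate locally trained model weights into a global model,
    weighting each factory's contribution by its number of samples."""
    total = sum(sample_counts)
    return [
        sum(w * (n / total) for w, n in zip(layer_weights, sample_counts))
        for layer_weights in zip(*local_models)
    ]

# Two factories; each model is a list of per-layer weight arrays.
model_a = [np.array([1.0, 2.0])]
model_b = [np.array([3.0, 4.0])]
global_model = federated_average([model_a, model_b], sample_counts=[100, 300])
# The factory with 300 samples contributes 75% of the average.
```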
  • Publication
    Dependable and Efficient Cloud-Based Safety-Critical Applications by Example of Automated Valet Parking
    (2021) Shekhada, Dhavalkumar; Ishigooka, Tasuku; Otsuka, Satoshi; Mizuochi, Mariko
    Future embedded systems and services will be seamlessly connected and will interact on all levels with the infrastructure and cloud. For safety-critical applications, this means that it is not sufficient to ensure dependability in a single embedded system; it is necessary to cover the complete service chain, including all involved embedded systems as well as the involved services running in the edge or the cloud. However, for the development of such Cyber-Physical Systems-of-Systems (CPSoS), engineers must consider all kinds of dependability requirements. For example, it is not an option to ensure safety by impeding reliability or availability requirements. In fact, it is the engineers' task to optimize the CPSoS' performance without violating any safety goals. In this paper, we identify the main challenges of developing CPSoS based on several industrial use cases and present our novel approach for designing cloud-based safety-critical applications with optimized performance, using the example of an automated valet parking system. The evaluation shows that our monitoring and recovery solution ensures superior performance in comparison to current methods while meeting the system's safety demands in case of connectivity-related faults.
  • Publication
    DevOps for Developing Cyber-Physical Systems
    (Fraunhofer IKS, 2021) Rothe, Johannes; Tenorth, Moritz
    In the age of digitalization, the success or failure of a product depends on bug-free and feature-rich software. Driven by consumer expectations and competition between vendors, software can no longer be delivered as-is but needs to be continuously supported and updated for a period of time. In large and complex projects, this can be a challenging task, which many IT companies are approaching with the state-of-the-art software development process DevOps. For companies manufacturing high-tech products, software is also becoming ever more critical, and companies are struggling with handling the complexity of long-term software support. The adoption of modern development processes such as DevOps is challenging, as the real-world environment in which the systems operate induces challenges and requirements that are unique to each product and company. Once they are addressed, however, DevOps has the potential to deliver more sophisticated products with minimal software errors, thus increasing the value provided to customers and giving the company a considerable competitive advantage.
  • Publication
    Safe Interaction of Automated Forklifts and Humans at Blind Corners in a Warehouse with Infrastructure Sensors
    (2021) Ishigooka, Tasuku; Otsuka, Satoshi; Mizuochi, Mariko
    Co-working and interaction of automated systems and humans in a warehouse are a significant challenge for advancing the autonomy of industrial systems. Blind corners in particular pose a critical scenario, in which infrastructure-based sensors can provide additional safety. The automation of vehicles is usually tied to an argument of improved safety. However, current standards still rely on the awareness of humans to avoid collisions, which is limited at corners with occlusion. Based on an examination of blind corner scenarios in a warehouse, we derive the relevant critical situations. We propose an architecture that uses infrastructure sensors to prevent collisions between humans and automated forklifts at blind corners. This includes a safety-critical function using wireless communication, which may sporadically be unavailable or disturbed. The proposed architecture therefore mitigates these faults and gracefully degrades performance if required. In our extensive evaluation, we use a warehouse simulation to verify our approach and to estimate the impact on an automated forklift's performance.
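    The graceful degradation described here can be sketched as a speed-limit decision that treats stale infrastructure messages as a wireless fault. All names and thresholds below are illustrative assumptions, not the paper's architecture:

```python
def forklift_speed_limit(human_detected, msg_age_s, max_age_s=0.5,
                         normal=2.0, cautious=0.5):
    """Decide a speed limit (m/s) before a blind corner.
    If the infrastructure sensor message is stale (wireless fault),
    degrade gracefully to a cautious speed instead of stopping."""
    if msg_age_s > max_age_s:   # communication unavailable or disturbed
        return cautious
    if human_detected:          # human reported behind the corner: stop
        return 0.0
    return normal               # corner confirmed clear: full speed

assert forklift_speed_limit(False, 0.1) == 2.0   # clear, fresh message
assert forklift_speed_limit(True, 0.1) == 0.0    # human detected
assert forklift_speed_limit(False, 1.0) == 0.5   # stale message: degrade
```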
  • Publication
    Enhanced System Awareness as Basis for Resilience of Autonomous Vehicles
    The transition to autonomous driving and the increasing automation of cars require these systems to take correct decisions in very complex situations. For this, understanding a vehicle system's own capabilities and the environmental context is crucial. We introduce our approach of enhancing the system awareness of vehicles to handle changes gracefully while optimizing overall performance. Based on system health management, the available capabilities of the distributed vehicle system can be determined. By taking the environment into account in the form of so-called operational domains at run-time, self- and context-awareness can be established, providing a situation picture to which the system can adapt. We developed a service-contract-based solution to trigger degradations or find optimal configurations without endangering safety goals. Our approach is evaluated in an intersection scenario, where we highlight the advantages of enhanced system awareness for optimizing an autonomous vehicle's performance.
  • Publication
    Machine Learning Methods for Enhanced Reliable Perception of Autonomous Systems
    (Fraunhofer IKS, 2021) Henne, Maximilian
    In our modern life, automated systems are already omnipresent. The latest advances in machine learning (ML) help with increasing automation and the fast-paced progression towards autonomous systems. However, as such methods are not inherently trustworthy and are being introduced into safety-critical systems, additional means are needed. In autonomous driving, for example, we can derive the main challenges of introducing ML in the form of deep neural networks (DNNs) for vehicle perception. DNNs tend to be overconfident in their predictions and assign high confidence scores in the wrong situations. To counteract this, we have introduced several techniques to estimate the uncertainty of the results of DNNs. In addition, we present what are known as out-of-distribution detection methods, which identify unknown concepts that have not been learned beforehand, thus helping to avoid wrong decisions. For the task of reliably detecting objects in 2D and 3D, we outline further methods. To apply ML in the perception pipeline of autonomous systems, we propose using the supplementary information from these methods for more reliable decision-making. Our evaluations with respect to safety-related metrics show the potential of this approach. Moreover, we have applied these enhanced ML methods, and newly developed ones, to the autonomous driving use case. In variable environmental conditions, such as road scenarios, light, or weather, we have been able to enhance the reliability of perception in automated driving systems. Our ongoing and future research further evaluates and improves the trustworthiness of ML methods so that they can be used safely and with a high level of performance in various types of autonomous systems, ranging from vehicles to autonomous mobile robots to medical devices.
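    A minimal example of the out-of-distribution detection idea mentioned above is the maximum-softmax-probability baseline: flag inputs where the network's top class probability is low. This is a common textbook baseline, not the specific methods developed in the work; the threshold is an illustrative assumption:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def is_out_of_distribution(logits, threshold=0.7):
    """Flag an input as out-of-distribution when the maximum softmax
    probability falls below a confidence threshold."""
    return float(np.max(softmax(logits))) < threshold

# A confidently separated prediction vs. a nearly flat, uncertain one.
assert not is_out_of_distribution(np.array([8.0, 0.5, 0.2]))
assert is_out_of_distribution(np.array([1.0, 0.9, 1.1]))
```

Note the limitation that motivates the abstract: overconfident DNNs can produce high softmax scores even on unknown inputs, which is why richer uncertainty estimates are needed.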
  • Publication
    Benchmarking Uncertainty Estimation Methods for Deep Learning with Safety-Related Metrics
    Deep neural networks generally perform very well at giving accurate predictions, but they often fail to recognize when these predictions may be wrong. This lack of awareness regarding the reliability of given outputs is a major obstacle to deploying such models in safety-critical applications. Certain approaches try to address this problem by designing models to give more reliable values for their uncertainty. However, even though the performance of these models is compared in various ways, there is no thorough evaluation comparing them in a safety-critical context using metrics that are designed to describe trade-offs between performance and safe system behavior. In this paper, we attempt to fill this gap by evaluating and comparing several state-of-the-art methods for estimating uncertainty in image classification with respect to safety-related requirements and metrics that are suitable to describe the models' performance in safety-critical domains. We show the relationship between the remaining error for predictions with high confidence and its impact on performance for three common datasets. In particular, Deep Ensembles and Learned Confidence show high potential to significantly reduce the remaining error with only moderate performance penalties.
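    The "remaining error for predictions with high confidence" can be read as the error rate among predictions the system would actually trust. A sketch of one plausible formulation (the function name and the 0.9 threshold are illustrative assumptions, not the paper's exact metric definition):

```python
def remaining_error_rate(confidences, correct, threshold=0.9):
    """Fraction of high-confidence predictions that are wrong: the
    errors a downstream system would act on because it trusts them."""
    accepted = [ok for c, ok in zip(confidences, correct) if c >= threshold]
    if not accepted:
        return 0.0
    return accepted.count(False) / len(accepted)

confs   = [0.99, 0.95, 0.92, 0.60, 0.55]
correct = [True, True, False, False, True]
# Three predictions pass the 0.9 threshold; one of them is wrong.
assert abs(remaining_error_rate(confs, correct) - 1/3) < 1e-9
```

The trade-off the abstract measures follows directly: raising the threshold lowers the remaining error but rejects more predictions, which is the performance penalty.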
  • Publication
    Towards Dynamic Safety Management for Autonomous Systems
    Safety assurance of autonomous systems is one of the current key challenges of safety engineering. Given the specific characteristics of autonomous systems, we need to deal with many uncertainties, making it difficult or even impossible to predict the system's behaviour in all potential operational situations. Simply using established static safety approaches would result in very strict worst-case assumptions, making the development of autonomous systems at reasonable cost impossible. This paper therefore introduces the idea of dynamic safety management, which enables a system to assess its safety and to self-optimize its performance at runtime. Considering the current risk related to the actual context at runtime, instead of being bound to strict worst-case assumptions, provides the essential basis for the development of safe and yet cost-efficient autonomous systems.
  • Publication
    Managing Uncertainty of AI-based Perception for Autonomous Systems
    (2019) Henne, Maximilian
    With the advent of autonomous systems, machine perception is a decisive safety-critical part of making such systems become reality. However, presently used AI-based perception does not meet the reliability required for usage in real-world systems beyond prototypes, such as autonomous cars. In this work, we describe the challenge of reliable perception for autonomous systems. Furthermore, we identify methods and approaches to quantify the uncertainty of AI-based perception. Along with dynamic safety management, we show a path to how uncertainty information can be utilized for perception so that it meets the high dependability demands of life-critical autonomous systems.
  • Publication
    Resumption of runtime verification monitors: Method, approach and application
    (2018) Bauer, Bernhard
    Runtime verification checks whether the behavior of a system under observation in a certain run satisfies a given correctness property. While a positive description of the system's behavior is often available from the specification, it contains no information on how the monitor should continue in case the system deviates from this behavior. If the monitor does not resume its operation in the right way, test coverage will be unnecessarily low or further observations will be misclassified. To close this gap, we present a new method, called resumption, for extending state-based runtime monitors in an automated way. To this end, this paper examines how runtime verification monitors based on a positive behavior description can be resumed so that they find all detectable deviations instead of reporting only invalid traces. Moreover, we examine when resumption can be applied successfully and present alternative resumption algorithms. Using an evaluation framework, their precision and recall for detecting different kinds of deviations are compared. While the algorithm that seeks expected behavior for resumption works very well in all evaluated cases, the framework can also be used to find the best-suited resumption extension for a specific application scenario. Further, two real-world application scenarios are introduced in which resumption has been successfully applied.
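    The core idea of resumption can be sketched with a state-machine monitor that, on a deviation, searches for a state able to consume the next event instead of rejecting the rest of the trace. This is a simplified illustration of the "seek expected behavior" strategy; the monitor encoding and event names are assumptions, not the paper's algorithms:

```python
def monitor_with_resumption(transitions, start, trace):
    """Run a state-based monitor over an event trace. On a deviation,
    record it and resume by jumping to any state that accepts the
    offending event, so later deviations remain detectable."""
    state, deviations = start, []
    for i, event in enumerate(trace):
        if (state, event) in transitions:
            state = transitions[(state, event)]
        else:
            deviations.append((i, state, event))
            # Resumption: find a state from which this event is expected.
            for (s, e), nxt in transitions.items():
                if e == event:
                    state = nxt
                    break
    return deviations

# Specification: a file handle must follow open -> read* -> close.
t = {("idle", "open"): "opened",
     ("opened", "read"): "opened",
     ("opened", "close"): "idle"}
# "read" after "close" deviates; the monitor resumes and keeps checking.
devs = monitor_with_resumption(t, "idle", ["open", "close", "read", "close"])
```

Without resumption, the monitor would stop at the first invalid "read" and the final "close" would never be classified, which is exactly the coverage loss the abstract describes.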