  • Publication
    Combining the Advantages of On- and Offline Industrial Robot Programming
    (2019)
    Guhl, Jan; Nikoleizig, Sylvio; et al.
    Classic offline and online programming approaches offer different advantages, but neither provides the intuitive programming needed for frequent reconfiguration of industrial robot cells and their programs. While simulation, and with it collision checking, can be used offline, the highest precision is achieved through online teach-in. We propose a system that combines the advantages of both offline and online programming. By creating a programming framework that is independent of input and output devices, we allow the use of augmented as well as virtual reality, enabling the user to choose the programming technique that best suits their needs during the different stages of programming. Bringing both methods together allows us not only to drastically reduce programming time but also to increase the intuitiveness of human-robot interaction.
  • Publication
    Enabling Human-Robot-Interaction via Virtual and Augmented Reality in Distributed Control Systems
    Production and assembly lines are nowadays transforming into flexible and interconnected cells due to rapidly changing production demands. Changes include, for example, varying locations and poses of the processed workpieces and tools as well as of the involved machinery such as industrial robots; even the combination or sequence of production steps may vary. For older machines, the task of reconfiguration and calibration can be time-consuming. Together with the expected shortage of highly skilled workers, this may lead to future challenges, especially for small and medium-sized enterprises. One possibility to address these challenges is to use distributed or cloud-based control for the participating machines. These approaches allow the use of more sophisticated and therefore usually computationally heavier algorithms than classic monolithic controls offer, ranging from simple visual servoing applications to more complex scenarios such as sampling-based path planning in a previously 3D-reconstructed robot cell. However, moving the computation of the machine's reactions physically and logically away from the machine control complicates the supervision and verification of the computed robot paths and trajectories. This poses a potential threat to the operator, who can hardly predict or control the robot's movements. To overcome this drawback, this paper presents a system that allows the user to interact with industrial robots and other cyber-physical systems via augmented and virtual reality. Topics covered in this paper are the architecture and concept of the distributed system, first implementation details, and promising results obtained with a Microsoft HoloLens and other visualization devices.
  • Publication
    Intuitive robot programming through environment perception, augmented reality simulation and automated program verification
    (2018)
    Wassermann, Jonas; et al.
    The increasing complexity of products and machines as well as short production cycles with small lot sizes present great challenges to the production industry. Both programming industrial robots online using hand-held control devices and programming them offline using text-based languages require specific knowledge of robotics and of manufacturer-dependent robot control systems. In particular for small and medium-sized enterprises, the machine control software needs to be easy, intuitive, and usable without time-consuming learning, even for employees with no in-depth knowledge of information technology. To simplify the creation of application programs for industrial robots, we extended a cloud-based, task-oriented robot control system with environment perception and plausibility check functions. For environment perception, a depth camera and point cloud processing hardware were installed. Objects located in the robot's workspace are detected through point cloud processing with ROS and the Point Cloud Library (PCL) and added to the augmented reality user interface of the robot control. The combination of process knowledge from task-oriented application programming and information about available workpieces from automated image processing enables a plausibility check and verification of the robot program before execution. After a robot program has been approved by the plausibility check, it is tested in an augmented reality simulation for collisions with the detected objects before deployment to the physical robot hardware. Experiments were carried out to evaluate the effectiveness of the developed extensions and confirmed their functionality.
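
The first publication above describes a programming framework that is independent of input and output devices, so that online teach-in and AR/VR-based offline programming can feed the same robot program. The following Python sketch illustrates one way such a device-agnostic abstraction could look; every name in it (InputDevice, TeachPendantInput, ARHeadsetInput, RobotProgram, Pose) is a hypothetical illustration, not an interface from the paper.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List


@dataclass
class Pose:
    """Cartesian target pose of the robot flange (hypothetical, simplified)."""
    x: float
    y: float
    z: float
    rx: float = 0.0
    ry: float = 0.0
    rz: float = 0.0


class InputDevice(ABC):
    """Common interface every programming front-end has to implement."""

    @abstractmethod
    def acquire_pose(self) -> Pose:
        """Return the next target pose chosen by the user on this device."""


class TeachPendantInput(InputDevice):
    """Online teach-in: the pose is taken from the real robot's current position."""

    def __init__(self, current_flange_pose: Pose):
        self._pose = current_flange_pose

    def acquire_pose(self) -> Pose:
        return self._pose  # highest precision: read from the physical robot


class ARHeadsetInput(InputDevice):
    """Offline/AR programming: the pose is placed virtually, e.g. on a headset."""

    def __init__(self, placed_pose: Pose):
        self._pose = placed_pose

    def acquire_pose(self) -> Pose:
        return self._pose  # fast rough layout, can be refined online later


@dataclass
class RobotProgram:
    """Device-independent program: simply an ordered list of target poses."""
    waypoints: List[Pose] = field(default_factory=list)

    def append_from(self, device: InputDevice) -> None:
        self.waypoints.append(device.acquire_pose())


if __name__ == "__main__":
    program = RobotProgram()
    # Rough layout of the path in AR, then a precision point via online teach-in.
    program.append_from(ARHeadsetInput(Pose(0.50, 0.10, 0.30)))
    program.append_from(TeachPendantInput(Pose(0.502, 0.098, 0.121)))
    print(f"{len(program.waypoints)} waypoints recorded")
```

Because both front-ends produce the same Pose objects, the user can switch between them at any stage of programming without changing the resulting program representation, which is the design idea the abstract points at.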
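
The second publication argues that moving path computation away from the machine control makes it harder for the operator to predict the robot's movements, and proposes AR/VR visualization as a countermeasure. The sketch below illustrates that idea as a simple approval gate: a remotely planned trajectory is first shown on visualization clients and only executed after the operator confirms it. The classes and the placeholder planner are assumptions for illustration, not the distributed architecture implemented in the paper.

```python
from typing import Callable, List, Sequence, Tuple

Waypoint = Tuple[float, float, float]  # simplified Cartesian waypoint (x, y, z)


class RemotePlanner:
    """Stands in for a cloud/distributed controller that computes robot paths."""

    def plan(self, start: Waypoint, goal: Waypoint, steps: int = 5) -> List[Waypoint]:
        # Trivial straight-line interpolation as a placeholder for a real planner
        # (e.g. sampling-based planning in a 3D-reconstructed robot cell).
        return [
            tuple(s + (g - s) * i / (steps - 1) for s, g in zip(start, goal))
            for i in range(steps)
        ]


class VisualizationClient:
    """Stands in for an AR/VR device (e.g. a HoloLens) that previews the path."""

    def __init__(self, name: str):
        self.name = name

    def show(self, path: Sequence[Waypoint]) -> None:
        print(f"[{self.name}] previewing trajectory with {len(path)} waypoints")


def execute_if_approved(
    path: Sequence[Waypoint],
    clients: Sequence[VisualizationClient],
    operator_approves: Callable[[Sequence[Waypoint]], bool],
) -> bool:
    """Gate execution of a remotely computed path behind operator approval."""
    for client in clients:
        client.show(path)
    if not operator_approves(path):
        print("Operator rejected the trajectory; robot stays idle.")
        return False
    print("Operator approved; sending trajectory to the robot controller.")
    return True


if __name__ == "__main__":
    planner = RemotePlanner()
    path = planner.plan(start=(0.4, 0.0, 0.3), goal=(0.6, 0.2, 0.1))
    clients = [VisualizationClient("HoloLens"), VisualizationClient("VR workstation")]
    # Auto-approve here; a real system would wait for a gesture or button press.
    execute_if_approved(path, clients, operator_approves=lambda p: True)
```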
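
The third publication combines objects detected from point clouds with a plausibility check that verifies the robot program against those objects before execution. As a rough illustration of such a check, the sketch below tests program waypoints against axis-aligned bounding boxes of detected objects. The data structures, margin value, and example coordinates are assumptions; the actual system performs the detection with ROS and PCL and the collision test in an augmented reality simulation.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]


@dataclass
class DetectedObject:
    """Axis-aligned bounding box of an object found by point cloud processing."""
    name: str
    min_corner: Point
    max_corner: Point

    def contains(self, p: Point, margin: float = 0.0) -> bool:
        # True if the point lies inside the (margin-inflated) bounding box.
        return all(
            lo - margin <= c <= hi + margin
            for c, lo, hi in zip(p, self.min_corner, self.max_corner)
        )


def plausibility_check(
    waypoints: List[Point],
    obstacles: List[DetectedObject],
    safety_margin: float = 0.05,
) -> List[str]:
    """Return human-readable violations; an empty list means the program is approved."""
    violations = []
    for i, wp in enumerate(waypoints):
        for obj in obstacles:
            if obj.contains(wp, margin=safety_margin):
                violations.append(f"waypoint {i} at {wp} collides with '{obj.name}'")
    return violations


if __name__ == "__main__":
    # Objects as they might be reported by a perception pipeline (illustrative values).
    obstacles = [
        DetectedObject("workpiece carrier", (0.45, -0.05, 0.00), (0.65, 0.15, 0.20)),
    ]
    program = [(0.30, 0.00, 0.40), (0.55, 0.05, 0.10), (0.80, 0.10, 0.30)]

    issues = plausibility_check(program, obstacles)
    if issues:
        print("Program rejected:")
        for issue in issues:
            print(" -", issue)
    else:
        print("Program approved, handing over to AR simulation / robot.")
```

Only programs that pass this kind of check would then proceed to the augmented reality collision simulation and finally to the physical robot, mirroring the staged verification described in the abstract.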