  • Publication
    Extending Reward-based Hierarchical Task Network Planning to Partially Observable Environments
    (2024)
    Mannucci, Tommaso
    Rapid recent developments in robotic applications demand feasible task planning algorithms capable of handling large search spaces. Hierarchical task network (HTN) planning meets this demand by extending classical planning with task decomposition. Recent advances have extended HTN planners with reward functions, increasing their flexibility. Nonetheless, such planners assume a fully observable environment, an assumption that is often violated in realistic domains. This work addresses this challenge by presenting POST-HTN, a tree-search-based solver that accounts for partially observable environments. A qualitative comparison of POST-HTN with the PC-SHOP HTN solver is given in multiple domains, including an industrial inspection domain executed on a mobile robot in the real world.
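    The abstract does not describe the solver's internals; purely as an illustrative sketch of how a tree search over belief states can combine HTN task decomposition with expected rewards under partial observability, the following Python snippet represents a belief as a set of equally weighted world-state particles and picks the decomposition method with the highest expected reward. All class and function names are hypothetical and are not taken from POST-HTN or PC-SHOP.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    primitive: bool = False
    methods: list = field(default_factory=list)   # compound tasks: candidate decompositions
    reward: callable = lambda state: 0.0          # primitive tasks: immediate reward
    apply: callable = lambda state: state         # primitive tasks: state transition

def expected_reward(task_list, belief, depth=0, max_depth=6):
    """Best expected reward reachable from `belief` while executing `task_list`."""
    if not task_list or depth >= max_depth:
        return 0.0
    head, rest = task_list[0], task_list[1:]
    if head.primitive:
        # Average the immediate reward over all belief particles, then recurse
        # on the belief obtained by applying the task to every particle.
        r = sum(head.reward(s) for s in belief) / len(belief)
        next_belief = [head.apply(s) for s in belief]
        return r + expected_reward(rest, next_belief, depth + 1, max_depth)
    if not head.methods:
        return 0.0
    # Compound task: evaluate each decomposition method and keep the best one.
    return max(expected_reward(list(m) + rest, belief, depth + 1, max_depth)
               for m in head.methods)

# Toy domain: the robot is unsure whether it already stands at the inspection target.
move = Task("move", primitive=True, reward=lambda s: -1.0,
            apply=lambda s: s | {"at_target"})
inspect = Task("inspect", primitive=True,
               reward=lambda s: 10.0 if "at_target" in s else 0.0)
mission = Task("mission", methods=[[move, inspect], [inspect]])
belief = [frozenset(), frozenset({"at_target"})]   # two equally likely world states
print(expected_reward([mission], belief))          # 9.0: moving first pays off in expectation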
  • Publication
    Cooperative Automated Driving for Bottleneck Scenarios in Mixed Traffic
    (2023-06)
    Baumann, M.V.; Buck, H.S.; Deml, Barbara; Ehrhardt, Sofie; Lauer, Martin; Stiller, Christoph; Vortisch, Peter
    Connected automated vehicles (CAV), which incorporate vehicle-to-vehicle (V2V) communication into their motion planning, are expected to provide a wide range of benefits for individual and overall traffic flow. A frequent constraint or required precondition is that compatible CAVs must already be available in traffic at high penetration rates. Achieving such penetration rates incrementally, before the function provides ample benefits for users, presents a chicken-and-egg problem that is common in connected driving development. Based on the example of a cooperative driving function for bottleneck traffic flows (e.g., at a roadblock), we illustrate how such an evolutionary, incremental introduction can be achieved under transparent assumptions and objectives. To this end, we analyze the challenge from the perspectives of automation technology, traffic flow, human factors, and market, and present a principle that 1) accounts for individual requirements from each domain; 2) provides benefits for any penetration rate of compatible CAVs between 0 % and 100 %, as well as upward compatibility for expected future developments in traffic; 3) can strictly limit the negative effects of cooperation for any participant; and 4) can be implemented with close-to-market technology. We discuss the technical implementation as well as the effect on traffic flow over a wide parameter spectrum for human and technical aspects.
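    Purely as an illustration of point 3 above (strictly limiting the negative effects of cooperation for any participant), a cooperative maneuver could be accepted only if the estimated delay for every involved vehicle stays below a configured bound; the function name and the 5-second bound below are assumptions, not values from the paper.

# Illustrative only: a bounded-cost acceptance rule in the spirit of point 3 above.
# The 5-second bound and all names are hypothetical, not taken from the paper.

MAX_INDIVIDUAL_TIME_LOSS_S = 5.0  # assumed upper bound on any participant's delay

def accept_cooperative_maneuver(estimated_time_loss_by_vehicle: dict) -> bool:
    """Accept a proposed maneuver only if no participant loses more than the bound."""
    return all(loss <= MAX_INDIVIDUAL_TIME_LOSS_S
               for loss in estimated_time_loss_by_vehicle.values())

# Example: one CAV yields for 2 s, a non-equipped vehicle loses nothing.
print(accept_cooperative_maneuver({"cav_1": 2.0, "human_driven_2": 0.0}))  # True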
  • Publication
    Ghost: getting invisibly from position A to position B
    The Ghost GR Vision 60 V5 robot is a walking robot from Ghost Robotics with extensive sensor and autonomy capabilities. The autonomy functions are provided by Fraunhofer IOSB’s Algorithm Toolbox (ATB) and enable the robot to perform localization, obstacle mapping, path planning, and path control. While Ghost autonomously moves from position A to position B, the environment is captured by laser scanners. Normally, the shortest path between the two points is taken. But what happens if this path can be seen by enemy observers and the walking robot is disturbed or, in the worst case, destroyed? At Fraunhofer IOSB, a fusion of the sensed environment based on laser scanners, existing height models, and a coverage map was implemented on the walking robot’s system. This enables the robot to navigate tactically: whenever possible, it will actively and autonomously seek cover in the shelter of houses and other objects on its way to the given target point.
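    The abstract does not detail the planning algorithm; the sketch below only illustrates the general idea of cover-aware path planning: a grid search whose step cost grows with a cell's visibility to potential observers, so the planner prefers covered routes when they exist. The cost weighting, the 4-connected grid, and all names are assumptions, not the ATB implementation.

import heapq

def plan_covered_path(occupancy, visibility, start, goal, exposure_weight=10.0):
    """Dijkstra over a 2D grid; step cost = 1 + exposure_weight * visibility of the cell."""
    rows, cols = len(occupancy), len(occupancy[0])
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not occupancy[nr][nc]:
                nd = d + 1.0 + exposure_weight * visibility[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    # Reconstruct the path (empty if the goal was never reached).
    path, cell = [], goal
    while cell in prev or cell == start:
        path.append(cell)
        if cell == start:
            break
        cell = prev[cell]
    return list(reversed(path))

# Example: the direct route along the top row is exposed, so the planner detours below.
occupancy = [[0, 0, 0],
             [0, 1, 0],
             [0, 0, 0]]
visibility = [[0.0, 1.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]]
print(plan_covered_path(occupancy, visibility, (0, 0), (0, 2)))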
  • Publication
    An autonomous crawler excavator for hazardous environments
    As part of ROBDEKON, a 24-ton crawler excavator was equipped with sensors and a digital actuation interface to serve as a technology demonstrator with autonomy capabilities. The system architecture includes algorithms for localization, perception, mapping, planning, and control. The system is capable of tasks such as autonomous driving to a target location, excavation of a predefined area to a given depth, and autonomous loading of an autonomously approaching transport vehicle. To ensure safety, collision avoidance based on 360° perception is always active during autonomous operation. This article presents the concept and implementation of the excavator’s autonomy functionality.
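    Purely as an illustration of how the described capabilities (driving to a target, excavating, loading, with collision avoidance always active) could be sequenced, the sketch below runs a simple mission state machine against a trivial simulated excavator; the mode names and the interface are hypothetical, not the demonstrator's actual software design.

from enum import Enum, auto

class Mode(Enum):
    DRIVE_TO_SITE = auto()
    EXCAVATE = auto()
    LOAD_TRANSPORT = auto()
    DONE = auto()

class SimulatedExcavator:
    """Trivial stand-in that 'finishes' each mode after a fixed number of cycles."""
    def __init__(self):
        self.cycles_in_mode = 0
    def collision_imminent(self):   # would be fed by 360° perception on the real machine
        return False
    def stop(self):
        pass
    def execute(self, mode):        # would trigger planning and control for `mode`
        self.cycles_in_mode += 1
    def mode_finished(self):
        if self.cycles_in_mode >= 3:
            self.cycles_in_mode = 0
            return True
        return False

def mission_step(excavator, mode: Mode) -> Mode:
    # Collision avoidance is evaluated every cycle, independent of the mission mode.
    if excavator.collision_imminent():
        excavator.stop()
        return mode
    if excavator.mode_finished():
        order = [Mode.DRIVE_TO_SITE, Mode.EXCAVATE, Mode.LOAD_TRANSPORT, Mode.DONE]
        return order[order.index(mode) + 1]
    excavator.execute(mode)
    return mode

mode, excavator = Mode.DRIVE_TO_SITE, SimulatedExcavator()
while mode is not Mode.DONE:
    mode = mission_step(excavator, mode)
    print(mode.name)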
  • Publication
    Workspace monitoring and planning for safe mobile manipulation
    In order to enable physical human-robot interaction, where humans and (mobile) manipulators share their workspace and work together, robots have to be equipped with capabilities that guarantee human safety. The robots have to recognize possible collisions with the human co-worker and react anticipatorily by adapting their motion to avert dangerous situations while they are executing their task. Therefore, methods have been developed that monitor the workspace of mobile manipulators using multiple depth sensors to gather information about the robot environment. This encompasses both 3D information about obstacles in the close robot surroundings and the prediction of obstacle motions in the entire monitored space. Based on this information, a collision-free robot motion is planned, and during execution the robot continuously reacts to unforeseen dangerous situations by adapting its planned motion, slowing down, or stopping. For the demonstration of a manufacturing scenario, the developed methods have been implemented on a prototypical mobile manipulator. The algorithms handle both the robot platform and the manipulator in a uniform manner, so that an overall optimization of the path and of the collision avoidance behavior is possible. By integrating the monitoring, planning, and interaction control components, the task of grasping, placing, and delivering objects to humans in a shared workspace is demonstrated.
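    One common way to realize the described "slow down or stop" reaction is to scale the planned velocities with the minimum distance between the robot and the (predicted) positions of obstacles; the sketch below shows such a scaling with illustrative thresholds and is not claimed to be the method implemented on the prototype.

import math

STOP_DISTANCE_M = 0.3   # below this, the robot stops (illustrative value)
SLOW_DISTANCE_M = 1.5   # below this, speed is scaled down linearly (illustrative value)

def speed_scale(min_obstacle_distance_m: float) -> float:
    """Return a factor in [0, 1] applied to the planned platform/joint velocities."""
    if min_obstacle_distance_m <= STOP_DISTANCE_M:
        return 0.0
    if min_obstacle_distance_m >= SLOW_DISTANCE_M:
        return 1.0
    return (min_obstacle_distance_m - STOP_DISTANCE_M) / (SLOW_DISTANCE_M - STOP_DISTANCE_M)

def min_distance(robot_points, predicted_obstacle_points):
    """Minimum Euclidean distance between robot surface points and predicted obstacle points."""
    return min(math.dist(r, o) for r in robot_points for o in predicted_obstacle_points)

# Example: a predicted human hand position 0.9 m from the closest robot link.
print(speed_scale(min_distance([(0.0, 0.0, 1.0)], [(0.0, 0.9, 1.0)])))  # 0.5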
  • Publication
    A Cooperative HCI Assembly Station with Dynamic Projections
    This paper presents a cooperative human-computer interaction (HCI) assembly station which assists a worker during a manual assembly process. The worker's identity, body pose, and height are determined to provide individualized assistance, such as a robot arm that holds workpieces in a position that is ergonomic for the worker. A second robot arm is equipped with a camera and projector to precisely project information directly onto the workpiece. Safe and intuitive human-robot collaboration is achieved by means of workspace monitoring, force detection, compliant control, and hand-guiding. A distinctive feature of this assembly station is that new assembly steps and documentation can be added interactively, directly at the workstation during an assembly, by interacting with the projected GUI through hand gestures. The paper details an assembly station that was developed in a laboratory environment.
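    The gesture interaction with the projected GUI is not specified in detail in the abstract; as an illustrative assumption, a camera-projector homography could map a detected fingertip position in the camera image onto the projected GUI to decide which button was touched. The calibration matrix and button layout below are made up for the example and are not taken from the paper.

import numpy as np

# Homography from camera pixel coordinates to projector (GUI) pixel coordinates,
# e.g., estimated once from matched calibration points (values are illustrative).
H_CAM_TO_PROJ = np.array([[1.05, 0.02, -40.0],
                          [0.01, 1.08, -25.0],
                          [0.0,  0.0,   1.0]])

GUI_BUTTONS = {"next_step": (100, 100, 300, 180),   # (x_min, y_min, x_max, y_max)
               "add_note":  (100, 220, 300, 300)}

def camera_to_gui(point_cam):
    x, y, w = H_CAM_TO_PROJ @ np.array([point_cam[0], point_cam[1], 1.0])
    return x / w, y / w

def pressed_button(fingertip_cam):
    gx, gy = camera_to_gui(fingertip_cam)
    for name, (x0, y0, x1, y1) in GUI_BUTTONS.items():
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return name
    return None

print(pressed_button((230, 140)))  # fingertip over the projected "next_step" button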
  • Publication
    Generic Convoying Functionality for Autonomous Vehicles in Unstructured Outdoor Environments
    Autonomously following a leading vehicle is a major step towards fully autonomous vehicles. The contribution of this work consists in the development, implementation, and validation of two following modes: 1) exact following, i.e., accurate compliance with the reference path, and 2) flexible following, i.e., tolerating deviations from the reference path in order to avoid obstacles. The proposed method can easily be integrated into existing frameworks for autonomous vehicles and is therefore flexible enough to be applied to a large variety of vehicles. To demonstrate the feasibility of our approach, an experimental validation is carried out on two autonomous vehicles with major differences in kinematics, weight, and size: a cross-country wheelchair and an off-road truck. Both exact and flexible following have been successfully demonstrated in unstructured outdoor environments.
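    As a minimal sketch of the difference between the two modes, assume the follower chooses a lateral offset from the leader's recorded reference path: exact following forces the offset to zero, while flexible following selects the cheapest offset inside a corridor around the path. The corridor width, the obstacle cost, and the small offset penalty are illustrative assumptions, not the published method.

import numpy as np

def select_lateral_offset(mode, obstacle_cost_at_offset, corridor_half_width_m=1.5,
                          step_m=0.25):
    """Return the lateral offset (m) from the reference path at which to drive.

    mode: "exact" forces offset 0; "flexible" picks the cheapest offset in a corridor.
    obstacle_cost_at_offset: callable giving an obstacle/clearance cost for an offset.
    """
    if mode == "exact":
        return 0.0
    candidates = np.arange(-corridor_half_width_m, corridor_half_width_m + step_m, step_m)
    # Prefer staying close to the reference path via a small penalty on |offset|.
    costs = [obstacle_cost_at_offset(o) + 0.1 * abs(o) for o in candidates]
    return float(candidates[int(np.argmin(costs))])

# Example: an obstacle blocks the corridor slightly to the left of the reference line.
blocked_left = lambda o: 100.0 if -1.0 < o < 0.3 else 0.0
print(select_lateral_offset("exact", blocked_left))     # 0.0: exact mode stays on the path
print(select_lateral_offset("flexible", blocked_left))  # 0.5: swerves right around the obstacle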
  • Publication
    Real-time hyperspectral stereo processing for the generation of 3D depth information
    We present a local stereo matching method for hyperspectral camera data, which allows the camera hardware and imaging data to be used for multiple purposes, such as object classification or spectral analysis, and provides multichannel input to the correspondence problem. The matching process combines correlation-based similarity measures over pixel windows utilizing all 16 spectral channels, followed by a consistency check for disparity selection. We evaluate stereo-processing methods with a focus on effectiveness and runtime on a CPU and analyze parallelization possibilities. Based on the results of the CPU evaluation, we implement the optimized stereo matching for images with 16 channels on a graphics processing unit (GPU) utilizing the Compute Unified Device Architecture (CUDA). The parallel processing of the calculation steps to obtain the disparity image on the GPU achieves a speedup of more than 27×, resulting in calculation and post-processing of hyperspectral images at 8 to 13 Hz, depending on the selected maximum disparity. The 3D reconstruction achieves a mean square error of 0.0267 m² in distance measurements from 5 m to 10 m.
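    The following NumPy sketch illustrates the general scheme of multi-channel block matching with a left-right consistency check; the window size, the SAD cost (instead of the paper's correlation-based measures), and the tiny random test images are assumptions, and the CUDA implementation is not reproduced here.

import numpy as np

def disparity_map(left, right, max_disp=16, win=3):
    """left/right: H x W x C images; returns integer disparities for the left image."""
    h, w, _ = left.shape
    pad = win // 2
    best = np.zeros((h, w), dtype=np.int32)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)           # right image shifted by candidate disparity
        sad = np.abs(left - shifted).sum(axis=2)      # per-pixel cost summed over all channels
        # Aggregate the cost over a (2*pad+1)^2 window (wrap-around borders, for brevity).
        cost = np.zeros_like(sad)
        for dy in range(-pad, pad + 1):
            for dx in range(-pad, pad + 1):
                cost += np.roll(np.roll(sad, dy, axis=0), dx, axis=1)
        better = cost < best_cost
        best[better], best_cost[better] = d, cost[better]
    return best

def left_right_consistent(disp_l, disp_r, tol=1):
    """Keep a disparity only if the right image's estimate points back to the same pixel."""
    h, w = disp_l.shape
    xs = np.clip(np.arange(w) - disp_l, 0, w - 1).astype(int)
    back = disp_r[np.arange(h)[:, None], xs]
    return np.abs(disp_l - back) <= tol

left = np.random.rand(32, 48, 16)                     # 16 spectral channels
right = np.roll(left, -4, axis=1)                     # synthetic disparity of 4 px
print(np.median(disparity_map(left, right)))          # 4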
  • Publication
    Algorithmen-Toolbox für autonome mobile Robotersysteme
    Heavy machinery is frequently deployed in environments that pose considerable health risks to humans. The goal of current research activities at Fraunhofer IOSB is to equip construction machines with autonomy capabilities so that they can operate independently in the hazard zone. To this end, an algorithm toolbox for autonomous robotic systems was developed; its components range from environment perception over task and motion planning to the execution of the actual work function. In order to evaluate the research results, a technology demonstrator was built that is able to autonomously remove contaminated soil layers. ...
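    Purely as an illustration of how toolbox components ranging from environment perception over task and motion planning to execution can be chained, the sketch below passes a shared context dictionary through a list of components; the component names are placeholders and do not describe the actual ATB interfaces.

from typing import Callable

Component = Callable[[dict], dict]   # each component reads and extends a shared context

def perception(ctx: dict) -> dict:
    ctx["map"] = "terrain and obstacle map"            # e.g., built from laser scans
    return ctx

def task_planning(ctx: dict) -> dict:
    ctx["tasks"] = ["drive_to_area", "remove_soil_layer"]
    return ctx

def motion_planning(ctx: dict) -> dict:
    ctx["trajectories"] = [f"trajectory for {t}" for t in ctx["tasks"]]
    return ctx

def execution(ctx: dict) -> dict:
    ctx["status"] = f"executed {len(ctx['trajectories'])} trajectories"
    return ctx

def run_pipeline(components: list[Component], ctx: dict) -> dict:
    for component in components:
        ctx = component(ctx)
    return ctx

print(run_pipeline([perception, task_planning, motion_planning, execution], {})["status"])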
  • Publication
    A non-invasive cyberrisk in cooperative driving
    (2017)
    Bapp, F.; Becker, J.; Doll, J.; Filsinger, Max; Hubschneider, C.; Lauber, A.; Müller-Quade, J.; Pauli, M.; Salscheider, O.; Rosenhahn, B.; Ruf, Miriam; Stiller, C.; Ziehn, J.
    This paper presents a hacking risk arising in fully automated cooperative driving. As opposed to common cyber risk scenarios, this scenario does not require internal access to an automated car at all and is therefore largely independent of current on-board malware protection. A hacker uses a wireless mobile device, for example a hacked smartphone, to send vehicle-to-vehicle (V2V) signals from a human-driven car, masquerading it as a fully automated, cooperating vehicle. It deliberately engages only in high-risk cooperative maneuvers with other cars, in which the unwitting human driver is expected to perform a specific maneuver to avoid collisions with other vehicles. As the human driver is unaware of the planned maneuver, he fails to react as expected by the other vehicles; depending on the situation, a severe collision risk can ensue. We propose a vision-based countermeasure that only requires state-of-the-art equipment for fully automated vehicles and assures that such an attack without internal access to an automated car is impossible.
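    The countermeasure itself is not detailed in the abstract; in its spirit, the sketch below shows one plausibility check an automated vehicle could run before committing to a cooperative maneuver: the state and behavior claimed over V2V are compared against the same vehicle's motion as observed by the ego vehicle's own perception. The thresholds and data layout are assumptions, not the paper's method.

import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x_m: float
    y_m: float
    speed_mps: float

MAX_POSITION_ERROR_M = 1.0   # illustrative tolerance between claim and observation
MAX_SPEED_ERROR_MPS = 1.5

def v2v_claim_plausible(claimed: VehicleState, observed: VehicleState) -> bool:
    """Reject cooperation if the V2V claim and the perceived vehicle state disagree."""
    position_error = math.hypot(claimed.x_m - observed.x_m, claimed.y_m - observed.y_m)
    speed_error = abs(claimed.speed_mps - observed.speed_mps)
    return position_error <= MAX_POSITION_ERROR_M and speed_error <= MAX_SPEED_ERROR_MPS

# A spoofed sender claims to decelerate for a cooperative maneuver, but the observed
# (human-driven) car keeps its speed: the claim is rejected.
claim = VehicleState(x_m=50.0, y_m=3.5, speed_mps=8.0)
seen = VehicleState(x_m=50.4, y_m=3.5, speed_mps=13.0)
print(v2v_claim_plausible(claim, seen))  # False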