2022
Journal Article
Title
Hierarchical Learning for Model Predictive Collision Avoidance
Abstract
Recent progress in model predictive control (MPC) has shown great potential for controlling complex nonlinear systems in real time. However, if parts of the controlled system cannot be modeled exactly by differential equations, the performance of MPC can degrade significantly. This paper addresses this problem by combining MPC with deep reinforcement learning (DRL) into a hierarchical control system, which is applied to control the motion of an autonomous vehicle. While the DRL algorithm is responsible for decision-making with regard to obstacles on the road, the model predictive controller handles the nonlinear dynamics of the vehicle. To this end, the vehicle dynamics are modeled by differential equations and the decision-making problem is modeled as a Markov decision process (MDP). The decisions enter the optimization problem of the controller, whose cost function, in turn, enters the reward function of the MDP. The performance of the hierarchical vehicle controller is evaluated in scenarios with static and moving obstacles. Furthermore, it is examined whether adding information about the predicted trajectory to the state space of the MDP can increase the convergence speed.
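To illustrate the hierarchical structure described in the abstract, the following is a minimal sketch of such a control loop, not the authors' implementation: a high-level policy (standing in for the trained DRL agent) picks a discrete avoidance decision, a crude finite-horizon search (standing in for the MPC) tracks the resulting reference, and the predicted cost is fed back into the reward. All names, the 1-D point-mass model, and the candidate-input set are hypothetical.

```python
# Hypothetical sketch of a hierarchical DRL + MPC loop (illustrative only).
import random

HORIZON = 10                      # prediction horizon (steps)
DT = 0.1                          # discretization step (s)
CANDIDATE_U = [-2.0, 0.0, 2.0]    # candidate lateral accelerations (m/s^2)

def rollout_cost(y, vy, u, y_ref):
    """Finite-horizon quadratic cost of applying a constant input u."""
    cost = 0.0
    for _ in range(HORIZON):
        vy += u * DT
        y += vy * DT
        cost += (y - y_ref) ** 2 + 0.1 * u ** 2
    return cost

def mpc_step(y, vy, y_ref):
    """Low-level controller: pick the candidate input with the lowest predicted cost."""
    return min(CANDIDATE_U, key=lambda u: rollout_cost(y, vy, u, y_ref))

def policy(state):
    """Stand-in for the DRL decision maker: choose a target lane (0 or 1)."""
    return random.choice([0, 1])   # a trained policy would replace this

y, vy = 0.0, 0.0
for step in range(50):
    lane = policy((y, vy))                     # high-level decision (MDP action)
    y_ref = 3.5 * lane                         # map the decision to an MPC reference
    u = mpc_step(y, vy, y_ref)                 # low-level control input
    vy += u * DT                               # apply input to the (toy) dynamics
    y += vy * DT
    reward = -rollout_cost(y, vy, u, y_ref)    # controller cost enters the reward
```

In the paper's setting, the toy model and cost would be replaced by the vehicle's differential-equation model and the MPC cost function, and the reward would be used to train the DRL agent.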
Author(s)