Title: Differential Equation Based Framework for Deep Reinforcement Learning
Author: Simon Gottschalk
Document type: Doctoral thesis
Year: 2021 (other dates in record: 24.2.2021; 2022-03-07)
Rights: Under copyright
Language: English
Handle: https://publica.fraunhofer.de/handle/publica/283442
DOI: 10.24406/publica-fhg-283442

Abstract: In this thesis, we contribute new directions within Reinforcement Learning that are important for many practical applications, such as the control of biomechanical models. We deepen the mathematical foundations of Reinforcement Learning by deriving theoretical results inspired by classical optimal control theory. Deep Reinforcement Learning serves as our starting point: building on its working principle, we derive a new type of Reinforcement Learning framework by replacing the neural network with a suitable ordinary differential equation. Establishing rigorous mathematical results within this differential-equation-based framework is a challenging research task, which we address in this thesis. In particular, the derivation of optimality conditions takes a central role in our investigation. We establish new optimality conditions tailored to our specific setting and analyze a resulting gradient-based approach. Finally, we illustrate the power, working principle, and versatility of this approach by performing control tasks in the context of navigation in the two-dimensional plane, robot motions, and actuations of a human arm model.

Keywords: neural networks & fuzzy systems; machine learning; probability & statistics; Deep Reinforcement Learning; optimal control; necessary optimality conditions; applied mathematics; optimization
Intended audience: mathematicians; computer scientists; data scientists
DDC: 003; 006; 519
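The abstract's core idea, a control law governed by an ordinary differential equation and improved by a gradient-based approach on a navigation task in the two-dimensional plane, can be illustrated with a toy sketch. This is a minimal stand-in, not the thesis's actual method: the feedback law, cost, Euler integration, and finite-difference gradient (in place of the thesis's optimality-condition-based gradient) are all illustrative assumptions.

```python
import numpy as np

def rollout_cost(theta, x0=np.array([1.0, -1.0]), dt=0.05, steps=60):
    """Integrate the closed-loop ODE x' = u(x) with explicit Euler and
    accumulate a quadratic cost. theta (flattened 2x2 matrix K) defines
    a linear feedback law u = -K x -- an illustrative parameterization."""
    K = theta.reshape(2, 2)
    x = x0.astype(float).copy()
    cost = 0.0
    for _ in range(steps):
        u = -K @ x
        cost += dt * (x @ x + 0.1 * (u @ u))  # state + control penalty
        x = x + dt * u                        # Euler step of the ODE
    return cost

def finite_diff_grad(f, theta, eps=1e-5):
    """Central finite-difference gradient; a crude substitute for a
    gradient derived from necessary optimality conditions."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

theta = np.zeros(4)  # start from the uncontrolled system (u = 0)
for _ in range(300):  # plain gradient descent on the rollout cost
    theta -= 0.02 * finite_diff_grad(rollout_cost, theta)

# learned feedback should beat zero control on this navigation task
print(rollout_cost(theta) < rollout_cost(np.zeros(4)))
```

The sketch mirrors the structure described in the abstract, an ODE in place of a neural network and a gradient-based update of its parameters, while everything else (the quadratic cost, the linear feedback, the integrator) is deliberately the simplest possible choice.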