Guided Reinforcement Learning via Sequence Learning

Ramamurthy, Rajkumar; Sifa, Rafet; Lübbering, Max; Bauckhage, Christian


Farkaš, I.; European Neural Network Society:
Artificial Neural Networks and Machine Learning - ICANN 2020. Proceedings. Pt.II : 29th International Conference on Artificial Neural Networks, Bratislava, Slovakia, September 15-18, 2020
Cham: Springer Nature, 2020 (Lecture Notes in Computer Science 12397)
ISBN: 978-3-030-61615-1 (Print)
ISBN: 978-3-030-61616-8 (Online)
International Conference on Artificial Neural Networks (ICANN) <29, 2020, Online>
Fraunhofer IAIS
reinforcement learning; exploration; novelty search; representation learning; sequence learning

Applications of Reinforcement Learning (RL) suffer from high sample complexity due to sparse reward signals and inadequate exploration. Novelty Search (NS) can serve as an auxiliary task in this regard, encouraging exploration towards unseen behaviors. However, NS suffers from critical drawbacks concerning scalability and generalizability, since it is based on instance-based learning. Addressing these challenges, we previously proposed a generic approach that uses unsupervised learning to learn representations of agent behaviors and employs reconstruction losses as novelty scores. However, that approach considered only fixed-length sequences and did not exploit the sequential structure of behaviors. Here, we therefore extend it with sequential auto-encoders that capture sequential dependencies. Experimental results on benchmark tasks show that this sequence learning aids exploration, outperforming previous novelty search methods.
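The core idea of the abstract, using an auto-encoder's reconstruction loss as a novelty score for agent behaviors, can be sketched as follows. This is a minimal illustration only, not the paper's method: the paper uses sequential auto-encoders, whereas here a toy tied-weight linear auto-encoder trained on flattened fixed-length behavior sequences stands in, and all dimensions, learning rates, and the `TinyAE` class are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "behavior characterizations": sequences of D-dim states of length T,
# flattened into vectors (hypothetical sizes, chosen for illustration).
T, D = 8, 2

def make_behaviors(n, scale=1.0):
    return rng.normal(0.0, scale, size=(n, T * D))

class TinyAE:
    """Tied-weight linear auto-encoder (stand-in for a sequential
    auto-encoder); reconstruction error serves as the novelty score."""
    def __init__(self, dim, k=4, lr=0.05, steps=500):
        self.W = rng.normal(0, 0.1, size=(dim, k))
        self.lr, self.steps = lr, steps

    def fit(self, X):
        for _ in range(self.steps):
            Z = X @ self.W          # encode to k-dim latent
            Xh = Z @ self.W.T       # decode with tied weights
            G = 2 * (Xh - X)        # gradient of squared error w.r.t. Xh
            grad = X.T @ (G @ self.W) + (G.T @ X) @ self.W
            self.W -= self.lr * grad / len(X)
        return self

    def novelty(self, X):
        # Behaviors the model has not learned to compress reconstruct
        # poorly, so their error (= novelty score) is high.
        Xh = (X @ self.W) @ self.W.T
        return np.mean((X - Xh) ** 2, axis=1)

# Train on "seen" behaviors, then score familiar vs. unseen ones.
seen = make_behaviors(256)
ae = TinyAE(T * D).fit(seen)

familiar = ae.novelty(make_behaviors(32)).mean()
unseen = ae.novelty(make_behaviors(32, scale=3.0)).mean()  # out-of-distribution
```

In a guided-RL setup, such a score would typically enter the objective as an exploration bonus (e.g. extrinsic reward plus a weighted novelty term); unseen behaviors reconstruct worse and thus receive larger bonuses.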