Benchmarking Reinforcement Learning Techniques for Autonomous Navigation
Abstract
Deep reinforcement learning (RL) has brought many successes for autonomous robot navigation. However, there still exist important limitations that prevent real-world use of RL-based navigation systems. For example, most learning approaches lack safety guarantees, and learned navigation systems may not generalize well to unseen environments. Despite a variety of recent learning techniques to tackle these challenges in general, the lack of an open-source benchmark and reproducible learning methods specifically for autonomous navigation makes it difficult for roboticists to choose what learning methods to use for their mobile robots, and for learning researchers to identify current shortcomings of general learning methods for autonomous navigation. In this paper, we identify four major desiderata of applying deep RL approaches for autonomous navigation: (D1) reasoning under uncertainty, (D2) safety, (D3) learning from limited trial-and-error data, and (D4) generalization to diverse and novel environments. Then, we explore four major classes of learning techniques with the purpose of achieving one or more of the four desiderata: memory-based neural network architectures (D1), safe RL (D2), model-based RL (D2, D3), and domain randomization (D4). By deploying these learning techniques in a new open-source large-scale navigation benchmark and in real-world environments, we perform a comprehensive study aimed at establishing to what extent these techniques can achieve these desiderata for RL-based navigation systems.
Venue
ICRA 2023
Date
2023