Skill-Critic: Refining Learned Skills for Hierarchical Reinforcement Learning

Abstract

Hierarchical reinforcement learning (RL) can accelerate long-horizon decision-making by temporally abstracting a policy into multiple levels. Skills, i.e., sequences of primitive actions, have shown promising results in sparse-reward environments. Typically, a skill latent space and policy are discovered from offline data. However, the resulting low-level policy can be unreliable due to low-coverage demonstrations or distribution shifts. As a solution, we propose the Skill-Critic algorithm to fine-tune the low-level policy in conjunction with high-level skill selection. Skill-Critic optimizes both the low-level and high-level policies; these policies are initialized and regularized by the latent space learned from offline demonstrations to guide the parallel policy optimization. We validate Skill-Critic in multiple sparse-reward RL environments, including a new sparse-reward autonomous racing task in Gran Turismo Sport. The experiments show that Skill-Critic's low-level policy fine-tuning and demonstration-guided regularization are essential for good performance.
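To make the idea concrete, below is a minimal PyTorch sketch of the kind of joint update the abstract describes: a high-level policy selects a skill z, a low-level policy decodes (state, z) into an action, and a KL term keeps the fine-tuned low-level policy close to a frozen decoder pretrained on demonstrations. The network shapes, GaussianPolicy class, advantage inputs, and kl_coef weight are all hypothetical illustrations, not the authors' exact implementation.

import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

STATE_DIM, SKILL_DIM, ACTION_DIM = 8, 4, 2  # toy dimensions

class GaussianPolicy(nn.Module):
    """Maps an input to a diagonal Gaussian over outputs."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, out_dim)
        self.log_std = nn.Linear(64, out_dim)

    def forward(self, x):
        h = self.net(x)
        return Normal(self.mu(h), self.log_std(h).exp())

# High-level policy picks a skill z from the state; low-level policy
# decodes (state, z) into a primitive action. In Skill-Critic both are
# initialized from the skill latent space learned on offline data; here
# they are randomly initialized for brevity.
high_policy = GaussianPolicy(STATE_DIM, SKILL_DIM)
low_policy = GaussianPolicy(STATE_DIM + SKILL_DIM, ACTION_DIM)
prior_low = GaussianPolicy(STATE_DIM + SKILL_DIM, ACTION_DIM)  # frozen pretrained decoder
for p in prior_low.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(
    list(high_policy.parameters()) + list(low_policy.parameters()), lr=3e-4
)
kl_coef = 0.1  # hypothetical regularization weight

def update(state, advantage_h, advantage_l):
    """One joint policy-gradient step with prior regularization."""
    z_dist = high_policy(state)
    z = z_dist.sample()
    a_dist = low_policy(torch.cat([state, z], dim=-1))
    a = a_dist.sample()

    # Policy-gradient surrogate losses; advantages would come from the
    # high- and low-level critics (random placeholders in the usage below).
    loss_h = -(z_dist.log_prob(z).sum(-1) * advantage_h).mean()
    loss_l = -(a_dist.log_prob(a).sum(-1) * advantage_l).mean()

    # Demonstration-guided regularization: keep the fine-tuned low-level
    # policy close to the pretrained skill decoder.
    prior_dist = prior_low(torch.cat([state, z], dim=-1))
    loss_kl = kl_divergence(a_dist, prior_dist).sum(-1).mean()

    opt.zero_grad()
    (loss_h + loss_l + kl_coef * loss_kl).backward()
    opt.step()

state = torch.randn(32, STATE_DIM)
update(state, torch.randn(32), torch.randn(32))

The KL term in this sketch plays the role of the demonstration-guided regularization the abstract highlights: it lets the low-level policy improve on the offline skills without drifting far from behaviors the demonstrations support.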

Authors

  • Ce Hao*
  • Catherine Weaver*
  • Chen Tang*
  • Kenta Kawamoto
  • Masayoshi Tomizuka*
  • Wei Zhan*

*External Authors

Venue

IEEE Robotics and Automation Letters (RA-L), 2024

Date

2024
