ProtoCRL: Prototype-based Network for Continual Reinforcement Learning
Abstract
The goal of continual reinforcement learning is to train an agent on a sequence of tasks so that it learns tasks appearing later in the sequence while retaining the
ability to perform those that appeared earlier. Experience replay is a popular method for making the agent remember previous tasks, but its effectiveness depends strongly on
which experiences are selected for storage. Kompella et al. (2023) proposed organizing the experience replay buffer into partitions, each storing transitions that lead to a rare but
crucial event, so that these key experiences are revisited more often during training.
However, their method is sensitive to the manual selection of event states. To address this issue, we introduce ProtoCRL, a prototype-based architecture that leverages a variational
Gaussian mixture model to automatically discover effective event states and build the associated partitions in the experience replay buffer. The proposed approach is evaluated
on a sequence of MiniGrid environments, demonstrating the agent's ability to adapt and learn new skills incrementally.
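To make the partitioning idea concrete, the following is a minimal sketch, not the authors' implementation, of a replay buffer whose partitions are induced by a variational Gaussian mixture model. It assumes states are low-dimensional feature vectors and uses scikit-learn's `BayesianGaussianMixture`; the class name `PrototypePartitionedBuffer` and all parameters are illustrative, with mixture-component means playing the role of prototypes.

```python
# Sketch of a prototype-partitioned replay buffer (illustrative, not ProtoCRL itself).
from collections import deque

import numpy as np
from sklearn.mixture import BayesianGaussianMixture


class PrototypePartitionedBuffer:
    def __init__(self, n_components=4, capacity_per_partition=1000, seed=0):
        # Variational GMM whose components act as discovered "event state" prototypes.
        self.gmm = BayesianGaussianMixture(n_components=n_components, random_state=seed)
        self.partitions = {}  # component index -> deque of transitions
        self.capacity = capacity_per_partition
        self.rng = np.random.default_rng(seed)

    def fit_prototypes(self, states):
        # Discover prototypes from a batch of observed state features.
        self.gmm.fit(states)

    def add(self, state, transition):
        # Route the transition to the partition of its most likely prototype.
        k = int(self.gmm.predict(np.asarray(state).reshape(1, -1))[0])
        self.partitions.setdefault(k, deque(maxlen=self.capacity))
        self.partitions[k].append(transition)

    def sample(self, batch_size):
        # Sample uniformly across partitions so transitions near rare event
        # states are revisited as often as those near common ones.
        keys = list(self.partitions.keys())
        out = []
        for _ in range(batch_size):
            part = self.partitions[keys[self.rng.integers(len(keys))]]
            out.append(part[self.rng.integers(len(part))])
        return out
```

Sampling uniformly over partitions (rather than over raw transitions) is what biases training toward the rare but crucial events, mirroring the motivation described in the abstract.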
Venue
RLC-25
Date
2025