An Imitation from Observation Approach to Transfer Learning with Dynamics Mismatch

Siddharth Desai*

Ishan Durugkar*

Haresh Karnan*

Garrett Warnell*

Josiah Hanna*

Peter Stone

* External authors




We examine the problem of transferring a policy learned in a source environment to a target environment with different dynamics, particularly in the case where it is critical to reduce the amount of interaction with the target environment during learning. This problem is particularly important in sim-to-real transfer because simulators inevitably model real-world dynamics imperfectly. In this paper, we show that one existing solution to this transfer problem -- grounded action transformation -- is closely related to the problem of imitation from observation (IfO): learning behaviors that mimic the observations of behavior demonstrations. After establishing this relationship, we hypothesize that recent state-of-the-art approaches from the IfO literature can be effectively repurposed for grounded transfer learning. To validate our hypothesis, we derive a new algorithm -- generative adversarial reinforced action transformation (GARAT) -- based on adversarial imitation from observation techniques. We run experiments in several domains with mismatched dynamics, and find that agents trained with GARAT achieve higher returns in the target environment compared to existing black-box transfer methods.
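The idea of an adversarially trained action transformer can be illustrated with a deliberately small sketch: a discriminator learns to tell real transitions from transformed-simulator transitions, and the transformer is updated to fool it. Everything below -- the 1-D dynamics, the feature map, the scalar-gain transformer, and the learning rates -- is an illustrative assumption, not the authors' implementation or the actual GARAT algorithm.

```python
# Toy sketch of a GARAT-style adversarial action transformer.
# All dynamics, features, and hyperparameters here are illustrative
# assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def sim_step(s, a):
    return s + 1.2 * a      # source ("simulator") dynamics: wrong gain

def real_step(s, a):
    return s + 1.0 * a      # target ("real") dynamics to be matched

def features(s, a, s_next):
    d = s_next - s          # observed state change
    return np.array([1.0, d * d, a * d, a * a])

w = np.zeros(4)             # discriminator weights (logistic regression)

def d_prob(s, a, s_next):
    """Probability the discriminator assigns to 'real'."""
    return 1.0 / (1.0 + np.exp(-w @ features(s, a, s_next)))

g = 1.0                     # action transformer: a_hat = g * a

lr_d, lr_g, eps = 0.5, 0.02, 1e-3
for _ in range(500):
    s, a = rng.uniform(-1, 1, size=2)
    x_real = features(s, a, real_step(s, a))
    x_fake = features(s, a, sim_step(s, g * a))
    # Discriminator: one logistic-regression gradient step per class.
    for x, y in ((x_real, 1.0), (x_fake, 0.0)):
        p = 1.0 / (1.0 + np.exp(-w @ x))
        w += lr_d * (y - p) * x
    # Transformer: climb log D of its own transition. A finite-difference
    # estimate stands in for the policy-gradient update a real RL policy
    # would use.
    def score(gain):
        return np.log(d_prob(s, a, sim_step(s, gain * a)) + 1e-8)
    g += lr_g * (score(g + eps) - score(g - eps)) / (2 * eps)

# With sim gain 1.2 and real gain 1.0, the learned transform should
# settle near g = 1 / 1.2, making sim transitions mimic real ones.
```

In the paper's setting the transformer is a full policy over actions rather than a scalar gain, and it is trained with reinforcement learning against the discriminator's reward; this sketch only shows why fooling a transition discriminator pushes the transformed simulator toward the target dynamics.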

Related Publications

Symbolic State Space Optimization for Long Horizon Mobile Manipulation Planning.

International Conference on Intelligent Robots and Systems, 2023
Xiaohan Zhang*, Yifeng Zhu*, Yan Ding*, Yuqian Jiang*, Yuke Zhu*, Peter Stone, Shiqi Zhang*

In existing task and motion planning (TAMP) research, it is a common assumption that experts manually specify the state space for task-level planning. A well-developed state space enables the desirable distribution of limited computational resources between task planning an…

Event Tables for Efficient Experience Replay

CoLLAs, 2023
Varun Kompella, Thomas Walsh, Samuel Barrett, Peter R. Wurman, Peter Stone

Experience replay (ER) is a crucial component of many deep reinforcement learning (RL) systems. However, uniform sampling from an ER buffer can lead to slow convergence and unstable asymptotic behaviors. This paper introduces Stratified Sampling from Event Tables (SSET), whi…

Composing Efficient, Robust Tests for Policy Selection

UAI, 2023
Dustin Morrill, Thomas Walsh, Daniel Hernandez, Peter R. Wurman, Peter Stone

Modern reinforcement learning systems produce many high-quality policies throughout the learning process. However, to choose which policy to actually deploy in the real world, they must be tested under an intractable number of environmental conditions. We introduce RPOSST, a…


