Composing Efficient, Robust Tests for Policy Selection
Authors
- Dustin Morrill
- Thomas Walsh
- Daniel Hernandez
- Peter R. Wurman
- Peter Stone
Venue
- UAI 2023
Date
- 2023
Abstract
Modern reinforcement learning systems produce many high-quality policies throughout the learning process. However, to choose which policy to actually deploy in the real world, they must be tested under an intractable number of environmental conditions. We introduce RPOSST, an algorithm to select a small set of test cases from a larger pool based on a relatively small number of sample evaluations. RPOSST treats the test case selection problem as a two-player game and optimizes a solution with provable k-of-N robustness, bounding the error relative to a test that used all the test cases in the pool. Empirical results demonstrate that RPOSST finds a small set of test cases that identify high-quality policies in a toy one-shot game, poker datasets, and a high-fidelity racing simulator.
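To build intuition for the selection problem the abstract describes, the sketch below shows a simple greedy heuristic: given a matrix of sampled evaluations (candidate policies × pool test cases), pick a small test subset whose scores, under the k worst of N simulated noise draws, stay close to the scores computed from the full pool. This is only an illustrative approximation of the k-of-N robustness idea, not the paper's two-player-game optimization; all function names, parameters, and the noise model are hypothetical.

```python
import numpy as np

def select_robust_tests(eval_matrix, num_tests, k, num_noise_samples=50, rng=None):
    """Illustrative greedy sketch of k-of-N robust test-case selection.

    eval_matrix: (num_policies, num_pool_tests) array of sampled evaluation
    scores for each candidate policy on each test case in the pool.
    Returns indices of a small test subset whose worst-case (k-of-N, under
    simulated evaluation noise) deviation from full-pool scores is greedily
    minimized. NOT the paper's algorithm; a toy baseline for intuition.
    """
    rng = np.random.default_rng() if rng is None else rng
    num_policies, num_pool_tests = eval_matrix.shape
    # Reference scores: each policy's average over the entire test pool.
    full_scores = eval_matrix.mean(axis=1)
    chosen = []

    def robust_error(subset):
        # k-of-N evaluation: draw N noisy resamples of the evaluations and
        # average the k largest deviations from the full-pool scores.
        errs = []
        for _ in range(num_noise_samples):
            noisy = eval_matrix + rng.normal(scale=0.05, size=eval_matrix.shape)
            sub_scores = noisy[:, subset].mean(axis=1)
            errs.append(np.abs(sub_scores - full_scores).max())
        return np.mean(np.sort(errs)[-k:])

    # Greedily add the test case that most reduces the robust error.
    for _ in range(num_tests):
        best_t, best_err = None, np.inf
        for t in range(num_pool_tests):
            if t in chosen:
                continue
            err = robust_error(chosen + [t])
            if err < best_err:
                best_t, best_err = t, err
        chosen.append(best_t)
    return chosen
```

For example, with a 20×100 evaluation matrix M, `select_robust_tests(M, num_tests=5, k=3)` returns five pool indices; RPOSST itself instead solves a game-theoretic formulation with provable robustness bounds.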