Authors
- W. Bradley Knox*
- Stephane Hatgis-Kessell*
- Sigurdur Orn Adalgeirsson*
- Serena Booth*
- Anca Dragan*
- Peter Stone
- Scott Niekum*
* External authors
Venue
- AAAI 2024
Date
- 2024
Learning Optimal Advantage from Preferences and Mistaking it for Reward
Abstract
We consider algorithms for learning reward functions from human preferences over pairs of trajectory segments, as used in reinforcement learning from human feedback (RLHF), including those used to fine-tune ChatGPT and other contemporary language models. Most recent work on such algorithms assumes that human preferences are generated based only upon the reward accrued within those segments, which we call their partial return. But if this assumption is false because people base their preferences on information other than partial return, then what type of function is their algorithm learning from preferences? We argue that this function is better thought of as an approximation of the optimal advantage function, not as a partial return function as previously believed.
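The distinction the abstract draws can be made concrete with the standard Bradley-Terry preference model used in RLHF: the probability that a human prefers one segment over another is a logistic function of the difference in some segment-level score. The two candidate scoring functions are the partial return (sum of rewards within the segment) and the summed optimal advantage. The sketch below is illustrative only; the segment representation, `reward_fn`, and `advantage_fn` are hypothetical stand-ins, not the paper's implementation.

```python
import math

def partial_return(segment, reward_fn):
    # Sum of rewards accrued within the segment: the scoring
    # function most RLHF reward-learning work assumes humans use.
    return sum(reward_fn(s, a) for s, a in segment)

def summed_optimal_advantage(segment, advantage_fn):
    # Sum of optimal advantages A*(s, a) over the segment: the
    # alternative scoring function the paper argues is a better
    # account of what is actually learned from preferences.
    return sum(advantage_fn(s, a) for s, a in segment)

def preference_prob(seg1, seg2, score_fn):
    # Bradley-Terry / logistic preference model:
    # P(seg1 preferred over seg2) = sigmoid(score(seg1) - score(seg2)).
    return 1.0 / (1.0 + math.exp(score_fn(seg2) - score_fn(seg1)))
```

Under either choice the same logistic likelihood is maximized when fitting to preference data; the paper's point is that interpreting the learned function matters, since a function fit under the advantage model should not be mistaken for a reward function.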