Learning Optimal Advantage from Preferences and Mistaking it for Reward
Authors
- W. Bradley Knox*
- Sigurdur Orn Adalgeirsson*
- Serena Booth*
- Anca Dragan*
- Peter Stone
- Scott Niekum*
* External authors
Venue
- AAAI-24
Date
- 2024
Abstract
We consider algorithms for learning reward functions from human preferences over pairs of trajectory segments, as used in reinforcement learning from human feedback (RLHF). Most recent work assumes that human preferences are generated based only upon the reward accrued within those segments, or their partial return. Recent work casts doubt on the validity of this assumption, proposing an alternative preference model based upon regret. We investigate the consequences of assuming preferences are based upon partial return when they actually arise from regret. We argue that the learned function is an approximation of the optimal advantage function, not a reward function.
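For intuition, here is a minimal sketch contrasting the two preference models the abstract refers to: the commonly assumed partial-return model, which scores a segment by the reward it accrues, and the regret-based model, under which (for deterministic transitions) a segment's score is the sum of optimal advantages A*_r(s, a) along it. The names `reward_fn`, `advantage_fn`, and the segment representation are illustrative assumptions, not the paper's code.

```python
import math

def bradley_terry(score_1, score_2):
    """Probability that segment 1 is preferred, via a logistic (Bradley-Terry) link."""
    return 1.0 / (1.0 + math.exp(-(score_1 - score_2)))

def partial_return(segment, reward_fn):
    """Partial-return model: score a segment by the reward accrued within it."""
    return sum(reward_fn(s, a) for s, a in segment)

def summed_optimal_advantage(segment, advantage_fn):
    """Regret-based model (deterministic case): score a segment by the sum of
    optimal advantages A*_r(s, a), i.e., its negated regret."""
    return sum(advantage_fn(s, a) for s, a in segment)

# If preferences are actually generated by the regret-based model but fit under the
# partial-return model, the "reward" recovered by the learner approximates A*_r
# rather than the underlying reward function r.
```

In this sketch, the same Bradley-Terry fitting code is applied to either score; only the assumed generator of the preferences differs, which is the mismatch the paper analyzes.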