Learning Optimal Advantage from Preferences and Mistaking it for Reward

Abstract

We consider algorithms for learning reward functions from human preferences over pairs of trajectory segments, as used in reinforcement learning from human feedback (RLHF). Most recent work assumes that human preferences are generated based only upon the reward accrued within those segments, or their partial return. Recent work casts doubt on the validity of this assumption, proposing an alternative preference model based upon regret. We investigate the consequences of assuming preferences are based upon partial return when they actually arise from regret. We argue that the learned function is an approximation of the optimal advantage function, not a reward function.
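For orientation, below is a minimal sketch of the two preference models the abstract contrasts, written in standard Bradley-Terry notation. The symbols σ (trajectory segments), r (reward), and A* (optimal advantage) and the exact forms shown are illustrative assumptions, not taken verbatim from the paper, and the regret model is shown only in its common sum-of-optimal-advantages approximation.

```latex
% Illustrative sketch (standard notation; assumed, not quoted from the paper).
% Partial-return preference model: a segment is preferred according to the
% reward it accrues (its partial return).
P(\sigma^1 \succ \sigma^2) \;=\;
  \frac{\exp\bigl(\textstyle\sum_t r(s^1_t, a^1_t)\bigr)}
       {\exp\bigl(\textstyle\sum_t r(s^1_t, a^1_t)\bigr)
        + \exp\bigl(\textstyle\sum_t r(s^2_t, a^2_t)\bigr)}

% Regret preference model (sum-of-optimal-advantages approximation): a segment
% is preferred according to how close its actions are to optimal.
P(\sigma^1 \succ \sigma^2) \;=\;
  \frac{\exp\bigl(\textstyle\sum_t A^*(s^1_t, a^1_t)\bigr)}
       {\exp\bigl(\textstyle\sum_t A^*(s^1_t, a^1_t)\bigr)
        + \exp\bigl(\textstyle\sum_t A^*(s^2_t, a^2_t)\bigr)}
```

Under this reading, fitting the first model to preferences that were actually generated by the second recovers a function closer to A* than to r, which is the mistaken identity the title refers to.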

Authors

  • W. Bradley Knox*
  • Sigurdur Orn Adalgeirsson*
  • Serena Booth*
  • Anca Dragan*
  • Peter Stone
  • Scott Niekum*

*External Authors

Venue

AAAI-24

Date

2024
