Rethinking Social Robot Navigation: Leveraging the Best of Two Worlds

Amir Hossain Raj*

Zichao Hu*

Haresh Karnan*

Rohan Chandra*

Amirreza Payandeh*

Luisa Mao*

Peter Stone

Joydeep Biswas*

Xuesu Xiao*

* External authors

ICRA, 2024

Abstract

Empowering robots to navigate in a socially compliant manner is essential for the acceptance of robots moving in human-inhabited environments. Previously, roboticists have developed geometric navigation systems with decades of empirical validation to achieve safety and efficiency. However, the many complex factors of social compliance make geometric navigation systems hard to adapt to social situations, where no amount of tuning enables them to be both safe (people are too unpredictable) and efficient (the frozen robot problem). With recent advances in deep learning approaches, the common reaction has been to entirely discard these classical navigation systems and start from scratch, building a completely new learning-based social navigation planner. In this work, we find that this reaction is unnecessarily extreme: using a large-scale real-world social navigation dataset, SCAND, we find that geometric systems can produce trajectory plans that align with the human demonstrations in a large number of social situations. We, therefore, ask if we can rethink the social robot navigation problem by leveraging the advantages of both geometric and learning-based methods. We validate this hybrid paradigm through a proof-of-concept experiment, in which we develop a hybrid planner that switches between geometric and learning-based planning. Our experiments on both SCAND and two physical robots show that the hybrid planner can achieve better social compliance compared to using either the geometric or learning-based approach alone.
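
To make the switching idea concrete, below is a minimal sketch of a hybrid planner that hands control to a learned policy when a scene appears socially demanding and otherwise uses a classical geometric planner. This is not the authors' implementation: the class and function names (HybridPlanner, GeometricPlanner-style callables, simple_gate) and the distance-based gating rule are illustrative placeholders; the paper evaluates its switching against human demonstrations in SCAND.

# Minimal sketch of a hybrid navigation planner that switches between a
# geometric planner and a learned policy. All names and the gating rule
# are hypothetical placeholders, not the interface used in the paper.

from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

Velocity = Tuple[float, float]  # (linear, angular) command


@dataclass
class Observation:
    """Sensor snapshot passed to both planners (contents are illustrative)."""
    lidar: Sequence[float]                       # range readings
    goal: Tuple[float, float]                    # goal position, robot frame
    pedestrians: Sequence[Tuple[float, float]]   # detected people, robot frame


class HybridPlanner:
    def __init__(
        self,
        geometric_plan: Callable[[Observation], Velocity],
        learned_plan: Callable[[Observation], Velocity],
        needs_social_reasoning: Callable[[Observation], bool],
    ):
        # Classical planner (e.g., a DWA/MPC-style local planner).
        self.geometric_plan = geometric_plan
        # Learned policy trained on human demonstrations (e.g., from SCAND).
        self.learned_plan = learned_plan
        # Gating function: decides when the scene is social enough that the
        # geometric plan is unlikely to match human behavior.
        self.needs_social_reasoning = needs_social_reasoning

    def plan(self, obs: Observation) -> Velocity:
        if self.needs_social_reasoning(obs):
            return self.learned_plan(obs)
        return self.geometric_plan(obs)


# Placeholder gating rule: defer to the learned policy whenever any
# pedestrian is within 2 m of the robot.
def simple_gate(obs: Observation) -> bool:
    return any(px * px + py * py < 2.0 ** 2 for px, py in obs.pedestrians)

The crux of this paradigm is the gating criterion: it should only invoke the learned planner in situations where geometric plans are shown to diverge from human-demonstrated behavior, keeping the well-validated classical system in the loop everywhere else.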

Related Publications

Human-Interactive Robot Learning: Definition, Challenges, and Recommendations

THRI, 2025
Kim Baraka, Ifrah Idrees, Taylor Kessler Faulkner, Erdem Biyik, Serena Booth*, Mohamed Chetouani, Daniel Grollman, Akanksha Saran, Emmanuel Senft, Silvia Tulli, Anna-Lisa Vollmer, Antonio Andriella, Helen Beierling, Tiffany Horter, Jens Kober, Isaac Sheidlower, Matthew Taylor, Sanne van Waveren, Xuesu Xiao*

Robot learning from humans has been proposed and researched for several decades as a means to enable robots to learn new skills or adapt existing ones to new situations. Recent advances in artificial intelligence, including learning approaches like reinforcement learning and…

ProtoCRL: Prototype-based Network for Continual Reinforcement Learning

RLC, 2025
Michela Proietti*, Peter R. Wurman, Peter Stone, Roberto Capobianco

The purpose of continual reinforcement learning is to train an agent on a sequence of tasks such that it learns the ones that appear later in the sequence while retaining the ability to perform the tasks that appeared earlier. Experience replay is a popular method used to mak…

Automated Reward Design for Gran Turismo

NeurIPS, 2025
Michel Ma, Takuma Seno, Kaushik Subramanian, Peter R. Wurman, Peter Stone, Craig Sherstan

When designing reinforcement learning (RL) agents, a designer communicates the desired agent behavior through the definition of reward functions: numerical feedback given to the agent as reward or punishment for its actions. However, mapping desired behaviors to reward func…
