Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog

Abstract

In this work, we present methods for using human-robot dialog to improve language understanding for a mobile robot agent. The agent parses natural language to underlying semantic meanings and uses robotic sensors to create multi-modal models of perceptual concepts like "red" and "heavy". The agent can be used for showing navigation routes, delivering objects to people, and relocating objects from one location to another. We use dialog clarification questions both to understand commands and to generate additional parsing training data. The agent employs opportunistic active learning to select questions about how words relate to objects, improving its understanding of perceptual concepts. We evaluated this agent on Amazon Mechanical Turk. After training on data induced from conversations, the agent reduced the number of dialog questions it asked while receiving higher usability ratings. Additionally, we demonstrated the agent on a robotic platform, where it learned new perceptual concepts on the fly while completing a real-world task.

Authors

  • Jesse Thomason*
  • Aishwarya Padmakumar*
  • Jivko Sinapov*
  • Nick Walker*
  • Yuqian Jiang*
  • Harel Yedidsion*
  • Justin Hart*
  • Peter Stone
  • Raymond J. Mooney*

*External Authors

Venue

IJCAI-2021, The Journal of Artificial Intelligence Research

Date

2021
