1. HomeRobot: Open-Vocabulary Mobile Manipulation
- Authors
Yenamandra, Sriram, Ramachandran, Arun, Yadav, Karmesh, Wang, Austin, Khanna, Mukul, Gervet, Theophile, Yang, Tsung-Yen, Jain, Vidhi, Clegg, Alexander William, Turner, John, Kira, Zsolt, Savva, Manolis, Chang, Angel, Chaplot, Devendra Singh, Batra, Dhruv, Mottaghi, Roozbeh, Bisk, Yonatan, and Paxton, Chris
- Subjects
FOS: Computer and information sciences, Robotics (cs.RO), Artificial Intelligence (cs.AI), Computer Vision and Pattern Recognition (cs.CV)
- Abstract
HomeRobot (noun): An affordable compliant robot that navigates homes and manipulates a wide range of objects in order to complete everyday tasks. Open-Vocabulary Mobile Manipulation (OVMM) is the problem of picking any object in any unseen environment and placing it in a commanded location. This is a foundational challenge for robots to be useful assistants in human environments, because it involves tackling sub-problems from across robotics: perception, language understanding, navigation, and manipulation are all essential to OVMM. In addition, integrating the solutions to these sub-problems poses its own substantial challenges. To drive research in this area, we introduce the HomeRobot OVMM benchmark, where an agent navigates household environments to grasp novel objects and place them on target receptacles. HomeRobot has two components: a simulation component, which uses a large and diverse curated object set in new, high-quality multi-room home environments; and a real-world component, providing a software stack for the low-cost Hello Robot Stretch to encourage replication of real-world experiments across labs. We implement both reinforcement learning and heuristic (model-based) baselines and show evidence of sim-to-real transfer. Our baselines achieve a 20% success rate in the real world; our experiments identify ways in which future research can improve performance. See videos on our website: https://ovmm.github.io/.
- Comments
35 pages, 20 figures, 8 tables
- Published
2023
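For readers unfamiliar with the task framing, the sketch below illustrates what an OVMM-style episode might look like: the agent receives a language goal ("move <object> to <receptacle>"), explores an unseen environment, picks the object, and places it. This is a minimal, self-contained toy sketch, not the actual HomeRobot API; all names (ToyOVMMEnv, OVMMGoal, the action set) are illustrative assumptions, and real baselines would replace the random policy with learned or heuristic navigation, grasping, and placement skills.

```python
# Toy sketch of an OVMM-style episode loop (illustrative only, not the HomeRobot API).
import random
from dataclasses import dataclass

@dataclass
class OVMMGoal:
    object_category: str   # open-vocabulary target object, e.g. "stuffed toy"
    start_receptacle: str  # where the object initially rests, e.g. "sofa"
    goal_receptacle: str   # where it must be placed, e.g. "coffee table"

class ToyOVMMEnv:
    """Toy stand-in for an OVMM benchmark environment."""
    ACTIONS = ["move_forward", "turn_left", "turn_right", "pick", "place"]

    def __init__(self, goal: OVMMGoal, max_steps: int = 50):
        self.goal, self.max_steps = goal, max_steps

    def reset(self):
        self.steps, self.holding, self.done = 0, False, False
        return {"rgb": None, "depth": None, "goal": self.goal}

    def step(self, action: str):
        self.steps += 1
        # Toy dynamics: 'pick' sometimes succeeds when empty-handed,
        # 'place' sometimes succeeds once the object is held.
        if action == "pick" and not self.holding:
            self.holding = random.random() < 0.3
        elif action == "place" and self.holding:
            self.done = random.random() < 0.3
        truncated = self.steps >= self.max_steps
        reward = 1.0 if self.done else 0.0
        return {"rgb": None, "goal": self.goal}, reward, self.done, truncated

# Random-policy rollout, analogous to evaluating a (very weak) baseline.
env = ToyOVMMEnv(OVMMGoal("stuffed toy", "sofa", "coffee table"))
obs = env.reset()
success = False
while True:
    obs, reward, done, truncated = env.step(random.choice(ToyOVMMEnv.ACTIONS))
    if done:
        success = True
    if done or truncated:
        break
print("episode success:", success)
```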