Semantically Controllable Augmentations for Generalizable Robot Learning

Authors :
Chen, Zoey
Mandi, Zhao
Bharadhwaj, Homanga
Sharma, Mohit
Song, Shuran
Gupta, Abhishek
Kumar, Vikash
Publication Year :
2024

Abstract

Generalization to unseen real-world scenarios for robot manipulation requires exposure to diverse datasets during training. However, collecting large real-world datasets is intractable due to high operational costs. For robot learning to generalize despite these challenges, it is essential to leverage sources of data or priors beyond the robot's direct experience. In this work, we posit that image-text generative models, which are pre-trained on large corpora of web-scraped data, can serve as such a data source. These generative models encompass a broad range of real-world scenarios beyond a robot's direct experience and can synthesize novel synthetic experiences that expose robotic agents to additional world priors, aiding real-world generalization at no extra cost. In particular, our approach leverages pre-trained generative models as an effective tool for data augmentation. We propose a generative augmentation framework that produces semantically controllable augmentations, rapidly multiplying robot datasets while inducing rich variations that enable real-world generalization. Based on diverse augmentations of robot data, we show how scalable robot manipulation policies can be trained and deployed both in simulation and in unseen real-world environments such as kitchens and tabletops. By demonstrating the effectiveness of image-text generative models in diverse real-world robotic applications, our generative augmentation framework provides a scalable and efficient path for boosting generalization in robot learning at no extra human cost.

Comment: Accepted for publication by IJRR. First 3 authors contributed equally. Last 3 authors advised equally.
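The core idea described in the abstract can be illustrated with a minimal sketch: mask out the task-relevant region of a robot observation (arm plus target object), let a text-conditioned generative model repaint the rest of the scene, and composite the original task pixels back so the action labels remain valid. Note this is an illustrative assumption about the augmentation recipe, not the paper's exact pipeline; `generate_background` below is a hypothetical stand-in for a real image-text inpainting model.

```python
import numpy as np

def generate_background(image, prompt, rng):
    """Hypothetical stand-in for a text-conditioned inpainting model.

    A real system would call a diffusion-based inpainting pipeline with
    `prompt` (e.g. "a cluttered kitchen counter"); here we just return
    random pixels so the sketch is self-contained.
    """
    return rng.integers(0, 256, size=image.shape, dtype=np.uint8)

def augment_frame(image, task_mask, prompt, rng):
    """Repaint the background while preserving task-relevant pixels.

    image:     (H, W, 3) uint8 observation from the robot dataset
    task_mask: (H, W) bool, True where pixels must be preserved
    """
    new_bg = generate_background(image, prompt, rng)
    # Keep the masked (task-relevant) pixels, replace everything else.
    return np.where(task_mask[..., None], image, new_bg)

rng = np.random.default_rng(0)
frame = np.full((64, 64, 3), 128, dtype=np.uint8)   # dummy observation
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True  # pretend this covers the arm and object

augmented = augment_frame(frame, mask, "a cluttered kitchen counter", rng)
```

Because the compositing step preserves every pixel under the mask, the same demonstration action can be replayed over arbitrarily many generated backgrounds, which is what lets a fixed set of demonstrations be multiplied into a much larger, visually diverse training set.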

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.00951
Document Type :
Working Paper