
BayRnTune: Adaptive Bayesian Domain Randomization via Strategic Fine-tuning

Authors:
Huang, Tianle
Sontakke, Nitish
Kumar, K. Niranjan
Essa, Irfan
Nikolaidis, Stefanos
Hong, Dennis W.
Ha, Sehoon
Publication Year:
2023

Abstract

Domain randomization (DR), which entails training a policy with randomized dynamics, has proven to be a simple yet effective algorithm for reducing the gap between simulation and the real world. However, DR often requires careful tuning of its randomization parameters. Methods like Bayesian Domain Randomization (Bayesian DR) and Active Domain Randomization (Active DR) address this issue by automating parameter range selection using real-world experience. While effective, these algorithms often require long computation times, as a new policy is trained from scratch at every iteration. In this work, we propose Adaptive Bayesian Domain Randomization via Strategic Fine-tuning (BayRnTune), which inherits the spirit of BayRn but aims to significantly accelerate the learning process by fine-tuning from a previously learned policy. This idea leads to a critical question: which previous policy should we use as a prior during fine-tuning? We investigate four different fine-tuning strategies and compare them against baseline algorithms in five simulated environments, ranging from simple benchmark tasks to more complex legged robot environments. Our analysis demonstrates that our method yields better rewards in the same number of timesteps compared to vanilla domain randomization or Bayesian DR.
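The abstract describes a loop in which Bayesian optimization proposes randomization ranges and each new policy is fine-tuned from a previously learned one rather than trained from scratch. The sketch below is only an illustration of that loop structure with toy stand-ins: the function names, the toy "policy", the range-proposal heuristic, and the strategy labels ("latest", "best", "scratch") are assumptions for exposition, not the authors' implementation.

```python
"""Minimal sketch of a BayRnTune-style loop, assuming toy stand-ins."""
import random

def train_policy(init_policy, dr_range, steps=100):
    # Toy "fine-tuning": the policy (a single gain) drifts toward the
    # dynamics parameter sampled from the current randomization range.
    policy = init_policy
    lo, hi = dr_range
    for _ in range(steps):
        dynamics = random.uniform(lo, hi)      # domain randomization
        policy += 0.05 * (dynamics - policy)   # stand-in for a gradient step
    return policy

def evaluate_on_target(policy, target_dynamics=1.3):
    # Reward is higher the closer the policy matches the (unknown) target dynamics.
    return -abs(policy - target_dynamics)

def propose_dr_range(history):
    # Stand-in for Bayesian optimization over randomization ranges:
    # shrink and recenter around the best-performing range seen so far.
    if not history:
        return (0.5, 2.0)
    best_range, _ = max(history, key=lambda h: h[1])
    center = sum(best_range) / 2
    width = max(0.1, (best_range[1] - best_range[0]) * 0.8)
    return (center - width / 2, center + width / 2)

def select_prior_policy(policies, strategy="latest"):
    # Hypothetical fine-tuning strategies for choosing the prior policy.
    if strategy == "latest":
        return policies[-1][0]
    if strategy == "best":
        return max(policies, key=lambda p: p[1])[0]
    return 0.0  # "scratch": start from a fresh policy

history, policies = [], [(0.0, float("-inf"))]
for it in range(10):
    dr_range = propose_dr_range(history)
    prior = select_prior_policy(policies, strategy="latest")
    policy = train_policy(prior, dr_range)      # fine-tune instead of retraining
    reward = evaluate_on_target(policy)
    history.append((dr_range, reward))
    policies.append((policy, reward))
    print(f"iter {it}: range=({dr_range[0]:.2f}, {dr_range[1]:.2f}), reward={reward:.3f}")
```

The key difference from vanilla Bayesian DR appears in the `train_policy` call: each iteration warm-starts from a prior policy chosen by `select_prior_policy`, which is what the paper's four fine-tuning strategies vary.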

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2310.10606
Document Type: Working Paper