
Genie: Generative Interactive Environments

Authors :
Bruce, Jake
Dennis, Michael
Edwards, Ashley
Parker-Holder, Jack
Shi, Yuge
Hughes, Edward
Lai, Matthew
Mavalankar, Aditi
Steigerwald, Richie
Apps, Chris
Aytar, Yusuf
Bechtle, Sarah
Behbahani, Feryal
Chan, Stephanie
Heess, Nicolas
Gonzalez, Lucy
Osindero, Simon
Ozair, Sherjil
Reed, Scott
Zhang, Jingwei
Zolna, Konrad
Clune, Jeff
de Freitas, Nando
Singh, Satinder
Rocktäschel, Tim
Publication Year :
2024

Abstract

We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It comprises a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain-specific requirements typically found in the world model literature. Further, the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future.

Comment: https://sites.google.com/corp/view/genie-2024/
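The abstract names three components: a video tokenizer, a latent action model (trained without action labels), and an autoregressive dynamics model driven by latent actions. The toy sketch below illustrates how such pieces could fit together at inference time; all class names, shapes, and lookup-table "models" are hypothetical stand-ins for illustration only, not Genie's actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CODES = 8    # size of the toy frame-token codebook
NUM_ACTIONS = 4  # size of the toy discrete latent action space

class ToyVideoTokenizer:
    """Maps each frame to its nearest codebook entry (a stand-in
    for a learned spatiotemporal video tokenizer)."""
    def __init__(self, frame_dim=4):
        self.codebook = rng.normal(size=(NUM_CODES, frame_dim))

    def encode(self, frame):
        dists = np.linalg.norm(self.codebook - frame, axis=1)
        return int(np.argmin(dists))

class ToyLatentActionModel:
    """Infers a discrete latent action from two consecutive frame
    tokens -- no ground-truth action labels are involved."""
    def __init__(self):
        self.table = rng.integers(0, NUM_ACTIONS, size=(NUM_CODES, NUM_CODES))

    def infer(self, tok_t, tok_t1):
        return int(self.table[tok_t, tok_t1])

class ToyDynamicsModel:
    """Autoregressively predicts the next frame token given the
    current token and a chosen latent action."""
    def __init__(self):
        self.transition = rng.integers(0, NUM_CODES, size=(NUM_CODES, NUM_ACTIONS))

    def step(self, tok, action):
        return int(self.transition[tok, action])

def rollout(tokenizer, dynamics, start_frame, actions):
    """Frame-by-frame interaction: tokenize a prompt frame, then let
    user-supplied latent actions steer the generated trajectory."""
    tok = tokenizer.encode(start_frame)
    tokens = [tok]
    for a in actions:
        tok = dynamics.step(tok, a)
        tokens.append(tok)
    return tokens
```

A user "plays" this toy world by picking a latent action each step, e.g. `rollout(ToyVideoTokenizer(), ToyDynamicsModel(), np.zeros(4), [0, 1, 2])` yields a four-token trajectory starting from the prompt frame.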

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.15391
Document Type :
Working Paper