
Actra: Optimized Transformer Architecture for Vision-Language-Action Models in Robot Learning

Authors:
Ma, Yueen
Chi, Dafeng
Wu, Shiguang
Liu, Yuecheng
Zhuang, Yuzheng
Hao, Jianye
King, Irwin
Publication Year:
2024

Abstract

Vision-language-action models have gained significant attention for their ability to model trajectories in robot learning. However, most existing models rely on Transformers with vanilla causal attention, which we find suboptimal for processing segmented multi-modal sequences. Additionally, the autoregressive generation approach falls short when generating multi-dimensional actions. In this paper, we introduce Actra, an optimized Transformer architecture featuring trajectory attention and learnable action queries, designed for effective encoding and decoding of segmented vision-language-action trajectories in robot imitation learning. Furthermore, we devise a multi-modal contrastive learning objective to explicitly align the different modalities, complementing the primary behavior cloning objective. Through extensive experiments across various environments, Actra achieves substantial performance improvements over state-of-the-art models in terms of generalizability, dexterity, and precision.
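The abstract describes trajectory attention (segment-aware attention over vision, language, and action spans) and learnable action queries (parallel, non-autoregressive action decoding) only at a high level. The sketch below is an illustrative PyTorch interpretation of those two ideas, not the paper's implementation: the mask semantics (bidirectional attention within a segment, causal attention across segments), the module and parameter names (`trajectory_attention_mask`, `ActionQueryDecoder`, `action_queries`), and all dimensions are assumptions.

```python
# Hypothetical sketch of segment-wise "trajectory attention" with learnable
# action queries decoded in parallel. Names, mask semantics, and sizes are
# illustrative assumptions, not Actra's actual code.
import torch
import torch.nn as nn


def trajectory_attention_mask(segment_ids: torch.Tensor) -> torch.Tensor:
    """Boolean mask (True = blocked) allowing full attention within a segment
    and past-only attention across segments.

    segment_ids: (seq_len,) non-decreasing segment indices, e.g.
    [0, 0, 1, 1, 1] for [vision, vision, language, language, language].
    """
    same_segment = segment_ids.unsqueeze(0) == segment_ids.unsqueeze(1)
    earlier_segment = segment_ids.unsqueeze(1) > segment_ids.unsqueeze(0)
    allowed = same_segment | earlier_segment
    return ~allowed  # True where attention is NOT permitted


class ActionQueryDecoder(nn.Module):
    """Decodes a fixed number of action dimensions in one pass by appending
    learnable query tokens to the encoded trajectory."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, action_dim: int = 7):
        super().__init__()
        self.action_queries = nn.Parameter(torch.randn(action_dim, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, 1)  # one scalar per action dimension

    def forward(self, tokens: torch.Tensor, segment_ids: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len, d_model); segment_ids: (seq_len,)
        batch = tokens.size(0)
        queries = self.action_queries.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([tokens, queries], dim=1)

        # Action queries form their own trailing segment, so they attend to the
        # full context and to one another, while context tokens cannot see them.
        query_segment = torch.full(
            (self.action_queries.size(0),), segment_ids.max().item() + 1,
            dtype=segment_ids.dtype, device=segment_ids.device,
        )
        full_ids = torch.cat([segment_ids, query_segment])
        mask = trajectory_attention_mask(full_ids).to(tokens.device)

        out = self.encoder(x, mask=mask)
        action_tokens = out[:, -self.action_queries.size(0):]
        return self.action_head(action_tokens).squeeze(-1)  # (batch, action_dim)


if __name__ == "__main__":
    decoder = ActionQueryDecoder()
    tokens = torch.randn(2, 6, 256)                 # toy vision + language tokens
    segment_ids = torch.tensor([0, 0, 0, 1, 1, 1])  # two context segments
    print(decoder(tokens, segment_ids).shape)       # torch.Size([2, 7])
```

Decoding every action dimension from its own dedicated query token in a single forward pass is what would let such a design sidestep autoregressive, dimension-by-dimension generation, which the abstract identifies as a limitation of vanilla causal Transformers.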

Subjects:
Computer Science - Robotics

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.01147
Document Type:
Working Paper