
Tensor Attention Training: Provably Efficient Learning of Higher-order Transformers

Authors :
Gu, Jiuxiang
Liang, Yingyu
Shi, Zhenmei
Song, Zhao
Zhou, Yufa
Publication Year :
2024

Abstract

Tensor Attention, a multi-view attention mechanism that captures high-order correlations among multiple modalities, can overcome the representational limitations of classical matrix attention. However, the $\Omega(n^3)$ time complexity of tensor attention poses a significant obstacle to its practical use in transformers, where $n$ is the input sequence length. In this work, we prove that the backward gradient of tensor attention training can be computed in almost linear $n^{1+o(1)}$ time, matching the complexity of its forward computation, under a bounded-entries assumption. We provide a closed-form solution for the gradient and propose a fast computation method that uses polynomial approximation and tensor algebraic tricks. Furthermore, we prove the necessity and tightness of our assumption through hardness analysis, showing that slightly weakening it renders the gradient problem unsolvable in truly subcubic time. Our theoretical results establish the feasibility of efficient higher-order transformer training and may facilitate practical applications of tensor attention architectures.
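To make the cubic bottleneck concrete, the following is a minimal numpy sketch of naive third-order (tensor) attention. It is an illustration only, not the paper's construction: the way the two key/value streams are combined (elementwise products) and all shapes are simplifying assumptions. The point it demonstrates is that the score tensor has one entry per (query, key-pair) triple, i.e. $n^3$ entries, so any method that materializes it cannot run faster than cubic time; the paper's result avoids this by approximating the computation in $n^{1+o(1)}$ time under bounded entries.

```python
import numpy as np

def naive_tensor_attention(Q, K1, K2, V1, V2):
    """Naive cubic-time sketch of third-order (tensor) attention.

    Illustrative shapes: Q, K1, K2, V1, V2 are all (n, d).
    The score tensor S has shape (n, n, n), which is why the
    naive computation costs O(n^3) in the sequence length n.
    """
    n, d = Q.shape
    # S[i, j, k] = <q_i, k1_j * k2_k>, elementwise product of the two keys
    # (a simplification; the paper's exact parameterization differs).
    S = np.einsum("id,jd,kd->ijk", Q, K1, K2) / np.sqrt(d)
    # Softmax over all (j, k) key pairs for each query i.
    S = S.reshape(n, n * n)
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A /= A.sum(axis=1, keepdims=True)
    A = A.reshape(n, n, n)
    # Aggregate value pairs, again combined elementwise for simplicity.
    return np.einsum("ijk,jd,kd->id", A, V1, V2)

# Tiny usage example with hypothetical sizes.
rng = np.random.default_rng(0)
n, d = 8, 4
Q, K1, K2, V1, V2 = (rng.standard_normal((n, d)) for _ in range(5))
print(naive_tensor_attention(Q, K1, K2, V1, V2).shape)  # (8, 4)
```

Differentiating through this naive forward pass inherits the same cubic cost, which is what makes an almost linear-time gradient, as claimed in the abstract, nontrivial.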

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.16411
Document Type :
Working Paper