
Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding

Authors :
Zhu, Jiajun
Wang, Peihao
Cai, Ruisi
Lee, Jason D.
Li, Pan
Wang, Zhangyang
Publication Year :
2024

Abstract

Transformers rely on both content-based and position-based addressing mechanisms to make predictions, but existing positional encoding techniques often diminish the effectiveness of position-based addressing. Many current methods enforce rigid patterns in attention maps, limiting the ability to model long-range dependencies and adapt to diverse tasks. Additionally, most positional encodings are learned as general biases, lacking the specialization required for different instances within a dataset. To address this, we propose con$\textbf{T}$extualized equivari$\textbf{A}$nt $\textbf{P}$osition $\textbf{E}$mbedding ($\textbf{TAPE}$), a novel framework that enhances positional embeddings by incorporating sequence content across layers. TAPE introduces dynamic, context-aware positional encodings, overcoming the constraints of traditional fixed patterns. By enforcing permutation and orthogonal equivariance, TAPE ensures the stability of positional encodings during updates, improving robustness and adaptability. Our method can be easily integrated into pre-trained transformers, offering parameter-efficient fine-tuning with minimal overhead. Extensive experiments show that TAPE achieves superior performance in language modeling, arithmetic reasoning, and long-context retrieval tasks compared to existing positional embedding techniques.

Comment: Code is available at https://github.com/VITA-Group/TAPE
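To make the equivariance property in the abstract concrete, below is a minimal, illustrative sketch of a content-conditioned positional update whose mixing weights depend only on token content, so it commutes with any orthogonal transform of the positional embeddings and with a joint permutation of the sequence. All names, shapes, and the specific mixing rule are assumptions for illustration; this is not the authors' TAPE implementation (see the linked repository for the actual code).

```python
import torch

def contextual_positional_update(pos, hidden):
    """
    Toy content-conditioned positional update (illustrative only).

    pos:    (seq_len, d_pos)   positional embeddings
    hidden: (seq_len, d_model) token (content) representations

    The mixing weights are computed from content alone, so applying an
    orthogonal matrix R to `pos` before or after the update gives the
    same result: W @ (pos @ R) == (W @ pos) @ R.
    """
    # content-based mixing weights (each row sums to 1)
    scores = hidden @ hidden.transpose(0, 1) / hidden.shape[-1] ** 0.5
    weights = torch.softmax(scores, dim=-1)           # (seq_len, seq_len)
    return weights @ pos                              # (seq_len, d_pos)


# quick check of orthogonal equivariance with a random orthogonal R
seq_len, d_pos, d_model = 8, 16, 32
pos = torch.randn(seq_len, d_pos)
hidden = torch.randn(seq_len, d_model)
R, _ = torch.linalg.qr(torch.randn(d_pos, d_pos))     # random orthogonal matrix

rotate_after = contextual_positional_update(pos, hidden) @ R
rotate_before = contextual_positional_update(pos @ R, hidden)
print(torch.allclose(rotate_after, rotate_before, atol=1e-5))  # True
```

Because the weights are a function of content only, the same argument gives permutation equivariance: permuting the rows of both `pos` and `hidden` permutes the output rows identically.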

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2501.00712
Document Type :
Working Paper