
Rope to Nope and Back Again: A New Hybrid Attention Strategy

Authors:
Yang, Bowen
Venkitesh, Bharat
Talupuru, Dwarak
Lin, Hangyu
Cairuz, David
Blunsom, Phil
Locatelli, Acyr
Publication Year:
2025

Abstract

Long-context large language models (LLMs) have achieved remarkable advancements, driven by techniques like Rotary Position Embedding (RoPE) (Su et al., 2023) and its extensions (Chen et al., 2023; Liu et al., 2024c; Peng et al., 2023). By adjusting RoPE parameters and incorporating training data with extended contexts, we can train performant models with considerably longer input sequences. However, existing RoPE-based methods exhibit performance limitations when applied to extended context lengths. This paper presents a comprehensive analysis of various attention mechanisms, including RoPE, No Positional Embedding (NoPE), and Query-Key Normalization (QK-Norm), identifying their strengths and shortcomings in long-context modeling. Our investigation identifies distinctive attention patterns in these methods and highlights their impact on long-context performance, providing valuable insights for architectural design. Building on these findings, we propose a novel architecture based on a hybrid attention mechanism that not only surpasses conventional RoPE-based transformer models in long-context tasks but also achieves competitive performance on benchmarks requiring shorter context lengths.
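To make the mechanisms named in the abstract concrete, the sketch below illustrates single-head attention with RoPE and QK-Norm as optional toggles, and a NoPE call that simply omits the rotation. This is a minimal illustration, not the authors' implementation: the head dimension, base frequency, and the idea of interleaving NoPE and RoPE layers are assumptions for demonstration only.

```python
# Minimal sketch (not the paper's implementation) of the attention variants
# named in the abstract: RoPE, NoPE, and QK-Norm. Shapes, the base frequency,
# and the layer-interleaving pattern are illustrative assumptions.
import numpy as np

def rope(x, positions, base=10000.0):
    """Apply Rotary Position Embedding to x of shape (seq_len, head_dim)."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)        # (half,)
    angles = positions[:, None] * freqs[None, :]     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def attention(q, k, v, use_rope=False, qk_norm=False):
    """Single-head causal attention; RoPE and QK-Norm are optional toggles."""
    seq_len, dim = q.shape
    pos = np.arange(seq_len, dtype=np.float64)
    if use_rope:                                     # rotate q/k by position (RoPE)
        q, k = rope(q, pos), rope(k, pos)
    if qk_norm:                                      # L2-normalize q/k (QK-Norm)
        q = q / np.linalg.norm(q, axis=-1, keepdims=True)
        k = k / np.linalg.norm(k, axis=-1, keepdims=True)
    scores = q @ k.T / np.sqrt(dim)
    mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
    scores = np.where(mask, scores, -np.inf)         # causal masking
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Hypothetical hybrid stack: alternate NoPE layers (no rotation) with RoPE
# layers, one possible reading of a "hybrid attention" design.
rng = np.random.default_rng(0)
q = k = v = rng.standard_normal((8, 64))
nope_out = attention(q, k, v, use_rope=False)        # NoPE layer
rope_out = attention(q, k, v, use_rope=True)         # RoPE layer
```

How the paper's hybrid architecture actually combines these components is specified in the full text; the toggle-based layout above is only meant to show where each mechanism intervenes in the attention computation.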

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.18795
Document Type:
Working Paper