
SWAT: Scalable and Efficient Window Attention-based Transformers Acceleration on FPGAs

Authors :
Bai, Zhenyu
Dangi, Pranav
Li, Huize
Mitra, Tulika
Publication Year :
2024

Abstract

Efficiently supporting long context lengths is crucial for Transformer models, yet the quadratic complexity of self-attention plagues traditional Transformers. Sliding window-based static sparse attention mitigates the problem by limiting the attention scope of the input tokens, reducing the theoretical complexity from quadratic to linear. Although the sparsity induced by window attention is highly structured, it does not align perfectly with the microarchitecture of conventional accelerators, leading to suboptimal implementations. In response, we propose a dataflow-aware FPGA-based accelerator design, SWAT, that efficiently leverages the sparsity to achieve scalable performance for long inputs. The proposed microarchitecture maximizes data reuse through a combination of row-wise dataflow, kernel fusion, and an input-stationary design tailored to the distributed memory and computation resources of FPGAs. Consequently, it achieves up to 22$\times$ and 5.7$\times$ improvements in latency and energy efficiency, respectively, compared to the baseline FPGA-based accelerator, and 15$\times$ higher energy efficiency compared to a GPU-based solution.

Comment: Accepted paper for DAC'24
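To illustrate the complexity reduction described in the abstract, the sketch below shows sliding-window attention in plain Python/NumPy: each query attends only to keys within a window of radius w, so the cost drops from O(n^2) to roughly O(n*w). This is a minimal illustrative example, not SWAT's FPGA dataflow or the authors' code; the function name and parameters are hypothetical.

```python
# Minimal sketch of sliding-window (banded) attention.
# Illustrative only; not SWAT's FPGA implementation.
import numpy as np

def window_attention(Q, K, V, w):
    """Row-wise sliding-window attention.

    Q, K, V: (n, d) arrays; w: one-sided window radius.
    Each query i attends only to keys in [i - w, i + w].
    """
    n, d = Q.shape
    out = np.zeros_like(Q)
    for i in range(n):                               # row-wise pass over queries
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores = Q[i] @ K[lo:hi].T / np.sqrt(d)      # only up to 2w+1 keys per query
        scores -= scores.max()                       # numerical stability
        probs = np.exp(scores)
        probs /= probs.sum()
        out[i] = probs @ V[lo:hi]
    return out

# Example: 1024 tokens with window radius 32 -> ~n * (2w + 1) score computations
n, d, w = 1024, 64, 32
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(window_attention(Q, K, V, w).shape)  # (1024, 64)
```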

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.17025
Document Type :
Working Paper