Clustering in pure-attention hardmax transformers and its role in sentiment analysis
- Author
Alcalde, Albert; Fantuzzi, Giovanni; Zuazua, Enrique
- Subjects
Computer Science - Computation and Language; Computer Science - Machine Learning; Mathematics - Dynamical Systems; Statistics - Machine Learning; 68T07, 68T50
- Abstract
Transformers are extremely successful machine learning models whose mathematical properties remain poorly understood. Here, we rigorously characterize the behavior of transformers with hardmax self-attention and normalization sublayers as the number of layers tends to infinity. By viewing such transformers as discrete-time dynamical systems describing the evolution of points in a Euclidean space, and thanks to a geometric interpretation of the self-attention mechanism based on hyperplane separation, we show that the transformer inputs asymptotically converge to a clustered equilibrium determined by special points called leaders. We then leverage this theoretical understanding to solve sentiment analysis problems from language processing using a fully interpretable transformer model, which effectively captures 'context' by clustering meaningless words around leader words carrying the most meaning. Finally, we outline remaining challenges to bridge the gap between the mathematical analysis of transformers and their real-life implementation.
- Comment
23 pages, 10 figures, 1 table. Funded by the European Union (Horizon Europe MSCA project ModConFlex, grant number 101073558). Accompanying code available at: https://github.com/DCN-FAU-AvH/clustering-hardmax-transformers
- Published
2024
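
The abstract views hardmax-attention layers as a discrete-time dynamical system whose iterates cluster around leader points. The snippet below is a minimal toy sketch of that idea, not the paper's exact model: the query-key matrix `A`, the step size `alpha`, the inner-product score, and the unit-sphere normalization are all assumptions made here for illustration; the authors' implementation is in the accompanying repository linked above.

```python
import numpy as np

def hardmax_attention_layer(X, A, alpha=0.5):
    """One toy pure-attention layer with hardmax (illustrative, not the paper's exact update)."""
    scores = X @ A @ X.T                    # pairwise attention scores
    targets = scores.argmax(axis=1)         # hardmax: each point attends to a single point
    X_new = (1.0 - alpha) * X + alpha * X[targets]  # convex step toward the attended point
    return X_new / np.linalg.norm(X_new, axis=1, keepdims=True)  # normalization sublayer

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))                 # 8 points in the plane
X /= np.linalg.norm(X, axis=1, keepdims=True)
A = rng.normal(size=(2, 2))                 # assumed query-key parametrization

for _ in range(50):                         # iterate layers (depth tends to infinity in the paper)
    X = hardmax_attention_layer(X, A)

print(np.round(X, 3))                       # coinciding rows indicate points that have clustered
```

Under this simplified update, points that repeatedly attend to the same target tend to merge, which is the qualitative clustering behavior the paper characterizes rigorously.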