
ULTra: Unveiling Latent Token Interpretability in Transformer Based Understanding

Authors :
Hosseini, Hesam
Mighan, Ghazal Hosseini
Afzali, Amirabbas
Amini, Sajjad
Houmansadr, Amir
Publication Year :
2024

Abstract

Transformers have revolutionized Computer Vision (CV) and Natural Language Processing (NLP) through self-attention mechanisms. However, due to their complexity, their latent token representations are often difficult to interpret. We introduce a novel framework that interprets Transformer embeddings, uncovering meaningful semantic patterns within them. Based on this framework, we demonstrate that zero-shot unsupervised semantic segmentation can be performed effectively, without any fine-tuning, using a model pre-trained for tasks other than segmentation. Our method reveals the inherent capacity of Transformer models for understanding input semantics and achieves state-of-the-art performance in semantic segmentation, outperforming traditional segmentation models. Specifically, our approach achieves an accuracy of 67.2% and an mIoU of 32.9% on the COCO-Stuff dataset, as well as an mIoU of 51.9% on the PASCAL VOC dataset. Additionally, we validate our interpretability framework on large language models (LLMs) for text summarization, demonstrating its broad applicability and robustness.
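The abstract does not spell out the ULTra procedure itself, but the general idea of obtaining segmentation from a frozen, pre-trained Transformer can be illustrated with a generic technique: cluster the model's latent patch-token embeddings and reshape the cluster labels into a mask over the patch grid. The sketch below is an assumption-laden stand-in (random vectors replace real ViT embeddings, and a minimal k-means replaces whatever interpretation machinery the paper actually uses), not the authors' method.

```python
import numpy as np

# Hypothetical sketch, NOT the ULTra algorithm: zero-shot "segmentation" by
# clustering frozen Transformer patch-token embeddings. Random vectors stand
# in for a ViT's 14x14 grid of 768-dim token embeddings.

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns one cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each token to its nearest center (squared Euclidean distance).
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(dists, axis=1)
        # Recompute each center; keep the old one if its cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
tokens = rng.normal(size=(14 * 14, 768))  # stand-in for patch-token embeddings
labels = kmeans(tokens, k=4)
mask = labels.reshape(14, 14)             # coarse per-patch "segmentation" map
print(mask.shape)
```

In a real pipeline the `tokens` array would come from a frozen pre-trained backbone (no fine-tuning), and the patch-level mask would be upsampled to pixel resolution before computing metrics such as mIoU.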

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2411.12589
Document Type :
Working Paper