
Do Transformer Modifications Transfer Across Implementations and Applications?

Authors :
Narang, Sharan
Chung, Hyung Won
Tay, Yi
Fedus, William
Fevry, Thibault
Matena, Michael
Malkan, Karishma
Fiedel, Noah
Shazeer, Noam
Lan, Zhenzhong
Zhou, Yanqi
Li, Wei
Ding, Nan
Marcus, Jake
Roberts, Adam
Raffel, Colin
Source :
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing.
Publication Year :
2021
Publisher :
Association for Computational Linguistics, 2021.

Abstract

The research community has proposed copious modifications to the Transformer architecture since it was introduced over three years ago, relatively few of which have seen widespread adoption. In this paper, we comprehensively evaluate many of these modifications in a shared experimental setting that covers most of the common uses of the Transformer in natural language processing. Surprisingly, we find that most modifications do not meaningfully improve performance. Furthermore, most of the Transformer variants we found beneficial were either developed in the same codebase that we used or are relatively minor changes. We conjecture that performance improvements may strongly depend on implementation details and correspondingly make some recommendations for improving the generality of experimental results.

Comment: To appear at EMNLP 2021 as a conference paper

Details

Database :
OpenAIRE
Journal :
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Accession number :
edsair.doi.dedup.....dc39898f138086314a450062f66d2404
Full Text :
https://doi.org/10.18653/v1/2021.emnlp-main.465