
pNLP-Mixer: an Efficient all-MLP Architecture for Language

Authors :
Fusco, Francesco
Pascual, Damian
Staar, Peter
Antognini, Diego
Publication Year :
2022

Abstract

Large pre-trained language models based on the transformer architecture have drastically changed the natural language processing (NLP) landscape. However, deploying these models for on-device applications on constrained devices such as smart watches is impractical due to their size and inference cost. As an alternative to transformer-based architectures, recent work on efficient NLP has shown that weight-efficient models can attain competitive performance on simple tasks, such as slot filling and intent classification, with model sizes on the order of a megabyte. This work introduces the pNLP-Mixer architecture, an embedding-free MLP-Mixer model for on-device NLP that achieves high weight efficiency thanks to a novel projection layer. We evaluate a pNLP-Mixer model of only one megabyte in size on two multilingual semantic parsing datasets, MTOP and multiATIS. Our quantized model achieves 99.4% and 97.8% of the performance of mBERT on MTOP and multiATIS, respectively, while using 170x fewer parameters. Our model consistently beats the state-of-the-art tiny model (pQRNN), which is twice as large, by a margin of up to 7.8% on MTOP.

Comment: Accepted at ACL 2023 (industry). 8 pages, 2 figures, 4 tables
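To make the architecture concrete, the sketch below shows the core idea of a single MLP-Mixer layer: an MLP applied across the token dimension ("token mixing") alternating with an MLP applied across the feature dimension ("channel mixing"), each with a residual connection. This is a minimal illustration of the generic MLP-Mixer pattern, not the paper's exact pNLP-Mixer implementation; all shapes, the hidden size, and the weight initialization are assumptions chosen for the example.

```python
# Illustrative sketch of one generic MLP-Mixer layer (NOT the exact
# pNLP-Mixer model from the paper; shapes and init are assumptions).
import numpy as np

def mlp(x, w1, w2):
    # Two-layer MLP with a tanh-approximated GELU activation.
    h = x @ w1
    h = 0.5 * h * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (h + 0.044715 * h**3)))
    return h @ w2

def mixer_layer(x, rng, hidden=64):
    # x: (tokens, features), e.g. projected per-token fingerprints.
    t, f = x.shape
    # Token mixing: transpose so the MLP acts across the token axis.
    w1 = rng.standard_normal((t, hidden)) * 0.02
    w2 = rng.standard_normal((hidden, t)) * 0.02
    x = x + mlp(x.T, w1, w2).T          # residual connection
    # Channel mixing: the MLP acts across the feature axis.
    w3 = rng.standard_normal((f, hidden)) * 0.02
    w4 = rng.standard_normal((hidden, f)) * 0.02
    return x + mlp(x, w3, w4)           # residual connection

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))  # 16 tokens, 32 features each
out = mixer_layer(tokens, rng)
print(out.shape)  # (16, 32) -- shape is preserved by both mixing steps
```

Because every operation is a fixed-size matrix multiply with no embedding table, parameter count stays small and independent of vocabulary size, which is what makes this family of models attractive for megabyte-scale on-device deployment.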

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2202.04350
Document Type :
Working Paper