
Towards Linguistically-Aware and Language-Independent Tokenization for Large Language Models (LLMs)

Authors:
Rahman, Abrar
Bowlin, Garry
Mohanty, Binit
McGunigal, Sean
Publication Year:
2024

Abstract

This paper presents a comprehensive study of the tokenization techniques employed by state-of-the-art large language models (LLMs) and their implications for the cost and availability of services across languages, especially low-resource languages. The analysis covers multiple LLMs, including GPT-4 (using the cl100k_base encoding), GPT-3 (with the p50k_base encoding), and DaVinci (employing the r50k_base encoding), as well as the widely used BERT base tokenizer. The study evaluates the tokenization variability observed across these models and investigates the challenges of linguistic representation in subword tokenization. The research underscores the importance of fostering linguistically-aware development practices, especially for languages that are traditionally under-resourced. The paper also introduces case studies that highlight the real-world implications of tokenization choices, particularly in the context of electronic health record (EHR) systems. This research aims to promote generalizable Internationalization (I18N) practices in the development of AI services in this domain and beyond, with a strong emphasis on inclusivity for languages traditionally underrepresented in AI applications.
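As a rough illustration of the cross-tokenizer comparison the abstract describes, the sketch below counts tokens for the same text under each of the named encodings. The sample sentences and the bert-base-uncased checkpoint are illustrative assumptions, not drawn from the paper; the tiktoken encoding names (cl100k_base, p50k_base, r50k_base) are those listed in the abstract.

```python
# Minimal sketch (not from the paper): compare token counts for the same text
# across the tokenizers named in the abstract. Requires `tiktoken` and
# `transformers`. The sample sentences are illustrative placeholders.
import tiktoken
from transformers import AutoTokenizer

samples = {
    "English": "The patient was discharged with a prescription.",
    "Bengali": "রোগী এখন ভালো আছেন।",  # "The patient is well now."
}

# tiktoken encodings associated with GPT-4, GPT-3, and DaVinci-era models.
tiktoken_encodings = ["cl100k_base", "p50k_base", "r50k_base"]

# "BERT base tokenizer" is assumed here to mean the bert-base-uncased checkpoint.
bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

for language, text in samples.items():
    print(f"\n{language}: {text}")
    for name in tiktoken_encodings:
        enc = tiktoken.get_encoding(name)
        print(f"  {name:12s} -> {len(enc.encode(text))} tokens")
    # add_special_tokens=False counts only subword tokens, not [CLS]/[SEP].
    bert_ids = bert_tokenizer.encode(text, add_special_tokens=False)
    print(f"  bert-base    -> {len(bert_ids)} tokens")
```

Run over parallel text, a comparison like this makes the per-language token inflation, and hence the cost and context-length gap the paper studies, directly measurable.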

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.03568
Document Type:
Working Paper