151. An unsupervised lexical normalization for Roman Hindi and Urdu sentiment analysis
- Author
- Kamran Shafi, Daryl Essam, Muhammad Kamran Malik, and Khawar Mehmood
- Subjects
- Normalization (statistics), Computer science, Word error rate, Library and Information Sciences, Management Science and Operations Research, Media Technology, Transliteration, Hindi, Sentiment analysis, Computer Science Applications, Text normalization, Artificial intelligence, Urdu, Encoder, Natural language processing, Information Systems
- Abstract
Text normalization is the task of transforming lexically variant words into their canonical forms, and its importance becomes apparent when developing natural language processing applications. This paper proposes a novel technique called Transliteration based Encoding for Roman Hindi/Urdu text Normalization (TERUN). TERUN exploits the linguistic properties of Roman Hindi/Urdu to map lexically variant words to their canonical forms. It consists of three interlinked modules: a transliteration-based encoder, a filter module, and a hash-code ranker. The encoder generates all possible hash codes for a single Roman Hindi/Urdu word; the filter module discards irrelevant codes; and the ranker orders the remaining hash codes by relevance. The aim of this study is not only to normalize text but also to examine the impact of normalization on text classification. Hence, baseline classification accuracies were first computed on a dataset of 11,000 non-standardized Roman Hindi/Urdu sentiment analysis reviews using different machine learning algorithms. The dataset was then standardized using TERUN and other established phonetic algorithms, and the classification accuracies were recomputed. The cross-scheme comparison showed that TERUN outperformed all the phonetic algorithms and significantly reduced the error rate relative to the baseline. TERUN was then extended from a corpus-specific to a corpus-independent text normalization technique. To this end, a parallel corpus of 50,000 Urdu and Roman Hindi/Urdu words was manually tagged using a set of comprehensive annotation guidelines. In addition, TERUN and several phonetic algorithms were intrinsically evaluated on a dataset of 20,000 lexically variant words. The results clearly showed the superiority of TERUN over well-known phonetic algorithms.
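The abstract describes TERUN's three-module shape (encoder, filter, ranker) but not its concrete encoding rules. The minimal Python sketch below is therefore only an illustration of that encoder-filter-ranker pipeline: the `encode` function, the toy corpus, and the frequency-based ranking are all assumptions made for the example, and the paper's actual encoder generates multiple candidate hash codes per word rather than the single code used here.

```python
# Hypothetical sketch of a TERUN-style pipeline: encode -> filter -> rank.
# The phonetic-style hash below is an illustrative stand-in, not the
# paper's actual encoding scheme.
from collections import Counter

# Assumed toy corpus of Roman Hindi/Urdu word frequencies; in practice
# this would be drawn from the review dataset.
CORPUS = Counter({"zindagi": 40, "zindagee": 5, "acha": 30, "achha": 12})

def encode(word: str) -> str:
    """Encoder: map a word to one hash code by collapsing vowel and
    doubled-consonant variation (a simplified, assumed scheme; the real
    encoder emits several candidate codes per word)."""
    word = word.lower()
    # Drop vowels after the first character, a major source of spelling
    # variation in romanized Hindi/Urdu (e.g. 'zindagi' vs 'zindagee').
    code = word[0] + "".join(c for c in word[1:] if c not in "aeiou")
    # Collapse doubled consonants (e.g. 'achha' vs 'acha').
    out = []
    for c in code:
        if not out or out[-1] != c:
            out.append(c)
    return "".join(out)

def candidates(word: str) -> list[str]:
    """Filter: keep only corpus words that share the query's hash code."""
    code = encode(word)
    return [w for w in CORPUS if encode(w) == code]

def normalize(word: str) -> str:
    """Ranker: pick the most frequent surviving candidate as the
    canonical form; fall back to the word itself if none match."""
    cands = candidates(word)
    return max(cands, key=CORPUS.__getitem__) if cands else word

if __name__ == "__main__":
    for w in ["zindagee", "achha", "acha"]:
        print(w, "->", normalize(w))  # variants collapse to one form
```

Under these assumptions, "zindagee" and "zindagi" share the code "zndg" and both normalize to the more frequent "zindagi", mirroring the many-variants-to-one-canonical-form mapping the abstract describes.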
- Published
- 2020