Word Embedding Methods for Word Representation in Deep Learning for Natural Language Processing
- Author
- Hussen Wadud, Md. Anwar, Mridha, M. F., and Rahman, Mohammad Motiur
- Subjects
- NATURAL language processing, DEEP learning, PYTHON programming language, BENGALI language, LINGUISTIC context
- Abstract
Natural Language Processing (NLP) deals with analysing, understanding, and generating language the way humans do. One of the challenges of NLP is training computers to learn and use a language as humans do. Every training session consists of several types of sentences with different contexts and linguistic structures. The meaning of a sentence depends on the actual meanings of its main words in their correct positions; the same word can act as a noun, an adjective, or another part of speech depending on where it appears. In NLP, word embedding is a powerful method that is trained on a large collection of texts and encodes general semantic and syntactic information about words, and choosing the right word embedding yields better results than the alternatives. Most papers use pretrained word embedding vectors in deep learning for NLP, but the major issue with pretrained vectors is that they cannot be used for every NLP task. This paper proposes a process for building local word embedding vectors and compares pretrained and local word embedding vectors for the Bengali language. The Keras framework in Python is used to implement the local word embedding, and the analysis section of the paper shows that the proposed model achieves 87.84% accuracy, better than the 86.75% accuracy of the fastText pretrained word embedding vectors. Using this proposed method, NLP researchers working on Bengali can easily build task-specific word embedding vectors for word representation in Natural Language Processing. [ABSTRACT FROM AUTHOR]
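The abstract names Keras as the implementation framework but gives no code, so the following is a minimal sketch, not the authors' implementation, of how a local (task-specific) word embedding can be trained with a Keras `Embedding` layer instead of loading pretrained fastText vectors. The toy corpus, labels, vocabulary size, sequence length, and embedding dimension are all hypothetical placeholders standing in for the paper's Bengali data.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GlobalAveragePooling1D, Dense
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical toy corpus and binary labels, standing in for the paper's data.
corpus = [
    "the film was wonderful",
    "the film was terrible",
]
labels = np.array([1, 0])

# Map words to integer indices and pad every sequence to a fixed length.
tokenizer = Tokenizer(num_words=5000)
tokenizer.fit_on_texts(corpus)
sequences = pad_sequences(tokenizer.texts_to_sequences(corpus), maxlen=20)

# The Embedding layer learns dense word vectors from this corpus alone,
# rather than loading pretrained (e.g. fastText) vectors.
model = Sequential([
    Embedding(input_dim=5000, output_dim=100),
    GlobalAveragePooling1D(),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(sequences, labels, epochs=5, verbose=0)

# The learned local embedding matrix can be read back from the layer weights.
local_vectors = model.layers[0].get_weights()[0]  # shape: (5000, 100)
print(local_vectors.shape)
```

Because the embedding weights are fitted jointly with the downstream classifier, the resulting vectors are specific to the task and corpus at hand, which is the distinction the paper draws between local and pretrained embeddings.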
- Published
- 2022