169 results for "Parsing"
Search Results
2. A Deep-Learning Approach to Single Sentence Compression
- Author
-
Sahoo, Deepak, Pujari, Sthita Pragyan, Shandeelaya, Arunav Pratap, Balabantaray, Rakesh Chandra, Bansal, Jagdish Chand, Series Editor, Deep, Kusum, Series Editor, Nagar, Atulya K., Series Editor, Jacob, I. Jeena, editor, Kolandapalayam Shanmugam, Selvanayaki, editor, and Bestak, Robert, editor
- Published
- 2022
- Full Text
- View/download PDF
3. Constructing of Semantically Dependent Patterns Based on SpaCy and StanfordNLP Libraries
- Author
-
Okhapkin, Valentin P., Okhapkina, Elena P., Iskhakova, Anastasia O., Iskhakov, Andrey Y., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Singh, Pradeep Kumar, editor, Veselov, Gennady, editor, Vyatkin, Valeriy, editor, Pljonkin, Anton, editor, Dodero, Juan Manuel, editor, and Kumar, Yugal, editor
- Published
- 2021
- Full Text
- View/download PDF
4. A Rule-Based Parsing for Bangla Grammar Pattern Detection
- Author
-
Saha Prapty, Aroni, Rifat Anwar, Md., Azharul Hasan, K. M., Bansal, Jagdish Chand, Series Editor, Deep, Kusum, Series Editor, Nagar, Atulya K., Series Editor, and Uddin, Mohammad Shorif, editor
- Published
- 2021
- Full Text
- View/download PDF
5. Part-of-Speech Annotation
- Author
-
Dash, Niladri Sekhar
- Published
- 2021
- Full Text
- View/download PDF
6. XML Parsing Technique
- Author
-
Wang, Chao, Angrisani, Leopoldo, Series Editor, Arteaga, Marco, Series Editor, Panigrahi, Bijaya Ketan, Series Editor, Chakraborty, Samarjit, Series Editor, Chen, Jiming, Series Editor, Chen, Shanben, Series Editor, Chen, Tan Kay, Series Editor, Dillmann, Rüdiger, Series Editor, Duan, Haibin, Series Editor, Ferrari, Gianluigi, Series Editor, Ferre, Manuel, Series Editor, Hirche, Sandra, Series Editor, Jabbari, Faryar, Series Editor, Jia, Limin, Series Editor, Kacprzyk, Janusz, Series Editor, Khamis, Alaa, Series Editor, Kroeger, Torsten, Series Editor, Liang, Qilian, Series Editor, Martín, Ferran, Series Editor, Ming, Tan Cher, Series Editor, Minker, Wolfgang, Series Editor, Misra, Pradeep, Series Editor, Möller, Sebastian, Series Editor, Mukhopadhyay, Subhas, Series Editor, Ning, Cun-Zheng, Series Editor, Nishida, Toyoaki, Series Editor, Pascucci, Federica, Series Editor, Qin, Yong, Series Editor, Seng, Gan Woon, Series Editor, Speidel, Joachim, Series Editor, Veiga, Germano, Series Editor, Wu, Haitao, Series Editor, Zhang, Junjie James, Series Editor, Yang, Chao-Tung, editor, Pei, Yan, editor, and Chang, Jia-Wei, editor
- Published
- 2020
- Full Text
- View/download PDF
7. Parts of Speech Tagging for Punjabi Language Using Supervised Approaches
- Author
-
Kaur Jolly, Simran, Agrawal, Rashmi, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Solanki, Vijender Kumar, editor, Hoang, Manh Kha, editor, Lu, Zhonghyu (Joan), editor, and Pattnaik, Prasant Kumar, editor
- Published
- 2020
- Full Text
- View/download PDF
8. A Shallow Parsing Model for Hindi Using Conditional Random Field
- Author
-
Asopa, Sneha, Asopa, Pooja, Mathur, Iti, Joshi, Nisheeth, Kacprzyk, Janusz, Series Editor, Bhattacharyya, Siddhartha, editor, Hassanien, Aboul Ella, editor, Gupta, Deepak, editor, Khanna, Ashish, editor, and Pan, Indrajit, editor
- Published
- 2019
- Full Text
- View/download PDF
9. Detection of Bad Smell Code for Software Refactoring
- Author
-
Regulwar, Ganesh B., Tugnayat, R. M., Kacprzyk, Janusz, Series Editor, Saini, H. S., editor, Sayal, Rishi, editor, Govardhan, A., editor, and Buyya, Rajkumar, editor
- Published
- 2019
- Full Text
- View/download PDF
10. Single-Sentence Compression Using SVM
- Author
-
Sahoo, Deepak, Balabantaray, Rakesh Chandra, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Nayak, Janmenjoy, editor, Abraham, Ajith, editor, Krishna, B. Murali, editor, Chandra Sekhar, G. T., editor, and Das, Asit Kumar, editor
- Published
- 2019
- Full Text
- View/download PDF
11. Parsing in Nepali Language Using Linear Programming Problem
- Author
-
Yajnik, Archit, Bhutia, Furkim, Borah, Samarjeet, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Kalita, Jugal, editor, Balas, Valentina Emilia, editor, Borah, Samarjeet, editor, and Pradhan, Ratika, editor
- Published
- 2019
- Full Text
- View/download PDF
12. Proposed Framework for Stochastic Parsing of Myanmar Language
- Author
-
Aung, Myintzu Phyo, Aung, Ohnmar, Hlaing, Nan Yu, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Zin, Thi Thi, editor, and Lin, Jerry Chun-Wei, editor
- Published
- 2019
- Full Text
- View/download PDF
13. Problems and Issues in Parsing Manipuri Text
- Author
-
Nirmal, Yumnam, Sharma, Utpal, Kacprzyk, Janusz, Series Editor, Mandal, J. K., editor, Saha, Goutam, editor, Kandar, Debdatta, editor, and Maji, Arnab Kumar, editor
- Published
- 2018
- Full Text
- View/download PDF
14. A Crawler–Parser-Based Approach to Newspaper Scraping and Reverse Searching of Desired Articles
- Author
-
Aich, Ankit, Dutta, Amit, Chakraborty, Aruna, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Satapathy, Suresh Chandra, editor, Tavares, Joao Manuel R.S., editor, Bhateja, Vikrant, editor, and Mohanty, J. R., editor
- Published
- 2018
- Full Text
- View/download PDF
15. A Survey of Design Techniques for Conversational Agents
- Author
-
Ramesh, Kiran, Ravishankaran, Surya, Joshi, Abhishek, Chandrasekaran, K., Barbosa, Simone Diniz Junqueira, Series editor, Chen, Phoebe, Series editor, Filipe, Joaquim, Series editor, Kotenko, Igor, Series editor, Sivalingam, Krishna M., Series editor, Washio, Takashi, Series editor, Yuan, Junsong, Series editor, Zhou, Lizhu, Series editor, Kaushik, Saroj, editor, Gupta, Daya, editor, Kharb, Latika, editor, and Chahal, Deepak, editor
- Published
- 2017
- Full Text
- View/download PDF
16. A Naïve Bayes Based Machine Learning Approach and Application Tools Comparison Based on Telephone Conversations
- Author
-
Lin, Shu-Chiang, Prasetio, Murman Dwi, Persada, Satria Fadil, Nadlifatin, Reny, Lin, Yi-Kuei, editor, Tsao, Yu-Chung, editor, and Lin, Shi-Woei, editor
- Published
- 2013
- Full Text
- View/download PDF
17. GEN2VCF: a converter for human genome imputation output format to VCF format
- Author
-
Shin, Dong Mun, Hwang, Mi Yeong, Kim, Bong-Jo, Ryu, Keun Ho, and Kim, Young Jin
- Published
- 2020
- Full Text
- View/download PDF
18. Photo-Realistic Virtual Try-On with Enhanced Warping Module
- Author
-
Antony Alisha, G Sreenu, Sebastian Subin, C. V Amaldev, N. G Resmi, and D. A Aysha Dilna
- Subjects
Parsing, Computer science, Transfer (computing), Inpainting, Computer vision, Artificial intelligence, Image warping, Pose, Image (mathematics)
- Abstract
An image-based virtual try-on system virtually transfers a clothing item onto an image of a given person. In most approaches, garment transfer involves human parsing with pose estimation to generate a warped cloth, followed by an inpainting module; the generated output depends on the quality of both the final and intermediate stages. In this paper, we organize a comparative study of the methods adopted at the different existing stages in order to arrive at a better solution. We conduct our study with reference to a state-of-the-art try-on model, the adaptive content generating and preserving network (ACGPN). ACGPN transfers a reference cloth onto the target person and produces photo-realistic try-on results, but it fails when there is a large dissimilarity between the reference person image and the cloth image, owing to errors in the warping module. We propose an improved ACGPN model with a key-point-based warping module to improve the results.
- Published
- 2021
- Full Text
- View/download PDF
19. A Comprehensive Review on Text to Indian Sign Language Translation Systems
- Author
-
Sanket Rathi, Rishabh Shetty, Kashish Shah, and Kamal Mistry
- Subjects
Semantic analysis (linguistics), Parsing, Computer science, Lexical analysis, Sign (semiotics), Sign language, Sentence, Linguistics, Avatar, Meaning (linguistics)
- Abstract
Language is the primary means of communication used by every individual. It is a tool for expressing ideas and emotions; it shapes thoughts and carries meaning. Indian Sign Language (ISL), used by the deaf community in India, has its own linguistic constituents and structural properties. Natural language processing is the area of computer science and linguistics that deals with the relationship between computers and human language; it processes data through lexical analysis, syntax analysis, semantic analysis, discourse processing, and pragmatic analysis. In determining the meaning of a sentence, analyzing the syntactic structure is critical. In this paper, current computer sign-language translators are considered, and their pros and cons are identified and discussed, along with the general approaches the systems follow. A new approach to the construction of sign-language output is proposed, increasing the accuracy with which the system translates input phrases.
- Published
- 2021
- Full Text
- View/download PDF
20. The Survey on Handwritten Mathematical Expressions Recognition
- Author
-
Vinay Kukreja, Sakshi, and Chetan Sharma
- Subjects
Pattern recognition, Parsing, Computer science, Scripting language, Deep learning, Artificial intelligence, Natural language processing, Domain (software engineering)
- Abstract
Recognition of handwritten mathematical expressions is a challenging problem that has been studied and researched for a long time. Many techniques have been investigated with the aim of building high-performance mathematical expression recognition systems, and plentiful research exists on handwritten mathematical text, whether isolated symbols or two-dimensional mathematical expressions. This body of research traces the origins and trends of several techniques and shows the trendline of recognition shifting from grammar-driven or parsing-based methods to machine learning and deep learning models. Remarkable progress has been made in this domain of handwritten mathematical scripts in the last few years. In this paper, a survey of the various research efforts to develop and enhance recognition systems for handwritten mathematical expressions is compiled, and a comparative analysis in terms of classification techniques, datasets, and accuracies is performed.
- Published
- 2021
- Full Text
- View/download PDF
21. A Detailed Analysis of Word Sense Disambiguation Algorithms and Approaches for Indian Languages
- Author
-
Archana Sachindeo Maurya and Promila Bahadur
- Subjects
Structure (mathematical logic), Parsing, Machine translation, Process (engineering), Computer science, Meaning (non-linguistic), Ambiguity, Etymology, Algorithm, Word (computer architecture)
- Abstract
Word sense disambiguation (WSD) is a difficult research issue in computational linguistics that was recognized at the very commencement of interest in machine translation (MT) and artificial intelligence (AI). WSD is the process of detecting the right meaning of a word with multiple senses, and it requires deep knowledge from various sources. Phrases containing multifunctional words readily admit different parsing structures, provoke different understandings, and are referred to as ambiguous. Much effort has gone into resolving this problem in machine translation, and the work is still continuing. Numerous techniques have been applied to the disambiguation process and executed in various frameworks for almost all dialects. This article presents a detailed analysis of WSD algorithms and the different approaches researchers have adopted for many Indian languages. In particular, we put forward an examination of the supervised, unsupervised, and knowledge-based methodologies and algorithms available for word sense disambiguation.
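Knowledge-based WSD of the kind surveyed here is commonly illustrated with the simplified Lesk algorithm, which picks the sense whose dictionary gloss shares the most words with the context. A minimal stdlib-only sketch (the toy sense inventory and context sentence are hypothetical, not from the paper):

```python
# Simplified Lesk: choose the sense whose gloss overlaps most with the
# context words. Toy sense inventory for illustration only.
def lesk(word, context, senses):
    ctx = set(context)
    # score each (label, gloss) pair by gloss/context word overlap
    return max(senses[word], key=lambda s: len(set(s[1].split()) & ctx))[0]

senses = {
    "bank": [
        ("finance", "institution that accepts deposits and lends money"),
        ("river", "sloping land beside a body of water"),
    ]
}
context = "he sat on the sloping land near the water".split()
print(lesk("bank", context, senses))  # 'river'
```

A real system would draw glosses from a lexical resource such as WordNet and normalize tokens before comparing.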
- Published
- 2021
- Full Text
- View/download PDF
22. Programming with Natural Languages: A Survey
- Author
-
Sayu Sajeev, Julien Joseph Thomas, K. S. Sunil, Muhammed Anas, and Vishnu Suresh
- Subjects
Parsing, Computer science, Programming language, Robotics, Ambiguity, Artificial intelligence, Natural language, Range (computer programming)
- Abstract
Programming with natural language is a research area with a wide range of applications, including basic programming, robotics, etc. Factors such as preserving meaning and handling ambiguity have to be considered while converting natural language text to programming-language statements. Many developments have taken place in this area over the past few years. Initially, different types of CFG parsers were used to convert natural language text to programming-language statements. More recently, advances in AI-based technologies have had a huge impact on this area, producing efficient models such as GPT-3 that can convert natural language to a target programming language more precisely. In this paper, we present a detailed and systematic study of the developments in this area and list some of the most relevant research works among them.
- Published
- 2021
- Full Text
- View/download PDF
23. Practical Comparison Between the LR(1) Bottom-Up and LL(1) Top-Down Methodology
- Author
-
Nabil Amein Ali
- Subjects
Mathematical logic and formal languages, Parsing, Computer science, Programming language, String (computer science), Compiler, Top-down and bottom-up design
- Abstract
Syntax analysis, or parsing, is an essential stage in compiler design; the parser's role is to parse strings according to a definite set of rules. Researchers typically use either the LL(1) top-down technique or the LR bottom-up technique to decide whether given strings are accepted by a defined language. Many papers have argued theoretically that the LR method is more suitable than LL(1). The current paper treats the problem of parsing practically: it shows that the LR technique is more suitable than LL(1) in terms of the computing time each technique needs for parsing.
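The LL(1) side of the comparison can be made concrete with a recursive-descent recognizer. The sketch below uses the balanced-parentheses grammar S → '(' S ')' S | ε (a stand-in, not the paper's actual test grammar); each nonterminal becomes one function, and a single token of lookahead selects the production, which is exactly the LL(1) condition:

```python
# LL(1) recursive-descent recognizer for S -> '(' S ')' S | epsilon.
def accepts(s):
    pos = 0
    ok = True
    def S():
        nonlocal pos, ok
        if pos < len(s) and s[pos] == "(":   # lookahead '(' picks '(' S ')' S
            pos += 1
            S()
            if pos < len(s) and s[pos] == ")":
                pos += 1
                S()
            else:
                ok = False                   # missing closing parenthesis
        # otherwise: epsilon production, consume nothing
    S()
    return ok and pos == len(s)              # accept only if all input used

print(accepts("(()())"))  # True
print(accepts("(()"))     # False
```

An LR(1) recognizer for the same language would instead be table-driven, shifting tokens onto a stack and reducing by rules bottom-up.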
- Published
- 2021
- Full Text
- View/download PDF
24. Application of NLP for Information Extraction from Unstructured Documents
- Author
-
Sajjan Adhikari, Shushanta Pudasaini, Sujan Raj Adhikari, Sagar Lamichhane, Aakash Tamang, and Subarna Shakya
- Subjects
Matching (statistics), Parsing, Computer science, Document classification, Pipeline (software), Information extraction, Named-entity recognition, Analytics, Chunking, Document and text processing, Artificial intelligence, Natural language processing
- Abstract
The world is intrigued by data; huge capital is invested in devising means to apply statistics and extract analytics from data sources. However, studies of applicant tracking systems that retrieve valuable information from candidates' CVs and job descriptions show that they are mostly rule-based and hardly employ contemporary techniques. Even though these documents vary in content, their structure is almost identical. Accordingly, in this paper we implement an NLP pipeline for extracting such structured information from a wide variety of textual documents. As a reference, we consider the textual documents used in applicant tracking systems, such as CVs (curricula vitae) and job vacancy information. The proposed NLP pipeline is built from several NLP techniques, including document classification, document segmentation, and text extraction. For the initial classification of textual documents, support vector machine (SVM) and XGBoost algorithms are implemented. Different segments of an identified document are then categorized using techniques such as chunking, regex matching, and POS tagging, and relevant information is extracted from each segment using techniques such as named entity recognition (NER), regex matching, and pool parsing. Extracting such structured information from textual documents helps to gain insights and to use them in document maintenance, document scoring, matching, and auto-filling forms.
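The segment-then-extract idea can be sketched with stdlib regexes alone; the section headings, patterns, and sample CV text below are hypothetical, and a real pipeline would add the SVM/XGBoost classification and NER stages the abstract describes:

```python
# Split a CV into sections at known headings, then extract fields per
# section with regexes. All headings and sample text are invented.
import re

CV = """Education
BSc Computer Science, 2018
Skills
Python, SQL, NLP
Contact
jane@example.com, +1-555-0100"""

def segment(text, headings):
    # split on lines that consist solely of a known heading;
    # the capturing group keeps the headings in the result list
    pattern = r"(?m)^(" + "|".join(map(re.escape, headings)) + r")$"
    parts = re.split(pattern, text)
    it = iter(parts[1:])               # drop any text before the first heading
    return {h: body.strip() for h, body in zip(it, it)}

sections = segment(CV, ["Education", "Skills", "Contact"])
email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", sections["Contact"]).group()
skills = [s.strip() for s in sections["Skills"].split(",")]
print(email, skills)  # jane@example.com ['Python', 'SQL', 'NLP']
```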
- Published
- 2021
- Full Text
- View/download PDF
25. Scoring of Resume and Job Description Using Word2vec and Matching Them Using Gale–Shapley Algorithm
- Author
-
Sagar Lamichhane, Subarna Shakya, Sajjan Adhikari, Shushanta Pudasaini, Sujan Raj Adhikari, and Aakash Tamang
- Subjects
Matching (statistics), Parsing, Word embedding, Information retrieval, Computer science, Job description, Cosine similarity, Word2vec, Stable marriage problem, Ranking (information retrieval)
- Abstract
The paper introduces an intelligent system that assists employers in finding the right candidate for a job, and vice versa. Multiple approaches must be taken into account for parsing, analyzing, and scoring documents (CVs, vacancy details). In this paper, we devise an approach that ranks such documents using the word2vec algorithm and matches them to their appropriate pair using the Gale–Shapley algorithm. When ranking a CV, different attributes are taken into consideration: skills, experience, education, and location. The ranks are then used to find appropriate matches between employers and employees with the Gale–Shapley algorithm, which makes it easier for companies to hire the best possible candidates. The scoring and matching methods are explained in the paper.
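The Gale–Shapley matching step can be sketched in a few lines. The candidate-proposing variant below uses hypothetical preference lists in place of the paper's word2vec-derived rankings:

```python
# Gale-Shapley stable matching: candidates "propose" to jobs in order of
# preference; each job tentatively holds the best proposer seen so far.
def gale_shapley(candidate_prefs, job_prefs):
    # rank[j][c] = position of candidate c in job j's preference list
    rank = {j: {c: i for i, c in enumerate(prefs)}
            for j, prefs in job_prefs.items()}
    free = list(candidate_prefs)            # candidates with no match yet
    next_choice = {c: 0 for c in candidate_prefs}
    match = {}                              # job -> candidate
    while free:
        c = free.pop()
        j = candidate_prefs[c][next_choice[c]]
        next_choice[c] += 1
        if j not in match:
            match[j] = c
        elif rank[j][c] < rank[j][match[j]]:
            free.append(match[j])           # current holder becomes free
            match[j] = c
        else:
            free.append(c)                  # rejected; will try next job
    return match

candidate_prefs = {"ana": ["dev", "qa"], "bo": ["dev", "qa"]}
job_prefs = {"dev": ["bo", "ana"], "qa": ["ana", "bo"]}
print(gale_shapley(candidate_prefs, job_prefs))  # {'dev': 'bo', 'qa': 'ana'}
```

The result is stable: no candidate and job both prefer each other over their assigned partners.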
- Published
- 2021
- Full Text
- View/download PDF
26. Fine-Grained Semantic Segmentation of National Costume Grayscale Image Based on Human Parsing
- Author
-
Wei Zou, Jianhou Gan, and Di Wu
- Subjects
Parsing, Color image, Computer science, Image processing and computer vision, Context (language use), Grayscale, Image (mathematics), Consistency, Feature (computer vision), Computer vision, Segmentation, Artificial intelligence
- Abstract
To enhance the understanding of different image regions for automatic colorization of national-costume grayscale images, the coloring task should be able to take advantage of semantic conditions, which requires applying a human-parsing semantic segmentation method to the national-costume grayscale image. This paper proposes a semantic segmentation model with context embedding based on edge perceiving, optimizing the model and loss function for the characteristics of national-costume grayscale images. Semantic segmentation of a grayscale image differs from that of a color image: the task is more difficult because the grayscale image has no color features. In this paper, edge information and edge-consistency constraints are used to improve the coloring effect for national-costume grayscale images. Experimental results show that the proposed model obtains more accurate fine-grained semantic segmentation results for national-costume grayscale images.
- Published
- 2021
- Full Text
- View/download PDF
27. Resume Data Extraction Using NLP
- Author
-
Aman Adhikari, Umang Goyal, Anirudh Negi, Subhash Chand Gupta, and Tanupriya Choudhury
- Subjects
Parsing, Computer science, Download, Process (computing), Upload, Data extraction, Relational model, Formatted text, Artificial intelligence, Relevant information, Natural language processing
- Abstract
We extract valuable and relevant information from a potential employee's CV to ease the hiring process for employers, automating data extraction, parsing documents of multiple formats, and storing the data in a standardized relational database model. The user uploads one or more resumes into the program; the program accepts multiple formats (.pdf, .doc, .rtf, etc.), converts each into a standard text format, parses it for the required information, and organizes the extracted data in a standard defined format. The user can then download the extracted information in .CSV format.
- Published
- 2021
- Full Text
- View/download PDF
28. Literature Survey: Sign Language Recognition Using Gesture Recognition and Natural Language Processing
- Author
-
Prajakta Satav, Minal Sadani, Aditi Patil, Harshada Yesane, and Anagha Kulkarni
- Subjects
Parsing, American Sign Language, Computer science, Sign (semiotics), Sign language, Gesture recognition, Artificial intelligence, Literature survey, Minority language, Natural language processing, Gesture
- Abstract
The deaf communities prevalent in India are still struggling for Indian Sign Language to gain the status of a minority language. A system is required that translates Indian Sign Language into the corresponding English text. For this, visual as well as non-visual input of sign-language signs has to be processed, translated into English words, and then assembled into grammatically correct and meaningful sentences. Researchers have worked on processing input that may be sensor-based or image-based, using videos in their entirety or sampling videos at fixed intervals to determine the trajectories of motions; the input can take any form, i.e., a hardware system for recognizing hand movements, images, or video. This paper focuses on state-of-the-art literature that identifies areas of interest in non-visual inputs, image frames, and video frames for determining the features of a particular hand gesture. The survey also takes into account the approaches researchers have used across different sign languages, such as American Sign Language and Taiwanese Sign Language, which will help develop a perspective for Indian Sign Language. This paper also reviews previous work on translating a video into English using natural language processing techniques such as the Viterbi algorithm, tokenization, part-of-speech tagging, and parsing.
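Of the NLP techniques listed, the Viterbi algorithm is the most self-contained. A toy HMM POS-tagging sketch is shown below; all probabilities are invented for illustration, whereas a real tagger would estimate them from a corpus:

```python
# Viterbi decoding for a toy HMM part-of-speech tagger.
def viterbi(words, tags, start, trans, emit):
    # best[t] = probability of the best tag path ending in tag t
    best = {t: start.get(t, 0) * emit[t].get(words[0], 0) for t in tags}
    back = []                               # back-pointers per position
    for w in words[1:]:
        prev = best
        best, ptr = {}, {}
        for t in tags:
            p, arg = max((prev[s] * trans[s].get(t, 0), s) for s in tags)
            best[t] = p * emit[t].get(w, 0)
            ptr[t] = arg
        back.append(ptr)
    # follow back-pointers from the best final tag
    t = max(best, key=best.get)
    path = [t]
    for ptr in reversed(back):
        t = ptr[t]
        path.append(t)
    return path[::-1]

tags = ["DET", "NOUN"]
start = {"DET": 0.8, "NOUN": 0.2}
trans = {"DET": {"NOUN": 0.9, "DET": 0.1}, "NOUN": {"NOUN": 0.4, "DET": 0.6}}
emit = {"DET": {"the": 0.9}, "NOUN": {"dog": 0.5, "the": 0.01}}
print(viterbi(["the", "dog"], tags, start, trans, emit))  # ['DET', 'NOUN']
```

Production taggers work in log space to avoid underflow on long sentences.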
- Published
- 2021
- Full Text
- View/download PDF
29. Optimal Design of Work Quantification System
- Author
-
Lei Liu, Yingcong Huang, and Wenru Lin
- Subjects
Optimal design, Correctness, Parsing, Computer science, Data management, Variety (cybernetics), Set (abstract data type), Work (electrical), The Internet, Software engineering
- Abstract
In view of the data-management requirements for correct quantitative results, dynamically varying quantification rules, and collaborative result recording, the work quantification system is analyzed and optimized. The core module of work quantification is designed using the dynamic script-parsing features provided by a JavaScript engine, yielding a practical, efficient, and extensible work quantification system. The system's effective data management will bring convenience to all users; in addition, easy access over the Internet makes the analysis and use of achievement data more efficient.
- Published
- 2021
- Full Text
- View/download PDF
30. Text Generation and Enhanced Evaluation of Metric for Machine Translation
- Author
-
Sujit S. Amin and Lata Ragha
- Subjects
Parsing, Machine translation, Grammar, Computer science, Metric (mathematics), Synonym, Document and text processing, Artificial intelligence, Natural language processing, Natural language, Sentence, BLEU
- Abstract
Here the power of a recurrent neural network (RNN) is exhibited for generating grammatically correct new text from given input text and for translating the new text into Hindi, evaluated with a modified bilingual evaluation understudy (BLEU) metric score. Our system aims to generate grammatically correct new text from input sentences or paragraphs and to translate the generated text into Hindi with a high translation score. To ensure grammatical correctness, the natural language toolkit (NLTK) is used for grammar correction at the end of text generation. Since a plain RNN is not very effective for text generation, a gated variant is used for this purpose. The generated text is passed to the machine translation (MT) module. Because human evaluation of MT is time-consuming and results differ from one evaluator to another, automatic assessment of the translation system is needed. The standard BLEU metric does not consider synonyms: a synonym is treated as a separate word. A modified BLEU (M-BLEU) has therefore been developed as the evaluation metric; it adds features such as synonym replacement and shallow parsing modules. The final translation score is given by the BLEU metric. Finally, there are two outputs: the generated text (English) and the translated text with an improved translation score (Hindi).
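The modified n-gram precision at the heart of BLEU, plus the synonym substitution that M-BLEU adds on top, can be sketched as follows. The synonym table and sentences are hypothetical, and this is not a reproduction of the authors' actual M-BLEU:

```python
# BLEU-style modified n-gram precision with a crude synonym table.
from collections import Counter

SYNONYMS = {"quick": "fast"}          # map each synonym to a canonical form

def normalize(tokens):
    return [SYNONYMS.get(t, t) for t in tokens]

def ngram_precision(candidate, reference, n):
    cand = normalize(candidate)
    ref = normalize(reference)
    cand_ngrams = Counter(tuple(cand[i:i+n]) for i in range(len(cand)-n+1))
    ref_ngrams = Counter(tuple(ref[i:i+n]) for i in range(len(ref)-n+1))
    # clip candidate counts by reference counts (the "modified" part)
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    return overlap / max(1, sum(cand_ngrams.values()))

cand = "the fast brown fox".split()
ref = "the quick brown fox".split()
print(ngram_precision(cand, ref, 1))  # 1.0 after synonym normalization
```

Full BLEU combines precisions for n = 1..4 with a brevity penalty; without the synonym table, "fast" vs. "quick" here would cost a unigram match.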
- Published
- 2021
- Full Text
- View/download PDF
31. A Rule-Based Parsing for Bangla Grammar Pattern Detection
- Author
-
K. M. Azharul Hasan, Md. Rifat Anwar, and Aroni Saha Prapty
- Subjects
Parsing, Grammar, Computer science, Context-free grammar, Mathematical logic and formal languages, Rule-based machine translation, CYK algorithm, Terminal and nonterminal symbols, Artificial intelligence, Natural language processing, Generative grammar, Sentence
- Abstract
Rule-based parsing is the task of recognizing the structure of a sentence and assigning suitable rules to parse it. The structures can be assigned by context-free grammars, which are employed to generate accurate parse trees. We have developed formal grammars for various Bangla sentence structures. Since these grammars are inherently ambiguous, the CYK parsing algorithm is applied to verify them; by checking the parse table, grammatical errors are detected wherever it has no entry for a terminal symbol. For recognizing the patterns of Bangla sentences, a domain-specific context-free grammar (CFG) is developed based on the rules of Bangla grammar and applied to the domain; we selected the air traffic information system (ATIS) as our domain. The efficiency of the proposed parsing scheme is demonstrated on different kinds of Bangla sentences, showing high accuracy within the domain.
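The CYK verification step can be sketched for any grammar in Chomsky Normal Form. The toy English grammar below stands in for the paper's Bangla rules, which are not reproduced here:

```python
# CYK recognition for a context-free grammar in Chomsky Normal Form.
def cyk(words, lexical, binary, start="S"):
    n = len(words)
    # table[i][j] = set of nonterminals deriving words[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = {A for A, t in lexical if t == w}
    for span in range(2, n + 1):                 # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                # try every split point
                for A, (B, C) in binary:
                    if B in table[i][k] and C in table[k + 1][j]:
                        table[i][j].add(A)
    return start in table[0][n - 1]

lexical = [("NP", "birds"), ("V", "fly"), ("VP", "fly")]
binary = [("S", ("NP", "VP")), ("VP", ("V", "NP"))]
print(cyk(["birds", "fly"], lexical, binary))   # True
print(cyk(["fly", "birds"], lexical, binary))   # False
```

An empty cell in the top-right of the table corresponds exactly to the abstract's error-detection criterion: the sentence is not derivable from the start symbol.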
- Published
- 2021
- Full Text
- View/download PDF
32. A Contextual Model for Information Extraction in Resume Analytics Using NLP’s Spacy
- Author
-
Channabasamma, Yeresime Suresh, and A. Manusha Reddy
- Subjects
Phrase, Parsing, Computer science, Context (language use), File format, Information extraction, Data visualization, Contextual design, Analytics, Artificial intelligence, Natural language processing
- Abstract
An unstructured document such as a resume comes in different file formats (pdf, txt, doc, etc.), and there is considerable ambiguity and variability in the language used. Such heterogeneity makes extracting useful information a challenging task and gives rise to an urgent need to understand the context in which words occur. This article proposes a machine learning approach to phrase matching in resumes, focusing on the extraction of special skills using spaCy, an advanced natural language processing (NLP) library. The system can analyze and extract detailed information from resumes like a human recruiter; it keeps a count of matched phrases while parsing in order to categorize candidates by their expertise. The decision-making process can be accelerated through data visualization using matplotlib, and relative comparison of candidates can be made to filter them.
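The phrase-counting idea behind spaCy's PhraseMatcher can be illustrated without spaCy itself. This stdlib sketch (the skill list and resume text are hypothetical) slides a window of each phrase's length over the token stream; a real system would use spaCy tokenization and lemmatization instead of `split()`:

```python
# Count occurrences of multi-word skill phrases in a text.
from collections import Counter

SKILLS = ["machine learning", "python", "data visualization"]

def count_skills(text, skills):
    tokens = text.lower().split()
    counts = Counter()
    for skill in skills:
        pattern = skill.split()
        n = len(pattern)
        # slide a window of the phrase's length over the token stream
        for i in range(len(tokens) - n + 1):
            if tokens[i:i+n] == pattern:
                counts[skill] += 1
    return counts

resume = "Built machine learning pipelines in Python with data visualization"
print(count_skills(resume, SKILLS))  # each skill found once
```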
- Published
- 2021
- Full Text
- View/download PDF
33. Proposing Perturbation Application Tool for Portable Data Security in Cloud Computing
- Author
-
Kalpana Sharma, Amit Chaturvedi, and Meetendra Singh Chahar
- Subjects
Parsing, Computer science, Distributed computing, Data security, Perturbation, Cloud computing, Encryption, Field (computer science), Feature, Noise
- Abstract
Cloud computing is an emerging field in which multiple clients share computing resources. Data are stored on shared third-party cloud servers and hence placed in others' hands; if the data are kept in their original form, there is a chance of mishandling. Perturbation is a technique that not only reduces the intelligibility of the data but can convert it into a form that no language parser will recognize. The main feature of this perturbation technique is that it allows perturbation to be implemented in an innovative way. This paper proposes an innovative Perturbation Application Tool (PAT) that implements a noise-addition algorithm together with encryption. We also illustrate the implementation and outcomes of the PAT tool, which is working successfully for securing portable data.
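The noise-addition step can be illustrated with a toy additive-perturbation sketch. The scale and data here are purely illustrative; the paper's actual PAT algorithm and its encryption layer are not reproduced:

```python
# Additive noise perturbation: each value is shifted by zero-mean
# uniform noise proportional to its magnitude, so aggregate statistics
# stay close while individual values are masked.
import random

def perturb(values, scale=0.1, seed=42):
    rng = random.Random(seed)            # seeded for reproducibility
    return [v + rng.uniform(-scale, scale) * abs(v) for v in values]

salaries = [52000, 61000, 48000]
print(perturb(salaries))
```

A deployment along the paper's lines would then encrypt the perturbed records before uploading them to the cloud provider.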
- Published
- 2021
- Full Text
- View/download PDF
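The combination of noise addition with encryption described above can be illustrated with a toy reversible scheme. This is an assumption-laden sketch for intuition only, not the paper's PAT tool: it XOR-"encrypts" with a repeating key and then adds seeded pseudo-random byte noise:

```python
import random

def perturb(data: bytes, key: bytes, seed: int = 7) -> bytes:
    """Toy perturbation: XOR with a repeating key, then add seeded
    pseudo-random noise (mod 256) so the output resists casual parsing.
    NOT cryptographically secure; illustrative only."""
    rng = random.Random(seed)
    out = bytearray()
    for i, b in enumerate(data):
        enc = b ^ key[i % len(key)]
        out.append((enc + rng.randrange(256)) % 256)
    return bytes(out)

def restore(data: bytes, key: bytes, seed: int = 7) -> bytes:
    """Invert perturb(): subtract the same noise stream, then XOR again."""
    rng = random.Random(seed)
    out = bytearray()
    for i, b in enumerate(data):
        enc = (b - rng.randrange(256)) % 256
        out.append(enc ^ key[i % len(key)])
    return bytes(out)
```

Because both steps are invertible given the key and seed, the owner can recover the original data while an observer sees only noise-like bytes.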
34. Collective Examinations of Documents on COVID-19 Peril Factors Through NLP
- Author
-
Ch. Usha Kumari, E. Laxmi Lydia, Jose Moses Gummadi, B. Prasad, Ravuri Daniel, and Chinmaya Ranjan Pattanaik
- Subjects
2019-20 coronavirus outbreak ,Parsing ,Coronavirus disease 2019 (COVID-19) ,business.industry ,Computer science ,Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) ,Interpretation (philosophy) ,Cytotoxic chemotherapy ,computer.software_genre ,Identification (information) ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
The outbreak of the novel COVID-19 virus has prompted experimental scientific work across the board to help victims fight the pandemic. The problem is the large number of scientific COVID-19 articles covering different risk factors. Quickly identifying the relevant documents allows investigators to process and interpret the essential knowledge they inevitably need. This article provides a solution by creating an unsupervised framework for the interpretation of clinical trials on COVID-19 risk factors, drawing on a diverse range of articles related to vaccines and treatments from a large corpus of documents. It also provides practical, informative knowledge regarding COVID-19 risk factors and enables any single researcher to obtain the appropriate information. The application uses artificial intelligence and natural language processing approaches, incorporated into the search engine, to search for keywords and classify categories over normalized linguistic data. The text is parsed into phrases and thresholded, with data-frame components recognized to produce the relevant outcomes.
- Published
- 2021
- Full Text
- View/download PDF
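The keyword-search-and-threshold step described above can be sketched as a simple category classifier. The keyword map and threshold here are hypothetical placeholders, not the paper's actual vocabulary:

```python
# Hypothetical risk-factor keyword map; the real framework derives its
# vocabulary from the normalized corpus rather than a hand-written dict.
RISK_KEYWORDS = {
    "comorbidity": ["diabetes", "hypertension", "obesity"],
    "treatment": ["vaccine", "antiviral", "chemotherapy"],
}

def classify_document(text, keyword_map=RISK_KEYWORDS, threshold=1):
    """Assign a document to every category whose keywords appear at
    least `threshold` times (a stand-in for the thresholding step)."""
    tokens = text.lower().split()
    labels = []
    for label, words in keyword_map.items():
        hits = sum(tokens.count(w) for w in words)
        if hits >= threshold:
            labels.append(label)
    return labels
```

Run over a corpus, this yields per-category document lists that a researcher can scan instead of reading every article.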
35. Sentiment Analysis Combination in Terrorist Detection on Twitter: A Brief Survey of Approaches and Techniques
- Author
-
Salam Al-augby and Esraa Najjar
- Subjects
Naive Bayes classifier ,Parsing ,Computer science ,Emerging technologies ,Terrorism ,Sentiment analysis ,Decision tree ,Social media ,AdaBoost ,computer.software_genre ,computer ,Data science - Abstract
Terrorism is a major concern for many governments and people, especially as terrorists use social media platforms such as Twitter alongside other new technologies to carry out their actions and plans. Technology can play an important role in providing accurate predictions of terrorist activity. Here, we survey work that applies sentiment analysis to terrorism-related Twitter data, because early detection of terrorist activity is essential for responding to recent attacks and combating the spread of global terrorism. This work studies techniques for effective analysis of terrorist activity data on Twitter. It is based on 17 articles that used Twitter to study terrorism for different purposes, and it highlights the different techniques used. From this survey one can observe that machine learning techniques such as AdaBoost, support vector machines, maximum entropy, Naive Bayes, and decision tree algorithms were used most often for sentiment analysis, with good accuracy depending on the data used. Far fewer papers analyzed tweets in Arabic than in English, because the complexity of parsing Arabic, combined with the difficulty of analyzing sentiment in it, makes these tasks more challenging.
- Published
- 2021
- Full Text
- View/download PDF
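Of the classifier families the survey reports, Naive Bayes is compact enough to sketch from scratch. This is a generic multinomial Naive Bayes with Laplace smoothing on toy data, not any surveyed paper's actual model or dataset:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, doc):
        best, best_score = None, -math.inf
        total = sum(self.class_counts.values())
        for label in self.class_counts:
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in doc.lower().split():
                score += math.log((self.word_counts[label][w] + 1) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

nb = NaiveBayes().fit(
    ["great peaceful rally", "attack threat bomb", "love peace"],
    ["normal", "threat", "normal"])
```

On real tweet data the same skeleton would be trained on labeled sentiment classes; the survey notes accuracy depends heavily on that data.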
36. Quick Response Code Based on Least Significant Bit
- Author
-
Zhuohao Weng, Yan Zhang, Jian Zhang, and Cui Qin
- Subjects
Scheme (programming language) ,Parsing ,Pixel ,Computer science ,ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION ,Process (computing) ,computer.software_genre ,Image (mathematics) ,Least significant bit ,Transmission (telecommunications) ,Code (cryptography) ,Arithmetic ,computer ,computer.programming_language - Abstract
A quick response (QR) code design based on least significant bit (LSB) image embedding is studied. The basic principle of the research is to put the required information into the QR code first and then embed the QR code into the lowest bit of the pixels of a background image, thereby hiding the information. When the information needs to be read from the obtained picture, the lowest bit of each pixel is extracted programmatically, and the hidden information is recovered by parsing the reconstructed QR code picture. In this scheme, the information is placed in the QR code picture, and the QR code is merged into the least significant bits of the carrier picture; it is shown that the result cannot be recognized by the naked eye. This method ensures the safety and accuracy of the information during the transmission process and greatly improves the visual aesthetics of the QR code.
- Published
- 2021
- Full Text
- View/download PDF
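The embed-and-extract principle above is straightforward to sketch on a flat list of grayscale pixel values. This is a minimal illustration of LSB embedding in general, not the paper's specific QR-merging implementation:

```python
def embed_lsb(pixels, bits):
    """Hide one bit per pixel in the least significant bit.
    `pixels` is a flat list of 0-255 grayscale values; `bits` is a
    list of 0/1 values (e.g. a flattened QR-code module matrix)."""
    assert len(bits) <= len(pixels)
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # clear LSB, then set it
    return stego

def extract_lsb(pixels, n):
    """Recover the first n hidden bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n]]
```

Since each pixel value changes by at most 1, the carrier image is visually indistinguishable from the original, which is why the scheme defeats the naked eye.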
37. Enhancing Periodic Storage Performance in IoT-Based Waste Management
- Author
-
M. Nishanth, Omkar Shrenik Khot, D. Naveen Kumar, P. I. Basarkod, and S. Pavan Kumar
- Subjects
Parsing ,Database ,business.industry ,Computer science ,computer.software_genre ,Object (computer science) ,Stern ,Unsupervised learning ,Cluster analysis ,business ,Internet of Things ,computer ,Agile software development ,Garbage collection - Abstract
Every day, ubiquitous devices are becoming smarter and more interconnected. Due to rapid growth in the Internet of things (IoT), a system of interrelated computing devices, every object can now be uniquely identified and made to interact with others. This strategy has been applied to dustbins to supervise garbage collection and provide diverse, valuable insights. Our procedure follows the same route, not only monitoring waste accumulation but also optimizing collection by applying machine learning. Using unsupervised learning, we draw on k-means clustering, widely employed in data mining and analysis. Our physical device captures the dustbin fill level with an ultrasonic sensor. The key dataset attributes produced are examined by the k-means algorithm to find the particular time intervals of the day when a periodic clean-off is required, so that the dustbins stay free of waste for the maximum attainable time. The algorithm also indicates locations where additional dustbins should be installed for further improvement: by examining a single cluster and identifying the points that lie farthest from their nearest centroid, together with the associated bin details, a new waste collector can be installed at those positions. The optimization results show a favorable effect on the data produced.
- Published
- 2021
- Full Text
- View/download PDF
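The clustering step above, finding the times of day when bins tend to fill, can be sketched with a plain one-dimensional k-means. The fill-time data and initial centroids here are hypothetical:

```python
def kmeans_1d(points, centers, iters=20):
    """Plain k-means on 1-D data (e.g. hours of day when bins filled up).
    `centers` are the initial centroids; returns (centroids, clusters)."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assign each point to its nearest centroid
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # recompute centroids (keep old value if a cluster empties)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical hours at which bins reported "full"
fill_hours = [8, 9, 8.5, 17, 18, 17.5]
centers, clusters = kmeans_1d(fill_hours, [5, 20])
```

The resulting centroids (here a morning and an evening peak) are the candidate periodic clean-off times; points far from every centroid mark locations for additional bins.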
38. Static and Dynamic Learning-Based PDF Malware Detection classifiers—A Comparative Study
- Author
-
Awadhesh Kumar Shukla, Sripada Manasa Lakshmi, and N. S. Vishnu
- Subjects
Software portability ,Information retrieval ,Parsing ,Computer science ,Feature extraction ,Obfuscation ,Classifier (linguistics) ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,Code (cryptography) ,Malware ,F1 score ,computer.software_genre ,computer - Abstract
Malicious software still accounts for a substantial threat to the cyber world. Document files are among the most widely used vectors for infecting systems with malware: the attacker blends malicious code into benign document files to carry out the attack. The portable document format (PDF) is the most commonly used format for sharing documents because of its portability and light weight. In this modern era, attackers implement highly advanced techniques to obfuscate malware inside the document file, making it difficult for malware detection classifiers to classify the document correctly. These classifiers fall into two main types, namely static and dynamic. In this paper, we survey various static and dynamic learning-based PDF malware classifiers to understand their architectures and working procedures. We also present the structure of PDF files to show the sections of a PDF document where malicious code can be implanted. Finally, we perform a comparative study of the surveyed classifiers based on their true positive percentages and F1 scores.
- Published
- 2021
- Full Text
- View/download PDF
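A common static feature set for such classifiers counts structurally suspicious PDF keywords (the approach popularized by tools like PDFiD). The sketch below shows the idea on a synthetic byte string; it is not any surveyed classifier's actual feature extractor:

```python
# PDF name objects often flagged in static analysis: /JavaScript scripts,
# /OpenAction auto-run entries, /Launch commands, embedded files.
SUSPICIOUS = [b"/JavaScript", b"/OpenAction", b"/Launch", b"/EmbeddedFile"]

def static_features(pdf_bytes):
    """Return a keyword-count feature vector for one PDF byte string."""
    return {kw.decode(): pdf_bytes.count(kw) for kw in SUSPICIOUS}

doc = b"%PDF-1.4 1 0 obj << /OpenAction 2 0 R /JavaScript (app.alert(1)) >>"
features = static_features(doc)
```

Vectors like this, one row per document, are what the static learning-based classifiers in the survey train on; dynamic classifiers instead observe the document's behavior at open time.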
39. Part-of-Speech Annotation
- Author
-
Niladri Sekhar Dash
- Subjects
Text corpus ,Parsing ,business.industry ,Computer science ,Part of speech ,computer.software_genre ,language.human_language ,Metadata ,Annotation ,Bengali ,Chunking (psychology) ,language ,Artificial intelligence ,business ,Cognitive linguistics ,computer ,Natural language processing - Abstract
Annotating words at the part-of-speech level, either manually or by a machine, is a tough task. It is done effectively when human annotators, as well as computer systems, are properly trained so that they can correctly identify morphological properties and syntactic functions of words in a piece of text. We discuss in this chapter some theoretical aspects and practical issues of part-of-speech (POS) annotation on a written Bengali text corpus. We deliberately avoid all those issues and aspects that are required to design and develop an automatic POS annotation tool for a text, since this is not the goal of this chapter. To keep things simple and within the capacity of those readers who are not well-versed in the application of computers, we address here some of the primary concerns and challenges involved in POS annotation. Starting with the basic concept of POS annotation, we highlight the underlying differences between POS annotation and morphological processing; define the levels and stages of POS annotation; refer to some of the early works on POS annotation; present a generic scheme for POS annotation; and show how a POS annotated text is utilized in various domains and sub-domains of theoretical, descriptive, applied, computational, and cognitive linguistics. The data and information presented in this chapter are primarily meant for the students of those less-advanced languages which still lack linguistic resources like POS annotated texts. The rudimentary ideas and information that are presented in this chapter may be treated as valuable and usable inputs for designing linguistic and computational models for POS annotation in these less-advanced languages.
- Published
- 2021
- Full Text
- View/download PDF
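A generic POS annotation scheme of the kind the chapter outlines is often serialized as word/TAG pairs. A minimal reader for that format is sketched below; the tagset shown is hypothetical, not the chapter's Bengali scheme:

```python
def read_pos_annotation(line, sep="/"):
    """Split a 'word/TAG word/TAG ...' line into (word, tag) pairs,
    using the LAST separator so words containing '/' survive intact."""
    pairs = []
    for token in line.split():
        word, _, tag = token.rpartition(sep)
        pairs.append((word, tag))
    return pairs
```

Downstream uses mentioned in the chapter (chunking, corpus queries, lexicon building) all start from pair lists like the one this produces.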
40. An Implementation of Text Mining Decision Feedback Model Using Hadoop MapReduce
- Author
-
Swetaleena Sahoo, Manjusha Pandey, Siddharth Swarup Rautaray, and Swagat Khatai
- Subjects
Parsing ,Computer science ,business.industry ,Big data ,Predictive analytics ,computer.software_genre ,Set (abstract data type) ,Text mining ,Knowledge extraction ,Stemming ,Data mining ,Cluster analysis ,business ,computer - Abstract
A very large amount of unstructured text data is generated every day on the Internet as well as in real life. Text mining has dramatically lifted the commercial value of these data by pulling out previously unknown potential patterns. Text mining uses algorithms from data mining, statistics, machine learning, and natural language processing for hidden knowledge discovery in unstructured text data. This paper surveys the extensive research done on text mining in recent years and then discusses the overall text mining process along with some high-end applications. The entire process is divided into modules: text parsing, text filtering, transformation, clustering, and predictive analytics. A more efficient and sophisticated text mining model with decision-feedback perception is also proposed, which is more advanced than conventional models, providing better accuracy and serving broader objectives. The text filtering module is discussed in detail with the implementation of word stemming algorithms, the Lovins stemmer and the Porter stemmer, using MapReduce. The implementation setup is a single-node Hadoop cluster operating in pseudo-distributed mode. An enhanced implementation technique, Porter stemmer with partitioner (PSP), is also proposed. A comparative analysis of the three algorithms using MapReduce shows that PSP provides better stemming performance than the Lovins and Porter stemmers. Experimental results show that PSP provides 20-25% more stemming capacity than the Lovins stemmer and 3-15% more than the Porter stemmer.
- Published
- 2021
- Full Text
- View/download PDF
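The map/reduce shape of the stemmed word count above can be sketched without Hadoop. The suffix stripper here is a deliberately toy stand-in for the Lovins/Porter stemmers (the real Porter algorithm applies several ordered rule steps):

```python
from collections import defaultdict

def simple_stem(word):
    """Toy suffix stripper; NOT the Porter or Lovins algorithm."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

def map_phase(text):
    """Mapper: emit (stem, 1) for every token, as a Hadoop mapper would."""
    return [(simple_stem(w.lower()), 1) for w in text.split()]

def reduce_phase(pairs):
    """Reducer: sum the counts per stem key."""
    counts = defaultdict(int)
    for stem, n in pairs:
        counts[stem] += n
    return dict(counts)

counts = reduce_phase(map_phase("parsing parsed parses filtering"))
```

The paper's PSP variant additionally inserts a custom partitioner between these two phases so that keys are balanced across reducers.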
41. Dependency to Semantics: Structure Transformation and Syntax-Guided Attention for Neural Semantic Parsing
- Author
-
Bo Chen, Shan Wu, Xianpei Han, and Le Sun
- Subjects
Structure (mathematical logic) ,Parsing ,Dependency (UML) ,Syntax (programming languages) ,business.industry ,Computer science ,computer.software_genre ,Semantics ,Consistency (database systems) ,Meaning (philosophy of language) ,TheoryofComputation_MATHEMATICALLOGICANDFORMALLANGUAGES ,TheoryofComputation_LOGICSANDMEANINGSOFPROGRAMS ,Artificial intelligence ,business ,computer ,Natural language processing ,Sentence - Abstract
It has long been known that the syntactic structure and the semantic representation of a sentence are closely associated [1, 2]. However, it is still a hard problem to exploit the syntactic-semantic correspondence in end-to-end neural semantic parsing, mainly due to the partial consistency between their structures. In this paper, we propose a neural dependency to semantics transformation model – Dep2Sem, which can effectively learn the structure correspondence between dependency trees and formal meaning representations. Based on Dep2Sem, a dependency-informed attention mechanism is proposed to exploit syntactic structure for neural semantic parsing. Experiments on Geo, Jobs, and Atis benchmarks show that our approach can significantly enhance the performance of neural semantic parsers.
- Published
- 2021
- Full Text
- View/download PDF
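The idea of letting syntax guide attention can be illustrated with a hard 0/1 mask built from dependency edges. This is a simplified sketch of the general technique; the paper's Dep2Sem model learns soft attention weights rather than a binary mask:

```python
def dependency_attention_mask(n, edges):
    """Build an n-by-n 0/1 mask that lets each token attend only to
    itself and its dependency-tree neighbors (head-dependent pairs)."""
    mask = [[0] * n for _ in range(n)]
    for i in range(n):
        mask[i][i] = 1                  # always attend to self
    for head, dep in edges:
        mask[head][dep] = 1             # symmetric: head <-> dependent
        mask[dep][head] = 1
    return mask

# Hypothetical 3-token sentence with edges 0->1 and 1->2
mask = dependency_attention_mask(3, [(0, 1), (1, 2)])
```

Multiplying (or adding, in log space) such a mask into the attention scores biases the parser toward syntactically related tokens.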
42. Resume Screening Using Natural Language Processing and Machine Learning: A Systematic Review
- Author
-
Md. Amir Khusru Akhtar, Arvind Kumar Sinha, and Ashwani Kumar
- Subjects
Parsing ,Syntax (programming languages) ,Computer science ,business.industry ,Semantic search ,Context (language use) ,Unstructured data ,computer.software_genre ,Machine learning ,Writing style ,Written language ,Artificial intelligence ,business ,computer ,Scope (computer science) ,Natural language processing - Abstract
Curriculum vitae or resume screening is a time-consuming procedure. Natural language processing and machine learning have the capability to understand and parse unstructured written language and extract the desired information; the idea is to train the machine to analyze written documents like a human being. This paper presents a systematic review of resume screening and compares recognized works. Several machine learning techniques and approaches for evaluating and analyzing unstructured data are discussed. Existing resume parsers use semantic search to understand the context of the language in order to find reliable and comprehensive results, and a review of the use of semantic search for context-based searching is given. In addition, this paper identifies the research challenges and future scope of resume parsing with respect to the writing style, word choice, and syntax of unstructured written language.
- Published
- 2021
- Full Text
- View/download PDF
43. Performance Evaluation of Clustering Techniques for Financial Crisis Prediction
- Author
-
R. Madhanmohan, R. Arunkumar, and S. Anand Christy
- Subjects
Parsing ,Computer science ,Process (engineering) ,business.industry ,media_common.quotation_subject ,Certainty ,computer.software_genre ,Machine learning ,Upgrade ,Work (electrical) ,Financial crisis ,Artificial intelligence ,Off time ,Cluster analysis ,business ,computer ,media_common - Abstract
Nowadays, financial crisis prediction (FCP) is becoming increasingly important in business markets. As organizations gather ever more data from day-to-day activities, they hope to draw useful decisions from the gathered data to aid practical assessments of new client demands, e.g., client credit classification, the certainty of expected returns, and so forth. Banks as well as financial institutions have applied diverse data mining methods to improve their business performance. Among these strategies, clustering is considered a major technique for capturing the natural structure of data. However, there are very few studies of clustering methodologies for FCP. In this work, we evaluate two clustering algorithms, namely the k-means and farthest-first clustering algorithms, for parsing distinct financial datasets spanning different time periods and trades. The evaluation was conducted on the Weislaw, Polish, and German datasets. The simulation results show that the k-means clustering algorithm outperforms the farthest-first algorithm on all the applied datasets.
- Published
- 2021
- Full Text
- View/download PDF
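The less familiar of the two algorithms compared above, farthest-first, is easy to sketch: it seeds centers by repeatedly taking the point farthest from every center chosen so far. A minimal one-dimensional version on hypothetical data:

```python
def farthest_first(points, k, start=0):
    """Farthest-first traversal: pick a start point, then repeatedly
    add the point whose distance to its nearest chosen center is largest."""
    centers = [points[start]]
    while len(centers) < k:
        next_pt = max(points, key=lambda p: min(abs(p - c) for c in centers))
        centers.append(next_pt)
    return centers

centers = farthest_first([1.0, 1.2, 5.0, 5.1, 9.0], 3)
```

Each remaining point is then assigned to its nearest center. Unlike k-means, the centers are actual data points and there is no iterative refinement, which is one reason k-means can achieve tighter clusters on the financial datasets the paper evaluates.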
44. Building a Knowledge Graph of Vietnam Tourism from Text
- Author
-
Hung Le and Phuc Do
- Subjects
Parsing ,business.industry ,Process (engineering) ,Computer science ,Vietnamese ,computer.software_genre ,Pipeline (software) ,language.human_language ,Task (project management) ,World Wide Web ,ComputingMethodologies_DOCUMENTANDTEXTPROCESSING ,language ,The Internet ,Paragraph ,business ,computer ,Tourism - Abstract
Most data in the world is in the form of text; we can therefore say that text stores a large amount of human knowledge. Extracting useful knowledge from text, however, is not a simple task. In this paper, we present a complete pipeline to extract knowledge from paragraphs. The pipeline combines state-of-the-art systems in order to yield optimal results. There are other knowledge graphs, such as the Google Knowledge Graph, YAGO, and DBpedia, but most of their data is in English. The results from our system, on the other hand, are used to build a new Vietnamese knowledge graph of Vietnam tourism. We use a resource-rich language, English, to process a low-resource language, Vietnamese, utilizing English NLP tools such as Google Translate, the Stanford parser, coreference resolution, ClausIE, and MinIE. We use Google Search to find text on the Internet describing the entities; this text is in Vietnamese. We then translate the Vietnamese text into English and use English NLP tools to extract triples. Finally, we translate the triples back into Vietnamese and build the knowledge graph of Vietnam tourism. We conduct experiments and discuss the advantages and disadvantages of our method.
- Published
- 2021
- Full Text
- View/download PDF
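The target of the pipeline above is a store of (subject, relation, object) triples. A minimal triple store with pattern queries can be sketched as follows (the example facts are hypothetical):

```python
class KnowledgeGraph:
    """A minimal store of (subject, relation, object) triples,
    the target representation of an extraction pipeline like the above."""

    def __init__(self):
        self.triples = set()

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))

    def query(self, subj=None, rel=None, obj=None):
        """Match triples against a pattern; None acts as a wildcard."""
        return [t for t in self.triples
                if (subj is None or t[0] == subj)
                and (rel is None or t[1] == rel)
                and (obj is None or t[2] == obj)]

kg = KnowledgeGraph()
kg.add("Ha Long Bay", "located_in", "Vietnam")
kg.add("Hoi An", "located_in", "Vietnam")
```

In the paper's pipeline, the triples fed into such a store come from ClausIE/MinIE over the translated English text, then get translated back to Vietnamese before storage.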
45. Novel Design Approach for Optimal Execution Plan and Strategy for Query Execution
- Author
-
Rajendra D. Gawali and Subhash K. Shinde
- Subjects
Task (computing) ,Parsing ,Database ,Computer science ,Feature extraction ,InformationSystems_DATABASEMANAGEMENT ,Plan (drawing) ,Ideal solution ,Reuse ,computer.software_genre ,Query optimization ,computer ,Expression (mathematics) - Abstract
Query optimization is a challenging task for database management researchers. After a query is parsed during query processing, various query execution plans are generated in the query optimization step. The job of the query optimizer is to propose an optimal plan that can evaluate the given relational expression at a reasonably low cost. For every new input query instance, generating multiple execution plans and identifying an efficient optimal plan among them is always challenging in terms of resource consumption and the costs associated with optimization. As the number of plans increases, it can take longer to find a good plan. Thus, to make query optimization practical and efficient, reusing existing execution plans provides an ideal solution for new instances of equivalent old queries.
- Published
- 2021
- Full Text
- View/download PDF
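The plan-reuse idea above amounts to caching plans under a normalized query key. A minimal sketch (the normalization shown is far cruder than what a real optimizer does, and the `optimizer` callable is a hypothetical stand-in for plan generation):

```python
import re

def normalize(sql):
    """Canonicalize a query so equivalent old queries share a cache key.
    Real systems normalize much more aggressively (literals, aliases...)."""
    return re.sub(r"\s+", " ", sql.strip().lower())

class PlanCache:
    def __init__(self):
        self.plans, self.hits = {}, 0

    def get_plan(self, sql, optimizer):
        key = normalize(sql)
        if key in self.plans:
            self.hits += 1                   # reuse: skip plan enumeration
        else:
            self.plans[key] = optimizer(sql)  # expensive plan generation
        return self.plans[key]

cache = PlanCache()
make_plan = lambda q: ("seq_scan", normalize(q))  # hypothetical optimizer
p1 = cache.get_plan("SELECT * FROM t", make_plan)
p2 = cache.get_plan("select *   from T", make_plan)
```

The second call never enters the optimizer, which is exactly the saving the paper targets for equivalent recurring queries.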
46. Improving Question Answering over Knowledge Base with External Linguistic Knowledge
- Author
-
Xiaowang Zhang and Peiyun Wu
- Subjects
Parsing ,Relation (database) ,business.industry ,Computer science ,media_common.quotation_subject ,Representation (systemics) ,Ambiguity ,Semantics ,computer.software_genre ,Linguistics ,Knowledge base ,Question answering ,business ,computer ,media_common ,Meaning (linguistics) - Abstract
Semantic parsing is an important method for question answering over knowledge bases (KBQA), transforming a question into logical queries that retrieve answers. Existing works largely focus on fine-grained relation representation while ignoring the latent semantic information behind the implicit meaning of relations. In this paper, we leverage external linguistic knowledge (ELK) to enhance relation semantics, where ELK is used to remove the ambiguity of words occurring in a relation. Moreover, we present a sense-based attention mechanism for word-level relation representation and a graph attention network (GAT)-based question encoder. Experiments on two datasets show that our model outperforms existing approaches.
- Published
- 2021
- Full Text
- View/download PDF
47. Towards Nested and Fine-Grained Open Information Extraction
- Author
-
Jiawei Wang, Zheng Xin, Zhigang Chen, Jiajie Xu, Jianfeng Qu, Qiang Yang, and Zhixu Li
- Subjects
Parsing ,business.industry ,Computer science ,Process (engineering) ,computer.software_genre ,Information extraction ,Task (computing) ,Empirical research ,Knowledge extraction ,Simple (abstract algebra) ,Data mining ,business ,computer ,Biomedicine - Abstract
Open information extraction is a crucial task in natural language processing with wide applications. Existing efforts only extract simple flat triplets that are not minimized, neglecting triplets of other kinds and their nested combinations; as a result, they cannot provide comprehensive extraction results for downstream tasks. In this paper, we define three more fine-grained types of triplets and also attend to the nested combinations of these triplets. In particular, we propose a novel end-to-end joint extraction model that jointly identifies the basic semantic elements, the comprehensive types of triplets, and their nested combinations from plain text. In this way, information is shared more thoroughly throughout the parsing process, which also lets the model achieve more fine-grained knowledge extraction without relying on external NLP tools or resources. Our empirical study on datasets from two domains, building codes and biomedicine, demonstrates the effectiveness of our model compared to state-of-the-art approaches.
- Published
- 2021
- Full Text
- View/download PDF
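A nested triplet, one whose object slot is itself a triplet, can be represented and unpacked as follows. This is a generic illustration of the data shape, not the paper's model; the example fact is hypothetical:

```python
def flatten_triples(triple):
    """Recursively yield every (subject, relation, object) contained in a
    nested triple whose object may itself be a triple."""
    subj, rel, obj = triple
    yield (subj, rel, obj)
    if isinstance(obj, tuple):
        yield from flatten_triples(obj)

# Hypothetical building-code style nested fact
nested = ("inspector", "requires", ("wall", "satisfies", "code"))
flat = list(flatten_triples(nested))
```

Flattening like this is what lets downstream consumers that only understand flat triples still use the nested extraction output.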
48. Automatic Question and Answer Generation from Text Using Neural Networks
- Author
-
Amal Saha, Praveen Kumar, and Sonam Soni
- Subjects
Questions and answers ,Parsing ,Artificial neural network ,Syntax (programming languages) ,Computer science ,business.industry ,computer.software_genre ,Test (assessment) ,Recurrent neural network ,Action (philosophy) ,Knowledge building ,Artificial intelligence ,business ,computer ,Natural language processing - Abstract
Question generation is a major activity in educational learning. Questions provide various levels of difficulty in the educational learning procedure. Building question banks is costly for computer-assisted evaluation: when setting exercise questions, teachers use test generators to build them. This paper introduces a system that automatically produces questions with answers by applying natural language processing and OpenNMT. Previous work on automatic question generation creates questions from sentences using syntactic and semantic parsers. This paper presents a new approach for generating questions and answers.
- Published
- 2021
- Full Text
- View/download PDF
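The older rule-based line of work mentioned above can be illustrated with a single transformation rule. This toy sketch is a stand-in for the parser-based prior work, not the paper's OpenNMT model:

```python
import re

def generate_question(sentence):
    """Turn a simple 'X is Y.' sentence into a (question, answer) pair;
    returns None when the pattern does not apply."""
    m = re.match(r"(.+?) is (.+?)\.?$", sentence.strip())
    if not m:
        return None
    subject, answer = m.groups()
    return (f"What is {subject}?", answer)
```

A neural system like the paper's learns such transformations from data instead of hand-written patterns, which is what lets it cover sentences this single rule cannot.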
49. A User-Define Method of Coding Rule Checking Using HAL
- Author
-
Yao Chunyue, You Jing, and Sun Yuming
- Subjects
Parsing ,business.industry ,Computer science ,Programming language ,Verilog Procedural Interface ,computer.software_genre ,Front and back ends ,Software ,Rule checking ,VHDL ,business ,Cadence ,Hardware_REGISTER-TRANSFER-LEVELIMPLEMENTATION ,computer ,computer.programming_language ,Coding (social sciences) - Abstract
A user-defined method of coding rule checking is proposed in this paper to help designers improve product quality and shorten the development cycle. It solves some special coding rule checking problems. The method is implemented by means of the Cadence HAL software, using the Verilog procedural interface (VPI) to check Verilog code, the VHDL procedural interface (VHPI) to check VHDL code, and the common front-end procedural interface (CPI) to check common code. The user-defined method of coding rule checking comprises three steps. The first step is compiling the design with Cadence ncvlog or ncvhdl. The second step is using HAL to call VPI, VHPI, or CPI to implement the user-defined rule. The third step is compiling the user-defined rule. An implementation example is presented, and 24 rules are defined using the method in practice. The results show that the user-defined method can implement new rules without having to parse the syntax structure in advance.
- Published
- 2020
- Full Text
- View/download PDF
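What a user-defined coding rule checks can be illustrated with a naming rule over Verilog source. The real method implements rules as VPI/VHPI/CPI callbacks over the compiled design; the Python regex sketch below (with a hypothetical `r_` register-prefix rule) only conveys the kind of check involved:

```python
import re

# Hypothetical rule: every `reg` identifier must carry an "r_" prefix.
REG_DECL = re.compile(r"^\s*reg\b[^;]*?(\w+)\s*;")

def check_reg_prefix(source):
    """Return the names of `reg` declarations violating the naming rule."""
    violations = []
    for line in source.splitlines():
        m = REG_DECL.match(line)
        if m and not m.group(1).startswith("r_"):
            violations.append(m.group(1))
    return violations

src = "reg [7:0] r_count;\nreg state;\nwire w;"
bad = check_reg_prefix(src)
```

Working over the compiled design database instead of raw text, as VPI callbacks do, is what frees the paper's method from parsing the syntax structure itself.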
50. Research on Tibetan Phrase Classification Method for Language Information Processing
- Author
-
Zangtai Cai, Nancairang Suo, and Rangjia Cai
- Subjects
Space (punctuation) ,Markup language ,Phrase ,Parsing ,Machine translation ,Computer science ,business.industry ,media_common.quotation_subject ,Information processing ,computer.software_genre ,Field (computer science) ,Artificial intelligence ,Function (engineering) ,business ,computer ,Natural language processing ,media_common - Abstract
Phrases, as a level of linguistic analysis, occupy a very important position in this field. Effective phrase analysis is crucial for reducing the difficulty of subsequent syntactic analysis and the search space of a syntactic analyzer, as well as for improving the accuracy of machine translation. At present, research on Tibetan phrases for information processing has just started and needs to be developed further. Based on previous studies of the boundary between Tibetan phrases and Tibetan sentences, and taking into account the characteristics and requirements of Tibetan information processing, this paper discusses the classification of phrases according to grammatical function and the principle of automatic analysis, and specifies markup codes for Tibetan phrase units in information processing.
- Published
- 2020
- Full Text
- View/download PDF