26 results for "SANYAL, DEBARSHI KUMAR"
Search Results
2. Transfer Learning and Transformer Architecture for Financial Sentiment Analysis
- Author
-
Rehman, Tohida, Bose, Raghubir, Chattopadhyay, Samiran, and Sanyal, Debarshi Kumar
- Subjects
Computer Science - Computation and Language - Abstract
Financial sentiment analysis allows financial institutions such as banks and insurance companies to better manage the credit scoring of their customers. The financial domain uses specialized vocabulary and mechanisms, which makes sentiment analysis difficult. In this paper, we propose a pre-trained language model that can help solve this problem with less labelled data. We build on the principles of transfer learning and the Transformer architecture, and also take into consideration recent outbreaks of pandemics such as COVID. We apply sentiment analysis to two different sets of data. We also take a smaller training set and fine-tune the model on it., Comment: 12 pages, 9 figures
- Published
- 2024
- Full Text
- View/download PDF
3. GINopic: Topic Modeling with Graph Isomorphism Network
- Author
-
Adhya, Suman and Sanyal, Debarshi Kumar
- Subjects
Computer Science - Computation and Language, Computer Science - Machine Learning - Abstract
Topic modeling is a widely used approach for analyzing and exploring large document collections. Recent research efforts have incorporated pre-trained contextualized language models, such as BERT embeddings, into topic modeling. However, they often neglect the intrinsic informational value conveyed by mutual dependencies between words. In this study, we introduce GINopic, a topic modeling framework based on graph isomorphism networks to capture the correlation between words. By conducting intrinsic (quantitative as well as qualitative) and extrinsic evaluations on diverse benchmark datasets, we demonstrate the effectiveness of GINopic compared to existing topic models and highlight its potential for advancing topic modeling., Comment: Accepted as a long paper for NAACL 2024 main conference
- Published
- 2024
4. Automatic Recognition of Learning Resource Category in a Digital Library
- Author
-
Banerjee, Soumya, Sanyal, Debarshi Kumar, Chattopadhyay, Samiran, Bhowmick, Plaban Kumar, and Das, Partha Pratim
- Subjects
Computer Science - Digital Libraries, Computer Science - Computer Vision and Pattern Recognition - Abstract
Digital libraries often face the challenge of processing a large volume of diverse document types. The manual collection and tagging of metadata can be a time-consuming and error-prone task. To address this, we aim to develop an automatic metadata extractor for digital libraries. In this work, we introduce the Heterogeneous Learning Resources (HLR) dataset designed for document image classification. The approach involves decomposing individual learning resources into constituent document images (sheets). These images are then processed through an OCR tool to extract textual representation. State-of-the-art classifiers are employed to classify both the document image and its textual content. Subsequently, the labels of the constituent document images are utilized to predict the label of the overall document., Comment: 2 pages, 3 figures, Published in JCDL 21
- Published
- 2023
- Full Text
- View/download PDF
5. Hallucination Reduction in Long Input Text Summarization
- Author
-
Rehman, Tohida, Mandal, Ronit, Agarwal, Abhishek, and Sanyal, Debarshi Kumar
- Subjects
Computer Science - Computation and Language, Computer Science - Information Retrieval, Computer Science - Machine Learning - Abstract
Hallucination in text summarization refers to the phenomenon where the model generates information that is not supported by the input source document. Hallucination poses significant obstacles to the accuracy and reliability of the generated summaries. In this paper, we aim to reduce hallucinations in summaries of long-form text documents. We use the PubMed dataset, which contains long scientific research documents and their abstracts. We incorporate the techniques of data filtering and joint entity and summary generation (JAENS) in the fine-tuning of the Longformer Encoder-Decoder (LED) model to minimize hallucinations and thereby improve the quality of the generated summary. We use the following metrics to measure factual consistency at the entity level: precision-source and F1-target. Our experiments show that the fine-tuned LED model performs well in generating the paper abstract. Data filtering techniques based on some preprocessing steps reduce entity-level hallucinations in the generated summaries in terms of some of the factual consistency metrics., Comment: 9 pages, 1 figure, 1 table
- Published
- 2023
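The entity-level factual consistency idea described in the abstract above can be illustrated with a toy check. The function name and the simple substring-matching heuristic are simplifications for illustration, not the paper's actual metric implementation:

```python
def entity_precision_source(summary_entities, source_text):
    """Toy sketch of an entity-level precision-source score: the fraction of
    named entities in the generated summary that also occur in the source.
    Entities absent from the source are potential hallucinations."""
    if not summary_entities:
        return 1.0  # no entities generated, so nothing can be hallucinated
    source = source_text.lower()
    hits = sum(1 for entity in summary_entities if entity.lower() in source)
    return hits / len(summary_entities)
```

A score below 1.0 flags summaries that mention entities never seen in the source document.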
6. CitePrompt: Using Prompts to Identify Citation Intent in Scientific Papers
- Author
-
Lahiri, Avishek, Sanyal, Debarshi Kumar, and Mukherjee, Imon
- Subjects
Computer Science - Computation and Language - Abstract
Citations in scientific papers not only help us trace the intellectual lineage but also are a useful indicator of the scientific significance of the work. Citation intents prove beneficial as they specify the role of the citation in a given context. In this paper, we present CitePrompt, a framework which uses the hitherto unexplored approach of prompt-based learning for citation intent classification. We argue that with the proper choice of the pretrained language model, the prompt template, and the prompt verbalizer, we can not only get results that are better than or comparable to those obtained with the state-of-the-art methods but also do it with much less exterior information about the scientific document. We report state-of-the-art results on the ACL-ARC dataset, and also show significant improvement on the SciCite dataset over all baseline models except one. As suitably large labelled datasets for citation intent classification can be quite hard to find, in a first, we propose the conversion of this task to the few-shot and zero-shot settings. For the ACL-ARC dataset, we report a 53.86% F1 score for the zero-shot setting, which improves to 63.61% and 66.99% for the 5-shot and 10-shot settings, respectively., Comment: Selected for publication at ACM/IEEE JOINT CONFERENCE ON DIGITAL LIBRARIES 2023
- Published
- 2023
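The prompt-template-and-verbalizer mechanism described in the abstract above can be sketched in miniature. The template, label set, and verbalizer words below are hypothetical examples, not the ones chosen in CitePrompt:

```python
# Hypothetical prompt template: a masked language model fills in [MASK], and a
# verbalizer maps candidate fill-in words back to citation-intent labels.
TEMPLATE = "{context} In this citation the cited work is used for [MASK]."
VERBALIZER = {
    "background": ["background", "context"],
    "method": ["method", "tool"],
    "comparison": ["comparison", "contrast"],
}

def classify_intent(mask_word_logprobs, verbalizer):
    """Pick the label whose best verbalizer word receives the highest
    masked-LM log-probability at the [MASK] position."""
    scores = {
        label: max(mask_word_logprobs.get(w, float("-inf")) for w in words)
        for label, words in verbalizer.items()
    }
    return max(scores, key=scores.get)
```

Because classification reduces to scoring a handful of words at the mask position, the same machinery works with few or even zero labelled examples, which is the few-shot/zero-shot setting the abstract reports on.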
7. What Does the Indian Parliament Discuss? An Exploratory Analysis of the Question Hour in the Lok Sabha
- Author
-
Adhya, Suman and Sanyal, Debarshi Kumar
- Subjects
Computer Science - Computation and Language - Abstract
The TCPD-IPD dataset is a collection of questions and answers discussed in the Lower House of the Parliament of India during the Question Hour between 1999 and 2019. Although it is difficult to analyze such a huge collection manually, modern text analysis tools can provide a powerful means to navigate it. In this paper, we perform an exploratory analysis of the dataset. In particular, we present insightful corpus-level statistics and a detailed analysis of three subsets of the dataset. In the latter analysis, the focus is on understanding the temporal evolution of topics using a dynamic topic model. We observe that the parliamentary conversation indeed mirrors the political and socio-economic tensions of each period., Comment: Accepted at the workshop PoliticalNLP co-located with the conference LREC 2022
- Published
- 2023
8. Do Neural Topic Models Really Need Dropout? Analysis of the Effect of Dropout in Topic Modeling
- Author
-
Adhya, Suman, Lahiri, Avishek, and Sanyal, Debarshi Kumar
- Subjects
Computer Science - Computation and Language, Computer Science - Machine Learning - Abstract
Dropout is a widely used regularization trick for mitigating overfitting in large feedforward neural networks that are trained on a small dataset and consequently perform poorly on the held-out test set. Although the effectiveness of this regularization trick has been extensively studied for convolutional neural networks, there is a lack of analysis of its effect on unsupervised models and, in particular, VAE-based neural topic models. In this paper, we analyze the consequences of dropout in the encoder as well as in the decoder of the VAE architecture in three widely used neural topic models, namely, the contextualized topic model (CTM), ProdLDA, and the embedded topic model (ETM), using four publicly available datasets. We characterize the dropout effect on these models in terms of the quality and predictive performance of the generated topics., Comment: Accepted at EACL 2023
- Published
- 2023
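The mechanism under study, dropout applied inside a network, can be sketched generically as follows; this is an illustration of inverted dropout itself, not the authors' experimental code:

```python
import numpy as np

def dropout(x, rate, rng, train=True):
    """Inverted dropout: randomly zero a fraction `rate` of activations during
    training and rescale the survivors so the expected activation is unchanged.
    In a VAE-based topic model this can sit in the encoder, the decoder, or both."""
    if not train or rate == 0.0:
        return x  # dropout is disabled at inference time
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)
```

The rescaling by `1 / (1 - rate)` is what makes train-time and test-time activations comparable, which matters when analyzing how dropout placement affects the learned topic distributions.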
9. Improving Contextualized Topic Models with Negative Sampling
- Author
-
Adhya, Suman, Lahiri, Avishek, Sanyal, Debarshi Kumar, and Das, Partha Pratim
- Subjects
Computer Science - Computation and Language, Computer Science - Machine Learning - Abstract
Topic modeling has emerged as a dominant method for exploring large document collections. Recent approaches to topic modeling use large contextualized language models and variational autoencoders. In this paper, we propose a negative sampling mechanism for a contextualized topic model to improve the quality of the generated topics. In particular, during model training, we perturb the generated document-topic vector and use a triplet loss to encourage the document reconstructed from the correct document-topic vector to be similar to the input document and dissimilar to the document reconstructed from the perturbed vector. Experiments for different topic counts on three publicly available benchmark datasets show that in most cases, our approach leads to an increase in topic coherence over that of the baselines. Our model also achieves very high topic diversity., Comment: Accepted at 19th International Conference on Natural Language Processing (ICON 2022)
- Published
- 2023
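The training trick described in the abstract above can be sketched as follows. The perturbation scheme (shuffling topic vectors across the batch), the Euclidean distance, and the function names are illustrative choices, not the authors' code:

```python
import numpy as np

def triplet_reconstruction_loss(doc_bow, theta, decode, rng, margin=1.0):
    """Negative-sampling sketch for a topic model: perturb the document-topic
    vector, reconstruct documents from both the correct and the perturbed
    vectors, and apply a triplet loss so the correct reconstruction stays
    close to the input while the perturbed one stays far from it."""
    theta_neg = theta[rng.permutation(theta.shape[0])]  # perturbed topic vectors
    recon_pos = decode(theta)      # reconstruction from the correct vector
    recon_neg = decode(theta_neg)  # reconstruction from the perturbed vector
    d_pos = np.linalg.norm(recon_pos - doc_bow, axis=1)
    d_neg = np.linalg.norm(recon_neg - doc_bow, axis=1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

Added to the usual reconstruction objective during training, a loss of this shape pushes the decoder to be sensitive to the document-topic vector, which is what encourages more coherent topics.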
10. Improving Neural Topic Models with Wasserstein Knowledge Distillation
- Author
-
Adhya, Suman and Sanyal, Debarshi Kumar
- Subjects
Computer Science - Computation and Language, Computer Science - Information Retrieval, Computer Science - Machine Learning - Abstract
Topic modeling is a dominant method for exploring document collections on the web and in digital libraries. Recent approaches to topic modeling use pretrained contextualized language models and variational autoencoders. However, large neural topic models have a considerable memory footprint. In this paper, we propose a knowledge distillation framework to compress a contextualized topic model without loss in topic quality. In particular, the proposed distillation objective is to minimize the cross-entropy of the soft labels produced by the teacher and the student models, as well as to minimize the squared 2-Wasserstein distance between the latent distributions learned by the two models. Experiments on two publicly available datasets show that the student trained with knowledge distillation achieves topic coherence much higher than that of the original student model, and even surpasses the teacher while containing far fewer parameters than the teacher's. The distilled model also outperforms several other competitive topic models on topic coherence., Comment: Accepted at ECIR 2023
- Published
- 2023
- Full Text
- View/download PDF
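The two-part distillation objective stated in the abstract above can be written out as a sketch. The temperature, the weighting factor `alpha`, and the diagonal-Gaussian closed form for the 2-Wasserstein distance are illustrative assumptions, not necessarily the paper's exact formulation:

```python
import numpy as np

def distillation_loss(teacher_logits, student_logits,
                      mu_t, sigma_t, mu_s, sigma_s,
                      temperature=1.0, alpha=1.0):
    """Cross-entropy between teacher and student soft labels, plus the squared
    2-Wasserstein distance between their latent Gaussians. For diagonal
    Gaussians, W2^2 = ||mu_t - mu_s||^2 + ||sigma_t - sigma_s||^2."""
    def softmax(z):
        e = np.exp((z - z.max(axis=-1, keepdims=True)) / temperature)
        return e / e.sum(axis=-1, keepdims=True)

    p_t, p_s = softmax(teacher_logits), softmax(student_logits)
    cross_entropy = -(p_t * np.log(p_s + 1e-12)).sum(axis=-1).mean()

    w2_sq = ((mu_t - mu_s) ** 2).sum(axis=-1) + ((sigma_t - sigma_s) ** 2).sum(axis=-1)
    return cross_entropy + alpha * w2_sq.mean()
```

The Wasserstein term aligns the student's latent distribution with the teacher's, while the cross-entropy term aligns their output topic predictions; together they let a much smaller student inherit the teacher's topic quality.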
11. An Analysis of Abstractive Text Summarization Using Pre-trained Models
- Author
-
Rehman, Tohida, Das, Suchandan, Sanyal, Debarshi Kumar, and Chattopadhyay, Samiran
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning - Abstract
People nowadays use search engines like Google, Yahoo, and Bing to find information on the Internet. Due to the explosion of data, it is helpful for users if they are provided relevant summaries of the search results rather than just links to webpages. Text summarization has become a vital approach to help consumers swiftly grasp vast amounts of information. In this paper, different pre-trained models for text summarization are evaluated on different datasets. Specifically, we use three pre-trained models, namely, google/pegasus-cnn-dailymail, T5-base, and facebook/bart-large-cnn, and obtain their outputs on three datasets, namely, CNN-dailymail, SAMSum, and BillSum. The pre-trained models are compared over these datasets, each of 2000 examples, through ROUGE and BLEU metrics., Comment: 11 Pages, 6 Figures, 3 Tables
- Published
- 2023
- Full Text
- View/download PDF
12. Named Entity Recognition Based Automatic Generation of Research Highlights
- Author
-
Rehman, Tohida, Sanyal, Debarshi Kumar, Majumder, Prasenjit, and Chattopadhyay, Samiran
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning - Abstract
A scientific paper is traditionally prefaced by an abstract that summarizes the paper. Recently, research highlights that focus on the main findings of the paper have emerged as a complementary summary in addition to the abstract. However, highlights are not yet as common as abstracts, and are absent in many papers. In this paper, we aim to automatically generate research highlights using different sections of a research paper as input. We investigate whether the use of named entity recognition on the input improves the quality of the generated highlights. In particular, we use two deep learning-based models: the first is a pointer-generator network, and the second augments the first model with a coverage mechanism. We then augment each of the above models with named entity recognition features. The proposed method can be used to produce highlights for papers with missing highlights. Our experiments show that adding named entity information improves the performance of the deep learning-based summarizers in terms of ROUGE, METEOR, and BERTScore measures., Comment: 7 Pages, 3 Figures, 2 Tables
- Published
- 2023
13. Abstractive Text Summarization using Attentive GRU based Encoder-Decoder
- Author
-
Rehman, Tohida, Das, Suchandan, Sanyal, Debarshi Kumar, and Chattopadhyay, Samiran
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning - Abstract
In today's era, a huge volume of information exists everywhere. Therefore, it is crucial to evaluate that information and extract useful, often summarized, information from it so that it may be used for relevant purposes. This extraction can be achieved through machine learning, a key technique of artificial intelligence. Indeed, automatic text summarization has emerged as an important application of machine learning in text processing. In this paper, an English text summarizer has been built with a GRU-based encoder and decoder. A Bahdanau attention mechanism has been added to overcome the problem of handling long sequences in the input text. A news-summary dataset has been used to train the model. The output is observed to outperform competitive models in the literature. The generated summary can be used as a newspaper headline., Comment: 9 pages, 2 Tables, 5 Figures
- Published
- 2023
- Full Text
- View/download PDF
14. Generation of Highlights from Research Papers Using Pointer-Generator Networks and SciBERT Embeddings
- Author
-
Rehman, Tohida, Sanyal, Debarshi Kumar, Chattopadhyay, Samiran, Bhowmick, Plaban Kumar, and Das, Partha Pratim
- Subjects
Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Machine Learning - Abstract
Nowadays many research articles are prefaced with research highlights to summarize the main findings of the paper. Highlights not only help researchers precisely and quickly identify the contributions of a paper, they also enhance the discoverability of the article via search engines. We aim to automatically construct research highlights given certain segments of a research paper. We use a pointer-generator network with coverage mechanism and a contextual embedding layer at the input that encodes the input tokens into SciBERT embeddings. We test our model on a benchmark dataset, CSPubSum, and also present MixSub, a new multi-disciplinary corpus of papers for automatic research highlight generation. For both CSPubSum and MixSub, we have observed that the proposed model achieves the best performance compared to related variants and other models proposed in the literature. On the CSPubSum dataset, our model achieves the best performance when the input is only the abstract of a paper as opposed to other segments of the paper. It produces ROUGE-1, ROUGE-2 and ROUGE-L F1-scores of 38.26, 14.26 and 35.51, respectively, METEOR score of 32.62, and BERTScore F1 of 86.65 which outperform all other baselines. On the new MixSub dataset, where only the abstract is the input, our proposed model (when trained on the whole training corpus without distinguishing between the subject categories) achieves ROUGE-1, ROUGE-2 and ROUGE-L F1-scores of 31.78, 9.76 and 29.3, respectively, METEOR score of 24.00, and BERTScore F1 of 85.25., Comment: 19 Pages, 7 Figures, 8 Tables
- Published
- 2023
- Full Text
- View/download PDF
15. Personal Research Knowledge Graphs
- Author
-
Chakraborty, Prantika, Dutta, Sudakshina, and Sanyal, Debarshi Kumar
- Subjects
Computer Science - Information Retrieval ,Computer Science - Human-Computer Interaction - Abstract
Maintaining research-related information in an organized manner can be challenging for a researcher. In this paper, we envision personal research knowledge graphs (PRKGs) as a means to represent structured information about the research activities of a researcher. PRKGs can be used to power intelligent personal assistants, and personalize various applications. We explore what entities and relations could be potentially included in a PRKG, how to extract them from various sources, and how to share a PRKG within a research group.
- Published
- 2022
16. Segmenting Scientific Abstracts into Discourse Categories: A Deep Learning-Based Approach for Sparse Labeled Data
- Author
-
Banerjee, Soumya, Sanyal, Debarshi Kumar, Chattopadhyay, Samiran, Bhowmick, Plaban Kumar, and Das, Parthapratim
- Subjects
Computer Science - Computation and Language, I.5.1, H.3.7 - Abstract
The abstract of a scientific paper distills the contents of the paper into a short paragraph. In the biomedical literature, it is customary to structure an abstract into discourse categories like BACKGROUND, OBJECTIVE, METHOD, RESULT, and CONCLUSION, but this segmentation is uncommon in other fields like computer science. Explicit categories could be helpful for more granular, that is, discourse-level search and recommendation. The sparsity of labeled data makes it challenging to construct supervised machine learning solutions for automatic discourse-level segmentation of abstracts in non-bio domains. In this paper, we address this problem using transfer learning. In particular, we define three discourse categories (BACKGROUND, TECHNIQUE, OBSERVATION) for an abstract because these three categories are the most common. We train a deep neural network on structured abstracts from PubMed, then fine-tune it on a small hand-labeled corpus of computer science papers. We observe an accuracy of 75% on the test corpus. We perform an ablation study to highlight the roles of the different parts of the model. Our method appears to be a promising solution to the automatic segmentation of abstracts, where the labeled data is sparse., Comment: to appear in the proceedings of JCDL'2020
- Published
- 2020
- Full Text
- View/download PDF
17. Designing an Efficient Delay Sensitive Routing Metric for IEEE 802.16 Mesh Networks
- Author
-
Bhakta, Ishita, Chakraborty, Sandip, Mitra, Barsha, Sanyal, Debarshi Kumar, Chattopadhyay, Samiran, and Chattopadhyay, Matangini
- Subjects
Computer Science - Networking and Internet Architecture - Abstract
Quality of Service provisioning is one of the major design goals of IEEE 802.16 mesh networks. In order to provide quality delivery of delay-sensitive services such as voice and video, such traffic must be routed over a minimum-delay path. In this paper, we propose a routing metric for delay-sensitive services in IEEE 802.16 mesh networks. We design a new cross-layer routing metric, namely Expected Scheduler Delay (ESD), based on the HoldOff exponent and the current load at each node of the network. The proposed metric takes into account the expected theoretical end-to-end delay of routing paths as well as network congestion to find the best-suited path. We propose an efficient distributed scheme to calculate ESD and route the packets using a source routing mechanism based on ESD. The simulation results demonstrate that our metric achieves reduced delay compared to a standard scheme used in IEEE 802.16 mesh networks that uses hop count to find the path., Comment: This paper has been presented at the International Conference on Wireless and Optical Communications, May 2011, China
- Published
- 2013
18. National Digital Library of India: Democratizing Education in India.
- Author
-
BHOWMICK, PLABAN KUMAR, DAS, PARTHA PRATIM, CHAKRABARTI, PARTHA PRATIM, and SANYAL, DEBARSHI KUMAR
- Subjects
DIGITAL libraries, DIGITAL library design & construction, LIBRARY users, LIBRARIES & state - Abstract
The article discusses the National Digital Library of India (NDLI), which was designed by the Indian Institute of Technology (IIT) in Kharagpur, India, and funded by India's Ministry of Education. Some of the technical challenges in designing such a meta-library, user engagement, and computing research at NDLI are discussed.
- Published
- 2022
- Full Text
- View/download PDF
19. A Sneak Peek into 5G Communications
- Author
-
Kar, Udit Narayana and Sanyal, Debarshi Kumar
- Published
- 2018
- Full Text
- View/download PDF
21. Label informed hierarchical transformers for sequential sentence classification in scientific abstracts
- Author
-
Tokala, Yaswanth Sri Sai Santosh, Aluru, Sai Saketh, Vallabhajosyula, Anoop, Sanyal, Debarshi Kumar, and Das, Partha Pratim
- Published
- 2023
- Full Text
- View/download PDF
22. Automated classification of software issue reports using machine learning techniques: an empirical study
- Author
-
Pandey, Nitish, Sanyal, Debarshi Kumar, Hudait, Abir, and Sen, Amitava
- Published
- 2017
- Full Text
- View/download PDF
23. RESEARCH HIGHLIGHT GENERATION WITH ELMO CONTEXTUAL EMBEDDINGS.
- Author
-
REHMAN, TOHIDA, SANYAL, DEBARSHI KUMAR, and CHATTOPADHYAY, SAMIRAN
- Subjects
INTERNET publishing, ONLINE databases, DEEP learning, METEORS, NATURAL languages - Abstract
With the advent of digital publishing and online databases, the volume of textual data generated by scientific research has increased exponentially. This makes it increasingly difficult for academics to keep up with new breakthroughs and synthesise important information for their own work. Abstracts have long been a standard feature of scientific papers, providing a concise summary of the paper's content and main findings. In recent years, some journals have begun to provide research highlights as an additional summary of the paper. The aim of this article is to create research highlights automatically by using various sections of a research paper as input. We employ a pointer-generator network with a coverage mechanism and pretrained ELMo contextual embeddings to generate the highlights. Our experiments show that the proposed model outperforms several competitive models in the literature in terms of ROUGE, METEOR, BERTScore, and MoverScore metrics.
- Published
- 2023
- Full Text
- View/download PDF
24. DAKE: Document-Level Attention for Keyphrase Extraction
- Author
-
Santosh, Tokala Yaswanth Sri Sai, Sanyal, Debarshi Kumar, Bhowmick, Plaban Kumar, and Das, Partha Pratim
- Subjects
Keyphrase extraction, Document-level attention, Sequence labelling, LSTM, Article - Abstract
Keyphrases provide a concise representation of the topical content of a document and they are helpful in various downstream tasks. Previous approaches for keyphrase extraction model it as a sequence labelling task and use local contextual information to understand the semantics of the input text but they fail when the local context is ambiguous or unclear. We present a new framework to improve keyphrase extraction by utilizing additional supporting contextual information. We retrieve this additional information from other sentences within the same document. To this end, we propose Document-level Attention for Keyphrase Extraction (DAKE), which comprises Bidirectional Long Short-Term Memory networks that capture hidden semantics in text, a document-level attention mechanism to incorporate document level contextual information, gating mechanisms which help to determine the influence of additional contextual information on the fusion with local contextual information, and Conditional Random Fields which capture output label dependencies. Our experimental results on a dataset of research papers show that the proposed model outperforms previous state-of-the-art approaches for keyphrase extraction.
- Published
- 2020
25. An overview of device-to-device communication in cellular networks
- Author
-
Kar, Udit Narayana and Sanyal, Debarshi Kumar
- Published
- 2018
- Full Text
- View/download PDF
26. A DiffServ Architecture for QoS-Aware Routing for Delay-Sensitive and Best-Effort Services in IEEE 802.16 Mesh Networks
- Author
-
Bhakta, Ishita, Chakraborty, Sandip, Mitra, Barsha, Sanyal, Debarshi Kumar, Chattopadhyay, Samiran, and Chattopadhyay, Matangini
- Published
- 2011
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library