1,169 results for "text generation"
Search Results
2. Pre-training a Transformer-Based Generative Model Using a Small Sepedi Dataset
- Author
-
Ramalepe, Simon Phetole, Modipa, Thipe I., Davel, Marelie H., Gerber, Aurona, editor, Maritz, Jacques, editor, and Pillay, Anban W., editor
- Published
- 2025
- Full Text
- View/download PDF
3. Comparative Analysis of Pretrained Models for Text Classification, Generation and Summarization: A Detailed Analysis
- Author
-
Pathak, Prakrit, Rana, Prashant Singh, Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
4. Data Augmentation Using Large Language Model for Fake Review Identification
- Author
-
Li, Qingxu, Chen, Jindong, Zhang, Wen, Tang, Xijin, editor, Huynh, Van Nam, editor, Xia, Haoxiang, editor, and Bai, Quan, editor
- Published
- 2025
- Full Text
- View/download PDF
5. A Decomposed-Distilled Sequential Framework for Text-to-Table Task with LLMs
- Author
-
Chen, Jiarui, Li, Shuangyin, Jiang, Yuncheng, Hadfi, Rafik, editor, Anthony, Patricia, editor, Sharma, Alok, editor, Ito, Takayuki, editor, and Bai, Quan, editor
- Published
- 2025
- Full Text
- View/download PDF
6. A Survey on Deciphering of EEG Waves
- Author
-
Mahajan, Gaurav, Divija, L., Jeevan, R., Kumari, P. Deekshitha, Narayan, Surabhi, Pal, Sankar K., editor, Thampi, Sabu M., editor, and Abraham, Ajith, editor
- Published
- 2025
- Full Text
- View/download PDF
7. Enhancing domain-specific text generation for power grid maintenance with P2FT.
- Author
-
Yang, Yi, Li, Chenhao, Zhu, Binghang, Zheng, Wenjie, Zhang, Fengda, and Li, Zhuangzhuang
- Subjects
LANGUAGE models, NATURAL language processing, ELECTRIC power distribution grids, PROCESS capability, COMPUTER performance
- Abstract
The digitization of operation and maintenance in the intelligent power grid equipment relies on a diverse array of information for smart decision-making. In the domain of intelligent decision generation, proficiency is contingent upon extensive learning from copious amounts of text. This necessitates not only robust processing capabilities but also a high level of specialization. In addressing situations where authorization is lacking, pre-trained language models (PLMs) have already provided ideas when confronted with specialized domains or tasks. In consideration of the complexity of textual content in the field of the power grid, which encompasses a multitude of specialized knowledge and involves an abundance of proprietary terminology, we have undertaken an exploration of pre-trained model specialization using the power grid domain as an example, specifically for the task of generating maintenance strategies. A two-stage fine-tuning approach (P2FT) is employed, utilizing a large-scale pre-training model specifically designed for natural language processing. The efficacy and practical value of this method were evaluated through multiple metrics, juxtaposed with other advanced approaches involving low-parameter or parameter-free fine-tuning methods. Through a meticulous analysis and validation of experimental outcomes, we have corroborated the feasibility and practical application value of employing this approach for pre-trained model specialization. Additionally, it has furnished valuable guidance for text generation within both the Chinese language domain and the power grid domain. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. A Text Generation Method Based on a Multimodal Knowledge Graph for Fault Diagnosis of Consumer Electronics.
- Author
-
Wu, Yuezhong, Sun, Yuxuan, Chen, Lingjiao, Zhang, Xuanang, and Liu, Qiang
- Subjects
LANGUAGE models, KNOWLEDGE graphs, FAULT diagnosis, HOUSEHOLD electronics, AUTOMATION
- Abstract
As consumer electronics evolve towards greater intelligence, their automation and complexity also increase, making it difficult for users to diagnose faults when they occur. To address the problem where users, relying solely on their own knowledge, struggle to diagnose faults in consumer electronics promptly and accurately, we propose a multimodal knowledge graph-based text generation method. Our method begins by using deep learning models like the Residual Network (ResNet) and Bidirectional Encoder Representations from Transformers (BERT) to extract features from user-provided fault information, which can include images, text, audio, and even olfactory data. These multimodal features are then combined to form a comprehensive representation. The fused features are fed into a graph convolutional network (GCN) for fault inference, identifying potential fault nodes in the electronics. These fault nodes are subsequently fed into a pre-constructed knowledge graph to determine the final diagnosis. Finally, this information is processed through the Bias-term Fine-tuning (BitFit) enhanced Chinese Pre-trained Transformer (CPT) model, which generates the final fault diagnosis text for the user. The experimental results show that our proposed method achieves a 4.4% improvement over baseline methods, reaching a fault diagnosis accuracy of 98.4%. Our approach effectively leverages multimodal fault information, addressing the challenges users face in diagnosing faults through the integration of graph convolutional network and knowledge graph technologies. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
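The fault-inference step described in the abstract above feeds fused multimodal features through a graph convolutional network. A minimal numpy sketch of one GCN propagation layer (self-loops, symmetric normalisation, then a ReLU); the graph, features, and weights are toy placeholders, not the paper's model:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: H' = relu(D^-1/2 (A+I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt       # symmetric normalisation
    return np.maximum(norm @ feats @ weight, 0)  # ReLU

# Toy fault graph: 3 candidate fault nodes, 4-dim fused features
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
feats = np.random.default_rng(0).normal(size=(3, 4))
weight = np.random.default_rng(1).normal(size=(4, 2))
scores = gcn_layer(adj, feats, weight)
print(scores.shape)  # (3, 2)
```

Each row of `scores` would then be read as per-node evidence before the knowledge-graph lookup the abstract describes.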
9. Automated Classification of Exchange Information Requirements for Construction Projects Using Word2Vec and SVM.
- Author
-
Mitera-Kiełbasa, Ewelina and Zima, Krzysztof
- Subjects
CONSTRUCTION project management, BUILDING information modeling, DIGITAL twins, SUPPORT vector machines, CONSTRUCTION projects
- Abstract
This study addresses the challenge of automating the creation of Exchange Information Requirements (EIRs) for construction projects using Building Information Modelling (BIM) and Digital Twins, as specified in the ISO 19650 standard. This paper focuses on automating the classification of EIR paragraphs according to the ISO 19650 standard's categories, aiming to improve information management in construction projects. It addresses a gap in applying AI to enhance BIM project management, where barriers often include technological limitations, a shortage of specialists, and limited understanding of the methodology. The proposed method uses Word2Vec for text vectorisation and Support Vector Machines (SVMs) with an RBF kernel for text classification, and it attempts to apply Word2Vec with cosine similarity for text generation. The model achieved an average F1 score of 0.7, with predicted categories for provided sentences and similar matches for selected phrases. While the text classification results were promising, further refinement is required for the text generation component. This study concludes that integrating AI tools such as Word2Vec and SVM offers a feasible solution for enhancing EIR creation. However, further development of text generation, particularly using advanced techniques such as GPT, is recommended. These findings contribute to improving managing complex construction projects and advancing digitalization in the AECO sector. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
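The classification pipeline in this abstract (Word2Vec vectors mean-pooled into an RBF-kernel SVM) can be sketched as follows. The embedding table is a hypothetical stand-in for a trained gensim Word2Vec model, and the two-category EIR-like corpus is invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in for trained Word2Vec vectors (the real pipeline would use gensim)
emb = {"model": [1.0, 0.1], "data": [0.9, 0.2], "clash": [0.8, 0.0],
       "site": [0.1, 1.0], "safety": [0.2, 0.9], "access": [0.0, 0.8]}

def sent_vec(sentence):
    """Mean-pool word vectors into a fixed-size sentence vector."""
    vecs = [emb[w] for w in sentence.split() if w in emb]
    return np.mean(vecs, axis=0)

# Tiny labelled corpus: category 0 = information/model terms, 1 = site terms
train = [("model data clash", 0), ("data model", 0),
         ("site safety access", 1), ("safety site", 1)]
X = np.array([sent_vec(s) for s, _ in train])
y = [lab for _, lab in train]

clf = SVC(kernel="rbf").fit(X, y)  # RBF kernel, as in the study
print(clf.predict([sent_vec("model clash")])[0])  # 0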
10. Automated multiple-choice question generation in Spanish using neural language models.
- Author
-
de-Fitero-Dominguez, David, Garcia-Cabot, Antonio, and Garcia-Lopez, Eva
- Subjects
- *
NATURAL language processing , *LANGUAGE models , *SPANISH language , *MACHINE learning , *TRANSFORMER models - Abstract
This research presents an approach to automatic multiple-choice question (MCQ) generation in the Spanish language, using mT5-based models. The process encompasses three crucial tasks: candidate answer extraction, answer-aware question generation, and distractor generation. A methodical pipeline is structured to seamlessly integrate these tasks, converting an input text into a systematic questionnaire. For model fine-tuning, the Stanford Question Answering Dataset is employed for the first two tasks, while a combination of three different multiple-choice question datasets, translated automatically into Spanish, is used for the distractor generation task. The efficiency of the models is then evaluated by using a triad of metrics, namely BLEU, ROUGE-L, and cosine similarity. The outcomes indicate a marginal deviation from the baseline model in the question generation task but demonstrate superior performance in the distractor generation task. Importantly, this research emphasizes the potential and effectiveness of language models for automating MCQ generation, providing a valuable contribution to the field and enhancing the understanding and application of such models in the context of the Spanish language. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
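Of the evaluation triad named in the abstract above, ROUGE-L is the easiest to show compactly: an F-measure over the longest common subsequence of candidate and reference. A pure-Python sketch with whitespace tokenisation (real evaluations normalise text more carefully):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def rouge_l(candidate, reference, beta=1.2):
    """LCS-based F-score; beta weights recall, as in the original ROUGE paper."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return (1 + beta**2) * prec * rec / (rec + beta**2 * prec)

print(round(rouge_l("what is the capital of spain",
                    "what is the capital city of spain"), 3))  # 0.91
```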
11. EDS: Exploring deeper into semantics for video captioning.
- Author
-
Lou, Yibo, Zhang, Wenjie, Song, Xiaoning, Hua, Yang, and Wu, Xiao-Jun
- Subjects
LINGUISTIC context, PROBLEM solving, INFORMATION resources, VIDEOS, VOCABULARY
- Abstract
Efficiently leveraging semantic information is crucial for advancing video captioning in recent years. However, prevailing approaches that involve designing various Part-of-Speech (POS) tags as prior information lack essential linguistic knowledge guidance throughout the training procedure, particularly in the context of POS and initial description generation. Furthermore, the restriction to a single source of semantic information ignores the potential for varied interpretations inherent in each video. To solve these problems, we propose the Exploring Deeper into Semantics (EDS) method for video captioning. EDS comprises three feasible modules that focus on semantic information. Specifically, we propose the Semantic Supervised Generation (SSG) module. It integrates semantic information as a prior, and facilitates enriched interrelations among words for POS supervision. A novel Similarity Semantic Extension (SSE) module is proposed to employ a query-based semantic expansion for collaboratively generating fine-grained content. Additionally, the proposed Input Semantic Enhancement (ISE) module provides a strategy for mitigating the information constraints faced during the initial phase of word generation. The experiments conducted show that, by exploiting semantic information through supervision, extension, and enhancement, EDS not only yields promising results but also underlines the effectiveness of each module. Code will be available at https://github.com/BradenJoson/EDS. • A novel method EDS is proposed to explore semantics utilization for video captioning. • A prior-based extractor that provides accurate semantic information. • A similarity-based extension module that handles varied video interpretations. • A simple module that refines the initial word generation with enhanced semantics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Table Transformers for imputing textual attributes.
- Author
-
Wei, Ting-Ruen, Wang, Yuan, Inoue, Yoshitaka, Wu, Hsin-Tai, and Fang, Yi
- Subjects
TRANSFORMER models, DEEP learning, MISSING data (Statistics), CHATGPT, INTEGRATED software, RECURRENT neural networks
- Abstract
Missing data in tabular dataset is a common issue as the performance of downstream tasks usually depends on the completeness of the training dataset. Previous missing data imputation methods focus on numeric and categorical columns, but we propose a novel end-to-end approach called Table Transformers for Imputing Textual Attributes (TTITA) based on the transformer to impute unstructured textual columns using other columns in the table. We conduct extensive experiments on three datasets, and our approach shows competitive performance outperforming baseline models such as recurrent neural networks and Llama2. The performance improvement is more significant when the target sequence has a longer length. Additionally, we incorporate multi-task learning to simultaneously impute for heterogeneous columns, boosting the performance for text imputation. We also qualitatively compare with ChatGPT for realistic applications. • Proposed TTITA to impute text attributes given other heterogeneous tabular columns. • Encoded inputs into a context vector for cross-attention in the transformer decoder. • Outperformed baseline models including the GRU and Llama2 on real-world datasets. • Incorporated multi-task learning for multi-column imputation and boosting performance. • Prepared the software as an open-source package for custom applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
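TTITA's decoder attends to a context vector encoded from the other tabular columns. A numpy sketch of the scaled dot-product cross-attention at the core of that step; shapes and values are illustrative, not taken from the paper:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: decoder queries attend to encoder context."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over context slots
    return weights @ values

rng = np.random.default_rng(0)
context = rng.normal(size=(5, 8))   # encoded tabular columns (5 slots, d=8)
queries = rng.normal(size=(3, 8))   # 3 decoder positions generating text
out = cross_attention(queries, context, context)
print(out.shape)  # (3, 8)
```

Each decoder position thus mixes information from all encoded columns before predicting the next token of the imputed text.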
14. CRKG: combining retrieval knowledge with generative language models
- Author
-
Chen, Fei, Zhang, Carter, and Ning, Bo
- Abstract
Multi-turn dialogue generation tasks heavily rely on capturing contextual information. However, in real-life scenarios, capturing the speaker’s needs accurately cannot be achieved solely with limited context, so background knowledge information is also necessary. Existing works focus on using local keywords to retrieve external knowledge and simply concatenating retrieval information with context, which results in low-quality retrieved external knowledge and redundant context, leading to difficulty in understanding the context. To address these issues, this paper proposes the CRKG model. The CRKG model first designs a turn-level attention mechanism to capture important information in the context. Then, it retrieves knowledge from historical dialogues as an external knowledge base based on the important information representation. Finally, it designs a hierarchical fusion encoder to dynamically integrate the retrieved information. We validate our proposed method on both a small-parameter text model and a large language model. Experimental results show that our proposed method achieves the best results on multiple public datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
15. ChatGPT vs state-of-the-art models: a benchmarking study in keyphrase generation task.
- Author
-
Martínez-Cruz, Roberto, López-López, Alvaro J., and Portela, José
- Abstract
Transformer-based language models, including ChatGPT, have demonstrated exceptional performance in various natural language generation tasks. However, there has been limited research evaluating ChatGPT’s keyphrase generation ability, which involves identifying informative phrases that accurately reflect a document’s content. This study seeks to address this gap by comparing ChatGPT’s keyphrase generation performance with state-of-the-art models, while also testing its potential as a solution for two significant challenges in the field: domain adaptation and keyphrase generation from long documents. We conducted experiments on eight publicly available datasets spanning scientific, news, and biomedical domains, analyzing performance across both short and long documents. Our results show that ChatGPT outperforms current state-of-the-art models in all tested datasets and environments, generating high-quality keyphrases that adapt well to diverse domains and document lengths. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
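Benchmarks like the one above typically score generated keyphrases by exact-match F1 against the gold set (usually after stemming, which this sketch omits):

```python
def keyphrase_f1(predicted, gold):
    """Exact-match F1 between predicted and gold keyphrase sets."""
    pred = {p.lower().strip() for p in predicted}
    ref = {g.lower().strip() for g in gold}
    tp = len(pred & ref)  # true positives: phrases in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(ref)
    return 2 * precision * recall / (precision + recall)

pred = ["language models", "keyphrase generation", "chatgpt"]
gold = ["keyphrase generation", "ChatGPT", "domain adaptation"]
print(round(keyphrase_f1(pred, gold), 3))  # 0.667
```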
16. Application of Generative Large Language Models in Chinese Radiology Domain
- Author
-
CHEN Longfei, GAO Xin, HOU Haotian, YE Chuyang, LIU Ya'ou, ZHANG Meihui
- Subjects
large language model, radiology report, text classification, text generation, efficient fine-tuning strategy, Electronic computers. Computer science
- Abstract
In the Chinese radiology domain, radiology reports serve as a crucial basis for clinical decision-making. Therefore, utilizing natural language processing (NLP) technology to understand and learn from the textual content of radiology reports, thereby aiding radiological clinical work, has become an important research direction in this domain. However, when dealing with the natural language classification and generation tasks based on Chinese radiology reports using traditional methods, there are still challenges such as a lack of training corpora, privacy concerns, and poor model generalization capabilities, leading to insufficient overall performance. To address these issues, a solution for natural language tasks in the Chinese radiology domain based on locally efficient fine-tuning of large language models is proposed. By collecting and constructing a large-scale, high-quality dataset for natural language tasks in Chinese radiology reports, and employing the LoRA efficient fine-tuning method for supervised fine-tuning of the open-source large language model Baichuan2, the “RadGPT” model, capable of simultaneously solving four types of clinical tasks in the Chinese radiology domain, is proposed. A set of evaluation systems for natural language classification and generation tasks in the Chinese radiology domain is introduced. Multiple sets of experiments are conducted on three types of radiology report datasets from two centers, and comparisons are made with several typical existing methods. The results demonstrate that the proposed method performs better in terms of classification performance, text summarization and expansion capabilities, and model generalization.
- Published
- 2024
- Full Text
- View/download PDF
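The LoRA method used above to fine-tune Baichuan2 freezes the base weight W and learns a low-rank update ΔW = (α/r)·BA. A minimal numpy sketch of the idea, with toy dimensions; B is zero-initialised so training starts exactly from the frozen model:

```python
import numpy as np

class LoRALinear:
    """Frozen base weight plus trainable low-rank update (the LoRA idea)."""
    def __init__(self, weight, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.weight = weight                                # frozen, (out, in)
        self.A = rng.normal(0, 0.01, (r, weight.shape[1]))  # trainable
        self.B = np.zeros((weight.shape[0], r))             # zero-init
        self.scale = alpha / r

    def __call__(self, x):
        # Effective weight is W + (alpha/r) * B @ A
        return x @ (self.weight + self.scale * self.B @ self.A).T

w = np.random.default_rng(1).normal(size=(6, 4))
layer = LoRALinear(w)
x = np.ones((2, 4))
# With B zero-initialised, the LoRA layer reproduces the frozen base layer
print(np.allclose(layer(x), x @ w.T))  # True
```

Only A and B (a tiny fraction of the parameters) receive gradients, which is what makes the fine-tuning "locally efficient".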
17. Identifying multidisciplinary problems from scientific publications based on a text generation method
- Author
-
Xu Ziyan, Han Hongqi, Li Linna, Zhang Junsheng, and Zhou Zexu
- Subjects
problem identification, multidisciplinary, text generation, text classification, Information technology, Electronic computers. Computer science
- Abstract
A text generation-based multidisciplinary problem identification method is proposed, which does not rely on large amounts of data annotation.
- Published
- 2024
- Full Text
- View/download PDF
18. Short-text news headline generation method.
- Author
-
赵明
- Subjects
LANGUAGE models, HEADLINES, PROBLEM solving, PRESS releases
- Abstract
Today's news has the characteristics of short text, frequent release, timeliness, etc. A media account releases dozens of news in a day. Developing suitable and attractive headlines for large volumes of news has become a major part of the work of media workers. Media workers need a system that automatically generates short text headlines to relieve their stress. To solve this problem, this study proposes a short text news title generation model. The model adopts sequence-to-sequence structure, using pre-trained language model and layered self-attention decoder in encoder and decoder respectively. In order to make the generated headlines contain the key information of the original news, a staged training method based on LCSTS data set and Weibo4 data set is proposed, and the model learns to extract the key news information and construct a stylized expression from the two data sets respectively, so that the generated headlines can accurately express the core content of the news and attract readers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Leveraging Text Generation Models for Aspect-Based Sentiment Text Generation.
- Author
-
Tummala, Purnima and Ch, Koteswararao
- Subjects
GENERATIVE adversarial networks, SENTIMENT analysis, DATA augmentation, GENERATIVE pre-trained transformers, RESTAURANT reviews
- Abstract
Sentiment analysis is a vital tool in natural language processing (NLP), enabling the interpretation and understanding of opinions expressed in textual data. Traditional sentiment analysis methods, often limited to document or sentence-level analysis, primarily focus on identifying the sentiment without generating detailed sentiment text expressions. To address this limitation, we propose a novel Aspect-Specific Sentiment Expression Generation (ASSEG) model. Unlike traditional approaches, the ASSEG model leverages advanced text generation models, such as GPT-2 and T5, to automatically generate sentiment expressions tailored to diverse aspects of entities discussed in the text. The key innovation of our approach lies in the integration of aspect-specific attention mechanisms, which enable the model to effectively identify and prioritize aspects within the text, generating coherent and contextually relevant sentiment expressions. Our methodology includes using Recurrent Generative Adversarial Networks (RGANs) for data augmentation, addressing data imbalance issues, and enhancing the robustness of sentiment analysis models. Our experimental evaluations on domain-specific datasets, including laptop and restaurant reviews, demonstrate the superior performance of our ASSEG model. The GPT-2 model achieved an accuracy of 75% and 65%, and an F1 score of 77% and 65% for restaurant and laptop datasets, respectively. Meanwhile, the T5 model outperformed GPT-2, achieving an accuracy of 85% and 75%, and an F1 score of 83% and 74% for restaurant and laptop datasets, respectively. These results highlight the potential of the ASSEG model, offering deeper insights into user opinions by generating detailed and contextually relevant sentiment expressions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Research on the application of generative large language models in the Chinese radiology domain.
- Author
-
陈龙飞, 高鑫, 侯皓天, 叶初阳, 刘亚欧, and 张美慧
- Subjects
LANGUAGE models, NATURAL language processing, TEXT summarization, NATURAL languages, CHINESE language
- Abstract
Copyright of Journal of Frontiers of Computer Science & Technology is the property of Beijing Journal of Computer Engineering & Applications Journal Co Ltd. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
21. BioInstruct: instruction tuning of large language models for biomedical natural language processing.
- Author
-
Tran, Hieu, Yang, Zhichao, Yao, Zonghai, and Yu, Hong
- Abstract
Objectives To enhance the performance of large language models (LLMs) in biomedical natural language processing (BioNLP) by introducing a domain-specific instruction dataset and examining its impact when combined with multi-task learning principles. Materials and Methods We created BioInstruct, comprising 25,005 instructions to instruction-tune LLMs (LLaMA 1 and 2, 7B and 13B versions). The instructions were created by prompting the GPT-4 language model with 3 seed samples randomly drawn from 80 human-curated instructions. We employed Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning. We then evaluated these instruction-tuned LLMs on several BioNLP tasks, which can be grouped into 3 major categories: question answering (QA), information extraction (IE), and text generation (GEN). We also examined whether categories (eg, QA, IE, and generation) of instructions impact model performance. Results and Discussion Compared with LLMs that were not instruction-tuned, our instruction-tuned LLMs demonstrated marked performance gains: 17.3% in QA on average accuracy metric, 5.7% in IE on average F1 metric, and 96% in Generation tasks on average GPT-4 score metric. Our 7B-parameter instruction-tuned LLaMA 1 model was competitive or even surpassed other LLMs in the biomedical domain that were also fine-tuned from LLaMA 1 with vast domain-specific data or a variety of tasks. Our results also show that the performance gain is significantly higher when instruction fine-tuning is conducted with closely related tasks. Our findings align with the observations of multi-task learning, suggesting the synergies between 2 tasks. Conclusion The BioInstruct dataset serves as a valuable resource, and instruction-tuned LLMs lead to the best-performing BioNLP applications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
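Instruction-tuning datasets of this kind are commonly stored as instruction/input/output records; the field names and content below are a hypothetical illustration of that shape, not drawn from the actual BioInstruct release:

```json
{
  "instruction": "Answer the clinical question using the provided note.",
  "input": "Note: Patient reports chest pain on exertion ... Question: What symptom is described?",
  "output": "Exertional chest pain."
}
```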
22. Generative large language models are all-purpose text analytics engines: text-to-text learning is all you need.
- Author
-
Peng, Cheng, Yang, Xi, Chen, Aokun, Yu, Zehao, Smith, Kaleb E, Costa, Anthony B, Flores, Mona G, Bian, Jiang, and Wu, Yonghui
- Abstract
Objective To solve major clinical natural language processing (NLP) tasks using a unified text-to-text learning architecture based on a generative large language model (LLM) via prompt tuning. Methods We formulated 7 key clinical NLP tasks as text-to-text learning and solved them using one unified generative clinical LLM, GatorTronGPT, developed using GPT-3 architecture and trained with up to 20 billion parameters. We adopted soft prompts (ie, trainable vectors) with frozen LLM, where the LLM parameters were not updated (ie, frozen) and only the vectors of soft prompts were updated, known as prompt tuning. We added additional soft prompts as a prefix to the input layer, which were optimized during the prompt tuning. We evaluated the proposed method using 7 clinical NLP tasks and compared them with previous task-specific solutions based on Transformer models. Results and Conclusion The proposed approach achieved state-of-the-art performance for 5 out of 7 major clinical NLP tasks using one unified generative LLM. Our approach outperformed previous task-specific transformer models by ∼3% for concept extraction and 7% for relation extraction applied to social determinants of health, 3.4% for clinical concept normalization, 3.4%-10% for clinical abbreviation disambiguation, and 5.5%-9% for natural language inference. Our approach also outperformed a previously developed prompt-based machine reading comprehension (MRC) model, GatorTron-MRC, for clinical concept and relation extraction. The proposed approach can deliver the "one model for all" promise from training to deployment using a unified generative LLM. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
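Prompt tuning as described above prepends trainable soft-prompt vectors to the input embeddings while every LLM parameter stays frozen. A numpy sketch of just the prefixing step (dimensions are toy):

```python
import numpy as np

def prepend_soft_prompt(token_embeds, soft_prompt):
    """Prefix trainable prompt vectors to a frozen model's input embeddings."""
    return np.concatenate([soft_prompt, token_embeds], axis=0)

d_model = 16
soft_prompt = np.random.default_rng(0).normal(size=(20, d_model))  # trainable
tokens = np.random.default_rng(1).normal(size=(50, d_model))       # from frozen embedding table
inputs = prepend_soft_prompt(tokens, soft_prompt)
print(inputs.shape)  # (70, 16)
# During prompt tuning, gradients update only `soft_prompt`; all LLM
# parameters stay frozen.
```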
23. Overview of RefutES at IberLEF 2024: Automatic Generation of Counter Speech in Spanish.
- Author
-
Vallecillo-Rodríguez, María Estrella, Cantero-Romero, María Victoria, Cabrera-de-Castro, Isabel, Alfonso Ureña-López, Luis, Montejo-Ráez, Arturo, and Martín-Valdivia, María Teresa
- Subjects
LANGUAGE models, NATURAL language processing, SUSTAINABILITY, SPEECH, SPANISH language
- Abstract
Copyright of Procesamiento del Lenguaje Natural is the property of Sociedad Espanola para el Procesamiento del Lenguaje Natural and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
24. Folded ensemble deep learning based text generation on the brain signal.
- Author
-
Rathod, Vasundhara S., Tiwari, Ashish, and Kakde, Omprakash G.
- Subjects
CONVOLUTIONAL neural networks, TEXT recognition, DEEP learning, MACHINE translating, BRAIN-computer interfaces, NATURAL languages, SPEECH perception
- Abstract
Text generation transforms a source word sequence into a target document based on sequence-to-sequence generation. Video captioning, language identification, image captioning, speech recognition, machine translation, and several other natural language generation tasks are application areas of text generation techniques. Electroencephalographic (EEG) signals record brain activity and are considered the source of information for the brain-computer interface. Several kinds of research have been developed for text generation. The most challenging task is generating text more accurately by considering large contextual information and the significant features for generating the text. Hence, in this research, text generation using Folded deep learning is proposed for generating text through prediction and suggestion via a non-invasive technique. The EEG signal recorded from the patients is utilized for predicting the first letter using the proposed Folded Ensemble Deep convolutional neural network (DeepCNN), in which a hybrid ensemble activation function, along with the folded concept for validating the training data, is used to obtain network stability and to address the class imbalance issue. Then, the next letter is suggested using the proposed Folded Ensemble Bidirectional long short-term memory (BiLSTM) approach based on the eye-blink criteria for sequence-to-sequence text generation. The enhanced performance is evaluated using accuracy, precision, and recall, acquiring maximal values of 97.22%, 98.00%, and 98.00%, respectively. The proposed method can be utilized for real-time processing applications due to its non-invasive nature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
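The record above describes a two-stage pipeline: predict a first letter from EEG, then suggest following letters to build up text. As a hedged illustration of the second stage only, next-letter suggestion can be sketched with a plain character-bigram frequency model; this is a deliberately simplified stand-in for the paper's Folded Ensemble BiLSTM, and the corpus and function names are illustrative:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count letter-to-letter transitions in a training corpus."""
    counts = defaultdict(Counter)
    for word in corpus:
        for a, b in zip(word, word[1:]):
            counts[a][b] += 1
    return counts

def suggest_next(counts, letter, k=3):
    """Return up to k letters most likely to follow `letter`."""
    return [c for c, _ in counts[letter].most_common(k)]

model = train_bigram(["the", "then", "there", "this", "that"])
print(suggest_next(model, "t"))  # 'h' is the only letter seen after 't' here
```

A real system would replace the bigram table with a learned sequence model, but the suggestion interface (context in, ranked candidates out) is the same.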
25. Understanding Readability of Large Language Models Output: An Empirical Analysis.
- Author
-
Marulli, Fiammetta, Campanile, Lelio, de Biase, Maria Stella, Marrone, Stefano, Verde, Laura, and Bifulco, Marianna
- Subjects
LANGUAGE models ,GENERATIVE artificial intelligence ,TECHNOLOGICAL innovations ,COMPUTATIONAL linguistics ,ENGLISH language - Abstract
Recently, Large Language Models (LLMs) have made impressive leaps, achieving the ability to accomplish a variety of tasks, from text completion to powering chatbots. The great variety of available LLMs and the fast pace of technological innovation in this field make LLM assessment a hard task: understanding not only what such systems generate but also the quality of their results is of paramount importance. Generally, the quality of a synthetically generated object may refer to the reliability of the content, or to the lexical variety or coherence of the text. Regarding the quality of text generation, one aspect that has not yet been adequately discussed is the readability of textual artefacts. This work focuses on that aspect, proposing a set of experiments aimed at better understanding and evaluating the readability of texts automatically generated by an LLM. The analysis is performed through an empirical study based on: a subset of five pre-trained LLMs; a pool of English text generation tasks of increasing difficulty assigned to each model; and a set of the most popular readability indexes from the computational linguistics literature. The readability indexes are computed for each model to provide a first perspective on how the readability of artificially generated text varies across models and under different user requirements. The results obtained by evaluating and comparing the different models provide interesting insights, especially for the responsible use of these tools by beginners and less experienced practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
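The readability indexes mentioned in the record above are simple closed-form formulas over word, sentence, and syllable counts. As a sketch, the classic Flesch Reading Ease score can be computed as follows; the regex-based syllable counter is a rough heuristic, not the index's official syllable definition:

```python
import re

def count_syllables(word):
    """Rough syllable count: runs of vowels, minus a silent trailing 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * len(words) / sentences - 84.6 * syllables / len(words)

print(flesch_reading_ease("The cat sat on the mat. It was happy."))
```

Higher scores mean easier text; trivially short sentences can push the score above 100. Production analyses would use a tested implementation rather than this heuristic.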
26. A Survey on RAG with LLMs.
- Author
-
Arslan, Muhammad, Ghanem, Hussam, Munawar, Saba, and Cruz, Christophe
- Subjects
LANGUAGE models ,DIGITAL transformation ,NATURAL language processing ,TECHNOLOGICAL innovations ,DIGITAL technology - Abstract
In the fast-paced realm of digital transformation, businesses are increasingly pressured to innovate and boost efficiency to remain competitive and foster growth. Large Language Models (LLMs) have emerged as game-changers across industries, revolutionizing various sectors by harnessing extensive text data to analyze and generate human-like text. Despite their impressive capabilities, LLMs often encounter challenges when dealing with domain-specific queries, potentially leading to inaccuracies in their outputs. In response, Retrieval-Augmented Generation (RAG) has emerged as a viable solution. By seamlessly integrating external data retrieval into text generation processes, RAG aims to enhance the accuracy and relevance of the generated content. However, existing literature reviews tend to focus primarily on the technological advancements of RAG, overlooking a comprehensive exploration of its applications. This paper seeks to address this gap by providing a thorough review of RAG applications, encompassing both task-specific and discipline-specific studies, while also outlining potential avenues for future research. By shedding light on current RAG research and outlining future directions, this review aims to catalyze further exploration and development in this dynamic field, thereby contributing to ongoing digital transformation efforts. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
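The RAG loop surveyed in the record above can be illustrated with a minimal sketch: retrieve the document most similar to the query, then splice it into the prompt handed to a generator. Here bag-of-words cosine similarity stands in for a real dense retriever, and the prompt template is purely illustrative:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in docs]
    return [d for s, d in sorted(scored, reverse=True)[:k]]

def build_prompt(query, docs):
    """Ground the generator by prepending retrieved context to the query."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["RAG augments generation with retrieved documents.",
        "Transformers use self-attention over token embeddings."]
print(build_prompt("What does RAG do?", docs))
```

The resulting prompt would then be passed to an LLM; the retrieval step is what lets the model answer domain-specific queries its weights alone cannot.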
27. Usability of Texts Generated by Artificial Intelligence for Reading Skills in Teaching Turkish as a Foreign Language: The Example of ChatGPT-3.5.
- Author
-
KATI, Tuba Nur and CAN, Uğur
- Subjects
NATURAL language processing ,LANGUAGE teachers ,ARTIFICIAL intelligence ,CHATGPT ,GRAMMATICAL categories - Abstract
Copyright of Inonu University Journal of the Faculty of Education (INUJFE) is the property of Inonu University Journal of the Faculty of Education and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
- Published
- 2024
- Full Text
- View/download PDF
28. A Few-Shot Automatic Summarization Method Based on Reinforced Regularization.
- Author
-
Li, Qing (李清) and Wan, Weibing (万卫兵)
- Abstract
Automatic text summarization aims to extract the main statements from a text in order to compress its information. Existing generative summarization methods do not take full advantage of the pre-trained model's ability to learn the semantics of the original text, resulting in the loss of important information in the generated content, and fine-tuning on datasets with few samples is prone to overfitting. To address these problems and obtain better fine-tuning performance, the pre-trained model mT5 (multilingual T5) is used as a baseline, R-drop (Regularized dropout) is applied as a reinforced regularizer during fine-tuning to improve the model's learning ability, and Sparse softmax is used to reduce the ambiguity of prediction generation and ensure the accuracy of the output. The model computes BLEU (Bilingual Evaluation Understudy) for hyperparameter testing on the Chinese datasets LCSTS and CSL, and uses Rouge as the evaluation index on datasets of different orders of magnitude. The experimental results show that the optimized pre-trained model learns the semantic representation of the original text better, maintains a good fit on small samples, and generates more practical results. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
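The R-drop regularizer described in the record above penalizes disagreement between two forward passes of the same input under different dropout masks. A minimal sketch of that penalty term, assuming the two passes are already available as logit vectors (the `alpha` weighting is an illustrative hyperparameter):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl(p, q):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def rdrop_loss(logits_a, logits_b, alpha=0.5):
    """Symmetric KL between two forward passes of the same input
    under different dropout masks (the R-drop penalty)."""
    p, q = softmax(logits_a), softmax(logits_b)
    return alpha * 0.5 * (kl(p, q) + kl(q, p))

# Identical passes incur no penalty; divergent ones are penalized.
print(rdrop_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
print(rdrop_loss([2.0, 1.0, 0.1], [0.5, 1.5, 0.2]) > 0)  # True
```

In training, this term is added to the usual cross-entropy loss, pushing the network toward predictions that are stable under dropout noise.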
29. Towards Reliable Healthcare LLM Agents: A Case Study for Pilgrims during Hajj.
- Author
-
Alghamdi, Hanan M. and Mostafa, Abeer
- Subjects
LANGUAGE models ,ARTIFICIAL intelligence ,DATA augmentation ,EVIDENCE gaps ,DEEP learning ,CHATBOTS - Abstract
There is a pressing need for healthcare conversational agents with domain-specific expertise to ensure the provision of accurate and reliable information tailored to specific medical contexts. Moreover, there is a notable gap in research ensuring the credibility and trustworthiness of the information provided by these healthcare agents, particularly in critical scenarios such as medical emergencies. Pilgrims come from diverse cultural and linguistic backgrounds, often facing difficulties in accessing medical advice and information. Establishing an AI-powered multilingual chatbot can bridge this gap by providing readily available medical guidance and support, contributing to the well-being and safety of pilgrims. In this paper, we present a comprehensive methodology aimed at enhancing the reliability and efficacy of healthcare conversational agents, with a specific focus on addressing the needs of Hajj pilgrims. Our approach leverages domain-specific fine-tuning techniques on a large language model, alongside synthetic data augmentation strategies, to optimize performance in delivering contextually relevant healthcare information by introducing the HajjHealthQA dataset. Additionally, we employ a retrieval-augmented generation (RAG) module as a crucial component to validate uncertain generated responses, which improves model performance by 5%. Moreover, we train a secondary AI agent on a well-known health fact-checking dataset and use it to validate medical information in the generated responses. Our approach significantly elevates the chatbot's accuracy, demonstrating its adaptability to a wide range of pilgrim queries. We evaluate the chatbot's performance using quantitative and qualitative metrics, highlighting its proficiency in generating accurate responses and achieve competitive results compared to state-of-the-art models, in addition to mitigating the risk of misinformation and providing users with trustworthy health information. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Towards accurate unsupervised video captioning with implicit visual feature injection and explicit.
- Author
-
Zhang, Yunjie, Xu, Tianyang, Song, Xiaoning, Zhu, Xue-Feng, Feng, Zhenghua, and Wu, Xiao-Jun
- Abstract
In the field of video captioning, acquiring large amounts of high-quality aligned video-text pairs remains laborious, impeding practical applications. We therefore explore modelling techniques for unsupervised video captioning. Generating captions from text inputs similar to the video representation has been a successful unsupervised strategy in the past. However, this setting relies solely on textual data for training, neglecting vital visual cues related to the spatio-temporal appearance of the video. The absence of visual information increases the risk of generating erroneous captions. In view of this, we propose a novel unsupervised video captioning method that introduces visual information related to text-feature keywords to implicitly enhance training for the text generation task. Simultaneously, our method incorporates sentence keywords to explicitly augment the training process. By injecting additional implicit visual features and explicit keywords into the model, our method endows the generated captions with more accurate semantics. The experimental analysis demonstrates the merit of the proposed formulation, achieving superior performance against state-of-the-art unsupervised studies. • Contrastive learning is used to minimise the disparities between pseudo-text labels and video features. • Visual clues are aligned with the text generator for consistent semantic enhancement. • Keywords found within the sentences are leveraged for semantic preservation. • Outperforms existing unsupervised video captioning approaches. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Generating Factual Text via Entailment Recognition Task.
- Author
-
Dai, Jinqiao, Cheng, Pengsen, and Liu, Jiayong
- Subjects
AUTOMATIC summarization ,NATURAL language processing ,ARTIFICIAL intelligence ,NATURAL languages - Abstract
Generating diverse and factual text is challenging and is receiving increasing attention. By sampling from the latent space, variational autoencoder-based models have recently enhanced the diversity of generated text. However, existing research predominantly depends on summarization models to offer paragraph-level semantic information for enhancing factual correctness. The challenge lies in effectively generating factual text using sentence-level variational autoencoder-based models. In this paper, a novel model called fact-aware conditional variational autoencoder is proposed to balance the factual correctness and diversity of generated text. Specifically, our model encodes the input sentences and uses them as facts to build a conditional variational autoencoder network. By training a conditional variational autoencoder network, the model is enabled to generate text based on input facts. Building upon this foundation, the input text is passed to the discriminator along with the generated text. By employing adversarial training, the model is encouraged to generate text that is indistinguishable to the discriminator, thereby enhancing the quality of the generated text. To further improve the factual correctness, inspired by the natural language inference system, the entailment recognition task is introduced to be trained together with the discriminator via multi-task learning. Moreover, based on the entailment recognition results, a penalty term is further proposed to reconstruct the loss of our model, forcing the generator to generate text consistent with the facts. Experimental results demonstrate that compared with competitive models, our model has achieved substantial improvements in both the quality and factual correctness of the text, despite only sacrificing a small amount of diversity. Furthermore, when considering a comprehensive evaluation of diversity and quality metrics, our model has also demonstrated the best performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
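The entailment-based penalty in the record above reshapes the generator's loss so that text the entailment recognizer judges unsupported by the input facts costs more. A hedged sketch of such a combined loss, where `lam` is an assumed weighting hyperparameter and `entail_prob` is the recognizer's probability that the generated text is entailed by the facts:

```python
def total_loss(recon_loss, entail_prob, lam=1.0):
    """Reconstruction loss plus a penalty that grows as the
    entailment recognizer judges the text less supported by the facts."""
    return recon_loss + lam * (1.0 - entail_prob)

# Fully entailed text adds no penalty; contradicted text adds lam.
print(total_loss(0.8, 1.0))  # 0.8
print(total_loss(0.8, 0.0))  # 1.8
```

The gradient of the penalty flows back into the generator, steering it toward outputs the recognizer scores as consistent with the input facts; the paper's actual formulation may differ in detail.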
32. Generation of Space Descriptions Based on Distributional Semantic Models
- Author
-
Gorovaia, S. P., Brilly, Mitja, Advisory Editor, Hoalst-Pullen, Nancy, Advisory Editor, Leitner, Michael, Advisory Editor, Patterson, Mark W., Advisory Editor, Veress, Márton, Advisory Editor, Bakaev, Maxim, editor, Bolgov, Radomir, editor, Chugunov, Andrei V., editor, Pereira, Roberto, editor, R, Elakkiya, editor, and Zhang, Wei, editor
- Published
- 2024
- Full Text
- View/download PDF
33. Safet AIsović: Comparison of Methods for Generating Sevdah Music Lyrics
- Author
-
Šabić, Ejub, Fazlić, Amar, Genjac, Amar, Kovačević, Aldin, Kečo, Dino, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Ademović, Naida, editor, Akšamija, Zlatan, editor, and Karabegović, Almir, editor
- Published
- 2024
- Full Text
- View/download PDF
34. Generative Sentiment Analysis via Latent Category Distribution and Constrained Decoding
- Author
-
Zhou, Jun, Yu, Dongyang, Aziz, Kamran, Su, Fangfang, Zhang, Qing, Li, Fei, Ji, Donghong, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Wand, Michael, editor, Malinovská, Kristína, editor, Schmidhuber, Jürgen, editor, and Tetko, Igor V., editor
- Published
- 2024
- Full Text
- View/download PDF
35. Hybrid Approach Text Generation for Low-Resource Language
- Author
-
Rakhimova, Diana, Adali, Eşref, Karibayeva, Aidana, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Nguyen, Ngoc-Than, editor, Franczyk, Bogdan, editor, Ludwig, André, editor, Nunez, Manuel, editor, Treur, Jan, editor, Vossen, Gottfried, editor, and Kozierkiewicz, Adrianna, editor
- Published
- 2024
- Full Text
- View/download PDF
36. Neuro-Evolution-Based Language Model for Text Generation
- Author
-
Bagavathi, C., Prakash, Abhijith C., Rannenberg, Kai, Editor-in-Chief, Soares Barbosa, Luís, Editorial Board Member, Carette, Jacques, Editorial Board Member, Tatnall, Arthur, Editorial Board Member, Neuhold, Erich J., Editorial Board Member, Stiller, Burkhard, Editorial Board Member, Stettner, Lukasz, Editorial Board Member, Pries-Heje, Jan, Editorial Board Member, Kreps, David, Editorial Board Member, Rettberg, Achim, Editorial Board Member, Furnell, Steven, Editorial Board Member, Mercier-Laurent, Eunika, Editorial Board Member, Winckler, Marco, Editorial Board Member, Malaka, Rainer, Editorial Board Member, Owoc, Mieczyslaw Lech, editor, Varghese Sicily, Felix Enigo, editor, Rajaram, Kanchana, editor, and Balasundaram, Prabavathy, editor
- Published
- 2024
- Full Text
- View/download PDF
37. THE BAT: Thoughts Hierarchical Enhancement Beyond Arbitrary Text Style Transfer
- Author
-
Zeng, Biqing, Liang, Junjie, Hua, Yining, Li, Ruizhe, Deng, Huimin, Peng, Yihao, Wang, Ruitang, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Huang, De-Shuang, editor, Si, Zhanjun, editor, and Zhang, Chuanlei, editor
- Published
- 2024
- Full Text
- View/download PDF
38. On the Way to Controllable Text Summarization in Russian
- Author
-
Dremina, Alena, Tikhonova, Maria, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Ignatov, Dmitry I., editor, Khachay, Michael, editor, Kutuzov, Andrey, editor, Madoyan, Habet, editor, Makarov, Ilya, editor, Nikishina, Irina, editor, Panchenko, Alexander, editor, Panov, Maxim, editor, M. Pardalos, Panos, editor, Savchenko, Andrey V., editor, Tsymbalov, Evgenii, editor, Tutubalina, Elena, editor, and Zagoruyko, Sergey, editor
- Published
- 2024
- Full Text
- View/download PDF
39. Integrating Prior Scenario Knowledge for Composition Review Generation
- Author
-
Zheng, Luyang, Jiang, Hailan, Wang, Jian, Sun, Yuqinq, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Cao, Cungeng, editor, Chen, Huajun, editor, Zhao, Liang, editor, Arshad, Junaid, editor, Asyhari, Taufiq, editor, and Wang, Yonghao, editor
- Published
- 2024
- Full Text
- View/download PDF
40. Challenges and Opportunities in Text Generation Explainability
- Author
-
Amara, Kenza, Sevastjanova, Rita, El-Assady, Mennatallah, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Longo, Luca, editor, Lapuschkin, Sebastian, editor, and Seifert, Christin, editor
- Published
- 2024
- Full Text
- View/download PDF
41. AI for the Restoration of Ancient Inscriptions: A Computational Linguistics Perspective
- Author
-
Locaputo, Alessandro, Portelli, Beatrice, Magnani, Stefano, Colombi, Emanuela, Serra, Giuseppe, Moral-Andrés, Fernando, editor, Merino-Gómez, Elena, editor, and Reviriego, Pedro, editor
- Published
- 2024
- Full Text
- View/download PDF
42. Stylometric Analysis of Large Language Model-Generated Commentaries in the Context of Medical Neuroscience
- Author
-
Argasiński, Jan K., Grabska-Gradzińska, Iwona, Przystalski, Karol, Ochab, Jeremi K., Walkowiak, Tomasz, Hartmanis, Juris, Founding Editor, van Leeuwen, Jan, Series Editor, Hutchison, David, Editorial Board Member, Kanade, Takeo, Editorial Board Member, Kittler, Josef, Editorial Board Member, Kleinberg, Jon M., Editorial Board Member, Kobsa, Alfred, Series Editor, Mattern, Friedemann, Editorial Board Member, Mitchell, John C., Editorial Board Member, Naor, Moni, Editorial Board Member, Nierstrasz, Oscar, Series Editor, Pandu Rangan, C., Editorial Board Member, Sudan, Madhu, Series Editor, Terzopoulos, Demetri, Editorial Board Member, Tygar, Doug, Editorial Board Member, Weikum, Gerhard, Series Editor, Vardi, Moshe Y., Series Editor, Goos, Gerhard, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Franco, Leonardo, editor, de Mulatier, Clélia, editor, Paszynski, Maciej, editor, Krzhizhanovskaya, Valeria V., editor, Dongarra, Jack J., editor, and Sloot, Peter M. A., editor
- Published
- 2024
- Full Text
- View/download PDF
43. Extending Abstract Categorial Grammars with Feature Structures: Theory and Practice
- Author
-
de Groote, Philippe, Guillaume, Maxime, Helman, Agathe, Pogodalla, Sylvain, Salmon, Raphaël, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bekki, Daisuke, editor, Mineshima, Koji, editor, and McCready, Elin, editor
- Published
- 2024
- Full Text
- View/download PDF
44. Generating Synthetic Text Data for Improving Class Balance in Personality Prediction
- Author
-
Lakhtaria, Dhruvil, L., Durga Supriya H., Chhabra, Radhika, Taparia, Rohit, M., Anand Kumar, Celebi, Emre, Series Editor, Chen, Jingdong, Series Editor, Gopi, E. S., Series Editor, Neustein, Amy, Series Editor, Liotta, Antonio, Series Editor, Di Mauro, Mario, Series Editor, and Maheswaran, P, editor
- Published
- 2024
- Full Text
- View/download PDF
45. MAP-Elites with Transverse Assessment for Multimodal Problems in Creative Domains
- Author
-
Zammit, Marvin, Liapis, Antonios, Yannakakis, Georgios N., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Johnson, Colin, editor, Rebelo, Sérgio M., editor, and Santos, Iria, editor
- Published
- 2024
- Full Text
- View/download PDF
46. AI-Driven Meditation: Personalization for Inner Peace
- Author
-
Nguyen, Peter, Fdez, Javier, Witkowski, Olaf, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Johnson, Colin, editor, Rebelo, Sérgio M., editor, and Santos, Iria, editor
- Published
- 2024
- Full Text
- View/download PDF
47. Controllable Story Generation Based on Perplexity Minimization
- Author
-
Vychegzhanin, Sergey, Kotelnikova, Anastasia, Sergeev, Alexander, Kotelnikov, Evgeny, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Ignatov, Dmitry I., editor, Khachay, Michael, editor, Kutuzov, Andrey, editor, Madoyan, Habet, editor, Makarov, Ilya, editor, Nikishina, Irina, editor, Panchenko, Alexander, editor, Panov, Maxim, editor, Pardalos, Panos M., editor, Savchenko, Andrey V., editor, Tsymbalov, Evgenii, editor, Tutubalina, Elena, editor, and Zagoruyko, Sergey, editor
- Published
- 2024
- Full Text
- View/download PDF
48. Comparison of Textual Data Augmentation Methods on SST-2 Dataset
- Author
-
Çataltaş, Mustafa, Baykan, Nurdan Akhan, Cicekli, Ilyas, Chlamtac, Imrich, Series Editor, and Seyman, Muhammet Nuri, editor
- Published
- 2024
- Full Text
- View/download PDF
49. The Rise of AI-Powered Writing: How ChatGPT is Revolutionizing Scientific Communication for Better or for Worse
- Author
-
Pawlicka, Aleksandra, Pawlicki, Marek, Kozik, Rafał, Choraś, Michał, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Huang, De-Shuang, editor, Premaratne, Prashan, editor, and Yuan, Changan, editor
- Published
- 2024
- Full Text
- View/download PDF
50. Visually Reporting Geographic Data Insights as Integrated Visual and Textual Representations
- Author
-
Beck, Fabian, Latif, Shahid, Burghardt, Dirk, editor, Demidova, Elena, editor, and Keim, Daniel A., editor
- Published
- 2024
- Full Text
- View/download PDF