54 results
Search Results
2. Group to establish standards for AI in papers.
- Author
- Else, Holly
- Subjects
- *GENERATIVE artificial intelligence, *ARTIFICIAL intelligence, *LANGUAGE models, *CHATBOTS
- Abstract
The article highlights a Chinese team's discovery of a bacterium within mosquitoes' guts that inhibits dengue and Zika viruses, potentially aiding disease control efforts. Topics include the bacterium's efficacy in disrupting viral transmission, the significance amidst rising mosquito resistance to insecticides, and ongoing research to assess its real-world impact alongside existing control measures.
- Published
- 2024
- Full Text
- View/download PDF
3. Quantifying social capital creation in post‐disaster recovery aid in Indonesia: methodological innovation by an AI‐based language model.
- Author
- Marutschke, Daniel Moritz, Nurdin, Muhammad Riza, and Hirono, Miwa
- Subjects
- *LANGUAGE models, *ARTIFICIAL intelligence, *SOCIAL capital, *NATURAL language processing, *DISASTER relief, *ETHNOLOGY research, *DISASTER resilience
- Abstract
Smooth interaction with a disaster‐affected community can create and strengthen its social capital, leading to greater effectiveness in the provision of successful post‐disaster recovery aid. To understand the relationship between the types of interaction, the strength of social capital generated, and the provision of successful post‐disaster recovery aid, intricate ethnographic qualitative research is required, but it is likely to remain illustrative because it is based, at least to some degree, on the researcher's intuition. This paper thus offers an innovative research method employing a quantitative artificial intelligence (AI)‐based language model, which allows researchers to re‐examine data, thereby validating the findings of the qualitative research, and to glean additional insights that might otherwise have been missed. This paper argues that well‐connected personnel and religiously‐based communal activities help to enhance social capital by bonding within a community and linking to outside agencies and that mixed methods, based on the AI‐based language model, effectively strengthen text‐based qualitative research. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Reviving the Philosophical Dialogue with Large Language Models.
- Author
- Smithson, Robert and Zweber, Adam
- Subjects
- *LANGUAGE models, *PHILOSOPHY education, *PLAGIARISM, *STUDENT assignments, *ARTIFICIAL intelligence
- Abstract
Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers "entirely on their own." For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least "look like" good papers, many students will complete paper assignments in a way that fails to develop their philosophical abilities. We argue that this problem exists even if students can produce better papers with AI and even if instructors can detect AI-generated content with decent reliability. But LLMs also create a pedagogical opportunity. We propose that instructors shift the emphasis of their assignments from philosophy papers to "LLM dialogues": philosophical conversations between the student and an LLM. We describe our experience with using these types of assignments over the past several semesters. We argue that, far from undermining quality philosophical instruction, LLMs allow us to teach philosophy more effectively than was possible before. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
5. ChatGPT: The transformative influence of generative AI on science and healthcare.
- Author
- Varghese, Julian and Chapiro, Julius
- Subjects
- *GENERATIVE artificial intelligence, *LANGUAGE models, *CHATGPT, *ARTIFICIAL intelligence, *CLINICAL decision support systems
- Abstract
In an age where technology is evolving at a sometimes incomprehensibly rapid pace, the liver community must adjust and learn to embrace breakthroughs with an open mind in order to benefit from potentially transformative influences on our science and practice. The Journal of Hepatology has responded to novel developments in artificial intelligence (AI) by recruiting experts in the field to serve on the Editorial Board. Publications introducing novel AI technology are no longer uncommon in our journal and are among the most highly debated and possibly practice-changing papers across a broad range of scientific disciplines, united by their focus on liver disease. As AI is rapidly evolving, this expert paper will focus on educating our readership on large language models and their possible impact on our research practice and clinical outlook, outlining both challenges and opportunities in the field. "To improve is to change; to be perfect is to change often." ― Winston S. Churchill [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Deep Time Series Forecasting Models: A Comprehensive Survey.
- Author
- Liu, Xinhe and Wang, Wenmin
- Subjects
- *DEEP learning, *ARTIFICIAL neural networks, *TIME series analysis, *CONVOLUTIONAL neural networks, *ARTIFICIAL intelligence, *LANGUAGE models
- Abstract
Deep learning, a crucial technique for achieving artificial intelligence (AI), has been successfully applied in many fields. The gradual application of the latest architectures of deep learning in the field of time series forecasting (TSF), such as Transformers, has shown excellent performance and results compared to traditional statistical methods. These applications are widely present in academia and in our daily lives, covering many areas including forecasting electricity consumption in power systems, meteorological rainfall, traffic flow, quantitative trading, risk control in finance, sales operations and price predictions for commercial companies, and pandemic prediction in the medical field. Deep learning-based TSF tasks stand out as one of the most valuable AI scenarios for research, playing an important role in explaining complex real-world phenomena. However, deep learning models still face challenges: they need to deal with the challenge of large-scale data in the information age, achieve longer forecasting ranges, reduce excessively high computational complexity, etc. Therefore, novel methods and more effective solutions are essential. In this paper, we review the latest developments in deep learning for TSF. We begin by introducing the recent development trends in the field of TSF and then propose a new taxonomy from the perspective of deep neural network models, comprehensively covering articles published over the past five years. We also organize commonly used experimental evaluation metrics and datasets. Finally, we point out current issues with the existing solutions and suggest promising future directions in the field of deep learning combined with TSF. This paper is the most comprehensive review related to TSF in recent years and will provide a detailed index for researchers in this field and those who are just starting out. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
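The survey above notes that it organizes the field's commonly used experimental evaluation metrics. As a minimal illustration of how deep TSF forecasts are typically scored (the function names and toy data are ours, not the survey's), two standard metrics can be computed like this:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error: penalizes large forecast deviations quadratically.
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    # Mean absolute error: robust, stated in the units of the series itself.
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Toy horizon-3 forecast of, say, electricity load
actual   = [100.0, 102.0, 98.0]
forecast = [101.0, 100.0, 99.0]
print(mse(actual, forecast))  # 2.0
print(mae(actual, forecast))  # ≈ 1.33
```

Surveys in this area typically report both, since MSE favors models that avoid occasional large errors while MAE weighs all errors linearly.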
7. Chatbot Invasion.
- Author
- Stokel-Walker, Chris
- Subjects
- *CHATBOTS, *LANGUAGE models, *SCIENTIFIC literature, *ACADEMIC librarians, *ARTIFICIAL intelligence
- Abstract
A recent article in Scientific American discusses the concern among scientists that chatbots, such as ChatGPT, are being misused to produce scientific literature. Researchers have identified certain keywords and phrases that tend to appear more often in AI-generated sentences than in human writing. However, automated AI text detectors are unreliable, and the involvement of AI in scientific papers is not always clear-cut. Librarian Andrew Gray's analysis suggests that at least 60,000 papers, slightly more than 1 percent of all scientific articles published globally last year, may have used a large language model. The use of AI in scientific writing raises concerns about the accuracy and integrity of the research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
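Gray's analysis, as described above, rests on comparing how often certain LLM-favoured words appear in a corpus against a human-writing baseline. A minimal sketch of that idea (the marker list, sample text, and per-1,000-words normalization are illustrative, not taken from the article):

```python
import re
from collections import Counter

# Illustrative marker words; Gray's actual keyword list is not given here.
LLM_MARKERS = {"delve", "intricate", "showcasing", "underscores"}

def marker_rate(text: str) -> float:
    """Occurrences of marker words per 1,000 words of text."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    hits = sum(counts[m] for m in LLM_MARKERS)
    return 1000.0 * hits / len(words)

sample = ("We delve into the intricate dynamics, showcasing results "
          "that underscores trends.")
print(round(marker_rate(sample), 1))  # ≈ 363.6 for this marker-heavy toy text
```

Comparing such rates between a suspect corpus and a pre-LLM baseline gives a rough population-level estimate, which is exactly why the article stresses that per-paper attribution remains unreliable.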
8. Assessing the impact and challenges of AI-based language models on the education sector: a proposal for new assessment strategies and design.
- Author
- Le, Anh Viet and Metzger, Warren
- Subjects
- *LANGUAGE models, *ARTIFICIAL intelligence, *CHATGPT, *TOURISM education, *SCHOOL environment, *RECOMMENDER systems
- Abstract
This paper examines the potential impact of AI-based language models on education, with a specific focus on the ChatGPT model developed by OpenAI. The study begins by providing an overview of the current state of AI in education, including the challenges and opportunities presented by these models. It then delves into an analysis of ChatGPT, including its capabilities, strengths and limitations. The paper also conducts a thorough analysis of potential implications and offers suggestions for assessment design to prevent students from utilising AI-based language models in educational environments. Finally, the paper concludes with a set of recommendations for assessment design, considering the specific characteristics of ChatGPT and the potential impact of AI-based language models on tourism education. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Using an Artificial intelligence chatbot to critically review the scientific literature on the use of Artificial intelligence in Environmental Impact Assessment.
- Author
- Bond, Alan, Cilliers, Dirk, Retief, Francois, Alberts, Reece, Roos, Claudine, and Moolman, Jurie
- Subjects
- *ENVIRONMENTAL impact analysis, *SCIENTIFIC literature, *ARTIFICIAL intelligence, *CHATBOTS, *LITERATURE reviews, *LANGUAGE models
- Abstract
There is considerable uncertainty about the role that Artificial Intelligence (AI) might play in Environmental Impact Assessment (EIA), including in research. AI large language model (LLM) chatbots have the potential to increase the efficiency of EIA research, but their outputs can raise concerns. This paper investigates the potential time savings achievable using LLM chatbots to undertake a critical review of literature focussing on the use of AI in EIA. Using a combination of ChatGPT and Elicit, literature was reviewed to identify 12 key issues associated with the use of AI in EIA, and this paper was prepared in three and a half days from initial conception. A protocol is developed to assist researchers in fact-checking evidence delivered through Elicit (or other machine learning tools), which serves as a novel outcome of this research. Comments from three peer reviewers allowed more objective reflection on the credibility of the LLM chatbot-derived output, on the appropriateness of the time savings, and on the future research needed on the application of LLM chatbots in this context. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
10. Segment Anything Model Can Not Segment Anything: Assessing AI Foundation Model's Generalizability in Permafrost Mapping.
- Author
- Li, Wenwen, Hsu, Chia-Yu, Wang, Sizhe, Yang, Yezhou, Lee, Hyunho, Liljedahl, Anna, Witharana, Chandi, Yang, Yili, Rogers, Brendan M., Arundel, Samantha T., Jones, Matthew B., McHenry, Kenton, and Solis, Patricia
- Subjects
- *LANGUAGE models, *BUILDING foundations, *ARTIFICIAL intelligence, *PERMAFROST, *GLOBAL warming, *TUNDRAS
- Abstract
This paper assesses trending AI foundation models, especially emerging computer vision foundation models and their performance in natural landscape feature segmentation. While the term foundation model has quickly garnered interest from the geospatial domain, its definition remains vague. Hence, this paper will first introduce AI foundation models and their defining characteristics. Building on the tremendous success achieved by Large Language Models (LLMs) as foundation models for language tasks, this paper discusses the challenges of building foundation models for geospatial artificial intelligence (GeoAI) vision tasks. To evaluate the performance of large AI vision models, especially Meta's Segment Anything Model (SAM), we implemented different instance segmentation pipelines that minimize the changes to SAM to leverage its power as a foundation model. A series of prompt strategies were developed to test SAM's performance regarding its theoretical upper bound of predictive accuracy, zero-shot performance, and domain adaptability through fine-tuning. The analysis used two permafrost feature datasets, ice-wedge polygons and retrogressive thaw slumps, because (1) these landform features are more challenging to segment than man-made features due to their complicated formation mechanisms, diverse forms, and vague boundaries; (2) their presence and changes are important indicators for Arctic warming and climate change. The results show that although promising, SAM still has room for improvement to support AI-augmented terrain mapping. The spatial and domain generalizability of this finding is further validated using a more general dataset, EuroCrops, for agricultural field mapping. Finally, we discuss future research directions that strengthen SAM's applicability in challenging geospatial domains. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
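The abstract above describes a series of prompt strategies built around SAM. One common strategy of that kind is to derive point prompts from candidate region masks; a minimal, hypothetical sketch of that single step (not the paper's actual pipeline, and with SAM itself out of scope) might look like:

```python
import numpy as np

def centroid_point_prompt(mask: np.ndarray):
    """Derive a single (x, y) foreground point prompt from a binary mask.

    One simple prompting strategy a SAM-based pipeline can use: feed the
    centroid of a candidate region to the model as a positive point.
    Illustrative only; the paper's exact strategies may differ.
    """
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # empty candidate region: nothing to prompt with
    return int(xs.mean().round()), int(ys.mean().round())

# Toy "ice-wedge polygon" candidate mask in an 8x8 tile
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:6] = True
print(centroid_point_prompt(mask))  # (4, 3)
```

Note that centroid prompts fail for concave landforms (the centroid can fall outside the region), which is one reason segmenting vague-boundaried natural features is harder than man-made ones.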
11. Vox Populi, Vox ChatGPT: Large Language Models, Education and Democracy.
- Author
- Zuber, Niina and Gogoll, Jan
- Subjects
- *LANGUAGE models, *CHATGPT, *GENERATIVE artificial intelligence, *ARTIFICIAL intelligence, *DEMOCRACY
- Abstract
In the era of generative AI and specifically large language models (LLMs), exemplified by ChatGPT, the intersection of artificial intelligence and human reasoning has become a focal point of global attention. Unlike conventional search engines, LLMs go beyond mere information retrieval, entering into the realm of discourse culture. Their outputs mimic well-considered, independent opinions or statements of facts, presenting a pretense of wisdom. This paper explores the potential transformative impact of LLMs on democratic societies. It delves into the concerns regarding the difficulty in distinguishing ChatGPT-generated texts from human output. The discussion emphasizes the essence of authorship, rooted in the unique human capacity for reason—a quality indispensable for democratic discourse and successful collaboration within free societies. Highlighting the potential threats to democracy, this paper presents three arguments: the Substitution argument, the Authenticity argument, and the Facts argument. These arguments highlight the potential risks that are associated with an overreliance on LLMs. The central thesis posits that widespread deployment of LLMs may adversely affect the fabric of a democracy if not comprehended and addressed proactively and properly. In proposing a solution, we advocate for an emphasis on education as a means to mitigate risks. We suggest cultivating thinking skills in children, fostering coherent thought formulation, and distinguishing between machine-generated output and genuine, i.e., human, reasoning. The focus should be on the responsible development and usage of LLMs, with the goal of augmenting human capacities in thinking, deliberating and decision-making rather than substituting them. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. ArchGPT: harnessing large language models for supporting renovation and conservation of traditional architectural heritage.
- Author
- Zhang, Jiaxin, Xiang, Rikui, Kuang, Zheyuan, Wang, Bowen, and Li, Yunqin
- Subjects
- *LANGUAGE models, *PRESERVATION of architecture, *ARTIFICIAL intelligence, *ARCHITECTURAL designs, *VERNACULAR architecture, *CULTURAL property, *CONSERVATION & restoration, *TRADITIONAL ecological knowledge
- Abstract
The renovation of traditional architecture contributes to the inheritance of cultural heritage and promotes the development of social civilization. However, executing renovation plans that simultaneously align with the demands of residents, heritage conservation personnel, and architectural experts poses a significant challenge. In this paper, we introduce an Artificial Intelligence (AI) agent, Architectural GPT (ArchGPT), designed for comprehensively and accurately understanding needs and tackling architectural renovation tasks, accelerating and assisting the renovation process. To address users' requirements, ArchGPT utilizes the reasoning capabilities of large language models (LLMs) for task planning. Operating under the use of tools, task-specific models, and professional architectural guidelines, it resolves issues within the architectural domain through sensible planning, combination, and invocation. Ultimately, ArchGPT achieves satisfactory results in terms of response and overall satisfaction rates for customized tasks related to the conservation and restoration of traditional architecture. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. Intelligent extraction of reservoir dispatching information integrating large language model and structured prompts.
- Author
- Yang, Yangrui, Chen, Sisi, Zhu, Yaping, Liu, Xuemei, Ma, Wei, and Feng, Ling
- Subjects
- *LANGUAGE models, *ARTIFICIAL intelligence, *RESERVOIRS, *DATA mining, *MERGERS & acquisitions, *FLOOD control
- Abstract
Reservoir dispatching regulations are a crucial basis for reservoir operation, and using information extraction technology to extract entities and relationships from heterogeneous texts to form triples can provide structured knowledge support for professionals in making dispatch decisions and intelligent recommendations. Current information extraction technologies require manual data labeling, consuming a significant amount of time. As the number of dispatch rules increases, this method cannot meet the need for timely generation of dispatch plans during emergency flood control periods. Furthermore, utilizing natural language prompts to guide large language models in completing reservoir dispatch extraction tasks also presents challenges of cognitive load and instability in model output. Therefore, this paper proposes an entity and relationship extraction method for reservoir dispatch based on structured prompt language. Initially, a variety of labels are refined according to the extraction tasks, then organized and defined using the Backus–Naur Form (BNF) to create a structured format, thus better guiding large language models in the extraction work. Moreover, an AI agent based on this method has been developed to facilitate operation by dispatch professionals, allowing for the quick acquisition of structured data. Experimental verification has shown that, in the task of extracting entities and relationships for reservoir dispatch, this AI agent not only effectively reduces cognitive burden and the impact of instability in model output but also demonstrates high extraction performance (with F1 scores for extracting entities and relationships both above 80%), offering a new solution approach for knowledge extraction tasks in other water resource fields. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
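The abstract above describes defining extraction labels in Backus–Naur Form so that a structured format, rather than free-form natural language, guides the LLM. A hypothetical sketch of that idea (the grammar, relation labels, and example reply below are ours, not the authors'):

```python
# Constrain the LLM's output with a BNF-defined format, then parse the
# reply into (entity, relation, entity) triples. Malformed lines are
# discarded, which is one way structured prompts damp output instability.
BNF_FORMAT = '''
<output>   ::= <triple> | <triple> "\\n" <output>
<triple>   ::= <entity> " | " <relation> " | " <entity>
<relation> ::= "controls" | "discharges_to" | "limited_by"
'''

def build_prompt(text: str) -> str:
    """Wrap dispatch-regulation text in a structured extraction prompt."""
    return (
        "Extract reservoir-dispatch triples from the text.\n"
        f"Reply ONLY in this BNF format:\n{BNF_FORMAT}\n"
        f"Text: {text}"
    )

def parse_triples(reply: str):
    """Parse a model reply into triples, skipping non-conforming lines."""
    triples = []
    for line in reply.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

reply = "Danjiangkou Reservoir | discharges_to | Han River\nsome malformed line"
print(parse_triples(reply))
# [('Danjiangkou Reservoir', 'discharges_to', 'Han River')]
```

The parsed triples can then be loaded into whatever knowledge store backs the dispatch-recommendation system.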
14. Ethical considerations for artificial intelligence in dermatology: a scoping review.
- Author
- Gordon, Emily R, Trager, Megan H, Kontos, Despina, Weng, Chunhua, Geskin, Larisa J, Dugdale, Lydia S, and Samie, Faramarz H
- Subjects
- *ARTIFICIAL intelligence, *LANGUAGE models, *ATTITUDES toward technology, *DERMATOLOGY, *CHATGPT
- Abstract
The field of dermatology is experiencing the rapid deployment of artificial intelligence (AI), from mobile applications (apps) for skin cancer detection to large language models like ChatGPT that can answer generalist or specialist questions about skin diagnoses. With these new applications, ethical concerns have emerged. In this scoping review, we aimed to identify the applications of AI to the field of dermatology and to understand their ethical implications. We used a multifaceted search approach, searching PubMed, MEDLINE, Cochrane Library and Google Scholar for primary literature, following the PRISMA Extension for Scoping Reviews guidance. Our advanced query included terms related to dermatology, AI and ethical considerations. Our search yielded 202 papers. After initial screening, 68 studies were included. Thirty-two were related to clinical image analysis and raised ethical concerns for misdiagnosis, data security, privacy violations and replacement of dermatologist jobs. Seventeen discussed limited skin of colour representation in datasets leading to potential misdiagnosis in the general population. Nine articles about teledermatology raised ethical concerns, including the exacerbation of health disparities, lack of standardized regulations, informed consent for AI use and privacy challenges. Seven addressed inaccuracies in the responses of large language models. Seven examined attitudes toward and trust in AI, with most patients requesting supplemental assessment by a physician to ensure reliability and accountability. Benefits of AI integration into clinical practice include increased patient access, improved clinical decision-making, efficiency and many others. However, safeguards must be put in place to ensure the ethical application of AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Chat3D: Interactive understanding 3D scene-level point clouds by chatting with foundation model for urban ecological construction.
- Author
- Chen, Yiping, Zhang, Shuai, Han, Ting, Du, Yumeng, Zhang, Wuming, and Li, Jonathan
- Subjects
- *LANGUAGE models, *POINT cloud, *ARTIFICIAL intelligence, *SUSTAINABLE urban development, *ECOLOGICAL models
- Abstract
With the boom in artificial intelligence technology, large language models are demonstrating their potential in comprehension and creativity. Large language models such as GPT-4 and Gemini have shown strong performance on various professional-level exams. However, as language models, their powerful comprehension is expressed only through text sequences. Although videos can already be generated by connecting 3D point clouds with large language models, no prompt-engineering project yet interacts with point clouds directly through one-dimensional attribute-calculation results. Point cloud data are also rich in information that can support various tasks of urban construction. For scene-level point cloud data, much research has been done on semantic segmentation, target detection, and other tasks; however, the perception results alone usually offer little direct help to scene construction. This paper presents a method for applying large language models to urban ecological construction by combining the results of 3D point cloud semantic segmentation. The objective is to integrate the prior knowledge and creative capabilities of Large Language Models (LLMs) within urban development with the outcomes derived from point cloud semantic segmentation. This integration aims to establish an interactive point cloud intelligent analysis system, tailored for aiding decision-making processes in urban ecological civilization construction, thus presenting innovative perspectives for the advancement of smart city development.
• Chat3D, a large model for interactive 3D scene understanding for urban ecological construction, based on prompt engineering.
• Utilizing chains of thought, prompt inputs at different levels improve the completeness and credibility of report generation.
• Experiments show that Chat3D accurately predicts urban environmental indices, assesses urban ecological risks, and benefits sustainable urban development. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Generative AI for pentesting: the good, the bad, the ugly.
- Author
- Hilario, Eric, Azam, Sami, Sundaram, Jawahar, Imran Mohammed, Khwaja, and Shanmugam, Bharanidharan
- Subjects
- *GENERATIVE artificial intelligence, *LANGUAGE models, *CHATGPT, *INTERNET security, *ARTIFICIAL intelligence
- Abstract
This paper examines the role of Generative AI (GenAI) and Large Language Models (LLMs) in penetration testing, exploring the benefits, challenges, and risks associated with cyber security applications. Through the use of generative artificial intelligence, penetration testing becomes more creative, test environments are customised, and continuous learning and adaptation are achieved. We examined how GenAI (ChatGPT 3.5) helps penetration testers with options and suggestions during the five stages of penetration testing. The effectiveness of the GenAI tool was tested using a publicly available vulnerable machine from VulnHub. The tool responded remarkably quickly at each stage and produced a better pentesting report. In this article, we discuss potential risks, unintended consequences, and uncontrolled AI development associated with pentesting. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Emerging opportunities of using large language models for translation between drug molecules and indications.
- Author
- Oniani, David, Hilsman, Jordan, Zang, Chengxi, Wang, Junmei, Cai, Lianjin, Zawala, Jan, and Wang, Yanshan
- Subjects
- *LANGUAGE models, *GENERATIVE artificial intelligence, *DRUG discovery, *MOLECULES, *EVIDENCE gaps
- Abstract
A drug molecule is a substance that changes an organism's mental or physical state. Every approved drug has an indication, which refers to the therapeutic use of that drug for treating a particular medical condition. While the Large Language Model (LLM), a generative Artificial Intelligence (AI) technique, has recently demonstrated effectiveness in translating between molecules and their textual descriptions, there remains a gap in research regarding their application in facilitating the translation between drug molecules and indications (which describes the disease, condition or symptoms for which the drug is used), or vice versa. Addressing this challenge could greatly benefit the drug discovery process. The capability of generating a drug from a given indication would allow for the discovery of drugs targeting specific diseases or targets and ultimately provide patients with better treatments. In this paper, we first propose a new task, the translation between drug molecules and corresponding indications, and then test existing LLMs on this new task. Specifically, we consider nine variations of the T5 LLM and evaluate them on two public datasets obtained from ChEMBL and DrugBank. Our experiments show the early results of using LLMs for this task and provide a perspective on the state-of-the-art. We also emphasize the current limitations and discuss future work that has the potential to improve the performance on this task. The creation of molecules from indications, or vice versa, will allow for more efficient targeting of diseases and significantly reduce the cost of drug discovery, with the potential to revolutionize the field of drug discovery in the era of generative AI. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
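The abstract above frames translation between drug molecules and indications as a text-to-text task for T5 variants. A hypothetical sketch of how such pairs might be formatted as bidirectional seq2seq examples (the task prefixes and the single example pair are ours for illustration, not the paper's actual preprocessing):

```python
# Pair SMILES strings with indication text and emit seq2seq training
# examples in both directions, in the T5 style of prefixed inputs.
PAIRS = [
    # (SMILES, indication) - aspirin, used here purely as an example
    ("CC(=O)Oc1ccccc1C(=O)O", "pain and inflammation"),
]

def to_examples(pairs):
    """Format (SMILES, indication) pairs as (input, target) seq2seq examples."""
    examples = []
    for smiles, indication in pairs:
        # Forward direction: molecule -> indication
        examples.append((f"molecule to indication: {smiles}", indication))
        # Reverse direction: indication -> molecule
        examples.append((f"indication to molecule: {indication}", smiles))
    return examples

for src, tgt in to_examples(PAIRS):
    print(src, "=>", tgt)
```

Datasets like ChEMBL and DrugBank, which the paper evaluates on, supply such pairs at scale; a T5-style model is then fine-tuned on the formatted examples.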
18. Prompting Change: Exploring Prompt Engineering in Large Language Model AI and Its Potential to Transform Education.
- Author
- Cain, William
- Subjects
- *LANGUAGE models, *ARTIFICIAL intelligence, *ITERATIVE learning control, *ENGINEERING, *ARTIFICIAL languages, *ACTIVE learning
- Abstract
This paper explores the transformative potential of Large Language Models Artificial Intelligence (LLM AI) in educational contexts, particularly focusing on the innovative practice of prompt engineering. Prompt engineering, characterized by three essential components of content knowledge, critical thinking, and iterative design, emerges as a key mechanism to access the transformative capabilities of LLM AI in the learning process. This paper charts the evolving trajectory of LLM AI as a tool poised to reshape educational practices and assumptions. In particular, this paper breaks down the potential of prompt engineering practices to enhance learning by fostering personalized, engaging, and equitable educational experiences. The paper underscores how the natural language capabilities of LLM AI tools can help students and educators transition from passive recipients to active co-creators of their learning experiences. Critical thinking skills, particularly information literacy, media literacy, and digital citizenship, are identified as crucial for using LLM AI tools effectively and responsibly. Looking forward, the paper advocates for continued research to validate the benefits of prompt engineering practices across diverse learning contexts while also probing potential defects, biases, and ethical concerns related to LLM AI use in education. It calls upon practitioners to explore and train educational stakeholders in best practices around prompt engineering for LLM AI, fostering progress towards a more engaging and equitable educational future. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Practical and ethical challenges of large language models in education: A systematic scoping review.
- Author
- Yan, Lixiang, Sha, Lele, Zhao, Linxuan, Li, Yuheng, Martinez‐Maldonado, Roberto, Chen, Guanliang, Li, Xinyu, Jin, Yueqiao, and Gašević, Dragan
- Subjects
- *LANGUAGE models, *GENERATIVE artificial intelligence, *KNOWLEDGE representation (Information theory), *CHATGPT, *TECHNOLOGICAL innovations, *EDUCATIONAL technology, *XBRL (Document markup language), *NATURAL language processing
- Abstract
Educational technology innovations leveraging large language models (LLMs) have shown the potential to automate the laborious process of generating and analysing textual content. While various innovations have been developed to automate a range of educational tasks (eg, question generation, feedback provision, and essay grading), there are concerns regarding the practicality and ethicality of these innovations. Such concerns may hinder future research and the adoption of LLMs‐based innovations in authentic educational contexts. To address this, we conducted a systematic scoping review of 118 peer‐reviewed papers published since 2017 to pinpoint the current state of research on using LLMs to automate and support educational tasks. The findings revealed 53 use cases for LLMs in automating education tasks, categorised into nine main categories: profiling/labelling, detection, grading, teaching support, prediction, knowledge representation, feedback, content generation, and recommendation. Additionally, we also identified several practical and ethical challenges, including low technological readiness, lack of replicability and transparency and insufficient privacy and beneficence considerations. The findings were summarised into three recommendations for future studies, including updating existing innovations with state‐of‐the‐art models (eg, GPT‐3/4), embracing the initiative of open‐sourcing models/systems, and adopting a human‐centred approach throughout the developmental process. As the intersection of AI and education is continuously evolving, the findings of this study can serve as an essential reference point for researchers, allowing them to leverage the strengths, learn from the limitations, and uncover potential research opportunities enabled by ChatGPT and other generative AI models. 
Practitioner notes
What is currently known about this topic
- Generating and analysing text‐based content are time‐consuming and laborious tasks.
- Large language models are capable of efficiently analysing an unprecedented amount of textual content and completing complex natural language processing and generation tasks.
- Large language models have been increasingly used to develop educational technologies that aim to automate the generation and analysis of textual content, such as automated question generation and essay scoring.
What this paper adds
- A comprehensive list of different educational tasks that could potentially benefit from LLMs‐based innovations through automation.
- A structured assessment of the practicality and ethicality of existing LLMs‐based innovations from seven important aspects using established frameworks.
- Three recommendations that could potentially support future studies to develop LLMs‐based innovations that are practical and ethical to implement in authentic educational contexts.
Implications for practice and/or policy
- Updating existing innovations with state‐of‐the‐art models may further reduce the amount of manual effort required for adapting existing models to different educational tasks.
- The reporting standards of empirical research that aims to develop educational technologies using large language models need to be improved.
- Adopting a human‐centred approach throughout the developmental process could contribute to resolving the practical and ethical challenges of large language models in education. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. How a Decades-Old Technology and a Paper From Meta Created an AI Industry Standard.
- Author
-
Lin, Belle
- Subjects
- *
GENERATIVE artificial intelligence , *ARTIFICIAL intelligence , *LANGUAGE models - Published
- 2024
21. AI and Responsible Authorship.
- Author
-
Pennock, Robert T.
- Subjects
- *
ARTIFICIAL intelligence , *CHATBOTS , *GENERATIVE artificial intelligence , *LANGUAGE models , *SCIENTIFIC knowledge , *ARTIFICIAL neural networks - Abstract
This article explores the question of whether artificial intelligence (AI) should be considered a coauthor of scientific papers. It provides a historical overview of AI, from its early beginnings to its current capabilities. The author argues that while AI can generate text, it lacks the ability to make new discoveries or truly understand concepts. The article also raises ethical concerns about the accuracy and truthfulness of AI-generated content. It concludes that while AI can assist in research, the responsibility for its use lies with human researchers, as AI tools are not yet capable of taking ethical responsibility for the research. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
22. AI model disgorgement: Methods and choices.
- Author
-
Achille, Alessandro, Kearns, Michael, Klingenberg, Carson, and Soatto, Stefano
- Subjects
- *
MACHINE learning , *GENERATIVE artificial intelligence , *LANGUAGE models , *ARTIFICIAL intelligence , *INTELLECTUAL property - Abstract
Over the past few years, machine learning models have significantly increased in size and complexity, especially in the area of generative AI such as large language models. These models require massive amounts of data and compute capacity to train, to the extent that concerns over the training data (such as protected or private content) cannot be practically addressed by retraining the model "from scratch" with the questionable data removed or altered. Furthermore, despite significant efforts and controls dedicated to ensuring that training corpora are properly curated and composed, the sheer volume required makes it infeasible to manually inspect each datum comprising a training corpus. One potential approach to training corpus data defects is model disgorgement, by which we broadly mean the elimination or reduction of not only any improperly used data, but also the effects of improperly used data on any component of an ML model. Model disgorgement techniques can be used to address a wide range of issues, such as reducing bias or toxicity, increasing fidelity, and ensuring responsible use of intellectual property. In this paper, we survey the landscape of model disgorgement methods and introduce a taxonomy of disgorgement techniques that are applicable to modern ML systems. In particular, we investigate the various meanings of "removing the effects" of data on the trained model in a way that does not require retraining from scratch. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. Large language models as tax attorneys: a case study in legal capabilities emergence.
- Author
-
Nay, John J., Karamardian, David, Lawsky, Sarah B., Tao, Wenting, Bhat, Meghana, Jain, Raghav, Lee, Aaron Travis, Choi, Jonathan H., and Kasai, Jungo
- Subjects
- *
LANGUAGE models , *LEGAL professions , *ARTIFICIAL intelligence , *GENERATIVE pre-trained transformers , *LAWYERS , *NETWORK governance - Abstract
Better understanding of Large Language Models' (LLMs) legal analysis abilities can contribute to improving the efficiency of legal services, governing artificial intelligence and leveraging LLMs to identify inconsistencies in law. This paper explores LLM capabilities in applying tax law. We choose this area of law because it has a structure that allows us to set up automated validation pipelines across thousands of examples, requires logical reasoning and maths skills, and enables us to test LLM capabilities in a manner relevant to real-world economic lives of citizens and companies. Our experiments demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release. We experiment with retrieving and using the relevant legal authority to assess the impact of providing additional legal context to LLMs. Few-shot prompting, presenting examples of question–answer pairs, is also found to significantly enhance the performance of the most advanced model, GPT-4. The findings indicate that LLMs, particularly when combined with prompting enhancements and the correct legal texts, can perform at high levels of accuracy but not yet at expert tax lawyer levels. As LLMs continue to advance, their ability to reason about law autonomously could have significant implications for the legal profession and AI governance. This article is part of the theme issue 'A complexity science approach to law and governance'. [ABSTRACT FROM AUTHOR]
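The few-shot prompting enhancement described in this abstract can be illustrated with a small sketch: retrieved legal authority and worked question–answer pairs are assembled ahead of the new question before it is sent to the model. All names, example pairs, and statute text below are hypothetical; the paper's actual prompts, retrieval pipeline, and corpus are not reproduced here.

```python
# Sketch of a few-shot prompt for a tax-law question (illustrative only;
# the example pairs and statute excerpt are hypothetical).

def build_few_shot_prompt(examples, statute_text, question):
    """Assemble a prompt: relevant legal authority first, then worked
    Q&A pairs, then the new question left open for the model."""
    parts = ["Relevant authority:\n" + statute_text]
    for q, a in examples:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

examples = [
    ("Alice has $50,000 in wages. What is her gross income?",
     "Under IRC section 61, gross income includes wages, so $50,000."),
]
prompt = build_few_shot_prompt(
    examples,
    "IRC section 61: gross income means all income from whatever source derived.",
    "Bob has $30,000 in wages and $2,000 in interest. What is his gross income?",
)
print(prompt)
```

In an automated validation pipeline of the kind the abstract describes, a prompt like this would be generated per example and the model's completion compared against the computed ground-truth answer.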
- Published
- 2024
- Full Text
- View/download PDF
24. Towards human-centred standards for legal help AI.
- Author
-
Hagan, Margaret
- Subjects
- *
LANGUAGE models , *ARTIFICIAL intelligence , *COMPLEXITY (Philosophy) , *COURT personnel , *VIRTUAL work teams - Abstract
As more groups consider how AI may be used in the legal sector, this paper envisions how companies and policymakers can prioritize the perspective of community members as they design AI and policies around it. It presents findings of structured interviews and design sessions with community members, in which they were asked about whether, how, and why they would use AI tools powered by large language models to respond to legal problems like receiving an eviction notice. The respondents reviewed options for simple versus complex interfaces for AI tools, and expressed how they would want to engage with an AI tool to resolve a legal problem. These empirical findings provide directions that can counterbalance legal domain experts' proposals about the public interest around AI, as expressed by attorneys, court officials, advocates and regulators. By hearing directly from community members about how they want to use AI for civil justice tasks, what risks concern them, and the value they would find in different kinds of AI tools, this research can ensure that people's points of view are understood and prioritized, rather than only domain experts' assertions about people's needs and preferences around legal help AI. This article is part of the theme issue 'A complexity science approach to law and governance'. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Accuracy of ChatGPT-3.5 and -4 in providing scientific references in otolaryngology–head and neck surgery.
- Author
-
Lechien, Jerome R., Briganti, Giovanni, and Vaira, Luigi A.
- Subjects
- *
CHATGPT , *GENERATIVE pre-trained transformers , *CHATBOTS , *LANGUAGE models , *ARTIFICIAL intelligence - Abstract
Introduction: Chatbot generative pre-trained transformer (ChatGPT) is a new artificial intelligence-powered chatbot language model able to help otolaryngologists in practice and research. We investigated the accuracy of ChatGPT-3.5 and -4 in referencing manuscripts published in otolaryngology. Methods: ChatGPT-3.5 and ChatGPT-4 were asked to provide the references of the top-30 most cited papers in otolaryngology of the past 40 years, including clinical guidelines and key studies that changed practice. The responses were regenerated three times to assess the accuracy and stability of ChatGPT. ChatGPT-3.5 and ChatGPT-4 were compared for accuracy of references and potential mistakes. Results: The accuracy of ChatGPT-3.5 and ChatGPT-4.0 ranged from 47% to 60% and from 73% to 87%, respectively (p < 0.005). ChatGPT-3.5 provided 19 inaccurate references and invented 2 references across the regenerated questions. ChatGPT-4.0 provided 13 inaccurate references, while it proposed only one invented reference. The stability of responses across regenerated answers was mild (k = 0.238) for ChatGPT-3.5 and moderate (k = 0.408) for ChatGPT-4.0. Conclusions: ChatGPT-4.0 achieved higher accuracy than the free-access version (3.5). False references were detected in both the 3.5 and 4.0 versions. Practitioners need to be careful when relying on ChatGPT to retrieve key references while writing a report. [ABSTRACT FROM AUTHOR]
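The stability values reported in this abstract are kappa agreement scores between regenerated runs. A minimal sketch of Cohen's kappa, with illustrative accurate/inaccurate labels rather than the study's data:

```python
# Cohen's kappa between two runs of the same queries, each answer scored
# accurate (1) or inaccurate (0). Labels are invented for illustration.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two label sequences."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    # Expected agreement if both runs labelled items independently at random
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

run1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
run2 = [1, 0, 0, 1, 1, 1, 1, 0, 0, 1]
print(round(cohens_kappa(run1, run2), 3))  # → 0.348
```

On this scale a value near 0.238 indicates only mild consistency between regenerations, while 0.408 indicates moderate consistency, matching the interpretation given in the abstract.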
- Published
- 2024
- Full Text
- View/download PDF
26. Applications of Large Language Models in Pathology.
- Author
-
Cheng, Jerome
- Subjects
- *
LANGUAGE models , *FORENSIC pathology , *ARTIFICIAL intelligence , *NATURAL language processing - Abstract
Large language models (LLMs) are transformer-based neural networks that can provide human-like responses to questions and instructions. LLMs can generate educational material, summarize text, extract structured data from free text, create reports, write programs, and potentially assist in case sign-out. LLMs combined with vision models can assist in interpreting histopathology images. LLMs have immense potential in transforming pathology practice and education, but these models are not infallible, so any artificial intelligence generated content must be verified with reputable sources. Caution must be exercised on how these models are integrated into clinical practice, as these models can produce hallucinations and incorrect results, and an over-reliance on artificial intelligence may lead to de-skilling and automation bias. This review paper provides a brief history of LLMs and highlights several use cases for LLMs in the field of pathology. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. FGeo-DRL: Deductive Reasoning for Geometric Problems through Deep Reinforcement Learning.
- Author
-
Zou, Jia, Zhang, Xiaokai, He, Yiming, Zhu, Na, and Leng, Tuo
- Subjects
- *
DEEP reinforcement learning , *ARTIFICIAL intelligence , *REINFORCEMENT learning , *LANGUAGE models , *HEURISTIC , *PROBLEM solving - Abstract
Human-like automatic deductive reasoning has always been one of the most challenging open problems in the interdisciplinary field of mathematics and artificial intelligence. This paper is the third in a series of our works. We built a neural-symbolic system, named FGeo-DRL, to automatically perform human-like geometric deductive reasoning. The neural part is an AI agent based on deep reinforcement learning, capable of autonomously learning problem-solving methods from the feedback of a formalized environment, without the need for human supervision. It leverages a pre-trained natural language model to establish a policy network for theorem selection and employ Monte Carlo Tree Search for heuristic exploration. The symbolic part is a reinforcement learning environment based on geometry formalization theory and FormalGeo, which models geometric problem solving (GPS) as a Markov Decision Process (MDP). In the formal symbolic system, the symmetry of plane geometric transformations ensures the uniqueness of geometric problems when converted into states. Finally, the known conditions and objectives of the problem form the state space, while the set of theorems forms the action space. Leveraging FGeo-DRL, we have achieved readable and verifiable automated solutions to geometric problems. Experiments conducted on the formalgeo7k dataset have achieved a problem-solving success rate of 86.40%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. FGeo-TP: A Language Model-Enhanced Solver for Euclidean Geometry Problems.
- Author
-
He, Yiming, Zou, Jia, Zhang, Xiaokai, Zhu, Na, and Leng, Tuo
- Subjects
- *
EUCLIDEAN geometry , *ARTIFICIAL intelligence , *LANGUAGE models , *TRANSFORMER models , *GEOMETRIC approach - Abstract
The application of contemporary artificial intelligence techniques to address geometric problems and automated deductive proofs has always been a grand challenge to the interdisciplinary field of mathematics and artificial intelligence. This is the fourth article in a series of our works. In our previous work, we established a geometric formalized system known as FormalGeo and annotated approximately 7000 geometric problems, forming the FormalGeo7k dataset. Despite the fact that FGPS (Formal Geometry Problem Solver) can achieve interpretable algebraic equation solving and human-like deductive reasoning, it often experiences timeouts due to the complexity of the search strategy. In this paper, we introduce FGeo-TP (theorem predictor), which utilizes a language model to predict the theorem sequences for solving geometry problems. The encoder and decoder components in the transformer architecture naturally establish a mapping between the sequences and embedding vectors, exhibiting inherent symmetry. We compare the effectiveness of various transformer architectures, such as BART or T5, in theorem prediction, and implement pruning in the search process of FGPS, thereby improving its performance when solving geometry problems. Our results demonstrate a significant increase in the problem-solving rate of the language model-enhanced FGeo-TP on the FormalGeo7k dataset, rising from 39.7% to 80.86%. Furthermore, FGeo-TP exhibits notable reductions in solution times and search steps across problems of varying difficulty levels. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. Map Reading and Analysis with GPT-4V(ision).
- Author
-
Xu, Jinwen and Tao, Ran
- Subjects
- *
GENERATIVE pre-trained transformers , *LANGUAGE models , *GEOGRAPHY , *MAPS , *INSPECTION & review , *ARTIFICIAL intelligence , *MULTIMODAL user interfaces - Abstract
In late 2023, the image-reading capability added to a Generative Pre-trained Transformer (GPT) framework provided the opportunity to potentially revolutionize the way we view and understand geographic maps, the core component of cartography, geography, and spatial data science. In this study, we explore reading and analyzing maps with the latest version of GPT-4-vision-preview (GPT-4V), to fully evaluate its advantages and disadvantages in comparison with human eye-based visual inspections. We found that GPT-4V is able to properly retrieve information from various types of maps in different scales and spatiotemporal resolutions. GPT-4V can also perform basic map analysis, such as identifying visual changes before and after a natural disaster. It has the potential to replace human efforts by examining batches of maps, accurately extracting information from maps, and linking observed patterns with its pre-trained large dataset. However, it is encumbered by limitations such as diminished accuracy in visual content extraction and a lack of validation. This paper sets an example of effectively using GPT-4V for map reading and analytical tasks, which is a promising application for large multimodal models, large language models, and artificial intelligence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Will Artificial Intelligence Affect How Cultural Heritage Will Be Managed in the Future? Responses Generated by Four genAI Models.
- Author
-
Spennemann, Dirk H. R.
- Subjects
- *
GENERATIVE artificial intelligence , *ARTIFICIAL intelligence , *LANGUAGE models , *CULTURAL property , *GEMINI (Chatbot) , *HEALTH literacy - Abstract
Generative artificial intelligence (genAI) language models have become firmly embedded in public consciousness. Their abilities to extract and summarise information from a wide range of sources in their training data have attracted the attention of many scholars. This paper examines how four genAI large language models (ChatGPT, GPT4, DeepAI, and Google Bard) responded to prompts, asking (i) whether artificial intelligence would affect how cultural heritage will be managed in the future (with examples requested) and (ii) what dangers might emerge when relying heavily on genAI to guide cultural heritage professionals in their actions. The genAI systems provided a range of examples, commonly drawing on and extending the status quo. Without a doubt, AI tools will revolutionise the execution of repetitive and mundane tasks, such as the classification of some classes of artifacts, or allow for the predictive modelling of the decay of objects. Important examples were used to assess the purported power of genAI tools to extract, aggregate, and synthesize large volumes of data from multiple sources, as well as their ability to recognise patterns and connections that people may miss. An inherent risk in the 'results' presented by genAI systems is that the presented connections are 'artifacts' of the system rather than being genuine. Since present genAI tools are unable to purposively generate creative or innovative thoughts, it is left to the reader to determine whether any text that is provided by genAI that is out of the ordinary is meaningful or nonsensical. Additional risks identified by the genAI systems were that some cultural heritage professionals might use AI systems without the required level of AI literacy and that overreliance on genAI systems might lead to a deskilling of general heritage practitioners. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
31. Artificial intelligence in clinical pharmacology: A case study and scoping review of large language models and bioweapon potential.
- Author
-
Rubinic, Igor, Kurtov, Marija, Rubinic, Ivan, Likic, Robert, Dargan, Paul I., and Wood, David M.
- Subjects
- *
LANGUAGE models , *CLINICAL pharmacology , *ARTIFICIAL intelligence , *HAZARDOUS substances , *BOTANICAL chemistry , *VALUES (Ethics) , *AUTOMATIC speech recognition - Abstract
This paper aims to explore the possibility of employing large language models (LLMs) – a type of artificial intelligence (AI) – in clinical pharmacology, with a focus on its possible misuse in bioweapon development. Additionally, ethical considerations, legislation and potential risk reduction measures are analysed. The existing literature is reviewed to investigate the potential misuse of AI and LLMs in bioweapon creation. The search includes articles from PubMed, Scopus and Web of Science Core Collection that were identified using a specific protocol. To explore the regulatory landscape, the OECD.ai platform was used. The review highlights the dual‐use vulnerability of AI and LLMs, with a focus on bioweapon development. Subsequently, a case study is used to illustrate the potential of AI manipulation resulting in harmful substance synthesis. Existing regulations inadequately address the ethical concerns tied to AI and LLMs. Mitigation measures are proposed, including technical solutions (explainable AI), establishing ethical guidelines through collaborative efforts, and implementing policy changes to create a comprehensive regulatory framework. The integration of AI and LLMs into clinical pharmacology presents invaluable opportunities, while also introducing significant ethical and safety considerations. Addressing the dual‐use nature of AI requires robust regulations, as well as adopting a strategic approach grounded in technical solutions and ethical values following the principles of transparency, accountability and safety. Additionally, AI's potential role in developing countermeasures against novel hazardous substances is underscored. By adopting a proactive approach, the potential benefits of AI and LLMs can be fully harnessed while minimizing the associated risks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. AI literacy in geographic education and research: Capabilities, caveats, and criticality.
- Author
-
Wilby, Robert L. and Esson, James
- Subjects
- *
LANGUAGE models , *LITERACY education , *ARTIFICIAL intelligence , *CHATGPT , *GEOGRAPHY education , *HEALTH literacy - Abstract
Concerns about runaway artificial intelligence (AI) – including large language models (LLMs) like ChatGPT – are at the forefront of contemporary political, social, and scientific discourse. This commentary provides a first look at ChatGPT's capabilities and limitations in supporting geographic research, critical thinking, learning, and curriculum development. We assessed ChatGPT's geographic knowledge, synthesising abilities, and potential for extrapolation. ChatGPT was employed for writing assistance, research evaluation, curriculum material creation, and content generation. Despite achieving scores of 47% to 55% on an actual exam paper, ChatGPT exhibited shortcomings including the generation of false references. Ethical concerns regarding academic misconduct, model bias, robustness, and toxic output were also identified. We assert that AI and LLMs like ChatGPT have transformative potential in Geography education and knowledge production but demand critical usage. Accordingly, we urge geographers to enhance AI literacy to enable responsible and effective use of these assistive technologies in our academic practice. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Computational approaches to Portuguese: introduction to the special issue.
- Author
-
Santos, Diana and Pardo, Thiago Alexandre Salgueiro
- Subjects
- *
PORTUGUESE language , *NATURAL language processing , *LANGUAGE models , *ARTIFICIAL intelligence , *SUPERVISED learning - Abstract
This document is an introduction to a special issue of the journal "Language Resources & Evaluation" focused on computational approaches to the Portuguese language. The Portuguese language is spoken by approximately 250 million people and is the official language in 10 regions around the world. The special issue aims to provide an overview of the computational processing of Portuguese over the past three decades and showcase the theoretical contributions, resources, tools, and applications that have been developed. The issue includes papers on various topics such as language modeling, corpus construction, sentiment analysis, and creative text generation. The authors hope that this special issue will inspire new research initiatives and international collaboration in the field of Portuguese language processing. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
34. Passion-Net: a robust precise and explainable predictor for hate speech detection in Roman Urdu text.
- Author
-
Mehmood, Faiza, Ghafoor, Hina, Asim, Muhammad Nabeel, Ghani, Muhammad Usman, Mahmood, Waqar, and Dengel, Andreas
- Subjects
- *
HATE speech , *COMPUTATIONAL intelligence , *LANGUAGE models , *AUTOMATIC speech recognition , *URDU language , *ARTIFICIAL intelligence , *SOCIAL media - Abstract
With the aim of eliminating or reducing the spread of hate content across social media platforms, the development of artificial intelligence supported computational predictors is an active area of research. However, the diversity of languages hinders the development of generic predictors that can precisely identify hate content. Several language-specific hate speech detection predictors have been developed for the most common languages, including English, Chinese and German. For Urdu specifically, only a few predictors have been developed, and these fall short in predictive performance. This paper presents a precise and explainable deep learning predictor which makes use of advanced language modelling strategies for the extraction of semantic and discriminative patterns. Extracted patterns are utilized to train a novel attention-based classifier that is competent in precisely identifying hate content. Over a coarse-grained benchmark dataset, the proposed predictor significantly outperforms the state-of-the-art predictor by 8.7% in terms of accuracy, precision and F1-score. Similarly, over a fine-grained dataset, in comparison with the state-of-the-art predictor, it achieves performance gains of 10.6%, 17.6%, 18.6% and 17.6% in terms of accuracy, precision, recall and F1-score. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Leveraging Artificial Intelligence to Expedite Antibody Design and Enhance Antibody–Antigen Interactions.
- Author
-
Kim, Doo Nam, McNaughton, Andrew D., and Kumar, Neeraj
- Subjects
- *
DEEP learning , *ARTIFICIAL intelligence , *LANGUAGE models , *THERAPEUTIC use of proteins , *IMMUNOGLOBULINS , *MACHINE learning - Abstract
This perspective sheds light on the transformative impact of recent computational advancements in the field of protein therapeutics, with a particular focus on the design and development of antibodies. Cutting-edge computational methods have revolutionized our understanding of protein–protein interactions (PPIs), enhancing the efficacy of protein therapeutics in preclinical and clinical settings. Central to these advancements is the application of machine learning and deep learning, which offers unprecedented insights into the intricate mechanisms of PPIs and facilitates precise control over protein functions. Despite these advancements, the complex structural nuances of antibodies pose ongoing challenges in their design and optimization. Our review provides a comprehensive exploration of the latest deep learning approaches, including language models and diffusion techniques, and their role in surmounting these challenges. We also present a critical analysis of these methods, offering insights to drive further progress in this rapidly evolving field. The paper includes practical recommendations for the application of these computational techniques, supplemented with independent benchmark studies. These studies focus on key performance metrics such as accuracy and the ease of program execution, providing a valuable resource for researchers engaged in antibody design and development. Through this detailed perspective, we aim to contribute to the advancement of antibody design, equipping researchers with the tools and knowledge to navigate the complexities of this field. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. AI Model for Industry Classification Based on Website Data.
- Author
-
Jagrič, Timotej and Herman, Aljaž
- Subjects
- *
LANGUAGE models , *INDUSTRY classification , *ARTIFICIAL intelligence - Abstract
This paper presents a broad study on the application of the BERT (Bidirectional Encoder Representations from Transformers) model for multiclass text classification, specifically focusing on categorizing business descriptions into 1 of 13 distinct industry categories. The study involved a detailed fine-tuning phase resulting in a consistent decrease in training loss, indicative of the model's learning efficacy. Subsequent validation on a separate dataset revealed the model's robust performance, with classification accuracies ranging from 83.5% to 92.6% across different industry classes. Our model showed a high overall accuracy of 88.23%, coupled with a robust F1 score of 0.88. These results highlight the model's ability to capture and utilize the nuanced features of text data pertinent to various industries. The model has the capability to harness real-time web data, thereby enabling the utilization of the latest and most up-to-date information affecting the company's product portfolio. Based on the model's performance and its characteristics, we believe that the process of relative valuation can be drastically improved. [ABSTRACT FROM AUTHOR]
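The accuracy and F1 figures quoted in this abstract follow the standard definitions for multiclass classification. A minimal sketch of how overall accuracy and macro-averaged F1 are computed from predicted labels (toy classes and labels, not the study's 13 industry categories):

```python
# Accuracy and macro-F1 for multiclass predictions, computed from scratch.
# The labels below are invented for illustration.

def per_class_f1(y_true, y_pred, cls):
    """F1 score treating `cls` as the positive class."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true))
    return sum(per_class_f1(y_true, y_pred, c) for c in classes) / len(classes)

y_true = ["retail", "finance", "retail", "tech", "finance", "tech"]
y_pred = ["retail", "finance", "tech", "tech", "finance", "retail"]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(round(accuracy, 3), round(macro_f1(y_true, y_pred), 3))
```

Reporting both numbers matters because, as in the paper's results, overall accuracy can mask per-class variation (here, 83.5% to 92.6% across industries) that macro-averaged F1 surfaces.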
- Published
- 2024
- Full Text
- View/download PDF
37. Conversational AI and equity through assessing GPT-3's communication with diverse social groups on contentious topics.
- Author
-
Chen, Kaiping, Shao, Anqi, Burapacheep, Jirayu, and Li, Yixuan
- Subjects
- *
LANGUAGE models , *DEEP learning , *SOCIAL groups , *SCIENTIFIC communication , *ARTIFICIAL intelligence , *BLACK Lives Matter movement - Abstract
Autoregressive language models, which use deep learning to produce human-like texts, have surged in prevalence. Despite advances in these models, concerns arise about their equity across diverse populations. While AI fairness is discussed widely, metrics to measure equity in dialogue systems are lacking. This paper presents a framework, rooted in deliberative democracy and science communication studies, to evaluate equity in human–AI communication. Using it, we conducted an algorithm auditing study to examine how GPT-3 responded to different populations who vary in sociodemographic backgrounds and viewpoints on crucial science and social issues: climate change and the Black Lives Matter (BLM) movement. We analyzed 20,000 dialogues with 3290 participants differing in gender, race, education, and opinions. We found a substantively worse user experience among the opinion minority groups (e.g., climate deniers, racists) and the education minority groups; however, these groups changed attitudes toward supporting BLM and climate change efforts much more compared to other social groups after the chat. GPT-3 used more negative expressions when responding to the education and opinion minority groups. We discuss the social-technological implications of our findings for a conversational AI system that centralizes diversity, equity, and inclusion. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. Pre‐trained language models: What do they know?
- Author
-
Guimarães, Nuno, Campos, Ricardo, and Jorge, Alípio
- Subjects
- *
LANGUAGE models , *TEXT summarization , *MACHINE translating , *ARTIFICIAL intelligence , *NATURAL language processing , *VERBAL behavior - Abstract
Large language models (LLMs) have substantially pushed artificial intelligence (AI) research and applications in the last few years. They are currently able to achieve high effectiveness in different natural language processing (NLP) tasks, such as machine translation, named entity recognition, text classification, question answering, or text summarization. Recently, significant attention has been drawn to OpenAI's GPT models' capabilities and extremely accessible interface. LLMs are nowadays routinely used and studied for downstream tasks and specific applications with great success, pushing forward the state of the art in almost all of them. However, they also exhibit impressive inference capabilities when used off the shelf without further training. In this paper, we aim to study the behavior of pre‐trained language models (PLMs) in some inference tasks they were not initially trained for. Therefore, we focus our attention on very recent research works related to the inference capabilities of PLMs in some selected tasks such as factual probing and common‐sense reasoning. We highlight relevant achievements made by these models, as well as some of their current limitations that open opportunities for further research. This article is categorized under: Fundamental Concepts of Data and Knowledge > Key Design Issues in Data Mining; Technologies > Artificial Intelligence [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. Investigating AI languages' ability to solve undergraduate finance problems.
- Author
-
Yang, Changyu and Stivers, Adam
- Subjects
- *
LANGUAGE models , *ARTIFICIAL intelligence , *LANGUAGE ability , *PROCESS capability , *CHATGPT , *PLAGIARISM - Abstract
The rapid advancement of artificial intelligence (AI) has given rise to sophisticated language models that excel in understanding and generating human-like text. With the capacity to process vast amounts of information, these models effectively tackle problems across diverse domains. In this paper, we present a comparative analysis of prominent AI language models—ChatGPT and Google Bard—focusing on their ability to solve undergraduate finance problems. We find that GPT-4 significantly outperforms Bard-1.0, excelling in easy problems but struggling with complex ones. The results suggest that it is crucial to handle AI with care in order to uphold academic integrity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
40. A Systematic Review and Meta-Analysis of Artificial Intelligence Tools in Medicine and Healthcare: Applications, Considerations, Limitations, Motivation and Challenges.
- Author
-
Younis, Hussain A., Eisa, Taiseer Abdalla Elfadil, Nasser, Maged, Sahib, Thaeer Mueen, Noor, Ameen A., Alyasiri, Osamah Mohammed, Salisu, Sani, Hayder, Israa M., and Younis, Hameed AbdulKareem
- Subjects
- *
ARTIFICIAL intelligence , *LANGUAGE models , *MEDICAL personnel , *CHATGPT , *CELL imaging - Abstract
Artificial intelligence (AI) has emerged as a transformative force in various sectors, including medicine and healthcare. Large language models like ChatGPT showcase AI's potential by generating human-like text through prompts. ChatGPT's adaptability holds promise for reshaping medical practices, improving patient care, and enhancing interactions among healthcare professionals, patients, and data. In pandemic management, ChatGPT rapidly disseminates vital information. It serves as a virtual assistant in surgical consultations, supports dental practices, simplifies medical education, and aids in disease diagnosis. A total of 82 papers were categorised into eight major areas, which are G1: treatment and medicine, G2: buildings and equipment, G3: parts of the human body and areas of the disease, G4: patients, G5: citizens, G6: cellular imaging, radiology, pulse and medical images, G7: doctors and nurses, and G8: tools, devices and administration. Balancing AI's role with human judgment remains a challenge. A systematic literature review using the PRISMA approach explored AI's transformative potential in healthcare, highlighting ChatGPT's versatile applications, limitations, motivation, and challenges. In conclusion, ChatGPT's diverse medical applications demonstrate its potential for innovation, serving as a valuable resource for students, academics, and researchers in healthcare. Additionally, this study serves as a guide, assisting students, academics, and researchers in the field of medicine and healthcare alike. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Exploring natural language processing in mechanical engineering education: Implications for academic integrity.
- Author
-
Lesage, Jonathan, Brennan, Robert, Eaton, Sarah Elaine, Moya, Beatriz, McDermott, Brenda, Wiens, Jason, and Herrero, Kai
- Subjects
- *
MECHANICAL engineering education , *EDUCATION ethics , *LANGUAGE models , *ENGINEERING laboratories , *ENGINEERING students , *ASSISTIVE technology , *NATURAL language processing - Abstract
In this paper, the authors review extant natural language processing models in the context of undergraduate mechanical engineering education. These models have advanced to a stage where it has become increasingly difficult to discern computer- vs. human-produced material, and as a result, have understandably raised questions about their impact on academic integrity. As part of our review, we perform two sets of tests with OpenAI's natural language processing models: (1) using GPT-3 to generate text for a mechanical engineering laboratory report and (2) using Codex to generate code for an automation and control systems laboratory. Our results show that natural language processing is a potentially powerful assistive technology for engineering students. However, it is a technology that must be used with care, given its potential to enable cheating and plagiarism behaviours as it challenges traditional assessment practices and traditional notions of authorship. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Exploring ChatGPT for next-generation information retrieval: Opportunities and challenges.
- Author
-
Huang, Yizheng and Huang, Jimmy X.
- Subjects
- *
CHATGPT , *GENERATIVE artificial intelligence , *SUPERVISED learning , *INFORMATION retrieval , *LANGUAGE models , *ARTIFICIAL intelligence - Abstract
The rapid advancement of artificial intelligence (AI) has spotlighted ChatGPT as a key technology in the realm of information retrieval (IR). Unlike its predecessors, it offers notable advantages that have captured the interest of both industry and academia. While some consider ChatGPT to be a revolutionary innovation, others believe its success stems from smart product and market strategy integration. The advent of ChatGPT and GPT-4 has ushered in a new era of Generative AI, producing content that diverges from training examples, and surpassing the capabilities of OpenAI's previous GPT-3 model. In contrast to the established supervised learning approach in IR tasks, ChatGPT challenges traditional paradigms, introducing fresh challenges and opportunities in text quality assurance, model bias, and efficiency. This paper aims to explore the influence of ChatGPT on IR tasks, providing insights into its potential future trajectory. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. AI emerges as the frontier in behavioral science.
- Author
-
Juanjuan Meng
- Subjects
- *
BEHAVIORAL sciences , *ARTIFICIAL intelligence , *GENERATIVE artificial intelligence , *CHATBOTS , *PRISONER'S dilemma game , *LANGUAGE models - Abstract
This article explores the emergence of AI as a new frontier in behavioral science. It discusses a study that compares the decision-making of AI chatbots, specifically ChatGPT-3 and ChatGPT-4, with that of humans using classical behavioral assessments. The study reveals that AI models have distinct preferences and behaviors, leading to the development of a new research direction called "AI Behavioral Science." The article also examines the potential benefits and challenges of studying AI behavior, including its impact on human decision-making, bias correction, and policy design. It emphasizes the need for a comprehensive behavioral assessment framework and the engineering of AI behavior. Furthermore, the article discusses the potential impact of AI integration on human behavior and culture, such as algorithmic bias, cognitive decline, and the promotion of equality. The document includes references to various research papers and studies on biased programmers, biased data, AI ethics, recommender systems, social media, news consumption, and polarization, making it a valuable resource for library patrons researching these specific topics. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
44. To prompt or not to prompt: Navigating the use of Large Language Models for integrating and modeling heterogeneous data.
- Author
-
Remadi, Adel, El Hage, Karim, Hobeika, Yasmina, and Bugiotti, Francesca
- Subjects
- *
LANGUAGE models , *ARTIFICIAL intelligence , *DATA modeling , *DATA extraction , *LEARNING disabilities , *DATA integration - Abstract
Manually integrating data of diverse formats and languages is vital to many artificial intelligence applications. However, the task itself remains challenging and time-consuming. This paper highlights the potential of Large Language Models (LLMs) to streamline data extraction and resolution processes. Our approach aims to address the ongoing challenge of integrating heterogeneous data sources, encouraging advancements in the field of data engineering. Applied on the specific use case of learning disorders in higher education, our research demonstrates LLMs' capability to effectively extract data from unstructured sources. It is then further highlighted that LLMs can enhance data integration by providing the ability to resolve entities originating from multiple data sources. Crucially, the paper underscores the necessity of preliminary data modeling decisions to ensure the success of such technological applications. By merging human expertise with LLM-driven automation, this study advocates for the further exploration of semi-autonomous data engineering pipelines. [ABSTRACT FROM AUTHOR]
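The entity-resolution step this abstract describes can be illustrated with a minimal sketch. This is not the authors' pipeline: the `build_prompt` helper and the fuzzy-matching fallback in `resolve` are assumptions added so the example is self-contained and runnable without access to an LLM API.

```python
# Minimal sketch of LLM-assisted entity resolution across data sources.
# build_prompt shows the kind of question an LLM would be asked;
# resolve() stubs the LLM's yes/no judgement with fuzzy string matching
# so the example runs offline.
from difflib import SequenceMatcher

def build_prompt(rec_a: dict, rec_b: dict) -> str:
    """Phrase the entity-resolution question for an LLM."""
    return (
        "Do these two records refer to the same entity? Answer yes or no.\n"
        f"Record A: {rec_a}\n"
        f"Record B: {rec_b}"
    )

def resolve(rec_a: dict, rec_b: dict, threshold: float = 0.8) -> bool:
    """Stand-in for the LLM call: fuzzy-match the name fields."""
    sim = SequenceMatcher(
        None, rec_a["name"].lower(), rec_b["name"].lower()
    ).ratio()
    return sim >= threshold
```

In a real pipeline, `resolve` would send `build_prompt(...)` to an LLM and parse the answer; the preliminary data-modeling decisions the paper stresses (which fields identify an entity, and in which language) determine what goes into the prompt.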
- Published
- 2024
- Full Text
- View/download PDF
45. Why is Apple so focused on vision AI? Because vision intelligence can understand what it sees, contextualize that information, make decisions based on the information, and change or alter the appearance of what is there.
- Author
-
Holic, Apple and Evans, Jonny
- Subjects
- *
ARTIFICIAL intelligence , *GENERATIVE artificial intelligence , *LANGUAGE models , *DECISION making , *IPHONE (Smartphone) - Abstract
Apple is focusing on vision AI because it allows for understanding, contextualizing, decision-making, and altering the appearance of what is seen. This technology is already being used in various ways, such as text recognition, location identification, image descriptions, and translation. Apple's recent research paper describes a Multimodal Model for Text and Image Data, which combines text and images to train large language models and achieve strong context-learning capabilities. The company's vision for Generative Visual AI, particularly in the creation of digital replicas of spaces, has significant implications for industries like architecture, design, and health. This highly visual AI deployment surpasses the futuristic visions depicted in movies like Minority Report, making Apple an industry leader in this field. [Extracted from the article]
- Published
- 2024
46. Revolutionizing generative pre-traineds: Insights and challenges in deploying ChatGPT and generative chatbots for FAQs.
- Author
-
Khennouche, Feriel, Elmir, Youssef, Himeur, Yassine, Djebari, Nabil, and Amira, Abbes
- Subjects
- *
CHATGPT , *CHATBOTS , *NATURAL language processing , *GENERATIVE artificial intelligence , *ARTIFICIAL intelligence , *LANGUAGE models - Abstract
In the rapidly evolving domain of artificial intelligence, chatbots have emerged as a potent tool for various applications ranging from e-commerce to healthcare. This research delves into the intricacies of chatbot technology, from its foundational concepts to advanced generative models like ChatGPT. We present a comprehensive taxonomy of existing chatbot approaches, distinguishing between rule-based, retrieval-based, generative, and hybrid models. A specific emphasis is placed on ChatGPT, elucidating its merits for frequently asked questions (FAQs)-based chatbots, coupled with an exploration of associated Natural Language Processing (NLP) techniques such as named entity recognition, intent classification, and sentiment analysis. The paper further delves into the customization and fine-tuning of ChatGPT, its integration with knowledge bases, and the consequent challenges and ethical considerations that arise. Through real-world applications in domains such as online shopping, healthcare, and education, we underscore the transformative potential of chatbots. However, we also spotlight open challenges and suggest future research directions, emphasizing the need for optimizing conversational flow, advancing dialogue mechanics, improving domain adaptability, and enhancing ethical considerations. The research culminates in a call for further exploration in ensuring transparent, ethical, and user-centric chatbot systems. • Explore chatbot technologies, including ChatGPT and applications in domains. • Investigate the pros and cons of rule-based, retrieval, generative, and hybrid chatbots. • Emphasize ChatGPT in FAQs chatbots and NLP integration. • Discuss ChatGPT customization, knowledge base, and ethics. • Highlight real-world use of generative chatbots and emphasize research on flow and ethics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
47. Explainable Artificial Intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions.
- Author
-
Longo, Luca, Brcic, Mario, Cabitza, Federico, Choi, Jaesik, Confalonieri, Roberto, Ser, Javier Del, Guidotti, Riccardo, Hayashi, Yoichi, Herrera, Francisco, Holzinger, Andreas, Jiang, Richard, Khosravi, Hassan, Lecue, Freddy, Malgieri, Gianclaudio, Páez, Andrés, Samek, Wojciech, Schneider, Johannes, Speith, Timo, and Stumpf, Simone
- Subjects
- *
ARTIFICIAL intelligence , *GENERATIVE artificial intelligence , *LANGUAGE models , *INTERDISCIPLINARY research - Abstract
Understanding black box models has become paramount as systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper highlights the advancements in XAI and its application in real-world scenarios and addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to propel XAI forward, contributing to its continued success. We aim to develop a comprehensive proposal for advancing XAI. To achieve this goal, we present a manifesto of 28 open problems categorized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a road map for future research. For each problem, we provide promising research directions in the hope of harnessing the collective intelligence of interested stakeholders. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
48. AI for crop production – Where can large language models (LLMs) provide substantial value?
- Author
-
Kuska, Matheus Thomas, Wahabzada, Mirwaes, and Paulus, Stefan
- Subjects
- *
LANGUAGE models , *GENERATIVE pre-trained transformers , *ARTIFICIAL intelligence , *AGRICULTURAL productivity , *AGRICULTURE , *CHATBOTS , *AGRICULTURAL technology - Abstract
• Farmers must be all-rounders in plant growth, plant protection, legislation and different economic fields. • AI service integration holds great potential for assistance, documentation, education, interpretation, forecasts or data-driven predictions. • LLMs depict a fundamental step to reduce the gap between AI-driven data analysis and the common user. • Reproduction of processed data would increase confidence in the model, support farmers' interpretation and guide the LLM towards more precise results. • Introducing these technologies requires not only new training but also significant effort in structural transformation. Since the launch of the "Generative Pre-trained Transformer 3.5", ChatGPT by OpenAI, artificial intelligence (AI) has been a main topic of public discussion. Especially large language models (LLMs), so-called "intelligent" chatbots, and the possibility of automatically generating highly professional technical texts receive high attention. Companies, as well as researchers, are evaluating possible applications and how such a powerful LLM can be integrated into daily work to bring benefits, improve their business or make their research more efficient. In general, underlying models are trained on large datasets, mainly on sources from websites, online books and articles. In combination with information provided by the user, the model can give an impressively fast response. Even if the range of questions and answers looks unrestricted, there are limits to the models. In this paper, possible use cases for agricultural tasks are elucidated. This includes the textual preparation of facts, consulting tasks, interpretation of decision support models in plant disease management, as well as guides for tutorials to integrate modern digital techniques into agricultural work. Opportunities and challenges are described, as well as limitations and insufficiencies.
The authors describe a map of easy-to-reach topics in agriculture where the integration of LLMs seems to be very likely within the next few years. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Research Integrity Enhancement: Integration of Post-Publication Peer Review to Alleviate Artificial Intelligence-Generated Research Misconduct.
- Author
-
Yaseen, Sadia, Kohan, Noushin, and Ayub, Ayesha
- Subjects
- *
RESEARCH integrity , *GENERATIVE artificial intelligence , *NATURAL language processing , *LANGUAGE models , *ARTIFICIAL intelligence - Abstract
This article discusses the integration of post-publication peer review (PPPR) as a strategy to address research misconduct arising from artificial intelligence (AI). The use of AI in scientific research has revolutionized the field, but it also presents ethical concerns such as algorithmic bias, data bias, and privacy issues. PPPR, which involves reviewing journal articles after they have been published, offers an iterative feedback process that enhances the honesty and integrity of research. By combining PPPR with AI-generated research, the quality, transparency, and accountability of academic papers can be improved, reducing the likelihood of misconduct and fostering community collaboration. [Extracted from the article]
- Published
- 2024
- Full Text
- View/download PDF
50. VIRTSI: A novel trust dynamics model enhancing Artificial Intelligence collaboration with human users – Insights from a ChatGPT evaluation study.
- Author
-
Virvou, Maria, Tsihrintzis, George A., and Tsichrintzi, Evangelia-Aikaterini
- Subjects
- *
ARTIFICIAL intelligence , *GENERATIVE artificial intelligence , *LANGUAGE models , *CHATGPT , *MACHINE learning - Abstract
The rapid integration of intelligent processes and methods into information systems in the Artificial Intelligence (AI) era has led to a substantial shift towards autonomous software decision-making. This evolution necessitates robust human oversight, especially in critical domains like Healthcare, Education, and Energy. Human trust in AI plays a vital role in influencing decision-making processes of users interacting with AI. This paper presents VIRTSI (Variability and Impact of Reciprocal Trust States towards Intelligent systems), a novel rigorous computational model for human-AI interaction. VIRTSI simulates human trust states, spanning from overtrust to distrust, through user modelling. It comprises: 1. A trust dynamics representational model based on Deterministic Finite State Automata (DFAs), illustrating transitions among cognitive trust states in response to AI-generated replies. 2. A trust evaluation model based on Confusion Matrices, originating from machine learning and Accuracy Metrics, providing a quantitative framework for analysing human trust dynamics. As a result, this is the first time that trust dynamics have been thoroughly traced in a representational model and a method has been developed to assess the impact of possibly harmful states like overtrust and distrust. An empirical study on the recently launched Large Language Model of generative AI, ChatGPT (version 3.5), provides a radically underexplored AI-generated platform for evaluating human-AI interaction through VIRTSI. The study involved 1200 interactions of real users as well as AI experts together with experts in two very different domains of evaluation, namely software engineering and poetry. This study traces trust dynamics and the emerging human-AI interaction in concrete examples of real user synergies with generative AI.
The research reveals the vital role of maintaining normal trust states for optimal human-AI interaction and that both AI and human users need further steps towards this goal. The real-world implications of this research can guide the creation and evaluation of AI user interfaces and the incorporation of trust-related functionality into generative AI chatbots, by providing a new rigorous DFA-based representational method for trust dynamics and a corresponding confusion-matrix method for evaluating the dynamics' impact on the efficiency of human-AI dialogues. [ABSTRACT FROM AUTHOR]
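The two components this abstract lists can be illustrated with a toy sketch. The state names and transition rules below are illustrative assumptions, not VIRTSI's actual automaton, and the confusion-matrix tally only conveys the general idea of scoring user trust against reply correctness.

```python
# Toy illustration of the two components described above:
# (1) a DFA over hypothetical trust states, moving in response to
#     accurate/inaccurate AI replies; (2) a confusion-matrix tally of
#     whether the user's trust matched the correctness of the reply.
TRANSITIONS = {
    ("distrust", "accurate"): "normal",
    ("distrust", "inaccurate"): "distrust",
    ("normal", "accurate"): "overtrust",
    ("normal", "inaccurate"): "distrust",
    ("overtrust", "accurate"): "overtrust",
    ("overtrust", "inaccurate"): "normal",
}

def run_dfa(events, state="normal"):
    """Trace trust-state transitions over a sequence of AI replies."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

def trust_confusion(pairs):
    """Tally (user_trusted, ai_correct) pairs into a confusion matrix."""
    matrix = {"tp": 0, "fp": 0, "fn": 0, "tn": 0}
    for trusted, correct in pairs:
        key = ("tp" if correct else "fp") if trusted else ("fn" if correct else "tn")
        matrix[key] += 1
    return matrix
```

In this toy scoring, "fp" (trusting an incorrect reply) corresponds to overtrust and "fn" (distrusting a correct reply) to distrust, the two harmful states the paper flags.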
- Published
- 2024
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library