Artificial intelligence at the pen’s edge: Exploring the ethical quagmires in using artificial intelligence models like ChatGPT for assisted writing in biomedical research.
- Source :
- Perspectives in Clinical Research; Jul-Sep 2024, Vol. 15 Issue 3, p108-115, 8p
- Publication Year :
- 2024
Abstract
- Chat generative pretrained transformer (ChatGPT) is a conversational language model powered by artificial intelligence (AI). It is a sophisticated language model that employs deep learning methods to generate human-like text in response to natural language inputs. This narrative review aims to shed light on ethical concerns about using AI models like ChatGPT for writing assistance in the healthcare and medical domains. AI models such as ChatGPT are currently in their infancy, and their use carries risks of inaccurate generated content, lack of contextual understanding, dynamic knowledge gaps, limited discernment, lack of responsibility and accountability, issues of privacy, data security, transparency, and bias, and a lack of nuance and originality. Other issues, such as authorship, unintentional plagiarism, falsified and fabricated content, and the threat of being red-flagged as AI-generated content, highlight the need for regulatory compliance, transparency, and disclosure. If these legitimate issues are proactively considered and addressed, the potential applications of AI models as writing assistants could be rewarding. [ABSTRACT FROM AUTHOR]
Details
- Language :
- English
- ISSN :
- 2229-3485
- Volume :
- 15
- Issue :
- 3
- Database :
- Complementary Index
- Journal :
- Perspectives in Clinical Research
- Publication Type :
- Academic Journal
- Accession number :
- 178821442
- Full Text :
- https://doi.org/10.4103/picr.picr_196_23