The Detection of ChatGPT’s Textual Crumb Trails is an Unsustainable Solution to Imperfect Detection Methods
- Source: Open Information Science, Vol 8, Iss 1, Pp e35179-196 (2024)
- Publication Year: 2024
- Publisher: De Gruyter
Abstract
- A recent disruptive innovation in scientific publishing is OpenAI’s ChatGPT, a large language model. The International Committee of Medical Journal Editors, the Committee on Publication Ethics (COPE), and COPE member journals and publishers have set limits on ChatGPT’s involvement in academic writing, requesting that authors declare its use. Those guidelines are practically useless because they ignore two fundamentals: first, academics who cheat to achieve success will not declare the use of ChatGPT; second, the guidelines fail to explicitly assign responsibility for detection to editors, journals, and publishers. Using two primers, i.e., residual text that may reflect traces of ChatGPT’s output and that authors forgot to remove from their articles, this commentary draws readers’ attention to 46 open-access examples sourced from PubPeer. Even though editors should be obliged to investigate such cases, primer-based detection of ChatGPT’s textual crumb trails is only a temporary measure and not a sustainable solution, because it relies on detecting carelessness.
Details
- Language: English
- ISSN: 2451-1781
- Volume: 8
- Issue: 1
- Database: Directory of Open Access Journals
- Journal: Open Information Science
- Publication Type: Academic Journal
- Accession Number: edsdoj.bb2661ecca3548afacf860f57bd59b32
- Document Type: Article
- Full Text: https://doi.org/10.1515/opis-2024-0007