1. Human–computer pragmatics trialled: some (im)polite interactions with ChatGPT 4.0 and the ensuing implications.
- Author
- Quan, Zhi and Chen, Zhiwei
- Abstract
Having evolved rapidly, ChatGPT can now generate content that is linguistically accurate and logically sound while sidestepping ethical, social, and legal concerns. This research investigates whether ChatGPT employs different pragmatic strategies in its responses to (im)polite questions. In our experiment, the AI-powered tool was instructed to answer 200 self-constructed questions spanning four (im)politeness levels, and the 200 responses were collected for linguistic and sentiment analysis. Triangulated data, together with representative examples, show that ChatGPT tends to give shorter and less positive answers to less polite questions, appearing less responsive when confronted with blunter and more offensive inquiries. This, to some extent, resembles how human beings react when treated impolitely. A tentative explanation is that, as a large language model, ChatGPT mirrors human interaction across various scenarios and draws on prevalent human communication tendencies. Interacting with ChatGPT is thus more a human-society interaction than human-machine communication in the real sense. Our research sheds light on what we term "human-machine pragmatics", i.e. how humans can best communicate with computers for optimal informative and affective outcomes. Implications for language education are discussed at the end. [ABSTRACT FROM AUTHOR]
- Published
- 2024