
Harnessing the Power of LLMs for Service Quality Assessment From User-Generated Content

Authors :
Taha Falatouri
Denisa Hrusecka
Thomas Fischer
Source :
IEEE Access, Vol. 12, pp. 99755-99767 (2024)
Publication Year :
2024
Publisher :
IEEE, 2024.

Abstract

Adopting Large Language Models (LLMs) creates opportunities for organizations to increase efficiency, particularly in sentiment analysis and information extraction tasks. This study explores the efficiency of LLMs in real-world applications, focusing on sentiment analysis and service quality (SQ) dimension extraction from user-generated content (UGC). For this purpose, we compare the performance of two LLMs (ChatGPT 3.5 and Claude 3) and three traditional NLP methods using two datasets of customer reviews (one in English and one in Persian). The results indicate that LLMs can achieve notable accuracy in information extraction (76% for ChatGPT and 68% for Claude 3) and sentiment analysis (substantial agreement with human raters for ChatGPT and moderate agreement for Claude 3), demonstrating an improvement compared to other AI models. However, challenges persist, including discrepancies between model predictions and human judgments and limitations in extracting specific dimensions from unstructured text. While LLMs can streamline the SQ assessment process, human supervision remains essential to ensure reliability.
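
The abstract reports agreement between LLM outputs and human raters in the "moderate"/"substantial" bands, wording commonly associated with Cohen's kappa. The sketch below is not from the paper; it is a minimal illustration of how such an evaluation might be computed, assuming scikit-learn and hypothetical sentiment labels.

```python
# Illustrative sketch (not from the paper): comparing LLM sentiment labels
# against human annotations using accuracy and Cohen's kappa, the
# chance-corrected agreement statistic the abstract's wording suggests.
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical labels: human raters vs. an LLM on a handful of reviews.
human_labels = ["positive", "negative", "neutral", "positive", "negative"]
llm_labels   = ["positive", "negative", "positive", "positive", "negative"]

accuracy = accuracy_score(human_labels, llm_labels)
kappa = cohen_kappa_score(human_labels, llm_labels)

print(f"Accuracy: {accuracy:.2f}")    # share of labels that match exactly
print(f"Cohen's kappa: {kappa:.2f}")  # agreement corrected for chance
```

Under the common Landis-Koch interpretation, kappa values of 0.41-0.60 are read as moderate agreement and 0.61-0.80 as substantial agreement.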

Details

Language :
English
ISSN :
2169-3536
Volume :
12
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.295cd5140db341a891064dbdac227c4c
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2024.3429290