
Large Language Models as Zero-Shot Conversational Recommenders

Authors:
He, Zhankui
Xie, Zhouhang
Jha, Rahul
Steck, Harald
Liang, Dawen
Feng, Yesu
Majumder, Bodhisattwa Prasad
Kallus, Nathan
McAuley, Julian
Publication Year:
2023

Abstract

In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting, with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations and suggesting directions for the design of future conversational recommenders.

Comment: Accepted as a CIKM 2023 long paper. A longer version is coming soon (e.g., with more details about the dataset).

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2308.10053
Document Type:
Working Paper
Full Text:
https://doi.org/10.1145/3583780.3614949