1. ScalingNote: Scaling up Retrievers with Large Language Models for Real-World Dense Retrieval
- Authors
Suyuan Huang, Chao Zhang, Yuanyuan Wu, Haoxin Zhang, Yuan Wang, Maolin Wang, Shaosheng Cao, Tong Xu, Xiangyu Zhao, Zengchang Qin, Yan Gao, Yunhan Bai, Jun Fan, Yao Hu, and Enhong Chen
- Subjects
Computer Science - Information Retrieval
- Abstract
Most industrial dense retrieval systems employ dual-tower architectures to retrieve query-relevant documents. Because of online deployment constraints, existing real-world systems mainly improve performance by designing negative sampling strategies, overlooking the gains available from scaling up. Recently, Large Language Models (LLMs) have exhibited superior performance that can be leveraged to scale up dense retrieval. However, scaling up retrieval models significantly increases online query latency. To address this challenge, we propose ScalingNote, a two-stage method that exploits the scaling potential of LLMs for retrieval while keeping online query latency unchanged. In the first stage, we train dual towers, both initialized from the same LLM, to unlock the potential of LLMs for dense retrieval. In the second stage, we distill only the query tower, using a mean squared error loss and cosine similarity, to reduce online cost. Through theoretical analysis and comprehensive offline and online experiments, we demonstrate the effectiveness and efficiency of ScalingNote. Our two-stage scaling method outperforms end-to-end models and verifies the scaling law of dense retrieval with LLMs in industrial scenarios, enabling cost-effective scaling of dense retrieval systems. Our online deployment incorporating ScalingNote significantly enhances the relevance between retrieved documents and queries.
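The abstract names the stage-two objective concretely: the lightweight query tower is distilled to match the LLM query tower's embeddings using a mean squared error loss plus cosine similarity. Below is a minimal PyTorch-style sketch of such a combined objective; the weighting `alpha`, the function name `query_distill_loss`, and the toy dimensions are assumptions for illustration, not values from the paper.

```python
# Sketch of a stage-2 query-tower distillation objective: the student query
# tower is trained to reproduce the LLM teacher's query embeddings via MSE
# plus a cosine-similarity term. `alpha` is a hypothetical mixing weight.
import torch
import torch.nn.functional as F

def query_distill_loss(student_emb: torch.Tensor,
                       teacher_emb: torch.Tensor,
                       alpha: float = 0.5) -> torch.Tensor:
    # Match embeddings elementwise (magnitude) ...
    mse = F.mse_loss(student_emb, teacher_emb)
    # ... and directionally (angle), since retrieval scoring is
    # typically cosine or dot-product based.
    cos = 1.0 - F.cosine_similarity(student_emb, teacher_emb, dim=-1).mean()
    return alpha * mse + (1.0 - alpha) * cos

# Toy usage: a frozen teacher embedding and a trainable student embedding.
teacher = torch.randn(8, 768)                       # batch of 8 query vectors
student = torch.randn(8, 768, requires_grad=True)   # stands in for student output
loss = query_distill_loss(student, teacher)
loss.backward()
print(f"distillation loss: {loss.item():.4f}")
```

Combining the two terms is a common design choice in embedding distillation: MSE anchors the student to the teacher's vector scale, while the cosine term directly optimizes the angular agreement that nearest-neighbor retrieval depends on.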
- Published
2024