
Data-Centric AI in the Age of Large Language Models

Authors:
Xu, Xinyi
Wu, Zhaoxuan
Qiao, Rui
Verma, Arun
Shu, Yao
Wang, Jingtan
Niu, Xinyuan
He, Zhenfeng
Chen, Jiangwei
Zhou, Zijian
Lau, Gregory Kang Ruey
Dao, Hieu
Agussurja, Lucas
Sim, Rachael Hwee Ling
Lin, Xiaoqiang
Hu, Wenyang
Dai, Zhongxiang
Koh, Pang Wei
Low, Bryan Kian Hsiang
Publication Year:
2024

Abstract

This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs). We start by making the key observation that data is instrumental in both the developmental stages (e.g., pretraining and fine-tuning) and the inferential stage (e.g., in-context learning) of LLMs, and yet it receives disproportionately low attention from the research community. We identify four specific scenarios centered around data, covering data-centric benchmarks and data curation, data attribution, knowledge transfer, and inference contextualization. In each scenario, we underscore the importance of data, highlight promising research directions, and articulate the potential impacts on the research community and, where applicable, society as a whole. For instance, we advocate for a suite of data-centric benchmarks tailored to the scale and complexity of data for LLMs. These benchmarks can be used to develop new data curation methods and document research efforts and results, which can help promote openness and transparency in AI and LLM research.

Comment: Preprint

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.14473
Document Type:
Working Paper