
CorrSynth -- A Correlated Sampling Method for Diverse Dataset Generation from LLMs

Authors:
Kowshik, Suhas S
Divekar, Abhishek
Malik, Vijit
Publication Year:
2024

Abstract

Large language models (LLMs) have demonstrated remarkable performance on diverse tasks using zero-shot and few-shot prompting. Although their data-synthesis capabilities have been studied extensively in recent years, the generated data suffers from a lack of diversity, weak adherence to the prompt, and biases that creep in from the generator model. In this work, we tackle the challenge of generating datasets with high diversity, upon which a student model is trained for downstream tasks. Taking the route of decoding-time guidance-based approaches, we propose CorrSynth, which uses a correlated sampling strategy to generate data that is more diverse and faithful to the input prompt. Further, our method avoids the computational overhead of some other guidance-based techniques, such as classifier-based guidance. With extensive experiments, we demonstrate the effectiveness of our approach and substantiate our claims. In particular, we perform intrinsic evaluation to show the improvements in diversity. Our experiments show that CorrSynth improves both student metrics and intrinsic metrics over competitive baselines across four datasets, demonstrating the innate advantage of our method.

Comment: Published as a main conference paper at EMNLP 2024; first two authors contributed equally
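To make the decoding-time guidance idea concrete, below is a minimal, self-contained sketch of one contrastive-guidance sampling step. It is an illustration of the general technique the abstract names, not the exact CorrSynth update rule: the function names (`softmax`, `guided_sample`), the mixing weight `gamma`, and the toy logit vectors are all assumptions introduced here for demonstration.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def guided_sample(logits_target, logits_contrast, gamma=0.5, rng=None):
    """Sample one token index after pushing the target distribution
    away from a correlated (contrasting) sequence's distribution.

    This is a generic contrastive-guidance step, not necessarily the
    update used in the paper: combined = (1 + gamma) * target - gamma * contrast.
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible demo
    combined = [(1 + gamma) * t - gamma * c
                for t, c in zip(logits_target, logits_contrast)]
    probs = softmax(combined)
    # Inverse-CDF sampling from the combined distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

In a real generation loop this step would run per token, with the "contrast" logits coming from a correlated parallel sequence, so that the sampled sequences are steered apart from one another and diversity increases without a separate classifier.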

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2411.08553
Document Type:
Working Paper