
'In-Context Learning' or: How I learned to stop worrying and love 'Applied Information Retrieval'

Authors:
Parry, Andrew
Ganguly, Debasis
Chandra, Manish
Publication Year:
2024

Abstract

With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP): instead of fine-tuning the parameters of an LLM on labeled examples for a specific downstream task, a small number of such examples are appended to the prompt instruction to control the decoder's generation process. ICL is thus conceptually similar to a non-parametric approach such as $k$-NN, where the prediction for each instance depends on the local topology, i.e., on a localised set of similar instances and their labels (called few-shot examples). This suggests that a test instance in ICL is analogous to a query in IR, and that the similar examples retrieved from a training set are analogous to the documents retrieved from a collection in IR. While standard unsupervised ranking models can be used to retrieve these few-shot examples from a training set, their effectiveness can potentially be improved by re-defining the notion of relevance in terms of utility for the downstream task, i.e., considering an example relevant if including it in the prompt instruction leads to a correct prediction. With this task-specific notion of relevance, it is possible to train a supervised ranking model (e.g., a bi-encoder or cross-encoder) that learns to select the few-shot examples optimally. We believe that recent advances in neural ranking can find a use case in this task of optimally choosing examples for more effective downstream ICL predictions.

Comment: 9 pages, 3 figures; accepted as a perspective paper to SIGIR 2024
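
To make the analogy concrete, the following is a minimal sketch (not from the paper) of how few-shot examples could be retrieved $k$-NN style and how the task-specific relevance labels described above could be derived. TF-IDF cosine similarity stands in here for any unsupervised ranker (e.g., BM25 or a dense retriever), and `llm_predict` is a hypothetical placeholder for an actual LLM call; the task, prompt template, and function names are illustrative assumptions.

```python
# Sketch: treating few-shot example selection for ICL as a retrieval problem.
# TF-IDF + cosine similarity stands in for any unsupervised ranker (e.g. BM25);
# `llm_predict` is a hypothetical stand-in for a real LLM call.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve_few_shot(test_instance, train_texts, train_labels, k=3):
    """k-NN style retrieval: rank training examples by similarity to the test instance."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(train_texts)
    query_vector = vectorizer.transform([test_instance])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top_k = scores.argsort()[::-1][:k]
    return [(train_texts[i], train_labels[i]) for i in top_k]


def build_prompt(test_instance, few_shot_examples):
    """Append the retrieved (text, label) examples to the prompt instruction."""
    lines = ["Classify the sentiment of the text."]  # illustrative task instruction
    for text, label in few_shot_examples:
        lines.append(f"Text: {text}\nLabel: {label}")
    lines.append(f"Text: {test_instance}\nLabel:")
    return "\n\n".join(lines)


def task_specific_relevance(candidate, test_instance, true_label, llm_predict):
    """Task-specific relevance: an example is relevant (1) if including it in the
    prompt yields a correct prediction for the test instance, and non-relevant (0)
    otherwise. Such labels could supervise a bi-encoder or cross-encoder ranker."""
    prompt = build_prompt(test_instance, [candidate])
    return int(llm_predict(prompt) == true_label)
```

Relevance labels collected this way over a held-out portion of the training set would then serve as supervision for a bi-encoder or cross-encoder, which at inference time would replace the unsupervised similarity scores when selecting the few-shot examples.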

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.01116
Document Type:
Working Paper
Full Text:
https://doi.org/10.1145/3626772.3657842