With the increasing ability of large language models (LLMs), in-context learning (ICL) has evolved as a new paradigm for natural language processing (NLP): instead of fine-tuning the parameters of an LLM for a downstream task with labeled examples, a small number of such examples is appended to a prompt instruction to control the decoder's generation process. ICL is thus conceptually similar to a non-parametric approach, such as k-NN, where the prediction for each instance essentially depends on the local topology, i.e., on a localised set of similar instances and their labels (called few-shot examples). This suggests that a test instance in ICL is analogous to a query in IR, and that similar examples in ICL retrieved from a training set relate to a set of documents retrieved from a collection in IR. While standard unsupervised ranking models can be used to retrieve these few-shot examples from a training set, the effectiveness of the examples can potentially be improved by redefining the notion of relevance specific to its utility for the downstream task, i.e., considering an example to be relevant if including it in the prompt instruction leads to a correct prediction. With this task-specific notion of relevance, it is possible to train a supervised ranking model (e.g., a bi-encoder or cross-encoder), which potentially learns to optimally select the few-shot examples. We believe that recent advances in neural rankers can find a use case in this task of optimally choosing examples for more effective downstream ICL predictions.
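The retrieval analogy above can be sketched in a few lines. This is a minimal illustration, not the paper's method: it uses a toy bag-of-words embedding in place of a trained bi-encoder, and `utility_relevance` takes a hypothetical `predict_fn` standing in for an actual LLM call.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words vector; a real system would use a trained
    # bi-encoder (dense sentence embeddings) instead.
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(c * v[t] for t, c in u.items() if t in v)
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_examples(query, train_set, k=2):
    """Treat the test instance as an IR query: rank labeled training
    examples by similarity and keep the top k as few-shot examples."""
    q = embed(query)
    ranked = sorted(train_set,
                    key=lambda ex: cosine(q, embed(ex["text"])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, examples):
    """Append the retrieved examples to the prompt instruction."""
    shots = "\n".join(f"Input: {ex['text']}\nLabel: {ex['label']}"
                      for ex in examples)
    return f"{shots}\nInput: {query}\nLabel:"

def utility_relevance(example, test_text, gold_label, predict_fn):
    """Task-specific relevance as described in the abstract: an example
    counts as relevant iff prompting the model with it yields the
    correct prediction. `predict_fn` is a stand-in for an LLM call."""
    return int(predict_fn(build_prompt(test_text, [example])) == gold_label)

train = [
    {"text": "the movie was wonderful", "label": "positive"},
    {"text": "a dull and boring film", "label": "negative"},
    {"text": "stocks fell sharply today", "label": "negative"},
]
query = "the film was wonderful"
prompt = build_prompt(query, retrieve_examples(query, train))
print(prompt)
```

Binary relevance labels produced by `utility_relevance` over a training set could then serve as supervision for a bi-encoder or cross-encoder ranker, as the abstract suggests.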
Andrew Parry
Debasis Ganguly
Manish Chandra
University of Glasgow
www.synapsesocial.com/papers/68e60be9b6db64358759ea8c — DOI: https://doi.org/10.1145/3626772.3657842