Few Shot

Few-shot technique

Best practice


Build your prompt iteratively, take care of edge cases, and test thoroughly

Let’s break down why each component is important:

Build your prompt iteratively:

  • Iterative development allows you to refine and improve your prompt gradually.
  • Start with a basic prompt and progressively add complexity or specificity based on the model’s responses.
  • This approach helps you understand how the model interprets the prompt and refine it for better performance.
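
For instance, here is a minimal sketch of what this iteration can look like in Python (the sentiment-classification task and the prompt wording are illustrative assumptions, not taken from any particular library):

```python
# Iteration 1: start with a bare instruction (zero-shot).
prompt_v1 = (
    "Classify the sentiment of this review as Positive or Negative.\n"
    "Review: {review}\n"
    "Sentiment:"
)

# Iteration 2: after inspecting the model's responses, tighten the
# instruction and add a couple of few-shot examples that demonstrate
# the expected output format.
prompt_v2 = (
    "Classify the sentiment of a product review as Positive or Negative.\n"
    "Answer with a single word.\n\n"
    "Review: The battery lasts all day and charges quickly.\n"
    "Sentiment: Positive\n\n"
    "Review: The screen cracked within a week.\n"
    "Sentiment: Negative\n\n"
    "Review: {review}\n"
    "Sentiment:"
)

print(prompt_v2.format(review="Setup was painless and the sound is great."))
```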

Take care of edge cases:

  • Considering edge cases is crucial for robustness. These are situations that may not be covered by the initial prompt but could arise in real-world scenarios.
  • Addressing edge cases ensures that your model performs well across a wide range of inputs and scenarios.
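
In a few-shot prompt, one straightforward way to handle edge cases is to add exemplars for them explicitly as you discover them. Continuing the illustrative sentiment example from above:

```python
# Exemplars covering the common cases.
examples = [
    {"review": "The battery lasts all day.", "sentiment": "Positive"},
    {"review": "The screen cracked within a week.", "sentiment": "Negative"},
]

# Edge cases discovered while iterating: mixed sentiment and sarcasm.
examples += [
    {"review": "Great sound, but the app keeps crashing.", "sentiment": "Negative"},
    {"review": "Oh sure, 'waterproof'. It died in the rain.", "sentiment": "Negative"},
]
```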

Test thoroughly:

  • Thorough testing is essential to identify any unexpected behavior or limitations in the model’s understanding.
  • Test the model with a diverse set of examples, including both common cases and edge cases, to assess its generalization capabilities.
  • Regular testing helps catch any unintended biases or inaccuracies in the model’s responses.

By following this best practice, you enhance the reliability and effectiveness of your few-shot learning approach. It allows you to adapt and improve your prompts based on real-world performance and challenges, leading to more robust and reliable responses for the intended tasks.
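
Putting the three practices together, here is a minimal sketch using LangChain’s FewShotPromptTemplate (import paths can differ between LangChain versions, and the example data and test inputs are illustrative assumptions):

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

# Few-shot exemplars, including the edge cases collected while iterating.
examples = [
    {"review": "The battery lasts all day.", "sentiment": "Positive"},
    {"review": "The screen cracked within a week.", "sentiment": "Negative"},
    {"review": "Great sound, but the app keeps crashing.", "sentiment": "Negative"},
]

example_prompt = PromptTemplate(
    input_variables=["review", "sentiment"],
    template="Review: {review}\nSentiment: {sentiment}",
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Classify the sentiment of a product review as Positive or Negative.",
    suffix="Review: {review}\nSentiment:",
    input_variables=["review"],
)

# Test thoroughly: render the prompt for common cases and edge cases,
# then send each rendered prompt to the model and inspect the answers.
test_reviews = [
    "Setup was painless and the sound is great.",  # common case
    "Arrived late, but works as advertised.",      # mixed sentiment
    "",                                            # empty input
]
for review in test_reviews:
    print(few_shot_prompt.format(review=review))
    print("---")
```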



Example Selector: Complementary Explanations for Effective In-Context Learning

Read the research paper on arXiv.org.

Introduction

The paper aims to better understand how explanations are used for in-context learning (ICL) by language models (LMs). Prior work has shown that explanations improve ICL performance, but little is known about what makes them effective. The authors conduct probing experiments to study the impact of computation traces and natural language in explanations. They also examine how exemplar sets function together to solve a query in ICL.

Background

The paper discusses ICL, where LMs are prompted with exemplar input-output pairs to predict answers for new queries. Explanations can also be included in prompts. The paper focuses on understanding explanations in ICL rather than standard prompting. Three symbolic reasoning datasets and several LMs are used.

Do LLMs Follow Explanations?

Experiments that perturb explanations show that both computation traces and natural language contribute to effectiveness. Perturbations hurt performance, but partial explanations still help, indicating that LMs follow explanations to some extent rather than merely matching surface patterns.

What Makes A Good Exemplar Set?

Experiments show that LMs can fuse reasoning from complementary exemplars, which benefits performance. Selecting relevant exemplars also helps; relevance is measured with three similarity metrics, and LM-based selection works best.

MMR for Exemplar Selection

The paper proposes selecting exemplars with maximal marginal relevance (MMR) to balance relevance and complementarity. This outperforms nearest neighbors across datasets and models.
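
As a rough illustration of the MMR criterion itself, here is a generic sketch over precomputed embeddings with a relevance/diversity weight `lam` (a simplified illustration, not the paper’s exact implementation):

```python
import numpy as np

def mmr_select(query_vec, exemplar_vecs, k=4, lam=0.5):
    """Greedily pick k exemplars that are relevant to the query but
    not redundant with the exemplars selected so far."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    candidates = list(range(len(exemplar_vecs)))
    selected = []
    while candidates and len(selected) < k:
        best_i, best_score = None, float("-inf")
        for i in candidates:
            relevance = cosine(query_vec, exemplar_vecs[i])
            redundancy = max(
                (cosine(exemplar_vecs[i], exemplar_vecs[j]) for j in selected),
                default=0.0,
            )
            score = lam * relevance - (1 - lam) * redundancy
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
        candidates.remove(best_i)
    return selected  # indices of the chosen exemplars
```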

Datasets:

  • LETTER CONCATENATION
  • COIN FLIPS
  • GRADE SCHOOL MATH
  • ECQA
  • E-SNLI

Models:

  • OPT-175B
  • GPT-3
  • InstructGPT
  • text-davinci-002
  • GPT-3 Codex models

Results:

  • Perturbing explanations harms performance but partial explanations still help.
  • Combining complementary exemplars improves performance. MMR selection outperforms nearest neighbors.
  • In practical terms, this suggests that LangChain’s MaxMarginalRelevanceExampleSelector can yield better results than SemanticSimilarityExampleSelector (see the sketch below).
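
Below is a minimal sketch of swapping the two selectors in LangChain (import paths and constructor arguments can vary across LangChain versions; the antonym examples are illustrative, and FAISS plus OpenAIEmbeddings are just one possible embedding/vector-store pairing):

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.prompts.example_selector import (
    MaxMarginalRelevanceExampleSelector,
    SemanticSimilarityExampleSelector,
)
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "windy", "output": "calm"},
]

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)

# MMR balances similarity to the query with diversity among the exemplars.
mmr_selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), FAISS, k=2
)
# Pure semantic similarity simply picks the k nearest exemplars.
knn_selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), FAISS, k=2
)

prompt = FewShotPromptTemplate(
    example_selector=mmr_selector,  # swap in knn_selector to compare
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(prompt.format(adjective="worried"))
```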

References

LangChain: Few Shot Prompt Template: Example

LangChain: Few Shot Prompt Template

LangChain: Example Selector

Deeper dive into ExampleSelector: MaxMarginalRelevanceExampleSelector