Few-shot prompting is a technique where you provide a small number of input-output examples before presenting the actual task, enabling a language model to infer the pattern and apply it to new inputs without any parameter updates. The prompt follows a simple structure: example 1 input, example 1 output, example 2 input, example 2 output, and so on, followed by the actual input. The model recognizes the pattern from the examples and generates a matching output. This in-context learning happens at inference time, not training time. Few-shot prompting often outperforms zero-shot prompting because the examples clarify ambiguous instructions, demonstrate the desired output format, and convey implicit information about edge cases.

The choice of examples significantly affects performance. Examples should be representative, covering different aspects of the task; correctly formatted, since the model will mimic the format; and diverse enough to keep the model from latching onto a single narrow pattern. Example order matters too: models sometimes weight later examples more heavily.

Few-shot prompting has limits: very complex tasks may require more examples than fit in the context window, and some tasks demand reasoning abilities beyond pattern matching. For many practical applications, though, three to five well-chosen examples can lift model performance from mediocre to excellent.
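The example-output-example structure described above can be sketched as a small prompt-assembly helper. This is a minimal illustration, not any particular library's API; the sentiment examples, the `build_few_shot_prompt` function, and the `Input`/`Output` labels are hypothetical choices, and the call to an actual model is out of scope.

```python
def build_few_shot_prompt(examples, query, input_label="Input", output_label="Output"):
    """Format (input, output) example pairs followed by the actual query.

    The model sees the Input/Output pattern repeated, then the new
    input with a trailing output label it is expected to complete.
    """
    parts = []
    for example_input, example_output in examples:
        parts.append(f"{input_label}: {example_input}\n{output_label}: {example_output}")
    # The final block ends at the output label, inviting the model
    # to continue the established pattern.
    parts.append(f"{input_label}: {query}\n{output_label}:")
    return "\n\n".join(parts)

# Hypothetical sentiment-classification examples (three to five is typical).
examples = [
    ("The movie was fantastic!", "positive"),
    ("Terrible service, never again.", "negative"),
    ("An average experience overall.", "neutral"),
]

prompt = build_few_shot_prompt(examples, "I loved the soundtrack.")
print(prompt)
```

The resulting string would be sent to the model as-is; because each example ends in a completed `Output:` line and the final block leaves it blank, the model's continuation supplies the label for the new input.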