Zero-Shot / Few-Shot Learning
Also known as: Zero-Shot, Few-Shot, In-Context Learning, Prompt-Based Learning
LLMs' ability to perform new tasks without examples (zero-shot) or with very few examples (few-shot) in the prompt.
Zero-Shot and Few-Shot Learning are capabilities of modern language models to perform new tasks without additional task-specific training.
In Zero-Shot, the model solves a task with only a text instruction, without seeing any prior examples. For instance, classifying verbatims into thematic categories by simply describing those categories.
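As a minimal sketch of what a zero-shot prompt can look like in practice (assuming the OpenAI Python SDK; the model name, categories, and verbatim are illustrative, not a prescribed setup):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Zero-shot: the categories are only described, never exemplified.
prompt = """Classify the customer verbatim into exactly one of these categories:
- Taste: comments about flavor or aroma
- Packaging: comments about the container, label, or format
- Price: comments about cost or value for money

Verbatim: "The new bottle is much easier to pour."
Category:"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # expected: Packaging
```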
In Few-Shot, the model receives 2-10 examples of the desired input-output pairing before the main task, which typically improves accuracy substantially. For example, showing 5 already-coded verbatims before asking it to code 500 more.
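A few-shot version of the same task simply prepends coded examples to the prompt, so the model sees the exact input-output format expected (again a sketch under the same assumptions as above; the example verbatims and labels are invented for illustration):

```python
from openai import OpenAI

client = OpenAI()

# Few-shot: a handful of already-coded verbatims precede the new one.
examples = [
    ("Tastes watered down compared to before.", "Taste"),
    ("The cap keeps breaking when I open it.", "Packaging"),
    ("Too expensive for the size you get.", "Price"),
]

prompt = "Code each verbatim with one category: Taste, Packaging, or Price.\n\n"
for text, label in examples:
    prompt += f'Verbatim: "{text}"\nCategory: {label}\n\n'
prompt += 'Verbatim: "I love the hint of vanilla in this one."\nCategory:'

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # expected: Taste
```

Scaling this to hundreds of verbatims is then a loop over the uncoded texts, with the few-shot examples held constant at the top of each prompt.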
In market research, these capabilities are tremendously useful because they make it possible to apply models to client-specific taxonomies and coding frameworks without retraining. The gap between zero-shot and few-shot performance can be the difference between mediocre and professional-quality results.
Atlantia systematically uses few-shot prompting in its automated open-end coding processes.