# few-shot
9 articlestagged with “few-shot”
Few-Shot Fine-Tuning Risks
Security risks associated with few-shot fine-tuning where a small number of carefully crafted examples can significantly alter model safety properties.
In-Context Learning Exploitation
Exploiting few-shot and in-context learning capabilities for prompt injection, behavioral modification, and task hijacking.
Few-Shot Injection Fundamentals
Craft few-shot examples that prime the model to follow attacker instructions in subsequent turns.
Lab: Few-Shot Manipulation Attacks
Craft fake few-shot examples that teach the model to bypass its safety training by demonstrating the desired adversarial behavior through fabricated conversation examples.
Lab: Few-Shot Example Injection
Hands-on lab exploring how injected few-shot examples can steer language model outputs toward attacker-chosen behaviors by exploiting in-context learning.
Few-Shot Manipulation
Using crafted in-context examples to steer model behavior, including many-shot jailbreaking, poisoned demonstrations, and example-based conditioning.
Few-Shot Injection
Using crafted few-shot examples within user input to steer LLM behavior toward unintended outputs, exploiting in-context learning to override safety training.
Few-Shot Example Poisoning Walkthrough
Poison few-shot examples in prompts to establish behavioral patterns that override system instructions.
Few-Shot Attack Scaling Analysis
Detailed analysis of how few-shot examples scale to influence model behavior, from 2-shot to many-shot regime.