# in-context-learning
7 articles tagged with “in-context-learning”
## In-Context Learning Exploitation

Exploiting few-shot and in-context learning capabilities for prompt injection, behavioral modification, and task hijacking.
## Lab: Few-Shot Manipulation Attacks

Craft fake few-shot examples that teach the model to bypass its safety training by demonstrating the desired adversarial behavior through fabricated conversation examples.
## Few-Shot Manipulation

Using crafted in-context examples to steer model behavior, including many-shot jailbreaking, poisoned demonstrations, and example-based conditioning.
## Many-Shot Jailbreaking

Power-law scaling of in-context jailbreaks: why 5 shots fail but 256 succeed, context window size as attack surface, and mitigations for long-context exploitation.
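One mitigation discussed above is screening untrusted input for the shape of a many-shot payload. A minimal sketch, assuming a simple heuristic: fabricated dialogue turns tend to appear as many repeated role markers (`User:` / `Assistant:` lines) inside a single user message. The marker patterns and the threshold here are illustrative assumptions, not a known production rule.

```python
import re

# Matches lines that look like embedded dialogue-turn markers, a common
# shape for fabricated few-shot conversations smuggled into user input.
TURN_MARKER = re.compile(r"^(?:User|Human|Assistant|AI)\s*:", re.IGNORECASE | re.MULTILINE)


def count_embedded_turns(text: str) -> int:
    """Count lines in the input that look like dialogue-turn markers."""
    return len(TURN_MARKER.findall(text))


def looks_like_many_shot(text: str, max_turns: int = 8) -> bool:
    """Flag input whose embedded turn count exceeds a tunable threshold."""
    return count_embedded_turns(text) > max_turns


benign = "What is the capital of France?"
payload = "\n".join(
    f"User: example {i}\nAssistant: sure, here you go" for i in range(20)
)

print(looks_like_many_shot(benign))   # False
print(looks_like_many_shot(payload))  # True
```

A threshold-based count is deliberately crude; it catches the bulk pattern that makes many-shot attacks work (volume of demonstrations) while leaving short, legitimate quoted dialogue alone.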
## Few-Shot Injection

Using crafted few-shot examples within user input to steer LLM behavior toward unintended outputs, exploiting in-context learning to override safety training.
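The reason injected examples work is that few-shot prompts are usually assembled by plain string concatenation, so the model cannot distinguish genuine demonstrations from attacker-supplied ones. A minimal sketch of that assembly step, using an illustrative template and a benign sentiment task (both assumptions, not any specific framework's API):

```python
def build_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Concatenate (input, output) demonstrations ahead of the real query.

    Any untrusted text spliced into `examples` is indistinguishable, to the
    model, from the developer's own demonstrations.
    """
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"


examples = [("great movie", "positive"), ("terrible plot", "negative")]
print(build_prompt(examples, "loved it"))
```

Because the demonstrations and the query share one flat text channel, an attacker who controls any part of the input can append examples that condition the model toward a different behavior than the developer intended.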
## Few-Shot Example Poisoning Walkthrough

Poison few-shot examples in prompts to establish behavioral patterns that override system instructions.
## Many-Shot Jailbreaking (Attack Walkthrough)

Using large numbers of examples in a single prompt to overwhelm LLM safety training through in-context learning, exploiting long context windows to shift model behavior.