# many-shot
9 articles tagged with “many-shot”
## Case Study: Many-Shot Jailbreaking Discovery
Deep analysis of Anthropic's many-shot jailbreaking research and its implications for long-context model safety.

## Jailbreak Research & Automation
Taxonomy of jailbreak primitives, crescendo attacks, many-shot jailbreaking, and automated jailbreak generation with TAP and PAIR.

## Implementing Many-Shot Jailbreaking
Implement Anthropic's many-shot jailbreaking technique with scaling analysis across conversation lengths.

## Claude Known Vulnerabilities
Documented Claude vulnerabilities, including many-shot jailbreaking, alignment faking research, crescendo attacks, prompt injection via artifacts, and system prompt extraction techniques.

## Few-Shot Manipulation
Using crafted in-context examples to steer model behavior, including many-shot jailbreaking, poisoned demonstrations, and example-based conditioning.

## Many-Shot Jailbreaking Analysis
Deep analysis of the many-shot jailbreaking technique and its implications for in-context learning.

## Many-Shot Jailbreaking
Power-law scaling of in-context jailbreaks: why 5 shots fail but 256 succeed, context window size as an attack surface, and mitigations for long-context exploitation.

## Many-Shot Jailbreaking Walkthrough
A walkthrough implementing Anthropic's many-shot jailbreaking technique, with analysis of its scaling behavior.

## Many-Shot Jailbreaking (Attack Walkthrough)
Using large numbers of examples in a single prompt to overwhelm LLM safety training through in-context learning, exploiting long context windows to shift model behavior.