# vulnerabilities
13 articles tagged with “vulnerabilities”
- **Case Study: ChatGPT Plugin Vulnerabilities**
  Analysis of real vulnerabilities discovered in ChatGPT plugins, including data exfiltration and prompt injection.
- **Case Study: Early MCP Vulnerability Disclosures**
  Analysis of early MCP vulnerability disclosures, including Invariant Labs' tool poisoning research.
- **Insecure Code Generation Patterns**
  Common patterns of insecure code generated by LLMs, including injection, authentication, and cryptographic flaws.
- **OWASP LLM Top 10 Deep Dive**
  Each OWASP LLM Top 10 item explained with real-world examples, a testing methodology for each category, and guidance on mapping red team findings to OWASP classifications.
- **Responsible Disclosure for AI Vulnerabilities**
  Processes and best practices for responsibly disclosing vulnerabilities in AI systems.
- **Claude Known Vulnerabilities**
  Documented Claude vulnerabilities, including many-shot jailbreaking, alignment faking research, crescendo attacks, prompt injection via artifacts, and system prompt extraction techniques.
- **Gemini Known Vulnerabilities**
  Documented Gemini vulnerabilities, including image generation bias incidents, system prompt extraction, safety filter inconsistencies, multimodal injection exploits, and grounding abuse.
- **GPT-4 Known Vulnerabilities**
  Documented GPT-4 vulnerabilities, including DAN jailbreaks, data extraction incidents, system prompt leaks, tool-use exploits, and fine-tuning safety removal.
- **Tokenizer Vulnerabilities Across Models**
  Comprehensive analysis of tokenizer vulnerabilities across major model families.
- **CVE Tracking for AI Systems**
  Guide to tracking and analyzing CVEs affecting AI systems and frameworks, with historical analysis and trending vulnerability classes.
- **OWASP LLM Top 10 Quick Reference**
  Quick reference for the OWASP Top 10 for LLM Applications, with definitions, attack examples, and key mitigations for each risk category.
- **Chaining AI Vulnerabilities**
  Techniques for chaining multiple AI vulnerabilities into reliable multi-step exploitation paths.
- **DPO Training Vulnerabilities**
  Security analysis of Direct Preference Optimization training and its vulnerability to preference poisoning.