# llm-security
5 articles tagged with “llm-security”
## Defense-in-Depth for LLM Apps

A layered defense strategy for AI applications covering the network, application, model, and output layers: how each layer contributes, and why single-layer defense always fails.
## Prompt Injection & Jailbreaks

A comprehensive introduction to prompt injection — the most fundamental vulnerability class in LLM applications — and its relationship to jailbreak techniques.
## Building a Production Input Sanitizer

A step-by-step walkthrough for building a production-grade input sanitizer that cleans, normalizes, and validates user prompts before they reach an LLM, covering encoding normalization, injection-pattern stripping, length enforcement, and integration testing.
## Threat Modeling for LLM-Powered Applications

A step-by-step walkthrough for conducting threat modeling sessions tailored to LLM-powered applications, covering data flow analysis, trust boundary identification, AI-specific threat enumeration, risk assessment, and mitigation planning.
## Using Burp Suite for LLM API Endpoint Testing

A walkthrough for using Burp Suite to intercept, analyze, and attack LLM API endpoints, covering proxy configuration, request manipulation, automated scanning for injection flaws, and custom extensions for AI-specific testing.