# techniques
21 articles tagged with “techniques”
AI Threat Hunting Techniques
Proactive threat hunting techniques for identifying ongoing attacks against AI systems.
Attack Attribution Techniques
Techniques for attributing AI attacks to specific actors including behavioral analysis, infrastructure tracking, and technique fingerprinting.
LLM Log Analysis Techniques
Techniques for analyzing LLM application logs to identify attack patterns and compromised sessions.
Code Assistant Exploitation Techniques
Techniques for exploiting AI code assistants to generate insecure code or leak repository information.
February 2026: Jailbreak Innovation Challenge
Develop novel jailbreak techniques against hardened language models and document them with reproducibility evidence. Judged on novelty, reliability, and transferability.
Embedding Poisoning Techniques
Techniques for poisoning embedding spaces to manipulate retrieval and similarity search.
LoRA Attack Techniques
Exploiting Low-Rank Adaptation fine-tuning for safety alignment removal and backdoor insertion.
Lab: Injection Techniques Survey
Survey and test ten fundamental prompt injection techniques against a local LLM, measuring effectiveness and cataloging behavioral patterns for each approach.
Lab: Basic Jailbreak Techniques
Hands-on exploration of jailbreak techniques including role-play, DAN-style prompts, and academic framing against multiple models.
Lab: Simple Payload Encoding Techniques
Practice encoding injection payloads using Base64, hex, URL encoding, and Unicode to bypass basic input filters.
Lab: Guardrail Bypass Technique Laboratory
Practice guardrail bypass techniques against NeMo Guardrails, LLM Guard, and custom classifier-based defenses.
Prompt Leakage Technique Lab
Practice multiple system prompt extraction techniques and measure their effectiveness across different targets.
Image-Based Prompt Injection Techniques
Techniques for embedding adversarial prompts in images consumed by vision-language models.
Multimodal Defense Bypass Techniques
Techniques for bypassing safety filters that only analyze individual modalities.
Competition-Style Injection Techniques
Injection techniques commonly used in AI red team competitions and CTF challenges.
Universal Jailbreak Techniques
Analysis of jailbreak techniques that transfer across multiple models and providers.
Jailbreak Technique Catalog
Comprehensive catalog of jailbreak techniques with effectiveness ratings, model compatibility notes, and evolution history.
Prompt Injection Cheat Sheet
Quick reference for prompt injection techniques organized by category, with example payloads and defensive considerations for each technique.
Evasion Techniques for AI Classifiers
Advanced techniques for evading input/output safety classifiers in LLM applications.
Attack Execution Workflow
Step-by-step workflow for executing AI red team attacks: selecting techniques from recon findings, building attack chains, documenting findings in real-time, managing evidence, and knowing when to escalate or stop.
Using MITRE ATLAS for AI Attack Mapping
Walkthrough for mapping AI red team activities and findings to the MITRE ATLAS framework, covering tactic and technique identification, attack chain construction, and navigator visualization.