# injection
97 articlestagged with “injection”
Memory Schema Injection
Injecting structured data into memory systems that alters agent behavior when retrieved in future interactions.
Persistent Memory Injection
Injecting persistent false memories into agent memory systems to influence future behavior.
Memory Compression Injection
Inject persistent instructions through memory compression and summarization processes in long-running agents.
Function Schema Injection
Injecting malicious instructions through function parameter descriptions and schema definitions.
Parameter Manipulation
Techniques for crafting malicious parameter values in LLM function calls, including type confusion, boundary violations, injection through parameters, and exploiting weak validation.
Structured Output Tool Injection
Exploiting structured output mode to inject tool call directives into model responses.
MCP Configuration Injection
Injecting malicious configuration into MCP server initialization for persistent compromise.
MCP Prompt Template Injection
Exploiting MCP prompt templates to inject instructions through template variables and arguments.
MCP Tool Description Injection
Techniques for injecting adversarial instructions via MCP tool descriptions and parameter schemas.
MCP Resource Template Injection
Inject adversarial content through MCP resource URI templates and parameter expansion mechanisms.
Log Analysis for Injection Detection
Analyzing application and model logs to detect prompt injection attacks including pattern matching, anomaly detection, and behavioral indicators.
Injection Defense Assessment
Assessment on defending against prompt injection including classifiers, guardrails, and output filtering.
Case Study: Prompt Injection in the Wild
Collection of documented prompt injection incidents in production applications.
Commit Message Injection Attacks
Using crafted commit messages to inject adversarial instructions into AI code review tools that process git history for context.
Copilot Injection Attacks
Prompt injection through repository context that influences code generation suggestions.
Documentation-Based Code Injection
Embedding adversarial instructions in code comments, docstrings, and documentation files that influence AI code generation.
IDE Plugin Injection Attacks
Exploiting IDE-integrated AI coding assistants through workspace context poisoning, configuration manipulation, and extension-based injection vectors.
May 2026: RAG Poisoning Challenge
Inject malicious documents into a retrieval-augmented generation system to control responses for specific queries without disrupting normal operation.
Monthly Challenge: Injection Olympics
Monthly community challenge focused on creative prompt injection techniques across multiple models.
Prompt Injection Canary System
Deploy canary strings in system prompts to detect and alert on prompt injection and extraction attempts.
Vector Database Injection Attacks (Embedding Vector Security)
Comprehensive techniques for injecting adversarial vectors into vector databases to manipulate retrieval results and influence RAG system outputs.
Vector Database Injection Attacks (Embedding Vector Security Overview)
Injecting adversarial documents into vector databases to influence retrieval results.
Screen Capture Injection
Techniques for injecting malicious content through screen capture pipelines used by computer use AI agents, including frame manipulation, capture timing attacks, and pixel-level payload delivery through the visual channel.
Robot Control Injection
Techniques for injecting malicious commands into LLM-controlled robotic systems: prompt injection through task descriptions, code generation exploitation, parameter manipulation, and action sequence hijacking.
Output Handling Exploits
Deep dive into XSS, SQL injection, command injection, SSTI, and path traversal attacks that weaponize LLM output as an injection vector against downstream systems.
Cross-Architecture Injection Transfer
Research into how injection techniques transfer across model architectures and what architectural properties determine transferability.
Cross-Lingual Injection Transfer Research
Research on how injection techniques transfer across languages and multilingual models.
Defense-Aware Payload Design
Designing injection payloads that adapt to and evade specific defense mechanisms through probing and feedback-based optimization.
Defense-Informed Injection Design
Methodology for designing injections that account for known defensive mechanisms.
Injection Research
Advanced research in prompt injection, jailbreak automation, and multimodal attack vectors, covering cutting-edge techniques that push beyond standard injection approaches.
Injection in Reasoning Models
Research into injection attacks specific to reasoning-augmented models that exploit chain-of-thought processes and self-reflection mechanisms.
Injection Attack Surface Taxonomy
Comprehensive taxonomy of all known injection attack surfaces in LLM-powered applications.
Multi-Agent Injection Research
Research into how injections propagate through multi-agent systems and what properties determine infection spread rates.
Novel Injection Classes
Exploring emerging injection classes that don't fit traditional taxonomies, including structural, temporal, and cross-system injection vectors.
Semantic Injection Research
Research on semantically coherent injections that are indistinguishable from normal input.
Semantic Space Injection Research
Research into injections that operate in semantic embedding space rather than token space, exploiting learned representations directly.
Temporal Dynamics of Injection Success
Research on how injection success rates change over time with model updates and defense evolution.
Multimodal Image Injection
Embed adversarial text in images that triggers prompt injection in vision-language models.
Audio Injection via Speech-to-Text Models
Craft adversarial audio that embeds prompt injection payloads when transcribed by speech-to-text models.
Agent Memory Injection for Persistent Access
Inject persistent instructions into agent memory systems that survive across conversation sessions.
Few-Shot Injection Fundamentals
Craft few-shot examples that prime the model to follow attacker instructions in subsequent turns.
Hello World Prompt Injection
Write and test your first prompt injection payload against a simple chatbot to understand the fundamental attack mechanism.
Emoji and Unicode Injection Techniques
Use emoji sequences and Unicode special characters to bypass text-based input filters.
JSON Injection Basics
Inject adversarial content through JSON-formatted inputs to exploit structured data processing.
Prompt Injection via File Names
Embed prompt injection payloads in filenames and metadata of uploaded documents.
Prompt Injection via Translation
Exploit LLM translation capabilities to smuggle instructions through language boundaries.
XML Injection in LLM Contexts
Exploit XML tag handling in LLM applications to manipulate instruction parsing.
Cross-Context Injection
Inject prompts that persist across separate conversation contexts in shared deployments.
Document-Based RAG Injection Lab
Inject adversarial content into documents that will be processed by a RAG system to influence model responses.
Lab: Few-Shot Example Injection
Hands-on lab exploring how injected few-shot examples can steer language model outputs toward attacker-chosen behaviors by exploiting in-context learning.
Lab: Function Calling Injection
Hands-on lab for exploiting function calling mechanisms by crafting inputs that manipulate which functions get called and with what parameters.
Lab: Image-Based Prompt Injection
Hands-on lab exploring how text instructions embedded in images can be used to perform prompt injection against vision-language models (VLMs) that process visual input.
Lab: JSON Input Injection
Hands-on lab exploring how adversarial payloads injected through structured JSON inputs can manipulate language model behavior, bypass schema validation, and exploit parsing inconsistencies.
Lab: Markdown-Based Injection
Hands-on lab exploring how Markdown rendering in AI-generated outputs can be exploited to inject hidden content, exfiltrate data through image tags, and manipulate displayed information.
Lab: RAG Metadata Injection
Hands-on lab for exploiting metadata fields like titles, descriptions, and timestamps to manipulate RAG retrieval ranking and influence responses.
Multi-Language Injection Attacks
Exploit language switching and low-resource language gaps to bypass safety training.
Semantic Injection Crafting
Craft semantically coherent injections that evade both classifiers and human review.
Lab: Tool Result Injection Attacks
Inject adversarial content through tool call results to poison model reasoning and redirect subsequent actions.
Assistant Prefill Injection Attacks
Exploit assistant message prefilling to prime model responses and bypass safety alignment.
PDF Document Injection for RAG Systems
Craft adversarial PDF documents that inject instructions when processed by RAG document loaders.
Tool Result Injection Attacks
Craft malicious tool return values that inject instructions back into the model's reasoning chain.
Audio-Based Injection Attacks
Attacking speech-to-text and audio-language models through adversarial audio crafting.
Document Parsing Attacks
Malicious PDFs, DOCXs, and other documents with hidden instructions designed to exploit AI document processors: invisible text injection, metadata poisoning, and rendering discrepancies.
Modality-Bridging Injection Attacks
Techniques for encoding prompt injection payloads in non-text modalities to bypass text-focused safety filters, including visual injection, audio injection, and cross-modal encoding strategies.
Document Metadata Injection
Inject adversarial content through document metadata fields processed by multimodal AI systems.
Image-Based Prompt Injection Techniques
Techniques for embedding adversarial prompts in images consumed by vision-language models.
Image Steganography for AI Attacks
Using steganographic techniques to embed adversarial payloads in images that evade human inspection and automated detection while influencing AI model behavior.
Image Steganography for LLM Injection
Use image steganography to embed prompt injection payloads invisible to human viewers.
Screenshot and UI Injection Attacks
Injecting prompts through screenshots and UI elements processed by computer-use AI agents.
Steganographic Prompt Injection
Hiding prompt injection payloads using steganographic techniques in images and audio.
Typography-Based Prompt Injection
Exploiting text rendering in images to deliver prompt injection payloads through typography recognition in VLMs.
Video Frame Injection
Injecting adversarial content into video frames processed by video-understanding AI models.
Cognitive Load Injection
Exploiting model capacity through cognitive load attacks that overwhelm safety reasoning.
Instruction Hierarchy Exploitation
Exploiting ambiguities in instruction priority hierarchies across different model providers.
Meta-Prompt Injection
Injecting instructions about how the model should process future instructions.
Temporal Injection Attacks
Exploiting time-dependent behavior in models including seasonal safety variations and update window exploitation.
Universal Suffix Attacks
Research and practice of universal adversarial suffixes that transfer across models and prompts.
Chunk Boundary Attacks
Exploiting document splitting and chunking mechanisms in RAG pipelines, including payload injection at chunk boundaries, cross-chunk instruction injection, and chunk size manipulation.
Metadata Injection
Manipulating document metadata to influence RAG retrieval ranking, bypass filtering, spoof source attribution, and exploit metadata-based access controls.
Injection Payload Cheat Sheet
Quick reference of proven injection payloads organized by technique category, encoding method, and target defense type.
A2A Protocol Injection Walkthrough
Walkthrough of exploiting Google's Agent-to-Agent protocol for inter-agent prompt injection.
Batch Processing Injection Walkthrough
Inject payloads through batch processing pipelines where individual items are processed without isolation.
Computer Use Agent Injection Walkthrough
Walkthrough of injecting prompts through UI elements and screenshots processed by computer-use agents.
Document-Based Injection Walkthrough
Inject prompts through documents processed by LLM applications including PDFs, spreadsheets, and presentations.
JSON Injection Attack Walkthrough
Exploit JSON parsing and generation in LLM applications to inject payloads through structured data boundaries.
Advanced Markdown Injection Walkthrough
Inject Markdown that triggers data exfiltration through image rendering, link generation, and code block escape.
Memory Poisoning Step by Step
Walkthrough of persisting injection payloads in agent memory systems to achieve long-term compromise of LLM-based agents.
Model Context Window Overflow Walkthrough
Overflow the context window to push safety instructions outside the effective attention range.
Multimodal Image Injection Walkthrough
Step-by-step walkthrough of embedding adversarial prompts in images for vision model exploitation.
Supply Chain Prompt Injection Walkthrough
Plant injection payloads in upstream data sources consumed by LLM applications including packages and documentation.
Synthetic Identity Injection Walkthrough
Create synthetic identities that exploit LLM trust mechanisms to achieve elevated instruction priority.
Tool Call Injection
Step-by-step walkthrough of injecting malicious parameters into LLM tool and function calls to execute unauthorized actions in agent systems.
Knowledge Graph Injection Attack Walkthrough
Walkthrough of injecting adversarial facts into knowledge graphs consumed by LLM-based reasoning systems.
Recursive Prompt Injection Walkthrough
Walkthrough of creating self-replicating injection payloads that persist through model output-to-input loops.
Voice AI Prompt Injection Walkthrough
Walkthrough of injecting prompts into voice-based AI assistants through adversarial audio and ultrasonic signals.
XML Injection in LLM Systems Walkthrough
Exploit XML parsing in LLM application pipelines to inject instructions through entity expansion and CDATA sections.
XML and JSON Injection in LLM Apps
Walkthrough of exploiting XML and JSON parsing in LLM applications for injection and data manipulation.