# injection
標記為「injection」的 197 篇文章
Function Calling Exploitation
Practical attacks against OpenAI function calling, Anthropic tool use, and similar APIs -- injecting function calls through prompt injection, exploiting parameter validation gaps, and chaining calls.
Memory Schema Injection
Injecting structured data into memory systems that alters agent behavior when retrieved in future interactions.
Persistent Memory Injection
Injecting persistent false memories into agent memory systems to influence future behavior.
Memory Compression Injection
Inject persistent instructions through memory compression and summarization processes in long-running agents.
Function Schema Injection
Injecting malicious instructions through function parameter descriptions and schema definitions.
Parameter Manipulation
Techniques for crafting malicious parameter values in LLM function calls, including type confusion, boundary violations, injection through parameters, and exploiting weak validation.
Structured Output Tool Injection
Exploiting structured output mode to inject tool call directives into model responses.
MCP Configuration Injection
Injecting malicious configuration into MCP server initialization for persistent compromise.
MCP Prompt Template Injection
Exploiting MCP prompt templates to inject instructions through template variables and arguments.
MCP Tool Description Injection
Techniques for injecting adversarial instructions via MCP tool descriptions and parameter schemas.
MCP Resource Template Injection
Inject adversarial content through MCP resource URI templates and parameter expansion mechanisms.
Log Analysis for Injection Detection
Analyzing application and model logs to detect prompt injection attacks including pattern matching, anomaly detection, and behavioral indicators.
Injection Defense Assessment
Assessment on defending against prompt injection including classifiers, guardrails, and output filtering.
Case Study: Prompt Injection in the Wild
Collection of documented prompt injection incidents in production applications.
Commit Message Injection Attacks
Using crafted commit messages to inject adversarial instructions into AI code review tools that process git history for context.
Copilot Injection Attacks
Prompt injection through repository context that influences code generation suggestions.
Documentation-Based Code Injection
Embedding adversarial instructions in code comments, docstrings, and documentation files that influence AI code generation.
IDE Plugin Injection Attacks
Exploiting IDE-integrated AI coding assistants through workspace context poisoning, configuration manipulation, and extension-based injection vectors.
May 2026: RAG Poisoning Challenge
Inject malicious documents into a retrieval-augmented generation system to control responses for specific queries without disrupting normal operation.
Monthly Challenge: Injection Olympics
Monthly community challenge focused on creative prompt injection techniques across multiple models.
Prompt Injection Canary System
Deploy canary strings in system prompts to detect and alert on prompt injection and extraction attempts.
Vector Database Injection Attacks (Embedding Vector Security)
Comprehensive techniques for injecting adversarial vectors into vector databases to manipulate retrieval results and influence RAG system outputs.
Vector Database Injection Attacks (Embedding Vector Security Overview)
Injecting adversarial documents into vector databases to influence retrieval results.
Screen Capture Injection
Techniques for injecting malicious content through screen capture pipelines used by computer use AI agents, including frame manipulation, capture timing attacks, and pixel-level payload delivery through the visual channel.
Robot Control Injection
Techniques for injecting malicious commands into LLM-controlled robotic systems: prompt injection through task descriptions, code generation exploitation, parameter manipulation, and action sequence hijacking.
Output Handling Exploits
Deep dive into XSS, SQL injection, command injection, SSTI, and path traversal attacks that weaponize LLM output as an injection vector against downstream systems.
Cross-Architecture Injection Transfer
Research into how injection techniques transfer across model architectures and what architectural properties determine transferability.
Cross-Lingual Injection Transfer Research
Research on how injection techniques transfer across languages and multilingual models.
Defense-Aware Payload Design
Designing injection payloads that adapt to and evade specific defense mechanisms through probing and feedback-based optimization.
Defense-Informed Injection Design
Methodology for designing injections that account for known defensive mechanisms.
Injection Research
Advanced research in prompt injection, jailbreak automation, and multimodal attack vectors, covering cutting-edge techniques that push beyond standard injection approaches.
Injection in Reasoning Models
Research into injection attacks specific to reasoning-augmented models that exploit chain-of-thought processes and self-reflection mechanisms.
Injection Attack Surface Taxonomy
Comprehensive taxonomy of all known injection attack surfaces in LLM-powered applications.
Multi-Agent Injection Research
Research into how injections propagate through multi-agent systems and what properties determine infection spread rates.
Novel Injection Classes
Exploring emerging injection classes that don't fit traditional taxonomies, including structural, temporal, and cross-system injection vectors.
Semantic Injection Research
Research on semantically coherent injections that are indistinguishable from normal input.
Semantic Space Injection Research
Research into injections that operate in semantic embedding space rather than token space, exploiting learned representations directly.
Temporal Dynamics of Injection Success
Research on how injection success rates change over time with model updates and defense evolution.
Multimodal Image Injection
Embed adversarial text in images that triggers prompt injection in vision-language models.
Audio Injection via Speech-to-Text Models
Craft adversarial audio that embeds prompt injection payloads when transcribed by speech-to-text models.
Agent Memory Injection for Persistent Access
Inject persistent instructions into agent memory systems that survive across conversation sessions.
Few-Shot Injection Fundamentals
Craft few-shot examples that prime the model to follow attacker instructions in subsequent turns.
Hello World Prompt Injection
Write and test your first prompt injection payload against a simple chatbot to understand the fundamental attack mechanism.
Emoji and Unicode Injection Techniques
Use emoji sequences and Unicode special characters to bypass text-based input filters.
JSON Injection Basics
Inject adversarial content through JSON-formatted inputs to exploit structured data processing.
Prompt Injection via File Names
Embed prompt injection payloads in filenames and metadata of uploaded documents.
Prompt Injection via Translation
Exploit LLM translation capabilities to smuggle instructions through language boundaries.
XML Injection in LLM Contexts
Exploit XML tag handling in LLM applications to manipulate instruction parsing.
Cross-Context Injection
Inject prompts that persist across separate conversation contexts in shared deployments.
Document-Based RAG Injection Lab
Inject adversarial content into documents that will be processed by a RAG system to influence model responses.
Lab: Few-Shot Example Injection
Hands-on lab exploring how injected few-shot examples can steer language model outputs toward attacker-chosen behaviors by exploiting in-context learning.
Lab: Function Calling Injection
Hands-on lab for exploiting function calling mechanisms by crafting inputs that manipulate which functions get called and with what parameters.
Lab: Image-Based Prompt Injection
Hands-on lab exploring how text instructions embedded in images can be used to perform prompt injection against vision-language models (VLMs) that process visual input.
Lab: JSON Input Injection
Hands-on lab exploring how adversarial payloads injected through structured JSON inputs can manipulate language model behavior, bypass schema validation, and exploit parsing inconsistencies.
Lab: Markdown-Based Injection
Hands-on lab exploring how Markdown rendering in AI-generated outputs can be exploited to inject hidden content, exfiltrate data through image tags, and manipulate displayed information.
Lab: RAG Metadata Injection
Hands-on lab for exploiting metadata fields like titles, descriptions, and timestamps to manipulate RAG retrieval ranking and influence responses.
Multi-Language Injection Attacks
Exploit language switching and low-resource language gaps to bypass safety training.
Semantic Injection Crafting
Craft semantically coherent injections that evade both classifiers and human review.
Lab: Tool Result Injection Attacks
Inject adversarial content through tool call results to poison model reasoning and redirect subsequent actions.
Assistant Prefill Injection Attacks
Exploit assistant message prefilling to prime model responses and bypass safety alignment.
PDF Document Injection for RAG Systems
Craft adversarial PDF documents that inject instructions when processed by RAG document loaders.
Tool Result Injection Attacks
Craft malicious tool return values that inject instructions back into the model's reasoning chain.
Audio-Based Injection Attacks
Attacking speech-to-text and audio-language models through adversarial audio crafting.
Document Parsing Attacks
Malicious PDFs, DOCXs, and other documents with hidden instructions designed to exploit AI document processors: invisible text injection, metadata poisoning, and rendering discrepancies.
Modality-Bridging Injection Attacks
Techniques for encoding prompt injection payloads in non-text modalities to bypass text-focused safety filters, including visual injection, audio injection, and cross-modal encoding strategies.
Document Metadata Injection
Inject adversarial content through document metadata fields processed by multimodal AI systems.
Image-Based Prompt Injection Techniques
Techniques for embedding adversarial prompts in images consumed by vision-language models.
Image Steganography for AI Attacks
Using steganographic techniques to embed adversarial payloads in images that evade human inspection and automated detection while influencing AI model behavior.
Image Steganography for LLM Injection
Use image steganography to embed prompt injection payloads invisible to human viewers.
Screenshot and UI Injection Attacks
Injecting prompts through screenshots and UI elements processed by computer-use AI agents.
Steganographic Prompt Injection
Hiding prompt injection payloads using steganographic techniques in images and audio.
Typography-Based Prompt Injection
Exploiting text rendering in images to deliver prompt injection payloads through typography recognition in VLMs.
Video Frame Injection
Injecting adversarial content into video frames processed by video-understanding AI models.
Cognitive Load Injection
Exploiting model capacity through cognitive load attacks that overwhelm safety reasoning.
Instruction Hierarchy Exploitation
Exploiting ambiguities in instruction priority hierarchies across different model providers.
Meta-Prompt Injection
Injecting instructions about how the model should process future instructions.
Temporal Injection Attacks
Exploiting time-dependent behavior in models including seasonal safety variations and update window exploitation.
Universal Suffix Attacks
Research and practice of universal adversarial suffixes that transfer across models and prompts.
Chunk Boundary Attacks
Exploiting document splitting and chunking mechanisms in RAG pipelines, including payload injection at chunk boundaries, cross-chunk instruction injection, and chunk size manipulation.
Metadata Injection
Manipulating document metadata to influence RAG retrieval ranking, bypass filtering, spoof source attribution, and exploit metadata-based access controls.
Injection Payload Cheat Sheet
Quick reference of proven injection payloads organized by technique category, encoding method, and target defense type.
A2A Protocol Injection Walkthrough
Walkthrough of exploiting Google's Agent-to-Agent protocol for inter-agent prompt injection.
Batch Processing Injection Walkthrough
Inject payloads through batch processing pipelines where individual items are processed without isolation.
Computer Use Agent Injection Walkthrough
Walkthrough of injecting prompts through UI elements and screenshots processed by computer-use agents.
Document-Based Injection Walkthrough
Inject prompts through documents processed by LLM applications including PDFs, spreadsheets, and presentations.
JSON Injection Attack Walkthrough
Exploit JSON parsing and generation in LLM applications to inject payloads through structured data boundaries.
Advanced Markdown Injection Walkthrough
Inject Markdown that triggers data exfiltration through image rendering, link generation, and code block escape.
Memory Poisoning Step by Step
Walkthrough of persisting injection payloads in agent memory systems to achieve long-term compromise of LLM-based agents.
Model Context Window Overflow Walkthrough
Overflow the context window to push safety instructions outside the effective attention range.
Multimodal Image Injection Walkthrough
Step-by-step walkthrough of embedding adversarial prompts in images for vision model exploitation.
Supply Chain Prompt Injection Walkthrough
Plant injection payloads in upstream data sources consumed by LLM applications including packages and documentation.
Synthetic Identity Injection Walkthrough
Create synthetic identities that exploit LLM trust mechanisms to achieve elevated instruction priority.
Tool Call Injection
Step-by-step walkthrough of injecting malicious parameters into LLM tool and function calls to execute unauthorized actions in agent systems.
Knowledge Graph Injection Attack Walkthrough
Walkthrough of injecting adversarial facts into knowledge graphs consumed by LLM-based reasoning systems.
Recursive Prompt Injection Walkthrough
Walkthrough of creating self-replicating injection payloads that persist through model output-to-input loops.
Voice AI Prompt Injection Walkthrough
Walkthrough of injecting prompts into voice-based AI assistants through adversarial audio and ultrasonic signals.
XML Injection in LLM Systems Walkthrough
Exploit XML parsing in LLM application pipelines to inject instructions through entity expansion and CDATA sections.
XML and JSON Injection in LLM Apps
Walkthrough of exploiting XML and JSON parsing in LLM applications for injection and data manipulation.
Function Calling 利用ation
Practical attacks against OpenAI function calling, Anthropic tool use, and similar APIs -- injecting function calls through prompt injection, exploiting parameter validation gaps, and chaining calls.
記憶體 Schema Injection
Injecting structured data into memory systems that alters agent behavior when retrieved in future interactions.
Persistent 記憶體 Injection
Injecting persistent false memories into agent memory systems to influence future behavior.
記憶體 Compression Injection
Inject persistent instructions through memory compression and summarization processes in long-running agents.
Function Schema Injection
Injecting malicious instructions through function parameter descriptions and schema definitions.
參數操弄
打造 LLM 函式呼叫中惡意參數值的技術,包含型別混淆、邊界越界、經由參數之注入,以及利用不完善之驗證。
Structured Output 工具 Injection
利用ing structured output mode to inject tool call directives into model responses.
MCP Configuration Injection
Injecting malicious configuration into MCP server initialization for persistent compromise.
MCP Prompt Template Injection
利用ing MCP prompt templates to inject instructions through template variables and arguments.
MCP 工具 Description Injection
Techniques for injecting adversarial instructions via MCP tool descriptions and parameter schemas.
MCP Resource Template Injection
Inject adversarial content through MCP resource URI templates and parameter expansion mechanisms.
Log Analysis for Injection Detection
Analyzing application and model logs to detect prompt injection attacks including pattern matching, anomaly detection, and behavioral indicators.
Injection 防禦 評量
評量 on defending against prompt injection including classifiers, guardrails, and output filtering.
Case Study: 提示詞注入 in the Wild
Collection of documented prompt injection incidents in production applications.
Commit Message Injection 攻擊s
Using crafted commit messages to inject adversarial instructions into AI code review tools that process git history for context.
Copilot Injection 攻擊s
Prompt injection through repository context that influences code generation suggestions.
Documentation-Based Code Injection
Embedding adversarial instructions in code comments, docstrings, and documentation files that influence AI code generation.
IDE Plugin Injection 攻擊s
利用ing IDE-integrated AI coding assistants through workspace context poisoning, configuration manipulation, and extension-based injection vectors.
May 2026: RAG 投毒 Challenge
Inject malicious documents into a retrieval-augmented generation system to control responses for specific queries without disrupting normal operation.
Monthly Challenge: Injection Olympics
Monthly community challenge focused on creative prompt injection techniques across multiple models.
提示詞注入 Canary System
Deploy canary strings in system prompts to detect and alert on prompt injection and extraction attempts.
Vector Database Injection 攻擊s (Embedding Vector 安全)
Comprehensive techniques for injecting adversarial vectors into vector databases to manipulate retrieval results and influence RAG system outputs.
Vector Database Injection 攻擊s (Embedding Vector 安全 概覽)
Injecting adversarial documents into vector databases to influence retrieval results.
Screen Capture Injection
Techniques for injecting malicious content through screen capture pipelines used by computer use AI agents, including frame manipulation, capture timing attacks, and pixel-level payload delivery through the visual channel.
機器人控制注入
將惡意指令注入 LLM 控制之機器人系統之技術:經任務描述之提示注入、程式碼生成利用、參數操弄與動作序列劫持。
Output Handling 利用s
Deep dive into XSS, SQL injection, command injection, SSTI, and path traversal attacks that weaponize LLM output as an injection vector against downstream systems.
Cross-Architecture Injection Transfer
Research into how injection techniques transfer across model architectures and what architectural properties determine transferability.
Cross-Lingual Injection Transfer Research
Research on how injection techniques transfer across languages and multilingual models.
防禦-Aware Payload Design
Designing injection payloads that adapt to and evade specific defense mechanisms through probing and feedback-based optimization.
防禦-Informed Injection Design
Methodology for designing injections that account for known defensive mechanisms.
注入研究
提示詞注入、越獄自動化與多模態攻擊向量的進階研究,涵蓋超越標準注入方法的尖端技術。
Injection in Reasoning 模型s
Research into injection attacks specific to reasoning-augmented models that exploit chain-of-thought processes and self-reflection mechanisms.
Injection 攻擊 Surface Taxonomy
Comprehensive taxonomy of all known injection attack surfaces in LLM-powered applications.
Multi-代理 Injection Research
Research into how injections propagate through multi-agent systems and what properties determine infection spread rates.
Novel Injection Classes
Exploring emerging injection classes that don't fit traditional taxonomies, including structural, temporal, and cross-system injection vectors.
Semantic Injection Research
Research on semantically coherent injections that are indistinguishable from normal input.
Semantic Space Injection Research
Research into injections that operate in semantic embedding space rather than token space, exploiting learned representations directly.
Temporal Dynamics of Injection Success
Research on how injection success rates change over time with model updates and defense evolution.
Multimodal Image Injection
Embed adversarial text in images that triggers prompt injection in vision-language models.
Audio Injection via Speech-to-Text 模型s
Craft adversarial audio that embeds prompt injection payloads when transcribed by speech-to-text models.
代理 記憶體 Injection for Persistent Access
Inject persistent instructions into agent memory systems that survive across conversation sessions.
Few-Shot Injection 基礎
Craft few-shot examples that prime the model to follow attacker instructions in subsequent turns.
Hello World 提示詞注入
Write and test your first prompt injection payload against a simple chatbot to understand the fundamental attack mechanism.
Emoji and Unicode Injection Techniques
Use emoji sequences and Unicode special characters to bypass text-based input filters.
JSON Injection Basics
Inject adversarial content through JSON-formatted inputs to exploit structured data processing.
提示詞注入 via File Names
Embed prompt injection payloads in filenames and metadata of uploaded documents.
提示詞注入 via Translation
利用 LLM translation capabilities to smuggle instructions through language boundaries.
XML Injection in LLM Contexts
利用 XML tag handling in LLM applications to manipulate instruction parsing.
Cross-Context Injection
Inject prompts that persist across separate conversation contexts in shared deployments.
Document-Based RAG Injection 實驗室
Inject adversarial content into documents that will be processed by a RAG system to influence model responses.
實驗室: Few-Shot Example Injection
Hands-on lab exploring how injected few-shot examples can steer language model outputs toward attacker-chosen behaviors by exploiting in-context learning.
實驗室: Function Calling Injection
Hands-on lab for exploiting function calling mechanisms by crafting inputs that manipulate which functions get called and with what parameters.
實驗室: Image-Based 提示詞注入
Hands-on lab exploring how text instructions embedded in images can be used to perform prompt injection against vision-language models (VLMs) that process visual input.
實驗室: JSON Input Injection
Hands-on lab exploring how adversarial payloads injected through structured JSON inputs can manipulate language model behavior, bypass schema validation, and exploit parsing inconsistencies.
實驗室: Markdown-Based Injection
Hands-on lab exploring how Markdown rendering in AI-generated outputs can be exploited to inject hidden content, exfiltrate data through image tags, and manipulate displayed information.
實驗室: RAG Metadata Injection
Hands-on lab for exploiting metadata fields like titles, descriptions, and timestamps to manipulate RAG retrieval ranking and influence responses.
Multi-Language Injection 攻擊s
利用 language switching and low-resource language gaps to bypass safety training.
Semantic Injection Crafting
Craft semantically coherent injections that evade both classifiers and human review.
實驗室: 工具 Result Injection 攻擊s
Inject adversarial content through tool call results to poison model reasoning and redirect subsequent actions.
Assistant Prefill Injection 攻擊s
利用 assistant message prefilling to prime model responses and bypass safety alignment.
PDF Document Injection for RAG Systems
Craft adversarial PDF documents that inject instructions when processed by RAG document loaders.
工具 Result Injection 攻擊s
Craft malicious tool return values that inject instructions back into the model's reasoning chain.
Audio-Based Injection 攻擊s
攻擊ing speech-to-text and audio-language models through adversarial audio crafting.
Document Parsing 攻擊s
Malicious PDFs, DOCXs, and other documents with hidden instructions designed to exploit AI document processors: invisible text injection, metadata poisoning, and rendering discrepancies.
模態橋接注入攻擊
於非文字模態編碼提示注入 payload 以繞過聚焦於文字之安全過濾器的技術,包括視覺注入、音訊注入與跨模態編碼策略。
Document Metadata Injection
Inject adversarial content through document metadata fields processed by multimodal AI systems.
Image-Based 提示詞注入 Techniques
Techniques for embedding adversarial prompts in images consumed by vision-language models.
Image Steganography for AI 攻擊s
Using steganographic techniques to embed adversarial payloads in images that evade human inspection and automated detection while influencing AI model behavior.
Image Steganography for LLM Injection
Use image steganography to embed prompt injection payloads invisible to human viewers.
Screenshot and UI Injection 攻擊s
Injecting prompts through screenshots and UI elements processed by computer-use AI agents.
Steganographic 提示詞注入
Hiding prompt injection payloads using steganographic techniques in images and audio.
Typography-Based 提示詞注入
利用ing text rendering in images to deliver prompt injection payloads through typography recognition in VLMs.
Video Frame Injection
Injecting adversarial content into video frames processed by video-understanding AI models.
Cognitive Load Injection
利用ing model capacity through cognitive load attacks that overwhelm safety reasoning.
Instruction Hierarchy 利用ation
利用ing ambiguities in instruction priority hierarchies across different model providers.
Meta-提示詞注入
Injecting instructions about how the model should process future instructions.
Temporal Injection 攻擊s
利用ing time-dependent behavior in models including seasonal safety variations and update window exploitation.
Universal Suffix 攻擊s
Research and practice of universal adversarial suffixes that transfer across models and prompts.
Chunk Boundary 攻擊s
利用ing document splitting and chunking mechanisms in RAG pipelines, including payload injection at chunk boundaries, cross-chunk instruction injection, and chunk size manipulation.
Metadata Injection
Manipulating document metadata to influence RAG retrieval ranking, bypass filtering, spoof source attribution, and exploit metadata-based access controls.
RAG 管線投毒
透過投毒檢索增強生成管線以操控 AI 回應的技術——涵蓋文件注入、嵌入操控、檢索排名攻擊與持久投毒策略。
Injection Payload Cheat Sheet
Quick reference of proven injection payloads organized by technique category, encoding method, and target defense type.
A2A Protocol Injection 導覽
Walkthrough of exploiting Google's Agent-to-Agent protocol for inter-agent prompt injection.
Batch Processing Injection 導覽
Inject payloads through batch processing pipelines where individual items are processed without isolation.
Computer Use 代理 Injection 導覽
導覽 of injecting prompts through UI elements and screenshots processed by computer-use agents.
Document-Based Injection 導覽
Inject prompts through documents processed by LLM applications including PDFs, spreadsheets, and presentations.
JSON Injection 攻擊 導覽
利用 JSON parsing and generation in LLM applications to inject payloads through structured data boundaries.
進階 Markdown Injection 導覽
Inject Markdown that triggers data exfiltration through image rendering, link generation, and code block escape.
記憶體 投毒 Step by Step
導覽 of persisting injection payloads in agent memory systems to achieve long-term compromise of LLM-based agents.
模型 Context Window Overflow 導覽
Overflow the context window to push safety instructions outside the effective attention range.
Multimodal Image Injection 導覽
Step-by-step walkthrough of embedding adversarial prompts in images for vision model exploitation.
Supply Chain 提示詞注入 導覽
Plant injection payloads in upstream data sources consumed by LLM applications including packages and documentation.
Synthetic Identity Injection 導覽
Create synthetic identities that exploit LLM trust mechanisms to achieve elevated instruction priority.
工具 Call Injection
Step-by-step walkthrough of injecting malicious parameters into LLM tool and function calls to execute unauthorized actions in agent systems.
Knowledge Graph Injection 攻擊 導覽
導覽 of injecting adversarial facts into knowledge graphs consumed by LLM-based reasoning systems.
Recursive 提示詞注入 導覽
導覽 of creating self-replicating injection payloads that persist through model output-to-input loops.
Voice AI 提示詞注入 導覽
導覽 of injecting prompts into voice-based AI assistants through adversarial audio and ultrasonic signals.
XML Injection in LLM Systems 導覽
利用 XML parsing in LLM application pipelines to inject instructions through entity expansion and CDATA sections.
XML and JSON Injection in LLM Apps
導覽 of exploiting XML and JSON parsing in LLM applications for injection and data manipulation.