# indirect-injection
標記為「indirect-injection」的 27 篇文章
Agent Goal Hijacking
Techniques for redirecting AI agent objectives through poisoned inputs, indirect prompt injection, and multi-step manipulation -- the #1 ranked risk in OWASP's 2026 Agentic Top 10.
Function Result Poisoning (Agentic Exploitation)
Techniques for manipulating function return values to influence LLM behavior, inject instructions via tool results, and chain poisoned results into multi-step exploitation.
Prompt Injection Chain Analysis
Analyzing chains of prompt injection attacks across multi-step AI systems, including indirect injection propagation, agentic exploitation, and cross-system attack correlation.
Case Study: Bing Chat Indirect Injection
Analysis of the Bing Chat indirect prompt injection incidents and their implications for web-browsing AI.
Case Study: Indirect Prompt Injection in Bing Chat
Detailed analysis of indirect prompt injection attacks demonstrated against Bing Chat through web content manipulation.
Advanced Prompt Injection
Expert techniques for instruction hierarchy exploitation, multi-stage injection chains, indirect injection via structured data, payload obfuscation, and quantitative attack measurement.
Basic Indirect Prompt Injection
Plant and trigger a basic indirect prompt injection payload in content consumed by an LLM.
Lab: Indirect Prompt Injection
Inject instructions through external data sources including documents, web pages, and emails that a target AI system processes as context.
Indirect Injection via Web Content
Plant prompt injection payloads in web pages consumed by RAG-enabled LLM applications.
Lab: Indirect Prompt Injection Chains
Hands-on lab for setting up indirect prompt injection scenarios through web pages, emails, and documents, testing multi-hop injection chains against AI systems.
Lab: Tool Result Poisoning
Hands-on lab for poisoning tool outputs to redirect agent behavior by injecting malicious content through tool results.
Indirect Prompt Injection
How attackers embed malicious instructions in external data sources that LLMs process, enabling attacks without direct access to the model's input.
RAG Retrieval Poisoning (Rag Data Attacks)
Techniques for poisoning RAG knowledge bases to inject malicious content into LLM context, including embedding manipulation, document crafting, and retrieval hijacking.
Real-World Indirect Prompt Injection
Walkthrough of planting and triggering indirect prompt injection in web-browsing AI assistants.
代理 Goal Hijacking
Techniques for redirecting AI agent objectives through poisoned inputs, indirect prompt injection, and multi-step manipulation -- the #1 ranked risk in OWASP's 2026 Agentic Top 10.
函式結果投毒(代理式攻擊)
操弄函式回傳值以影響 LLM 行為的技術、透過工具結果注入指令,以及將被投毒結果串接為多步攻擊。
提示詞注入 Chain Analysis
Analyzing chains of prompt injection attacks across multi-step AI systems, including indirect injection propagation, agentic exploitation, and cross-system attack correlation.
Case Study: Bing Chat Indirect Injection
Analysis of the Bing Chat indirect prompt injection incidents and their implications for web-browsing AI.
Case Study: Indirect 提示詞注入 in Bing Chat
Detailed analysis of indirect prompt injection attacks demonstrated against Bing Chat through web content manipulation.
進階 提示詞注入
專家 techniques for instruction hierarchy exploitation, multi-stage injection chains, indirect injection via structured data, payload obfuscation, and quantitative attack measurement.
Basic Indirect 提示詞注入
Plant and trigger a basic indirect prompt injection payload in content consumed by an LLM.
實驗室: Indirect 提示詞注入
Inject instructions through external data sources including documents, web pages, and emails that a target AI system processes as context.
Indirect Injection via Web Content
Plant prompt injection payloads in web pages consumed by RAG-enabled LLM applications.
實驗室: Indirect 提示詞注入 Chains
Hands-on lab for setting up indirect prompt injection scenarios through web pages, emails, and documents, testing multi-hop injection chains against AI systems.
實驗室: 工具 Result 投毒
Hands-on lab for poisoning tool outputs to redirect agent behavior by injecting malicious content through tool results.
間接提示詞注入
攻擊者如何在大型語言模型處理的外部資料來源中嵌入惡意指令,無需直接存取模型輸入即可發動攻擊。
Real-World Indirect 提示詞注入
導覽 of planting and triggering indirect prompt injection in web-browsing AI assistants.