# indirect-injection
13 articles tagged with “indirect-injection”
Function Result Poisoning (Agentic Exploitation)
Techniques for manipulating function return values to influence LLM behavior, inject instructions via tool results, and chain poisoned results into multi-step exploitation.
Prompt Injection Chain Analysis
Analyzing chains of prompt injection attacks across multi-step AI systems, including indirect injection propagation, agentic exploitation, and cross-system attack correlation.
Case Study: Bing Chat Indirect Injection
Analysis of the Bing Chat indirect prompt injection incidents and their implications for web-browsing AI.
Case Study: Indirect Prompt Injection in Bing Chat
Detailed analysis of indirect prompt injection attacks demonstrated against Bing Chat through web content manipulation.
Advanced Prompt Injection
Expert techniques for instruction hierarchy exploitation, multi-stage injection chains, indirect injection via structured data, payload obfuscation, and quantitative attack measurement.
Basic Indirect Prompt Injection
Plant and trigger a basic indirect prompt injection payload in content consumed by an LLM.
Lab: Indirect Prompt Injection
Inject instructions through external data sources including documents, web pages, and emails that a target AI system processes as context.
Indirect Injection via Web Content
Plant prompt injection payloads in web pages consumed by RAG-enabled LLM applications.
Lab: Indirect Prompt Injection Chains
Hands-on lab for setting up indirect prompt injection scenarios through web pages, emails, and documents, and testing multi-hop injection chains against AI systems.
Lab: Tool Result Poisoning
Hands-on lab for poisoning tool outputs to redirect agent behavior by injecting malicious content through tool results.
Indirect Prompt Injection
How attackers embed malicious instructions in external data sources that LLMs process, enabling attacks without direct access to the model's input.
RAG Retrieval Poisoning (RAG Data Attacks)
Techniques for poisoning RAG knowledge bases to inject malicious content into LLM context, including embedding manipulation, document crafting, and retrieval hijacking.
Real-World Indirect Prompt Injection
Walkthrough of planting and triggering indirect prompt injection in web-browsing AI assistants.