# tool-poisoning
標記為「tool-poisoning」的 16 篇文章
Tool Use Exploitation
Comprehensive techniques for exploiting how AI agents call external tools and APIs, including tool description poisoning, overly permissive access abuse, and tool output manipulation.
MCP & Tool Surface Exploitation
Attack methodology for the Model Context Protocol (MCP) covering tool schema manipulation, tool poisoning, resource URI traversal, cross-server pivoting, and sampling API abuse.
Case Study: Early MCP Vulnerability Disclosures
Analysis of early MCP vulnerability disclosures including Invariant Labs tool poisoning research.
Case Study: MCP Tool Poisoning Attacks (Invariant Labs 2025)
Analysis of tool poisoning vulnerabilities in the Model Context Protocol (MCP) discovered by Invariant Labs, where malicious tool descriptions manipulate AI agents into data exfiltration and unauthorized actions.
MCP Tool Poisoning Attack
Exploit MCP tool descriptions to inject instructions that redirect agent behavior.
Lab: Tool Result Poisoning
Hands-on lab for poisoning tool outputs to redirect agent behavior by injecting malicious content through tool results.
MCP Tool Poisoning Attack Walkthrough
Walkthrough of exploiting MCP tool descriptions to redirect agent behavior via hidden instructions.
MCP Tool Shadowing
Advanced walkthrough of creating shadow tools that override legitimate MCP (Model Context Protocol) tools, enabling interception and manipulation of agent-tool interactions.
工具 Use 利用ation
Comprehensive techniques for exploiting how AI agents call external tools and APIs, including tool description poisoning, overly permissive access abuse, and tool output manipulation.
MCP & 工具 Surface 利用ation
攻擊 methodology for the 模型 Context Protocol (MCP) covering tool schema manipulation, tool poisoning, resource URI traversal, cross-server pivoting, and sampling API abuse.
Case Study: Early MCP 漏洞 Disclosures
Analysis of early MCP vulnerability disclosures including Invariant 實驗室s tool poisoning research.
Case Study: MCP 工具 投毒 攻擊s (Invariant 實驗室s 2025)
Analysis of tool poisoning vulnerabilities in the 模型 Context Protocol (MCP) discovered by Invariant 實驗室s, where malicious tool descriptions manipulate AI agents into data exfiltration and unauthorized actions.
MCP 工具 投毒 攻擊
利用 MCP tool descriptions to inject instructions that redirect agent behavior.
實驗室: 工具 Result 投毒
Hands-on lab for poisoning tool outputs to redirect agent behavior by injecting malicious content through tool results.
MCP 工具 投毒 攻擊 導覽
導覽 of exploiting MCP tool descriptions to redirect agent behavior via hidden instructions.
MCP 工具 Shadowing
進階 walkthrough of creating shadow tools that override legitimate MCP (模型 Context Protocol) tools, enabling interception and manipulation of agent-tool interactions.