# agent
標記為「agent」的 72 篇文章
Phidata Agent Attacks
Security analysis of Phidata agent framework including knowledge base poisoning and tool abuse.
Inter-Agent Communication Interception
Intercept and manipulate communication channels between agents in multi-agent architectures.
Planning Agent Manipulation
Manipulating LLM-based planning agents to execute adversarial action sequences.
Router Agent Confusion
Confusing router/dispatcher agents to misdirect tasks to inappropriate specialist agents.
Supervisor Agent Override
Techniques for overriding supervisor agents in hierarchical multi-agent architectures.
Agent Security Practice Exam
Practice exam focused on agentic AI security including MCP, A2A, function calling, and multi-agent threats.
Agent Architecture Security Assessment
Assessment covering agent design patterns, tool sandboxing, multi-agent trust, and MCP security.
Skill Verification: Agent Exploitation
Practical skill verification for agent and MCP exploitation techniques.
Agent Security Study Guide
Comprehensive study guide for agent and agentic exploitation topics including MCP and A2A protocols.
Capstone: Autonomous Agent Assessment
Capstone exercise: red team assessment of a fully autonomous agent system with multi-tool access.
Case Study: LLM Agent Tool Abuse in Production
Analysis of incidents where LLM agents misused connected tools causing data exposure and unauthorized actions.
March 2026: Agent Exploitation Challenge
Compromise a multi-tool agent system through prompt injection and tool abuse, completing multiple objectives with escalating difficulty and point values.
Monthly Challenge: Agent Hunter
Monthly challenge focused on discovering and exploiting vulnerabilities in agent-based AI systems.
Agent Sandboxing Strategies
Sandboxing and isolation strategies for limiting the blast radius of compromised LLM agents.
Secure Function Calling Design
Designing secure function calling interfaces that prevent unauthorized tool use and data exfiltration.
Multi-Agent Deception Research
Research on deceptive behavior emerging in multi-agent systems without explicit training for deception.
Agent Tool Chain Exploitation
Chain multiple tool calls in an agent system to achieve multi-step exploitation.
CTF: Agent Escalation
Five-flag CTF challenge focused on exploiting agentic AI systems. Progress from basic tool abuse through privilege escalation, indirect injection, memory poisoning, and multi-agent chain attacks.
Agent Heist: Level 2 — MCP Server Takeover
Take control of an MCP-enabled agent by poisoning tool descriptions and chaining exploits.
CTF: Agent Heist
A multi-stage agent exploitation challenge where you infiltrate an AI agent's tool ecosystem, escalate privileges, and exfiltrate target data without triggering security alerts.
Shadow Agent Challenge
Take covert control of a multi-agent system by poisoning inter-agent communication without triggering monitors.
Agent Maze Runner: Multi-Tool Navigation
Navigate a maze of agent tools, each with unique vulnerabilities, to reach and exfiltrate a hidden flag.
AI Escape Room: Agent Breakout Challenge
Break an AI agent out of its sandboxed environment by chaining tool-use vulnerabilities and injection techniques.
Lab: Build Agent Security Scanner
Build an automated security scanner for agentic AI systems that detects vulnerabilities in tool use, permission handling, memory management, and multi-step execution flows. Cover agent-specific attack surfaces that traditional LLM testing misses.
Agent Memory Manipulation
Exploit persistent memory in LLM agents to plant false context that persists across sessions.
Agent Goal Hijacking
Redirect an AI agent's objectives through carefully crafted inputs that override its primary task.
Customer Service Agent Red Team
Red team a customer service agent with tool access to order systems, refunds, and customer data.
Attacks via Screen Capture and Computer-Use AI
Techniques for attacking AI systems that process screen captures, including computer-use agents, screen-reading assistants, and automated UI testing systems.
Cross-Plugin Data Exfiltration Walkthrough
Walkthrough of chaining multiple plugins/tools to exfiltrate data from LLM agent systems.
Function Calling Exploitation Guide
Complete walkthrough of exploiting function calling in OpenAI, Anthropic, and Google AI APIs.
MCP Tool Poisoning Attack Walkthrough
Walkthrough of exploiting MCP tool descriptions to redirect agent behavior via hidden instructions.
Tool Shadowing Attack Walkthrough
Register shadow tools that override legitimate tool definitions to intercept and manipulate agent actions.
Memory Injection and Persistence Walkthrough
Walkthrough of injecting persistent instructions into agent memory systems that survive across sessions.
Secure Agent Architecture Design
Design a secure architecture for LLM agent systems with sandboxing, capability controls, and audit trails.
Agent Tool Access Control Implementation
Implement fine-grained tool access control for LLM agents with capability-based security and approval workflows.
Agent System Red Team Engagement
Complete walkthrough for testing tool-using AI agents: scoping agent capabilities, exploiting function calling, testing permission boundaries, multi-step attack chains, and session manipulation.
Phidata 代理 攻擊s
安全 analysis of Phidata agent framework including knowledge base poisoning and tool abuse.
Inter-代理 Communication Interception
Intercept and manipulate communication channels between agents in multi-agent architectures.
Planning 代理 Manipulation
Manipulating LLM-based planning agents to execute adversarial action sequences.
Router 代理 Confusion
Confusing router/dispatcher agents to misdirect tasks to inappropriate specialist agents.
Supervisor 代理 Override
Techniques for overriding supervisor agents in hierarchical multi-agent architectures.
代理 安全 Practice Exam
Practice exam focused on agentic AI security including MCP, A2A, function calling, and multi-agent threats.
代理 Architecture 安全 評量
評量 covering agent design patterns, tool sandboxing, multi-agent trust, and MCP security.
Skill Verification: 代理 利用ation
Practical skill verification for agent and MCP exploitation techniques.
代理 安全 Study 指南
Comprehensive study guide for agent and agentic exploitation topics including MCP and A2A protocols.
Capstone: Autonomous 代理 評量
Capstone exercise: red team assessment of a fully autonomous agent system with multi-tool access.
Case Study: LLM 代理 工具 Abuse in Production
Analysis of incidents where LLM agents misused connected tools causing data exposure and unauthorized actions.
2026 年 3 月:代理利用挑戰
經提示注入與工具濫用破壞多工具代理系統,以升級之難度與分數值完成多個目標。
Monthly Challenge: 代理 Hunter
Monthly challenge focused on discovering and exploiting vulnerabilities in agent-based AI systems.
代理 Sandboxing Strategies
Sandboxing and isolation strategies for limiting the blast radius of compromised LLM agents.
Secure Function Calling Design
Designing secure function calling interfaces that prevent unauthorized tool use and data exfiltration.
Multi-代理 Deception Research
Research on deceptive behavior emerging in multi-agent systems without explicit training for deception.
代理 工具 Chain 利用ation
Chain multiple tool calls in an agent system to achieve multi-step exploitation.
CTF:代理升級
聚焦利用代理 AI 系統之五旗幟 CTF 挑戰。自基礎工具濫用漸進至特權升級、間接注入、記憶投毒,與多代理鏈攻擊。
代理 Heist: Level 2 — MCP Server Takeover
Take control of an MCP-enabled agent by poisoning tool descriptions and chaining exploits.
CTF:代理劫案
多階段代理利用挑戰,你滲透 AI 代理之工具生態系、提升權限並於不觸發安全警報下外洩目標資料。
Shadow 代理 Challenge
Take covert control of a multi-agent system by poisoning inter-agent communication without triggering monitors.
代理 Maze Runner: Multi-工具 Navigation
Navigate a maze of agent tools, each with unique vulnerabilities, to reach and exfiltrate a hidden flag.
AI Escape Room: 代理 Breakout Challenge
Break an AI agent out of its sandboxed environment by chaining tool-use vulnerabilities and injection techniques.
實驗室: Build 代理 安全 Scanner
Build an automated security scanner for agentic AI systems that detects vulnerabilities in tool use, permission handling, memory management, and multi-step execution flows. Cover agent-specific attack surfaces that traditional LLM testing misses.
代理 記憶體 Manipulation
利用 persistent memory in LLM agents to plant false context that persists across sessions.
代理 Goal Hijacking
Redirect an AI agent's objectives through carefully crafted inputs that override its primary task.
Customer Service 代理 紅隊
Red team a customer service agent with tool access to order systems, refunds, and customer data.
攻擊s via Screen Capture and Computer-Use AI
Techniques for attacking AI systems that process screen captures, including computer-use agents, screen-reading assistants, and automated UI testing systems.
Cross-Plugin Data Exfiltration 導覽
導覽 of chaining multiple plugins/tools to exfiltrate data from LLM agent systems.
Function Calling 利用ation 指南
Complete walkthrough of exploiting function calling in OpenAI, Anthropic, and Google AI APIs.
MCP 工具 投毒 攻擊 導覽
導覽 of exploiting MCP tool descriptions to redirect agent behavior via hidden instructions.
工具 Shadowing 攻擊 導覽
Register shadow tools that override legitimate tool definitions to intercept and manipulate agent actions.
記憶體 Injection and Persistence 導覽
導覽 of injecting persistent instructions into agent memory systems that survive across sessions.
Secure 代理 Architecture Design
Design a secure architecture for LLM agent systems with sandboxing, capability controls, and audit trails.
代理 工具 Access Control Implementation
Implement fine-grained tool access control for LLM agents with capability-based security and approval workflows.
代理 System 紅隊 Engagement
Complete walkthrough for testing tool-using AI agents: scoping agent capabilities, exploiting function calling, testing permission boundaries, multi-step attack chains, and session manipulation.