# agent
36 articlestagged with “agent”
Phidata Agent Attacks
Security analysis of Phidata agent framework including knowledge base poisoning and tool abuse.
Inter-Agent Communication Interception
Intercept and manipulate communication channels between agents in multi-agent architectures.
Planning Agent Manipulation
Manipulating LLM-based planning agents to execute adversarial action sequences.
Router Agent Confusion
Confusing router/dispatcher agents to misdirect tasks to inappropriate specialist agents.
Supervisor Agent Override
Techniques for overriding supervisor agents in hierarchical multi-agent architectures.
Agent Security Practice Exam
Practice exam focused on agentic AI security including MCP, A2A, function calling, and multi-agent threats.
Agent Architecture Security Assessment
Assessment covering agent design patterns, tool sandboxing, multi-agent trust, and MCP security.
Skill Verification: Agent Exploitation
Practical skill verification for agent and MCP exploitation techniques.
Agent Security Study Guide
Comprehensive study guide for agent and agentic exploitation topics including MCP and A2A protocols.
Capstone: Autonomous Agent Assessment
Capstone exercise: red team assessment of a fully autonomous agent system with multi-tool access.
Case Study: LLM Agent Tool Abuse in Production
Analysis of incidents where LLM agents misused connected tools causing data exposure and unauthorized actions.
March 2026: Agent Exploitation Challenge
Compromise a multi-tool agent system through prompt injection and tool abuse, completing multiple objectives with escalating difficulty and point values.
Monthly Challenge: Agent Hunter
Monthly challenge focused on discovering and exploiting vulnerabilities in agent-based AI systems.
Agent Sandboxing Strategies
Sandboxing and isolation strategies for limiting the blast radius of compromised LLM agents.
Secure Function Calling Design
Designing secure function calling interfaces that prevent unauthorized tool use and data exfiltration.
Multi-Agent Deception Research
Research on deceptive behavior emerging in multi-agent systems without explicit training for deception.
Agent Tool Chain Exploitation
Chain multiple tool calls in an agent system to achieve multi-step exploitation.
CTF: Agent Escalation
Five-flag CTF challenge focused on exploiting agentic AI systems. Progress from basic tool abuse through privilege escalation, indirect injection, memory poisoning, and multi-agent chain attacks.
Agent Heist: Level 2 — MCP Server Takeover
Take control of an MCP-enabled agent by poisoning tool descriptions and chaining exploits.
CTF: Agent Heist
A multi-stage agent exploitation challenge where you infiltrate an AI agent's tool ecosystem, escalate privileges, and exfiltrate target data without triggering security alerts.
Shadow Agent Challenge
Take covert control of a multi-agent system by poisoning inter-agent communication without triggering monitors.
Agent Maze Runner: Multi-Tool Navigation
Navigate a maze of agent tools, each with unique vulnerabilities, to reach and exfiltrate a hidden flag.
AI Escape Room: Agent Breakout Challenge
Break an AI agent out of its sandboxed environment by chaining tool-use vulnerabilities and injection techniques.
Lab: Build Agent Security Scanner
Build an automated security scanner for agentic AI systems that detects vulnerabilities in tool use, permission handling, memory management, and multi-step execution flows. Cover agent-specific attack surfaces that traditional LLM testing misses.
Agent Memory Manipulation
Exploit persistent memory in LLM agents to plant false context that persists across sessions.
Agent Goal Hijacking
Redirect an AI agent's objectives through carefully crafted inputs that override its primary task.
Customer Service Agent Red Team
Red team a customer service agent with tool access to order systems, refunds, and customer data.
Attacks via Screen Capture and Computer-Use AI
Techniques for attacking AI systems that process screen captures, including computer-use agents, screen-reading assistants, and automated UI testing systems.
Cross-Plugin Data Exfiltration Walkthrough
Walkthrough of chaining multiple plugins/tools to exfiltrate data from LLM agent systems.
Function Calling Exploitation Guide
Complete walkthrough of exploiting function calling in OpenAI, Anthropic, and Google AI APIs.
MCP Tool Poisoning Attack Walkthrough
Walkthrough of exploiting MCP tool descriptions to redirect agent behavior via hidden instructions.
Tool Shadowing Attack Walkthrough
Register shadow tools that override legitimate tool definitions to intercept and manipulate agent actions.
Memory Injection and Persistence Walkthrough
Walkthrough of injecting persistent instructions into agent memory systems that survive across sessions.
Secure Agent Architecture Design
Design a secure architecture for LLM agent systems with sandboxing, capability controls, and audit trails.
Agent Tool Access Control Implementation
Implement fine-grained tool access control for LLM agents with capability-based security and approval workflows.
Agent System Red Team Engagement
Complete walkthrough for testing tool-using AI agents: scoping agent capabilities, exploiting function calling, testing permission boundaries, multi-step attack chains, and session manipulation.