# agentic
78 articles tagged "agentic"
Haystack Pipeline Exploitation
Exploiting Haystack's pipeline architecture for component injection and data flow manipulation.
Phidata Agent Attacks
Security analysis of Phidata agent framework including knowledge base poisoning and tool abuse.
Vector-Based Memory Poisoning
Poisoning vector-based memory stores in agent systems to inject false context into retrieval.
Structured Output Tool Injection
Exploiting structured output mode to inject tool call directives into model responses.
Tool Disambiguation Attacks
Exploiting tool selection ambiguity to redirect function calls to unintended tools.
Agentic Exploitation
Comprehensive coverage of security vulnerabilities in agentic AI systems, including MCP tool exploitation, multi-agent protocol attacks, function calling abuse, memory system compromise, framework-specific weaknesses, and workflow pattern attacks.
MCP Configuration Injection
Injecting malicious configuration into MCP server initialization for persistent compromise.
MCP Dynamic Tool Registration Attacks
Attacking dynamic tool registration in MCP to inject malicious tools at runtime.
MCP Root Listing Exploitation
Exploiting MCP root listing capabilities to discover and access sensitive file system resources.
MCP SSE Transport Security Analysis
Security analysis of Server-Sent Events transport in MCP including reconnection attacks and event injection.
MCP stdio Transport Exploitation
Exploiting the stdio transport mechanism in MCP for inter-process communication attacks and data interception.
A2A Artifact Manipulation
Manipulating artifacts exchanged between agents in A2A protocol for data poisoning and injection.
A2A Agent Discovery Exploitation
Exploiting the A2A agent discovery mechanism to register malicious agents or impersonate trusted ones.
A2A Push Notification Abuse
Abusing A2A push notification mechanisms for out-of-band data exfiltration and command injection.
A2A Task State Manipulation
Manipulating task states in A2A to skip validation, bypass approval, or redirect task completion.
Supervisor Agent Override
Techniques for overriding supervisor agents in hierarchical multi-agent architectures.
Tool Chain Amplification Attacks
Amplifying attack impact by chaining tool calls in agent workflows for cascading exploitation.
Workflow Checkpoint Manipulation
Manipulating workflow checkpoints and savepoints for state rollback attacks.
Advanced Practice Exam
25-question practice exam covering advanced AI red team techniques: multimodal attacks, training pipeline exploitation, agentic system attacks, embedding manipulation, and fine-tuning security.
Agentic AI Security Practice Exam 1
Practice exam focused on MCP exploitation, function calling attacks, and multi-agent security vulnerabilities.
Agentic AI Security Practice Exam 2
Advanced practice exam covering agent memory poisoning, workflow exploitation, and A2A protocol attacks.
Agentic Security Specialist Practice Exam
Specialized practice exam focusing on agent security, MCP, A2A, and multi-agent systems.
Agent Memory Security Assessment
Assessment covering memory poisoning, context manipulation, exfiltration, and cross-session persistence attacks.
Agentic Exploitation Assessment
Assessment covering MCP exploitation, function calling abuse, agent memory attacks, and A2A injection.
Agentic Exploitation Assessment (Assessment)
Test your knowledge of agentic AI attacks, MCP exploitation, function calling abuse, and multi-agent system vulnerabilities with 15 intermediate-level questions.
Function Calling Security Assessment
Assessment focused on JSON schema injection, parameter manipulation, recursive calling, and result poisoning attacks.
Advanced MCP Security Assessment
Comprehensive assessment of MCP protocol vulnerabilities including transport attacks, tool poisoning, and capability escalation.
Workflow Patterns Security Assessment
Assessment of sequential, parallel, and hierarchical agent workflow exploitation techniques.
Advanced Agentic Exploitation Assessment
Advanced assessment covering MCP exploitation chains, multi-agent attacks, and A2A protocol injection.
Skill Verification: Function Calling Attacks
Skill verification for schema injection, parameter manipulation, and result poisoning techniques.
Agentic Security Study Guide
Study guide for agentic security assessments covering MCP, A2A, function calling, and multi-agent attacks.
Capstone: Pentest an Agentic AI System End-to-End
Conduct a full penetration test of an agentic AI system with tool use, multi-step reasoning, and autonomous decision-making capabilities.
Capstone: Multi-Agent System Assessment
Assessing the security of a complex multi-agent system with tool use, memory, and inter-agent communication, covering the full agentic attack surface.
Capstone: Agentic System Red Team
Red team a multi-agent system with MCP servers, function calling, and inter-agent communication, producing an attack tree and comprehensive findings report.
Summer 2026 CTF: Agentic AI Security
An agentic AI security-focused CTF with escalating challenges covering tool exploitation, multi-agent attacks, indirect injection, and agent persistence.
Agentic AI Alignment Challenges
Analysis of alignment challenges specific to tool-using, planning, and autonomous AI agents in production environments.
Simulation: Agentic Workflow Full Engagement
Expert-level red team simulation targeting a multi-tool AI agent with code execution, file access, and API integration capabilities.
AI-Specific Threat Modeling (Tradecraft)
Applying ATLAS, STRIDE, and attack tree methodologies to AI systems. Trust boundary analysis for agentic architectures, data flow analysis, and MCP threat modeling.
Agentic System Assessment Methodology
Comprehensive methodology for assessing agentic AI systems including tool use, memory, and multi-agent interactions.