# trust-boundaries
12 articles tagged "trust-boundaries"
Attacking Multi-Agent Systems
Exploitation techniques for multi-agent architectures including inter-agent injection, trust boundary violations, cascading compromises, and A2A protocol attacks.
Trust Boundary Attacks
Methodology for exploiting trust relationships between agents in multi-agent systems, including rogue agent registration, capability spoofing, transitive trust chain exploitation, and lateral movement techniques.
Security of Multi-Agent Coding Systems
Security analysis of multi-agent AI coding systems covering inter-agent trust, privilege escalation, tool-use chains, and emergent behavior risks.
LLM Trust Boundaries
Understanding trust boundaries in LLM applications: where data crosses privilege levels and how the lack of native trust enforcement creates attack surfaces.
Model Registry Security (LLMOps Security)
Security overview of model registries: how registries manage model lifecycle, access control models, trust boundaries, and the unique security challenges of storing and distributing opaque ML artifacts.
AI-Specific Threat Modeling (Tradecraft)
Applying ATLAS, STRIDE, and attack tree methodologies to AI systems. Trust boundary analysis for agentic architectures, data flow analysis, and MCP threat modeling.