# threat-modeling
21 articles tagged "threat-modeling"
Thinking Like a Defender
Mental models for defensive thinking, risk assessment frameworks, defense tradeoffs, and why understanding the defender's perspective makes you a better red teamer.
Threat Modeling for AI Systems
How to identify assets, threats, and attack vectors specific to AI systems using simplified threat modeling frameworks adapted for machine learning.
Threat Modeling for AI Infrastructure Using STRIDE
Systematic threat modeling methodology for AI/ML systems using STRIDE, data flow diagrams, and attack trees tailored to machine learning pipelines.
AI Attack Surface Mapping
Systematic methodology for identifying all attack vectors in AI systems: input channels, data flows, tool integrations, and trust boundaries.
Tradecraft
Advanced AI red team tradecraft covering reconnaissance techniques, AI-specific threat modeling, and structured engagement methodology for professional adversarial assessments.
AI-Specific Threat Modeling
Adapting STRIDE for AI systems, building attack trees for LLM applications, identifying AI-specific threat categories, and producing actionable threat models that drive red team test plans.
AI-Specific Threat Modeling (Tradecraft)
Applying ATLAS, STRIDE, and attack tree methodologies to AI systems. Trust boundary analysis for agentic architectures, data flow analysis, and MCP threat modeling.
Building AI-Specific Threat Models
Step-by-step walkthrough for creating threat models tailored to AI and LLM systems, covering asset identification, threat enumeration, attack tree construction, and risk prioritization.
Threat Modeling for LLM-Powered Applications
Step-by-step walkthrough for conducting threat modeling sessions specifically tailored to LLM-powered applications, covering data flow analysis, trust boundary identification, AI-specific threat enumeration, risk assessment, and mitigation planning.
AI Threat Modeling Workshop Walkthrough
Step-by-step guide to running an AI-focused threat modeling workshop: adapting STRIDE for AI systems, constructing attack trees for LLM applications, participant facilitation techniques, and producing actionable threat models.
Thinking Like a Defender
Mental models for defensive thinking, risk assessment frameworks, defense tradeoffs, and why understanding the defender's perspective makes you a better red teamer.
AI Security Methodology
Structured methodology for AI security assessments, covering threat modeling, attack surface mapping, risk assessment, and reporting frameworks.
Threat Modeling for AI Systems
How to identify assets, threats, and attack vectors specific to AI systems using simplified threat modeling frameworks adapted for machine learning.
Threat Modeling for AI Infrastructure Using STRIDE
Systematic threat modeling methodology for AI/ML systems using STRIDE, data flow diagrams, and attack trees tailored to machine learning pipelines.
AI Attack Surface Mapping
Systematic methodology for identifying all attack vectors in AI systems: input channels, data flows, tool integrations, and trust boundaries.
Tradecraft
Advanced AI red team tradecraft covering reconnaissance techniques, AI-specific threat modeling, and structured engagement methodology for professional adversarial assessments.
AI-Specific Threat Modeling
Adapting STRIDE for AI systems, building attack trees for LLM applications, identifying AI-specific threat categories, and producing actionable threat models that drive red team test plans.
AI-Specific Threat Modeling (Tradecraft)
Applying ATLAS, STRIDE, and attack tree methodologies to AI systems. Trust boundary analysis for agentic architectures, data flow analysis, and MCP threat modeling.
Building AI-Specific Threat Models
Step-by-step walkthrough for creating threat models tailored to AI and LLM systems, covering asset identification, threat enumeration, attack tree construction, and risk prioritization.
Threat Modeling for LLM-Powered Applications
Step-by-step walkthrough for conducting threat modeling sessions specifically tailored to LLM-powered applications, covering data flow analysis, trust boundary identification, AI-specific threat enumeration, risk assessment, and mitigation planning.
AI Threat Modeling Workshop Walkthrough
Step-by-step guide to running an AI-focused threat modeling workshop: adapting STRIDE for AI systems, constructing attack trees for LLM applications, participant facilitation techniques, and producing actionable threat models.