# risk-assessment
28 articles tagged "risk-assessment"
AI Incident Severity Scoring
Frameworks and methodologies for scoring the severity of AI security incidents, integrating NIST AI RMF, MITRE ATLAS, and traditional CVSS approaches.
AI-Specific Severity Scoring Framework
Severity scoring framework designed for AI security incidents: model integrity impact, data exposure scope, blast radius analysis, reversibility assessment, and composite scoring methodology.
Thinking Like a Defender
Mental models for defensive thinking, risk assessment frameworks, defense tradeoffs, and why understanding the defender's perspective makes you a better red teamer.
AI Risk Assessment Methodologies
Structured methodologies for assessing AI system risks including quantitative, qualitative, and hybrid approaches.
AI Compliance Tools Overview
Overview of tools, methodologies, and frameworks for maintaining AI compliance, including risk assessment, audit methodology, and continuous compliance monitoring.
AI Risk Assessment Methodology
Structured approaches to evaluating AI system risks including identification, scoring frameworks, treatment planning, and templates for conducting comprehensive AI risk assessments.
Impact Categories
Overview of the real-world consequences of successful AI attacks, from misinformation and harmful content to financial fraud and regulatory violations.
Threat Modeling for AI Infrastructure Using STRIDE
Systematic threat modeling methodology for AI/ML systems using STRIDE, data flow diagrams, and attack trees tailored to machine learning pipelines.
AI Supply Chain Security Overview
Comprehensive overview of the AI/ML supply chain attack surface, covering model poisoning, data poisoning, dependency attacks, and risk assessment frameworks aligned with OWASP LLM03:2025.
Building AI-Specific Threat Models
Step-by-step walkthrough for creating threat models tailored to AI and LLM systems, covering asset identification, threat enumeration, attack tree construction, and risk prioritization.
Classifying AI Vulnerability Severity
Framework for consistently classifying the severity of AI and LLM vulnerabilities, with scoring criteria, impact assessment, and examples across common finding categories.
How to Scope an AI Red Team Engagement
Comprehensive walkthrough for scoping AI red team engagements from initial client contact through statement of work, covering target enumeration, risk-based prioritization, resource estimation, boundary definition, and legal considerations.
Threat Modeling for LLM-Powered Applications
Step-by-step walkthrough for conducting threat modeling sessions specifically tailored to LLM-powered applications, covering data flow analysis, trust boundary identification, AI-specific threat enumeration, risk assessment, and mitigation planning.
Chapter Quiz: Impact
A 15-question calibrated quiz testing your understanding of the real-world impact categories of AI attacks.
AI Security Methodologies
Structured methodologies for AI security assessments, covering threat modeling, attack surface mapping, risk assessment, and reporting frameworks.