# risk-assessment
12 articles tagged with “risk-assessment”
AI Incident Severity Scoring
Frameworks and methodologies for scoring the severity of AI security incidents, integrating NIST AI RMF, MITRE ATLAS, and traditional CVSS approaches.
AI-Specific Severity Scoring Framework
Severity scoring framework designed for AI security incidents: model integrity impact, data exposure scope, blast radius analysis, reversibility assessment, and composite scoring methodology.
Thinking Like a Defender
Mental models for defensive thinking, risk assessment frameworks, defense tradeoffs, and why understanding the defender's perspective makes you a better red teamer.
AI Risk Assessment Methodologies
Structured methodologies for assessing AI system risks including quantitative, qualitative, and hybrid approaches.
AI Compliance Tools Overview
Overview of tools, methodologies, and frameworks for maintaining AI compliance, including risk assessment, audit methodology, and continuous compliance monitoring.
AI Risk Assessment Methodology
Structured approaches to evaluating AI system risks including identification, scoring frameworks, treatment planning, and templates for conducting comprehensive AI risk assessments.
Impact Categories
Overview of the real-world consequences of successful AI attacks, from misinformation and harmful content to financial fraud and regulatory violations.
Threat Modeling for AI Infrastructure Using STRIDE
Systematic threat modeling methodology for AI/ML systems using STRIDE, data flow diagrams, and attack trees tailored to machine learning pipelines.
Building AI-Specific Threat Models
Step-by-step walkthrough for creating threat models tailored to AI and LLM systems, covering asset identification, threat enumeration, attack tree construction, and risk prioritization.
Classifying AI Vulnerability Severity
Framework for consistently classifying the severity of AI and LLM vulnerabilities, with scoring criteria, impact assessment, and examples across common finding categories.
How to Scope an AI Red Team Engagement
Comprehensive walkthrough for scoping AI red team engagements from initial client contact through statement of work, covering target enumeration, risk-based prioritization, resource estimation, boundary definition, and legal considerations.
Threat Modeling for LLM-Powered Applications
Step-by-step walkthrough for conducting threat modeling sessions specifically tailored to LLM-powered applications, covering data flow analysis, trust boundary identification, AI-specific threat enumeration, risk assessment, and mitigation planning.
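Several of the articles above describe composite severity scoring across dimensions such as model integrity impact, data exposure scope, blast radius, and reversibility. A minimal sketch of that idea is below; the dimension names, weights, and band thresholds are illustrative assumptions, not values taken from any of the listed frameworks:

```python
# Minimal sketch of a composite AI incident severity score.
# Dimensions and weights are illustrative assumptions, not a published standard.
DIMENSIONS = {
    "model_integrity": 0.30,  # degree of tampering with the model itself
    "data_exposure": 0.30,    # scope of training/inference data exposed
    "blast_radius": 0.25,     # downstream systems and users affected
    "reversibility": 0.15,    # higher = harder to roll back
}

def composite_severity(scores: dict) -> float:
    """Weighted average of per-dimension scores, each on a 0-10 scale."""
    for name, value in scores.items():
        if name not in DIMENSIONS:
            raise ValueError(f"unknown dimension: {name}")
        if not 0.0 <= value <= 10.0:
            raise ValueError(f"{name} out of range: {value}")
    return round(sum(w * scores.get(d, 0.0) for d, w in DIMENSIONS.items()), 2)

def severity_band(score: float) -> str:
    """Map a 0-10 composite score to a band (thresholds are assumptions)."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

# Usage: score a hypothetical incident with moderate model tampering.
incident = {"model_integrity": 8, "data_exposure": 6,
            "blast_radius": 7, "reversibility": 5}
print(composite_severity(incident), severity_band(composite_severity(incident)))
```

Real frameworks typically add qualifiers a flat weighted average cannot express (e.g. a single critical dimension forcing a floor on the composite score), which is why the articles above treat scoring methodology as its own topic.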