# methodology
119 articles tagged with “methodology”
Timeline Reconstruction Methodology
Systematic methodology for reconstructing attack timelines from AI system logs, API records, and model behavior observations.
Fundamentals Practice Exam
25-question practice exam covering LLM fundamentals, prompt injection basics, safety mechanisms, red team methodology, and the AI threat landscape at an intermediate level.
Professional Practice Exam
25-question practice exam on professional AI red teaming: engagement methodology, scoping, report writing, governance frameworks, client communication, and ethical considerations.
Practice Exam 1: AI Red Team Fundamentals
25-question practice exam covering LLM architecture, prompt injection, agent exploitation, defense mechanisms, and red team methodology at an intermediate level.
Red Team Methodology Practice Exam
Practice exam on engagement planning, scoping, execution, reporting, and responsible disclosure.
Red Team Methodology Assessment
Test your understanding of AI red team engagement methodology, from scoping through reporting, including structured approaches, attack planning, and finding documentation, with 9 intermediate-level questions.
Professional Skills Assessment
Test your knowledge of AI red teaming methodology, report writing, client engagement, and professional practice, with 15 intermediate-level questions.
Red Team Methodology Assessment (Assessment)
Assessment on scoping, planning, execution, and reporting of AI red team engagements.
Red Team Engagement Planning Assessment
Assessment of planning, scoping, authorization, and execution methodology for AI red team engagements.
Red Team Methodology Assessment (Assessment - W2)
Assessment covering scoping, attack trees, evidence collection, and professional reporting.
Professional Practice Study Guide
Study guide covering AI red teaming methodology, engagement management, report writing, governance frameworks, and professional ethics.
Full Red Team Engagement: End-to-End
Complete guide to AI red team engagements from scoping through attack execution, evidence collection, impact assessment, report delivery, and remediation validation.
Full Engagement Methodology
A comprehensive methodology for conducting full AI red teaming engagements, integrating all techniques from previous sections into a structured professional assessment.
Engagement Planning and Scoping
How to plan and scope an AI red teaming engagement, including defining objectives, rules of engagement, success criteria, and methodology selection.
AI Incident Analysis Methodology
A structured methodology for analyzing AI security incidents. Learn to reconstruct timelines, identify root causes, assess impact, and extract actionable lessons from real-world AI failures across chatbots, data leaks, and alignment failures.
Published Red Team Reports Analysis
Deep analysis of published red team reports from Anthropic, OpenAI, Google DeepMind, and METR. Methodology breakdowns, key findings, and how to read and learn from professional red team assessments.
Red Team-Driven Defense Improvement
Using red team findings to systematically improve LLM application defenses.
Defense Evaluation Methodology
Systematic methodology for evaluating the effectiveness of AI defenses against known attack categories.
AI Exploit Development Overview
An introduction to developing exploits and tooling for AI red teaming, covering the unique challenges of building reliable attacks against probabilistic systems.
Lab: Tool Comparison — Same Target, 4 Tools
Hands-on lab comparing Garak, PyRIT, promptfoo, and Inspect AI against the same target model. Evaluate coverage, speed, finding quality, and practical trade-offs of each tool.
Foundations
Essential building blocks for AI red teaming, covering red team methodology, the AI landscape, how LLMs work, embeddings and vector systems, AI system architecture, and adversarial machine learning concepts.
Red Team Methodology Fundamentals
What AI red teaming is, how it differs from traditional security testing, and the complete engagement lifecycle from scoping to reporting.
Threat Modeling for AI Systems
How to identify assets, threats, and attack vectors specific to AI systems using simplified threat modeling frameworks adapted for machine learning.
Red Teaming Fundamentals for AI
Fundamental concepts and methodology for AI red teaming including goal setting, scope definition, technique selection, and reporting.
AI Impact Assessment Methodology
Methodology for conducting algorithmic impact assessments required by emerging regulations.
AI Risk Assessment Methodologies
Structured methodologies for assessing AI system risks including quantitative, qualitative, and hybrid approaches.
AI Audit Methodology
Comprehensive methodology for auditing AI systems including planning, evidence collection, testing procedures, report templates, and integration with red team assessments.
AI Risk Assessment Methodology
Structured approaches to evaluating AI system risks including identification, scoring frameworks, treatment planning, and templates for conducting comprehensive AI risk assessments.
Red Team Metrics Beyond ASR
Comprehensive metrics methodology for AI red teaming beyond Attack Success Rate: severity-weighted scoring, defense depth metrics, coverage analysis, and stakeholder-appropriate reporting frameworks.
Statistical Rigor in AI Red Teaming
Statistical methodology for AI red teaming: sample size determination, confidence intervals, hypothesis testing for safety claims, handling non-determinism, and avoiding common statistical pitfalls.
AI Impact Assessment Methodology (Governance Compliance)
Methodology for conducting AI impact assessments including human rights, environmental, and social dimensions.
Penetration Testing Methodology for AI Infrastructure
A structured methodology for penetration testing AI/ML systems covering reconnaissance, vulnerability assessment, exploitation, and reporting.
Threat Modeling for AI Infrastructure Using STRIDE
Systematic threat modeling methodology for AI/ML systems using STRIDE, data flow diagrams, and attack trees tailored to machine learning pipelines.
Injection Benchmarking Methodology
Standardized methodologies for benchmarking injection attacks and defenses to enable meaningful comparison across research papers and tools.
Lab: Designing LLM Red Team Test Cases
Design effective red team test cases with clear objectives, success criteria, and reproducible execution procedures.
Lab: Vulnerability Research Methodology
Systematic methodology lab for discovering novel AI vulnerabilities including hypothesis generation, attack surface mapping, experimental design, validation protocols, and responsible disclosure.
Claude Testing Methodology
Systematic methodology for red teaming Claude models, including API probing, model card analysis, safety boundary mapping, and comparative testing across Opus, Sonnet, and Haiku tiers.
Cross-Model Comparison
Methodology for systematically comparing LLM security across model families, including standardized evaluation frameworks, architectural difference analysis, and comparative testing approaches.
Gemini Testing Methodology
Systematic methodology for red teaming Gemini, including Vertex AI API probing, Google AI Studio testing, multimodal test case design, and grounding attack validation.
GPT-4 Testing Methodology
Systematic methodology for red teaming GPT-4, including API-based probing techniques, rate limit considerations, content policy mapping, and safety boundary discovery.
Model Deep Dives
Why model-specific knowledge matters for AI red teaming, how different architectures create different attack surfaces, and a systematic methodology for profiling any new model.
Methodology for Red Teaming Multimodal Systems
Structured methodology for conducting security assessments of multimodal AI systems, covering scoping, attack surface enumeration, test execution, and reporting with MITRE ATLAS mappings.
After-Action Reviews for AI Red Team Operations
Structured frameworks for conducting after-action reviews that capture lessons learned, improve methodology, and demonstrate value from AI red team engagements.
AI Red Team Methodology Standard
Standardized methodology for conducting AI red team assessments from scoping through reporting.
AI Security Consulting Methodology
Structured consulting methodology for delivering AI security assessments, from client acquisition through engagement delivery.
AI Attack Surface Mapping
Systematic methodology for identifying all attack vectors in AI systems: input channels, data flows, tool integrations, and trust boundaries.
AI Red Teaming Methodology
A structured methodology for AI red teaming engagements, covering reconnaissance, target profiling, attack planning, and the tradecraft that distinguishes professional assessments.
AI Red Teaming Cheat Sheet
A condensed quick reference for AI red team engagements covering the full lifecycle, attack categories, common tools, reconnaissance, and reporting.
Tradecraft
Advanced AI red team tradecraft covering reconnaissance techniques, AI-specific threat modeling, and structured engagement methodology for professional adversarial assessments.
Evidence Collection & Chain of Custody (Tradecraft)
Standards for capturing, preserving, and documenting AI red team findings: conversation logs, API traces, bypass rate measurement, and evidence packaging for reproducible reporting.
Red Team Methodology Overview
A structured methodology for AI red team engagements: phases, deliverables, role definitions, and how AI-specific testing differs from traditional penetration testing.
Purple Teaming for AI
Collaborative attack-defense exercises for AI systems: structuring purple team engagements, real-time knowledge transfer, joint attack simulation, and measuring defensive improvement through iterative testing.
Scoping & Rules of Engagement
Defining scope, rules of engagement, authorization boundaries, and success criteria for AI red team engagements, with templates and checklists for common engagement types.
AI-Specific Threat Modeling
Adapting STRIDE for AI systems, building attack trees for LLM applications, identifying AI-specific threat categories, and producing actionable threat models that drive red team test plans.
Tool Selection for AI Red Teaming
Framework for selecting and configuring tools for AI red team engagements based on target architecture, engagement scope, and team capabilities.
Engagement Walkthroughs Overview
Step-by-step walkthroughs for complete AI red team engagements: from scoping and reconnaissance through attack execution to reporting, organized by target system type.
Walkthroughs
Step-by-step guided walkthroughs covering red team tools, engagement methodology, defense implementation, platform-specific testing, and full engagement workflows.
Adversarial Simulation Design
Design realistic adversarial simulations that model real-world threat actors and attack scenarios for AI systems.
Agentic System Assessment Methodology
Comprehensive methodology for assessing agentic AI systems including tool use, memory, and multi-agent interactions.
AI Penetration Test Planning
Complete methodology for planning AI-specific penetration tests including scope definition, resource allocation, and timeline planning.
AI Red Team Maturity Model (Methodology Walkthrough)
Maturity model for assessing and improving an organization's AI red teaming capabilities.
Measuring and Reporting AI Red Team Effectiveness
Walkthrough for defining, collecting, and reporting metrics that measure the effectiveness of AI red teaming programs, covering coverage metrics, detection rates, time-to-find analysis, remediation tracking, and ROI calculation.
AI Security Metrics Framework
Framework for measuring and reporting on AI security posture using quantitative metrics.
Building AI-Specific Threat Models
Step-by-step walkthrough for creating threat models tailored to AI and LLM systems, covering asset identification, threat enumeration, attack tree construction, and risk prioritization.
AI Vulnerability Classification System
Structured system for classifying AI-specific vulnerabilities by type, impact, and exploitability.
Attack Prioritization Framework
Prioritize attack techniques based on target architecture, time constraints, and likelihood of success.
Mapping the Attack Surface of AI Systems
Systematic walkthrough for identifying and mapping every attack surface in an AI system, from user inputs through model inference to output delivery and tool integrations.
Attack Tree Construction for LLM Systems
Build systematic attack trees for LLM system assessments using MITRE ATLAS and OWASP mappings.
Automated AI Reconnaissance Workflow
Build an automated reconnaissance workflow that maps AI application architecture, models, and defense configurations.
Collaborative AI Red Team Assessment
Coordinate multi-person red team assessments with role assignments, communication protocols, and finding deconfliction.
Competitive Analysis of AI Security Tools
Methodology for evaluating and comparing AI security tools for red team operations.
Compliance-Driven Testing Methodology
Map regulatory requirements to specific test cases for compliance-driven AI red team assessments.
Setting Up Continuous AI Red Teaming Pipelines
Walkthrough for building continuous AI red teaming pipelines that automatically test LLM applications on every deployment, covering automated scan configuration, CI/CD integration, alert thresholds, regression testing, and dashboard reporting.
Continuous Monitoring Integration Methodology
Integrate red team findings into continuous monitoring systems for ongoing threat detection and defense validation.
Engagement Kickoff Walkthrough
Step-by-step guide to launching an AI red team engagement: initial client meetings, scope definition, rules of engagement, legal agreements, environment setup, and tool selection.
Testing for EU AI Act Compliance
Walkthrough for conducting red team assessments that evaluate compliance with EU AI Act requirements, covering risk classification, mandatory testing obligations, and documentation requirements.
Evidence Collection and Documentation Best Practices
Walkthrough for systematic evidence collection during AI red team engagements, covering request/response capture, screenshot methodology, chain-of-custody documentation, reproducibility requirements, and evidence organization for reports.
Evidence Collection Methods for AI Red Teams
Comprehensive methods for collecting, preserving, and organizing red team evidence from AI system assessments, including API logs, screenshots, reproduction scripts, and chain-of-custody procedures.
Writing Executive Summaries for AI Red Team Reports
Guide to writing clear, impactful executive summaries for AI red team assessment reports that communicate risk to non-technical stakeholders and drive remediation decisions.
Finding Deduplication and Triage
Deduplicate and triage findings from automated and manual testing into actionable, prioritized vulnerability reports.
Classifying AI Vulnerability Severity
Framework for consistently classifying the severity of AI and LLM vulnerabilities, with scoring criteria, impact assessment, and examples across common finding categories.
Methodology Walkthroughs
Step-by-step walkthroughs for each phase of an AI red team engagement: kickoff, reconnaissance, attack execution, and report writing.
Mapping Findings to OWASP LLM Top 10
Walkthrough for mapping AI red team findings to the OWASP Top 10 for LLM Applications, with classification guidance, reporting templates, and remediation mapping.
Comparative Security Testing Across Multiple LLMs
Walkthrough for conducting systematic comparative security testing across multiple LLM providers and configurations, covering test standardization, parallel execution, cross-model analysis, and differential vulnerability reporting.
Multi-Model Testing Methodology
Structured methodology for testing applications that use multiple LLM models in their processing pipeline.
Post-Engagement Analysis Methodology
Conduct thorough post-engagement analysis including lessons learned, technique effectiveness, and methodology refinement.
Pre-Engagement Preparation Checklist
Complete pre-engagement preparation checklist for AI red team operations covering team readiness, infrastructure setup, legal requirements, and initial reconnaissance planning.
Purple Team AI Assessment Methodology
Conduct collaborative purple team AI assessments with real-time feedback between red and blue team operations.
AI Security Regression Testing Methodology
Design regression testing suites that verify security fixes remain effective across model updates and deployments.
Verifying That Remediations Are Effective
Walkthrough for planning and executing remediation verification testing (retesting) to confirm that AI vulnerability fixes are effective and do not introduce regressions.
Risk-Based AI Testing Approach
Apply risk-based testing approaches to focus assessment effort on the highest-impact vulnerability categories.
Risk Scoring Frameworks for AI Vulnerabilities
Walkthrough for applying risk scoring frameworks to AI and LLM vulnerabilities, covering CVSS adaptation for AI, custom AI risk scoring matrices, severity classification, business impact assessment, and integration with existing vulnerability management processes.
Rules of Engagement Template for AI Red Team Operations
Step-by-step guide to creating comprehensive rules of engagement documents for AI red team assessments, covering authorization, scope, constraints, communication, and legal protections.
How to Scope an AI Red Team Engagement
Comprehensive walkthrough for scoping AI red team engagements from initial client contact through statement of work, covering target enumeration, risk-based prioritization, resource estimation, boundary definition, and legal considerations.
AI Red Team Scoping Checklist Walkthrough
Systematic walkthrough of the pre-engagement scoping process for AI red team assessments: stakeholder identification, target enumeration, scope boundary definition, resource estimation, and rules of engagement documentation.
Stakeholder Management in AI Red Teaming
Managing stakeholder expectations and communication throughout AI red team engagements.
Stakeholder-Specific Reporting Methodology
Tailor red team reports for different stakeholders including executives, developers, security teams, and compliance officers.
AI Security Tabletop Exercises
Designing and facilitating tabletop exercises focused on AI security incident scenarios.
Creating Detailed Technical Appendices
Guide to building comprehensive technical appendices for AI red team reports, including evidence formatting, reproduction procedures, tool output presentation, and raw data organization.
Developing Comprehensive AI Security Test Plans
Step-by-step guide to developing structured test plans for AI red team engagements, covering test case design, automation strategy, coverage mapping, and execution scheduling.
Threat Modeling for LLM-Powered Applications
Step-by-step walkthrough for conducting threat modeling sessions specifically tailored to LLM-powered applications, covering data flow analysis, trust boundary identification, AI-specific threat enumeration, risk assessment, and mitigation planning.
AI Threat Modeling Workshop Walkthrough
Step-by-step guide to running an AI-focused threat modeling workshop: adapting STRIDE for AI systems, constructing attack trees for LLM applications, participant facilitation techniques, and producing actionable threat models.
Time-Boxed AI Assessment Methodology
Maximize testing coverage within strict time constraints using prioritized attack trees and parallel testing.
AI Attack Surface Enumeration Methodology
Systematic methodology for enumerating the complete attack surface of an AI-powered application.
AI Compliance Testing Methodology
Methodology for testing AI systems against regulatory compliance requirements including the EU AI Act and NIST frameworks.
Mapping Findings to MITRE ATLAS
Methodology for mapping AI red team findings to MITRE ATLAS tactics, techniques, and procedures.
AI Penetration Test Report Structure
Detailed report structure for AI penetration tests with finding templates and severity scoring.
AI Red Team Scoping Templates
Templates and procedures for scoping AI red team engagements across different application types.
AI Risk Quantification Methodology
Quantitative risk assessment methodology for AI vulnerabilities with probability and impact scoring.
AI Security Tabletop Exercise Design
Design and facilitate AI security tabletop exercises for organizational preparedness assessment.
Evidence Collection During AI Testing
Best practices for collecting, organizing, and preserving evidence during AI red team assessments.
Multi-Model Assessment Methodology
Methodology for assessing applications that use multiple AI models in pipelines or ensemble configurations.
OWASP LLM Top 10 Testing Methodology
Comprehensive testing methodology for each vulnerability in the OWASP LLM Top 10 2025.
Purple Team Operations for AI Security
Methodology for conducting purple team operations that combine red team attacks with blue team defense improvement.
AI Security Regression Testing Methodology (Methodology Walkthrough)
Methodology for continuous regression testing of AI application security after updates and model changes.
Communicating AI Risks to Stakeholders
Guide for communicating AI security risks to technical and non-technical stakeholders effectively.
Threat Intelligence for AI Systems
Methodology for gathering and applying threat intelligence specific to AI system attacks and defenses.
AI Vulnerability Prioritization Framework
Framework for prioritizing AI vulnerabilities by exploitability, impact, and remediation cost.
Writing AI Red Team Reports
Guide to writing clear, actionable AI red team assessment reports with findings and recommendations.