# reporting
32 articles tagged with “reporting”
- **Regulatory Reporting for AI Incidents**: Requirements and procedures for regulatory reporting of AI security incidents across jurisdictions.
- **Professional Practice Exam**: 25-question practice exam on professional AI red teaming: engagement methodology, scoping, report writing, governance frameworks, client communication, and ethical considerations.
- **Red Team Methodology Assessment**: Test your understanding of AI red team engagement methodology, from scoping through reporting, including structured approaches, attack planning, and finding documentation, with 9 intermediate-level questions.
- **Professional Skills Assessment**: Test your knowledge of AI red teaming methodology, report writing, client engagement, and professional practice with 15 intermediate-level questions.
- **Skill Verification: Red Team Reporting**: Practical assessment of red team report writing and finding communication skills.
- **Skill Verification: Advanced Report Writing**: Verification of advanced red team report writing including executive summaries, technical details, and remediation.
- **Skill Verification: Report Writing**: Timed skill verification lab: write a professional AI red team finding report from provided evidence within 30 minutes.
- **Professional Practice Study Guide**: Study guide covering AI red teaming methodology, engagement management, report writing, governance frameworks, and professional ethics.
- **Execution and Reporting**: How to execute an AI red teaming engagement and deliver professional findings, including evidence collection, statistical reporting, and remediation guidance.
- **Red Team Findings → Remediation**: How to map offensive findings to defensive recommendations, severity scoring for AI vulnerabilities, actionable remediation guidance, and the report-to-fix pipeline.
- **Red Team Reporting Automation**: Automating report generation from red team testing data and findings.
- **Reporting Tool Development**: Building automated reporting tools that transform raw test results into professional assessment reports with reproducible findings.
- **Red Team Metrics Beyond ASR**: Comprehensive metrics methodology for AI red teaming beyond Attack Success Rate: severity-weighted scoring, defense depth metrics, coverage analysis, and stakeholder-appropriate reporting frameworks.
- **Security Finding Documentation Exercise**: Practice documenting security findings in a professional format with reproducible steps and impact assessment.
- **Simulation: AI Bug Bounty**: Find and report vulnerabilities in a simulated AI bug bounty program, practicing professional vulnerability disclosure and bounty-eligible reporting.
- **AI Penetration Testing Report Writing**: Comprehensive guide to writing effective penetration testing reports for AI system assessments.
- **Finding Severity Classification**: Standardized framework for classifying AI security findings by severity, including risk scoring methodology and business impact assessment.
- **Professional Practice**: Professional skills for AI red team practitioners, covering red team operations, report writing and communication, career development, and building organizational AI red team programs.
- **Red Team Metrics Dashboard**: What to measure in AI red team programs: key performance indicators, risk metrics, dashboard design, stakeholder reporting, and using data to demonstrate program value.
- **Red Team Reporting Masterclass**: Comprehensive guide to AI red team reporting: executive summaries, technical findings, visualizations, client communication, and professional report templates.
- **Evidence Collection & Chain of Custody (Tradecraft)**: Standards for capturing, preserving, and documenting AI red team findings: conversation logs, API traces, bypass rate measurement, and evidence packaging for reproducible reporting.
- **Engagement Walkthroughs Overview**: Step-by-step walkthroughs for complete AI red team engagements: from scoping and reconnaissance through attack execution to reporting, organized by target system type.
- **Measuring and Reporting AI Red Team Effectiveness**: Walkthrough for defining, collecting, and reporting metrics that measure the effectiveness of AI red teaming programs, covering coverage metrics, detection rates, time-to-find analysis, remediation tracking, and ROI calculation.
- **Communicating AI Red Team Findings to Stakeholders**: Walkthrough for effectively communicating AI red team findings to diverse stakeholders, covering executive summaries, technical deep dives, live demonstrations, risk narratives, and remediation roadmaps tailored to audience expertise levels.
- **Evidence Collection and Documentation Best Practices**: Walkthrough for systematic evidence collection during AI red team engagements, covering request/response capture, screenshot methodology, chain-of-custody documentation, reproducibility requirements, and evidence organization for reports.
- **Evidence Collection Methods for AI Red Teams**: Comprehensive methods for collecting, preserving, and organizing red team evidence from AI system assessments, including API logs, screenshots, reproduction scripts, and chain-of-custody procedures.
- **Writing Executive Summaries for AI Red Team Reports**: Guide to writing clear, impactful executive summaries for AI red team assessment reports that communicate risk to non-technical stakeholders and drive remediation decisions.
- **Creating Detailed Technical Appendices**: Guide to building comprehensive technical appendices for AI red team reports, including evidence formatting, reproduction procedures, tool output presentation, and raw data organization.
- **Writing AI Red Team Reports**: Guide to writing clear, actionable AI red team assessment reports with findings and recommendations.
- **Deep Dive into Garak Scan Report Analysis**: Intermediate walkthrough on analyzing garak scan reports, including JSONL parsing, false positive identification, vulnerability categorization, executive summary generation, and trend tracking.
- **Generating Professional Reports from PyRIT Campaigns**: Intermediate walkthrough on generating professional red team reports from PyRIT campaign data, including executive summaries, technical findings, remediation guidance, and visual dashboards.
- **Python Red Team Automation**: Building custom AI red team automation with Python: test harnesses with httpx and aiohttp, result collection and analysis, automated reporting, and integration with existing tools like promptfoo and garak.