# reporting
標記為「reporting」的 62 篇文章
Regulatory Reporting for AI Incidents
Requirements and procedures for regulatory reporting of AI security incidents across jurisdictions.
Professional Practice Exam
25-question practice exam on professional AI red teaming: engagement methodology, scoping, report writing, governance frameworks, client communication, and ethical considerations.
Red Team Methodology Assessment
Test your understanding of AI red team engagement methodology, from scoping through reporting, including structured approaches, attack planning, and finding documentation with 9 intermediate-level questions.
Professional Skills Assessment
Test your knowledge of AI red teaming methodology, report writing, client engagement, and professional practice with 15 intermediate-level questions.
Skill Verification: Red Team Reporting
Practical assessment of red team report writing and finding communication skills.
Skill Verification: Advanced Report Writing
Verification of advanced red team report writing including executive summaries, technical details, and remediation.
Skill Verification: Report Writing
Timed skill verification lab: write a professional AI red team finding report from provided evidence within 30 minutes.
Professional Practice Study Guide
Study guide covering AI red teaming methodology, engagement management, report writing, governance frameworks, and professional ethics.
Execution and Reporting
How to execute an AI red teaming engagement and deliver professional findings, including evidence collection, statistical reporting, and remediation guidance.
Red Team Findings → Remediation
How to map offensive findings to defensive recommendations, severity scoring for AI vulnerabilities, actionable remediation guidance, and the report-to-fix pipeline.
Red Team Reporting Automation
Automating report generation from red team testing data and findings.
Reporting Tool Development
Building automated reporting tools that transform raw test results into professional assessment reports with reproducible findings.
Red Team Metrics Beyond ASR
Comprehensive metrics methodology for AI red teaming beyond Attack Success Rate: severity-weighted scoring, defense depth metrics, coverage analysis, and stakeholder-appropriate reporting frameworks.
Security Finding Documentation Exercise
Practice documenting security findings in a professional format with reproducible steps and impact assessment.
Simulation: AI Bug Bounty
Find and report vulnerabilities in a simulated AI bug bounty program, practicing professional vulnerability disclosure and bounty-eligible reporting.
AI Penetration Testing Report Writing
Comprehensive guide to writing effective penetration testing reports for AI system assessments.
Finding Severity Classification
Standardized framework for classifying AI security findings by severity, including risk scoring methodology and business impact assessment.
Professional Practice
Professional skills for AI red team practitioners, covering red team operations, report writing and communication, career development, and building organizational AI red team programs.
Red Team Metrics Dashboard
What to measure in AI red team programs: key performance indicators, risk metrics, dashboard design, stakeholder reporting, and using data to demonstrate program value.
Red Team Reporting Masterclass
Comprehensive guide to AI red team reporting: executive summaries, technical findings, visualizations, client communication, and professional report templates.
Evidence Collection & Chain of Custody (Tradecraft)
Standards for capturing, preserving, and documenting AI red team findings: conversation logs, API traces, bypass rate measurement, and evidence packaging for reproducible reporting.
Engagement Walkthroughs Overview
Step-by-step walkthroughs for complete AI red team engagements: from scoping and reconnaissance through attack execution to reporting, organized by target system type.
Measuring and Reporting AI Red Team Effectiveness
Walkthrough for defining, collecting, and reporting metrics that measure the effectiveness of AI red teaming programs, covering coverage metrics, detection rates, time-to-find analysis, remediation tracking, and ROI calculation.
Communicating AI Red Team Findings to Stakeholders
Walkthrough for effectively communicating AI red team findings to diverse stakeholders, covering executive summaries, technical deep dives, live demonstrations, risk narratives, and remediation roadmaps tailored to audience expertise levels.
Evidence Collection and Documentation Best Practices
Walkthrough for systematic evidence collection during AI red team engagements, covering request/response capture, screenshot methodology, chain-of-custody documentation, reproducibility requirements, and evidence organization for reports.
Evidence Collection Methods for AI Red Teams
Comprehensive methods for collecting, preserving, and organizing red team evidence from AI system assessments, including API logs, screenshots, reproduction scripts, and chain-of-custody procedures.
Writing Executive Summaries for AI Red Team Reports
Guide to writing clear, impactful executive summaries for AI red team assessment reports that communicate risk to non-technical stakeholders and drive remediation decisions.
Creating Detailed Technical Appendices
Guide to building comprehensive technical appendices for AI red team reports, including evidence formatting, reproduction procedures, tool output presentation, and raw data organization.
Writing AI Red Team Reports
Guide to writing clear, actionable AI red team assessment reports with findings and recommendations.
Deep Dive into Garak Scan Report Analysis
Intermediate walkthrough on analyzing garak scan reports, including JSONL parsing, false positive identification, vulnerability categorization, executive summary generation, and trend tracking.
Generating Professional Reports from PyRIT Campaigns
Intermediate walkthrough on generating professional red team reports from PyRIT campaign data, including executive summaries, technical findings, remediation guidance, and visual dashboards.
Python Red Team Automation
Building custom AI red team automation with Python: test harnesses with httpx and aiohttp, result collection and analysis, automated reporting, and integration with existing tools like promptfoo and garak.
Regulatory Reporting for AI Incidents
Requirements and procedures for regulatory reporting of AI security incidents across jurisdictions.
Professional Practice Exam
25-question practice exam on professional AI red teaming: engagement methodology, scoping, report writing, governance frameworks, client communication, and ethical considerations.
Skill Verification: 紅隊 Reporting
Practical assessment of red team report writing and finding communication skills.
Skill Verification: 進階 Report Writing
Verification of advanced red team report writing including executive summaries, technical details, and remediation.
技能驗證:報告撰寫
限時技能驗證實驗室:在 30 分鐘內從提供的證據撰寫專業 AI 紅隊發現報告。
Professional Practice Study 指南
Study guide covering AI red teaming methodology, engagement management, report writing, governance frameworks, and professional ethics.
執行與報告
如何執行 AI 紅隊委任並交付專業發現,包括證據蒐集、統計回報,以及修復指引。
紅隊發現 → 修復建議
如何將攻擊端發現對應到防禦建議、AI 漏洞的嚴重性評分、可執行的修復指引,以及「報告到修復」的完整流程。
紅隊 Reporting Automation
Automating report generation from red team testing data and findings.
Reporting 工具 Development
Building automated reporting tools that transform raw test results into professional assessment reports with reproducible findings.
超越 ASR 之紅隊指標
AI 紅隊之完整指標方法論,超越攻擊成功率:嚴重性加權評分、防禦深度指標、覆蓋分析,與適合利害關係人之報告框架。
安全 Finding Documentation 練習
Practice documenting security findings in a professional format with reproducible steps and impact assessment.
Simulation: AI Bug Bounty
Find and report vulnerabilities in a simulated AI bug bounty program, practicing professional vulnerability disclosure and bounty-eligible reporting.
AI Penetration Testing Report Writing
Comprehensive guide to writing effective penetration testing reports for AI system assessments.
Finding Severity Classification
Standardized framework for classifying AI security findings by severity, including risk scoring methodology and business impact assessment.
專業實務
AI 紅隊實務人員的專業技能,涵蓋紅隊營運、報告撰寫與溝通、職涯發展,以及建構組織 AI 紅隊計畫。
紅隊 Metrics Dashboard
What to measure in AI red team programs: key performance indicators, risk metrics, dashboard design, stakeholder reporting, and using data to demonstrate program value.
紅隊報告大師班
AI 紅隊報告的完整指南:執行摘要、技術發現、視覺化、客戶溝通與專業報告範本。
Evidence Collection & Chain of Custody (Tradecraft)
Standards for capturing, preserving, and documenting AI red team findings: conversation logs, API traces, bypass rate measurement, and evidence packaging for reproducible reporting.
案件導覽概覽
完整 AI 紅隊案件的逐步導覽:從範圍界定與偵察、透過攻擊執行到報告,依目標系統類型組織。
Measuring and Reporting AI 紅隊 Effectiveness
導覽 for defining, collecting, and reporting metrics that measure the effectiveness of AI red teaming programs, covering coverage metrics, detection rates, time-to-find analysis, remediation tracking, and ROI calculation.
Communicating AI 紅隊 Findings to Stakeholders
導覽 for effectively communicating AI red team findings to diverse stakeholders, covering executive summaries, technical deep dives, live demonstrations, risk narratives, and remediation roadmaps tailored to audience expertise levels.
Evidence Collection and Documentation Best Practices
導覽 for systematic evidence collection during AI red team engagements, covering request/response capture, screenshot methodology, chain-of-custody documentation, reproducibility requirements, and evidence organization for reports.
Evidence Collection Methods for AI 紅隊s
Comprehensive methods for collecting, preserving, and organizing red team evidence from AI system assessments, including API logs, screenshots, reproduction scripts, and chain-of-custody procedures.
Writing Executive Summaries for AI 紅隊 Reports
指南 to writing clear, impactful executive summaries for AI red team assessment reports that communicate risk to non-technical stakeholders and drive remediation decisions.
Creating Detailed Technical Appendices
指南 to building comprehensive technical appendices for AI red team reports, including evidence formatting, reproduction procedures, tool output presentation, and raw data organization.
Writing AI 紅隊 Reports
指南 to writing clear, actionable AI red team assessment reports with findings and recommendations.
Deep Dive into Garak Scan Report Analysis
中階 walkthrough on analyzing garak scan reports, including JSONL parsing, false positive identification, vulnerability categorization, executive summary generation, and trend tracking.
Generating Professional Reports from PyRIT Campaigns
中階 walkthrough on generating professional red team reports from PyRIT campaign data, including executive summaries, technical findings, remediation guidance, and visual dashboards.
Python 紅隊 Automation
Building custom AI red team automation with Python: test harnesses with httpx and aiohttp, result collection and analysis, automated reporting, and integration with existing tools like promptfoo and garak.