# documentation
38 articles tagged "documentation"
- **AI Incident Post-Mortem Template**
  Comprehensive post-mortem template for AI security incidents covering timeline, impact assessment, root cause, and remediation tracking.
- **Skill Verification: Advanced Report Writing**
  Verification of advanced red team report writing including executive summaries, technical details, and remediation.
- **Skill Verification: Report Writing**
  Timed skill verification lab: write a professional AI red team finding report from provided evidence within 30 minutes.
- **AI Red Team Report Writing**
  Writing AI red team reports: executive summaries, finding templates, AI-adapted risk ratings, remediation recommendations, and common mistakes to avoid.
- **Documentation-Based Code Injection**
  Embedding adversarial instructions in code comments, docstrings, and documentation files that influence AI code generation.
- **Security Considerations in Model Cards**
  Comprehensive guide to incorporating security assessments, red team findings, vulnerability disclosures, and threat model documentation into model cards, enabling downstream consumers to make informed security decisions.
- **AI Transparency and Documentation**
  Requirements and best practices for AI system transparency including model cards and datasheets.
- **Lab: Ethical Red Teaming**
  Practice responsible AI red teaming with proper documentation, scope management, and ethical decision-making frameworks.
- **Lab: Ethical Red Teaming (Beginner Lab)**
  Hands-on lab for practicing responsible AI red teaming with proper documentation, scope management, ethical boundaries, and disclosure procedures.
- **Lab: Red Team Report Writing Basics**
  Practice writing clear, actionable red team findings reports with evidence, risk ratings, and remediation guidance.
- **AI Penetration Testing Report Writing**
  Comprehensive guide to writing effective penetration testing reports for AI system assessments.
- **Evidence Collection & Chain of Custody**
  How to collect and preserve evidence during AI red team engagements: screenshots, API logs, reproducibility requirements, and chain-of-custody procedures.
- **Technical Findings Documentation**
  How to document AI-specific vulnerabilities: reproduction steps, severity assessment with AI-adapted frameworks, remediation recommendations, and finding templates.
- **Evidence Handling Procedures**
  Proper procedures for collecting, documenting, and preserving evidence during AI red team engagements to ensure findings are defensible.
- **Evidence Collection & Chain of Custody (Tradecraft)**
  Standards for capturing, preserving, and documenting AI red team findings: conversation logs, API traces, bypass rate measurement, and evidence packaging for reproducible reporting.
- **Evidence Collection and Documentation Best Practices**
  Walkthrough of systematic evidence collection during AI red team engagements, covering request/response capture, screenshot methodology, chain-of-custody documentation, reproducibility requirements, and evidence organization for reports.
- **Evidence Collection Methods for AI Red Teams**
  Comprehensive methods for collecting, preserving, and organizing red team evidence from AI system assessments, including API logs, screenshots, reproduction scripts, and chain-of-custody procedures.
- **Creating Detailed Technical Appendices**
  Guide to building comprehensive technical appendices for AI red team reports, including evidence formatting, reproduction procedures, tool output presentation, and raw data organization.
- **Generating Professional Reports from PyRIT Campaigns**
  Intermediate walkthrough on generating professional red team reports from PyRIT campaign data, including executive summaries, technical findings, remediation guidance, and visual dashboards.