# documentation
19 articles tagged with “documentation”
AI Incident Post-Mortem Template
Comprehensive post-mortem template for AI security incidents covering timeline, impact assessment, root cause, and remediation tracking.
Skill Verification: Advanced Report Writing
Verification of advanced red team report writing skills, including executive summaries, technical details, and remediation guidance.
Skill Verification: Report Writing
Timed skill verification lab: write a professional AI red team finding report from provided evidence within 30 minutes.
AI Red Team Report Writing
Writing AI red team reports: executive summaries, finding templates, AI-adapted risk ratings, remediation recommendations, and common mistakes to avoid.
Documentation-Based Code Injection
Embedding adversarial instructions in code comments, docstrings, and documentation files to influence AI code generation.
Security Considerations in Model Cards
Comprehensive guide to incorporating security assessments, red team findings, vulnerability disclosures, and threat model documentation into model cards, enabling downstream consumers to make informed security decisions.
AI Transparency and Documentation
Requirements and best practices for AI system transparency including model cards and datasheets.
Lab: Ethical Red Teaming
Practice responsible AI red teaming with proper documentation, scope management, and ethical decision-making frameworks.
Lab: Ethical Red Teaming (Beginner Lab)
Hands-on lab for practicing responsible AI red teaming with proper documentation, scope management, ethical boundaries, and disclosure procedures.
Lab: Red Team Report Writing Basics
Practice writing clear, actionable red team findings reports with evidence, risk ratings, and remediation guidance.
AI Penetration Testing Report Writing
Comprehensive guide to writing effective penetration testing reports for AI system assessments.
Evidence Collection & Chain of Custody
How to collect and preserve evidence during AI red team engagements: screenshots, API logs, reproducibility requirements, and chain-of-custody procedures.
Technical Findings Documentation
How to document AI-specific vulnerabilities: reproduction steps, severity assessment with AI-adapted frameworks, remediation recommendations, and finding templates.
Evidence Handling Procedures
Proper procedures for collecting, documenting, and preserving evidence during AI red team engagements to ensure findings are defensible.
Evidence Collection & Chain of Custody (Tradecraft)
Standards for capturing, preserving, and documenting AI red team findings: conversation logs, API traces, bypass rate measurement, and evidence packaging for reproducible reporting.
Evidence Collection and Documentation Best Practices
Walkthrough for systematic evidence collection during AI red team engagements, covering request/response capture, screenshot methodology, chain-of-custody documentation, reproducibility requirements, and evidence organization for reports.
Evidence Collection Methods for AI Red Teams
Comprehensive methods for collecting, preserving, and organizing red team evidence from AI system assessments, including API logs, screenshots, reproduction scripts, and chain-of-custody procedures.
Creating Detailed Technical Appendices
Guide to building comprehensive technical appendices for AI red team reports, including evidence formatting, reproduction procedures, tool output presentation, and raw data organization.
Generating Professional Reports from PyRIT Campaigns
Intermediate walkthrough on generating professional red team reports from PyRIT campaign data, including executive summaries, technical findings, remediation guidance, and visual dashboards.