Writing Executive Summaries
Executive summary structure for AI red team reports: risk communication for non-technical stakeholders, templates, examples, and common mistakes.
The executive summary is the most-read and highest-impact section of any red team report. Many stakeholders -- including the people who control security budgets -- read only this section. A strong executive summary drives action; a weak one gets the report shelved.
Audience Analysis
The executive summary is written for people who:
| Characteristic | Implication for Writing |
|---|---|
| Have limited technical background | No jargon without plain-language explanation |
| Make budget and priority decisions | Frame everything in terms of business risk and investment |
| Are time-constrained | Keep it to 1-2 pages maximum |
| Are risk-aware but not threat-aware | Explain the "so what" for each finding category |
| Need to brief their own leadership | Make key points quotable and self-contained |
Structure
A strong executive summary follows this structure:
Engagement Context (2-3 sentences)
Who hired you, what was tested, when, and under what constraints. Establish credibility by showing you understand the business context.
Overall Risk Assessment (1 paragraph)
A clear, direct statement of the system's security posture. Use a rating (Critical / High / Medium / Low) with a plain-language explanation.
Key Findings (3-5 bullet points)
The most important findings, each stated as a business risk. Lead with impact, not technique.
What Worked Well (2-3 bullet points)
Controls that were effective. This builds credibility and avoids the "everything is broken" tone that causes stakeholders to disengage.
Priority Recommendations (3-5 items)
The most important actions, prioritized by risk reduction. Include rough effort estimates.
Investment Ask (if applicable)
What resources are needed to address the findings. Tie to risk reduction.
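The six-part structure above can be sketched as a small data model that renders a summary skeleton. This is an illustrative sketch only; the class and field names are assumptions, not part of any reporting standard.

```python
from dataclasses import dataclass

@dataclass
class ExecutiveSummary:
    """Illustrative model of the six-part structure described above."""
    context: str                   # Engagement Context (2-3 sentences)
    overall_rating: str            # Critical / High / Medium / Low
    risk_statement: str            # plain-language posture statement
    key_findings: list[str]        # 3-5 business-risk bullets
    effective_controls: list[str]  # 2-3 "what worked well" bullets
    recommendations: list[str]     # 3-5 prioritized actions
    investment_ask: str = ""       # optional resource request

    def render(self) -> str:
        # Assemble the sections in the order the structure prescribes.
        parts = [
            "## Executive Summary",
            f"### Engagement Overview\n{self.context}",
            ("### Overall Risk Assessment\n"
             f"**Overall Rating: {self.overall_rating.upper()}**\n"
             f"{self.risk_statement}"),
            "### Key Findings\n" + "\n".join(
                f"{i}. {f}" for i, f in enumerate(self.key_findings, 1)),
            "### Effective Controls\n" + "\n".join(
                f"- {c}" for c in self.effective_controls),
            "### Priority Recommendations\n" + "\n".join(
                f"{i}. {r}" for i, r in enumerate(self.recommendations, 1)),
        ]
        if self.investment_ask:  # Investment Ask is included only if applicable
            parts.append(f"### Recommended Investment\n{self.investment_ask}")
        return "\n\n".join(parts)
```

Holding the summary as structured data like this makes it easy to enforce the bullet-count limits and section order before the prose is polished.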
Writing Principles
Lead with Impact, Not Technique
Weak (technique-first): "Finding F001: We performed a multi-turn prompt injection attack using a role-play DAN variant that bypasses the RLHF-trained refusal behavior of the model. The attack exploits the model's instruction hierarchy by establishing a fictional context in which safety guidelines do not apply."
Strong (impact-first): "Finding F001: An attacker can bypass the chatbot's safety controls through a short conversation, causing it to reveal confidential customer data, generate harmful content, or ignore business rules. This attack requires no technical skill and takes approximately 2 minutes."
Quantify Everything
| Instead of... | Write... |
|---|---|
| "User data could be exposed" | "Records of approximately 50,000 customers could be accessed" |
| "The attack is easy to perform" | "The attack takes 2 minutes and requires no technical skill" |
| "There could be regulatory implications" | "This exposure triggers GDPR Article 33 breach notification requirements" |
| "Significant business impact" | "Estimated incident cost: $2-5M including response, remediation, and reputational damage" |
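Quantified impact statements like those in the right column are just simple arithmetic over defensible inputs. The sketch below derives a cost range from a record count and per-record costs; the $60-$140 per-record figures are placeholder assumptions for illustration, not industry benchmarks.

```python
# Rough incident-cost range: records exposed x assumed per-record cost.
# The per-record defaults are placeholders; substitute figures you can
# defend (e.g., from your client's industry or prior incidents).
def incident_cost_range(records: int,
                        low_per_record: float = 60.0,
                        high_per_record: float = 140.0) -> tuple[float, float]:
    return records * low_per_record, records * high_per_record

low, high = incident_cost_range(50_000)
print(f"Estimated incident cost: ${low/1e6:.1f}M-${high/1e6:.1f}M")
# -> Estimated incident cost: $3.0M-$7.0M
```

Showing the inputs alongside the range also lets stakeholders challenge the assumptions rather than the conclusion, which tends to make the number more credible.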
Use Concrete Severity Language
| Severity | Executive Language | Implies |
|---|---|---|
| Critical | "Immediate action required. Active exploitation likely." | Stop-ship, executive escalation |
| High | "Address within 30 days. Exploitation is straightforward." | Priority engineering work |
| Medium | "Address within 90 days. Exploitation requires moderate effort." | Planned remediation |
| Low | "Address in next development cycle. Limited real-world impact." | Backlog item |
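The severity table can double as a lookup for rendering Key Findings bullets in a consistent voice, most severe first. A minimal sketch, assuming findings arrive as (title, severity) pairs; the function and variable names are illustrative.

```python
# Map each severity to a sort rank and the executive language from the
# table above, then render Key Findings bullets most-severe first.
SEVERITY = {
    "Critical": (0, "Immediate action required. Active exploitation likely."),
    "High":     (1, "Address within 30 days. Exploitation is straightforward."),
    "Medium":   (2, "Address within 90 days. Exploitation requires moderate effort."),
    "Low":      (3, "Address in next development cycle. Limited real-world impact."),
}

def key_findings(findings: list[tuple[str, str]]) -> list[str]:
    """Render (title, severity) pairs as bullets, highest severity first."""
    ordered = sorted(findings, key=lambda f: SEVERITY[f[1]][0])
    return [f"**{title}** -- {SEVERITY[sev][1]} Severity: {sev}."
            for title, sev in ordered]

for line in key_findings([
    ("Internal process disclosure", "Medium"),
    ("Customer data exposure", "Critical"),
    ("Safety control bypass", "High"),
]):
    print(line)
```

Keeping the language in one table means every finding in the summary uses the same severity vocabulary, which is what makes the ratings comparable at a glance.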
Template
## Executive Summary
### Engagement Overview
[Company] engaged [Red Team] to assess the security of [target system]
from [start date] to [end date]. The assessment focused on [scope: e.g.,
"the customer-facing AI chatbot deployed at chat.example.com"]. Testing
was conducted under [constraints: e.g., "black-box conditions simulating
an external attacker with no privileged access"].
### Overall Risk Assessment
**Overall Rating: [CRITICAL / HIGH / MEDIUM / LOW]**
[1-2 sentences summarizing the overall security posture in business terms.
E.g., "The AI chatbot contains critical vulnerabilities that allow an
unskilled attacker to bypass safety controls, access confidential data,
and manipulate the system into generating harmful content. Immediate
remediation is required before these vulnerabilities are discovered
and exploited."]
### Key Findings
1. **[Business impact statement]** -- [Brief explanation]. Severity: [Rating].
2. **[Business impact statement]** -- [Brief explanation]. Severity: [Rating].
3. **[Business impact statement]** -- [Brief explanation]. Severity: [Rating].
### Effective Controls
- [Control that worked well and should be maintained]
- [Control that worked well and should be maintained]
### Priority Recommendations
1. **[Action]** -- [Expected risk reduction]. Estimated effort: [time/cost].
2. **[Action]** -- [Expected risk reduction]. Estimated effort: [time/cost].
3. **[Action]** -- [Expected risk reduction]. Estimated effort: [time/cost].
### Recommended Investment
To address the findings in this report, we recommend an investment of
approximately [amount] over [timeframe], which is expected to reduce
AI-related security risk by [percentage/metric].
Worked Example
## Executive Summary
### Engagement Overview
Acme Corp engaged RedTeam Security to assess the security of the Acme AI
Customer Service Assistant from February 24 to March 7, 2026. The
assessment focused on the production chatbot deployed at
support.acme.com/chat, which handles approximately 15,000 customer
interactions daily. Testing was conducted under black-box conditions
simulating an external attacker with a standard customer account.
### Overall Risk Assessment
**Overall Rating: HIGH**
The AI assistant contains vulnerabilities that allow an unskilled attacker
to bypass safety controls and extract confidential information within
minutes. While the system effectively blocks direct harmful content
requests, multiple bypass techniques exist that require no technical
knowledge to exploit.
### Key Findings
1. **Customer data exposure** -- An attacker can trick the assistant into
revealing other customers' order histories and contact information.
This affects approximately 50,000 records. Severity: Critical.
2. **Safety control bypass** -- The assistant's content restrictions can
be bypassed through role-play scenarios, allowing generation of content
that violates Acme's acceptable use policy. Severity: High.
3. **Internal process disclosure** -- The assistant can be manipulated into
revealing internal refund policies, escalation procedures, and employee
names. Severity: Medium.
### Effective Controls
- The assistant consistently refused requests for financial information
(credit card numbers, bank details) across all tested bypass techniques
- Rate limiting effectively prevented automated large-scale attacks
### Priority Recommendations
1. **Implement input/output filtering for PII** -- Prevents customer data
from appearing in responses. Expected to eliminate Finding 1.
Estimated effort: 2-3 engineering weeks.
2. **Add a safety classifier layer** -- Deploy a secondary model to screen
outputs before delivery. Expected to reduce Finding 2 success rate
from 60% to under 5%. Estimated effort: 3-4 engineering weeks.
3. **Restrict system prompt scope** -- Remove internal process details
from the system prompt. Estimated effort: 1 week.
### Recommended Investment
Total remediation investment: approximately $150,000-200,000 over 8 weeks,
which is expected to reduce AI-related security risk by 80%. For context,
a single data exposure incident involving 50,000 customer records carries
an estimated cost of $3-7M in regulatory fines, legal fees, and
reputational damage.
Related Topics
- Red Team Reporting Masterclass -- the full reporting framework
- Technical Findings Documentation -- the detailed findings the summary distills
- Client Communication & Difficult Conversations -- presenting the summary to stakeholders