Pre-Engagement Preparation Checklist
Complete pre-engagement preparation checklist for AI red team operations covering team readiness, infrastructure setup, legal requirements, and initial reconnaissance planning.
The gap between signing a Rules of Engagement and actually starting an engagement is where many red teams stumble. Missing credentials, unconfigured tools, unclear escalation paths, or incomplete legal paperwork can delay an engagement by days or create confusion during active testing. This walkthrough provides a comprehensive pre-engagement checklist that ensures your team is fully prepared before the first test request is sent.
The checklist is organized into seven sections that can be worked in parallel by different team members. Assign an owner to each section and set a deadline for completion before the engagement start date.
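Tracking section owners and deadlines does not require tooling, but even a throwaway script can surface gaps; this is a hypothetical sketch (section names mirror this walkthrough, owners and dates are invented):

```python
from datetime import date

# Hypothetical owner/deadline tracker for the checklist sections.
# Owners and deadlines below are illustrative examples only.
sections = {
    "Legal and administrative": {"owner": "A. Lee", "deadline": date(2026, 3, 10), "done": False},
    "Team assembly": {"owner": "B. Chen", "deadline": date(2026, 3, 10), "done": True},
    "Infrastructure setup": {"owner": "C. Diaz", "deadline": date(2026, 3, 12), "done": False},
}

def open_items(sections: dict) -> list[str]:
    """Return the names of incomplete sections, earliest deadline first."""
    pending = [(v["deadline"], name) for name, v in sections.items() if not v["done"]]
    return [name for _, name in sorted(pending)]

print(open_items(sections))
```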
Step 1: Legal and Administrative Preparation
Every engagement must have its legal foundation in place before any technical work begins. This is not optional and is not something that can be "caught up on" later.
Contract and Authorization Verification
# Legal Readiness Checklist
## Contracts
- [ ] Master services agreement (MSA) executed
- [ ] Statement of work (SOW) signed with specific AI testing scope
- [ ] Non-disclosure agreement (NDA) covering findings and client data
- [ ] Data processing agreement (DPA) if handling personal data
- [ ] Cyber insurance coverage verified for AI testing activities
## Authorization
- [ ] Written authorization letter received from engagement sponsor
- [ ] Authorization explicitly covers AI model interaction testing
- [ ] Authorization covers all environments listed in scope
- [ ] Third-party AI provider testing policies reviewed and documented
- [ ] Provider notification submitted (if required by provider policy)
- [ ] Get-out-of-jail letter prepared (authorization-to-test document)
## Compliance
- [ ] Testing activities reviewed against relevant regulations
- [ ] Data residency requirements documented
- [ ] PII handling procedures defined for evidence collection
- [ ] Retention and destruction schedule agreed upon
Insurance Considerations
AI red teaming introduces risks that traditional penetration testing insurance may not cover. Verify your policy addresses these scenarios:
| Scenario | Traditional Pentest Coverage | AI-Specific Consideration |
|---|---|---|
| Accidental data exposure | Usually covered | May not cover AI-generated PII in outputs |
| Service disruption | Usually covered | Token exhaustion or model degradation may not be covered |
| Third-party provider impact | Varies | Testing via OpenAI/Anthropic APIs may require separate coverage |
| Content generation liability | Not applicable | Generating harmful content during testing needs coverage |
| Intellectual property exposure | Usually covered | Model extraction or training data exposure may be excluded |
Step 2: Team Assembly and Role Assignment
A well-organized team structure prevents duplication of effort and ensures coverage across all attack categories.
Team Roster Template
# Engagement Team Roster
| Role | Name | Specialty | Availability | Clearance |
|------|------|-----------|-------------|-----------|
| Engagement lead | | Overall coordination, client comms | | |
| Prompt specialist | | Injection, jailbreaks, evasion | | |
| Application tester | | API security, auth, data flow | | |
| Automation engineer | | Custom tooling, scan orchestration | | |
| Report writer | | Finding documentation, executive comms | | |
Role Responsibilities
Engagement Lead responsibilities include maintaining the engagement timeline, serving as the single point of contact with the client, making scope decisions when ambiguity arises, reviewing all findings before documentation, and managing escalation of critical findings.
Prompt Specialist responsibilities include designing and executing prompt-level attacks, testing guardrail effectiveness across attack categories, documenting successful and unsuccessful bypass techniques, and maintaining the prompt library for the engagement.
Application Tester responsibilities include testing API authentication and authorization, evaluating data flow security through the AI pipeline, testing RAG implementations for data leakage, and assessing function-calling and tool-use security.
Automation Engineer responsibilities include configuring and running automated scanning tools, building custom scripts for engagement-specific test cases, managing test data and results collection, and monitoring API usage and costs during automated testing.
For smaller engagements, one or two people may fill multiple roles. The key is that every responsibility has a named owner.
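One concrete deliverable of the automation engineer role is tracking API usage and estimated spend during automated testing. A minimal sketch of such a tracker (the per-million-token prices here are placeholder assumptions, not real provider pricing):

```python
class CostTracker:
    """Accumulate token usage and estimated spend across automated runs.

    Prices are illustrative placeholders -- substitute the provider's
    current per-million-token rates for the engagement.
    """

    def __init__(self, input_price_per_mtok: float, output_price_per_mtok: float):
        self.input_price = input_price_per_mtok
        self.output_price = output_price_per_mtok
        self.input_tokens = 0
        self.output_tokens = 0

    def record(self, input_tokens: int, output_tokens: int) -> None:
        """Log one request's token counts."""
        self.input_tokens += input_tokens
        self.output_tokens += output_tokens

    @property
    def estimated_cost(self) -> float:
        """Estimated spend in dollars for all recorded usage."""
        return (self.input_tokens * self.input_price
                + self.output_tokens * self.output_price) / 1_000_000

tracker = CostTracker(input_price_per_mtok=3.0, output_price_per_mtok=15.0)
tracker.record(input_tokens=120_000, output_tokens=40_000)
print(f"Estimated spend: ${tracker.estimated_cost:.2f}")
```

Feeding this from each automated run gives the daily status update a concrete cost figure instead of a guess.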
Step 3: Infrastructure Setup
Your testing infrastructure must be configured, tested, and secured before the engagement begins. Scrambling to install tools during active testing wastes billable time and introduces errors.
Testing Environment Configuration
#!/bin/bash
# pre-engagement-setup.sh
# Run this script to verify your testing environment is ready
echo "=== AI Red Team Pre-Engagement Environment Check ==="
# Python environment
echo "[*] Checking Python version..."
python3 --version || echo "FAIL: Python 3 not found"
# Required packages
echo "[*] Checking required Python packages..."
pip3 list 2>/dev/null | grep -i "openai\|anthropic\|requests\|httpx" || echo "WARN: Missing AI SDK packages"
# Testing tools
echo "[*] Checking testing tools..."
command -v garak >/dev/null 2>&1 && echo "OK: garak found" || echo "WARN: garak not installed"
command -v promptfoo >/dev/null 2>&1 && echo "OK: promptfoo found" || echo "WARN: promptfoo not installed"
command -v burpsuite >/dev/null 2>&1 && echo "OK: Burp Suite found" || echo "INFO: Burp Suite not in PATH"
# Network connectivity
echo "[*] Checking API connectivity..."
curl -s -o /dev/null -w "%{http_code}" https://api.openai.com/v1/models && echo " OK: OpenAI API reachable" || echo " FAIL: OpenAI API unreachable"
# Evidence directory structure
echo "[*] Creating engagement directory structure..."
ENGAGEMENT_DIR="./engagement_$(date +%Y%m%d)"
mkdir -p "$ENGAGEMENT_DIR"/{evidence,logs,scripts,reports,artifacts}
echo "OK: Created $ENGAGEMENT_DIR"
echo "=== Environment check complete ==="
Tool Configuration Checklist
# Tool Setup Checklist
## HTTP Proxy
- [ ] Burp Suite or mitmproxy installed and licensed
- [ ] SSL certificates configured for HTTPS interception
- [ ] Scope rules configured to only intercept target traffic
- [ ] Logging configured to capture all AI API interactions
## AI Testing Frameworks
- [ ] Garak installed with latest probes and detectors
- [ ] Promptfoo configured with target provider
- [ ] PyRIT installed (if multi-turn testing required)
- [ ] Custom prompt libraries loaded and organized
## API Clients
- [ ] OpenAI SDK configured with engagement API key
- [ ] Anthropic SDK configured (if testing Claude-based systems)
- [ ] Custom API client scripts tested against target endpoints
- [ ] Rate limiting configured in client to respect constraints
## Evidence Collection
- [ ] Screen recording software configured
- [ ] API request/response logging enabled
- [ ] Timestamp synchronization verified across tools
- [ ] Evidence naming convention documented and shared with team
Step 4: Communication and Escalation Setup
Clear communication channels prevent misunderstandings during testing and ensure critical findings reach the right people quickly.
Communication Plan
# Communication Plan
## Channels
| Purpose | Channel | Participants |
|---------|---------|-------------|
| Daily status updates | [Slack/Teams/Email] | Red team + client POC |
| Technical questions | [Slack/Teams] | Red team + client tech POC |
| Critical finding escalation | [Phone + Email] | Engagement lead + client escalation contact |
| Internal team coordination | [Internal Slack/Signal] | Red team only |
| Evidence sharing | [Secure file share] | Red team + approved recipients |
## Status Update Schedule
- Daily: Brief summary of testing activity and preliminary findings
- Weekly: Detailed progress report against the test plan
- Ad hoc: Critical finding notification (within agreed SLA)
## Escalation Matrix
| Severity | Response Time | Notification Method | Recipient |
|----------|--------------|---------------------|-----------|
| Critical | 2 hours | Phone + email | Engagement sponsor + security lead |
| High | 24 hours | Email | Security lead + tech POC |
| Medium | End of week | Weekly report | Tech POC |
| Low | Final report | Report only | All stakeholders |
Secure Evidence Handling
# Evidence Handling Procedures
## Classification
- All engagement evidence is classified as CONFIDENTIAL
- Evidence containing PII requires additional handling per DPA
## Storage
- Evidence stored in encrypted storage (AES-256 minimum)
- No evidence on personal devices or unencrypted media
- Cloud storage only in approved regions per data residency requirements
## Naming Convention
- Format: [DATE]_[TARGET]_[ATTACK-TYPE]_[SEQUENCE]
- Example: 20260315_chatbot-api_prompt-injection_001
- Screenshots: same prefix + _screenshot suffix
- API logs: same prefix + _request/_response suffix
## Retention
- Active engagement: full evidence retained
- Post-delivery: evidence retained for [agreed period]
- Destruction: secure deletion with verification
Step 5: Credential and Access Verification
Do not assume credentials work until you have verified them. Stale API keys, expired tokens, and misconfigured access are common causes of engagement delays.
Access Verification Checklist
"""
access_verification.py
Run this script to verify all engagement credentials and access.
Update the configuration section with your engagement-specific details.
"""
import sys
import json
import requests
from datetime import datetime
# Configuration - update for each engagement
TARGET_ENDPOINTS = [
{
"name": "Primary Chat API",
"url": "https://example.com/api/v1/chat",
"method": "POST",
"headers": {"Authorization": "Bearer YOUR_API_KEY"},
"body": {"message": "Hello, this is a connectivity test."},
"expected_status": 200
},
{
"name": "Health Check",
"url": "https://example.com/api/health",
"method": "GET",
"headers": {},
"body": None,
"expected_status": 200
}
]
def verify_endpoint(endpoint: dict) -> dict:
"""Verify a single endpoint is accessible."""
result = {
"name": endpoint["name"],
"url": endpoint["url"],
"timestamp": datetime.utcnow().isoformat(),
"status": "UNKNOWN"
}
try:
if endpoint["method"] == "GET":
resp = requests.get(
endpoint["url"],
headers=endpoint["headers"],
timeout=10
)
else:
resp = requests.post(
endpoint["url"],
headers=endpoint["headers"],
json=endpoint["body"],
timeout=10
)
result["http_status"] = resp.status_code
result["status"] = (
"PASS" if resp.status_code == endpoint["expected_status"]
else "FAIL"
)
result["response_time_ms"] = resp.elapsed.total_seconds() * 1000
except requests.exceptions.ConnectionError:
result["status"] = "FAIL"
result["error"] = "Connection refused"
except requests.exceptions.Timeout:
result["status"] = "FAIL"
result["error"] = "Request timed out"
except Exception as e:
result["status"] = "FAIL"
result["error"] = str(e)
return result
def main():
print(f"Access Verification - {datetime.utcnow().isoformat()}")
print("=" * 60)
all_pass = True
for endpoint in TARGET_ENDPOINTS:
result = verify_endpoint(endpoint)
status_icon = "PASS" if result["status"] == "PASS" else "FAIL"
print(f"[{status_icon}] {result['name']}: {result.get('http_status', 'N/A')}")
if result["status"] != "PASS":
all_pass = False
print(f" Error: {result.get('error', 'Unexpected status code')}")
print("=" * 60)
if all_pass:
print("All endpoints verified successfully.")
else:
print("WARNING: One or more endpoints failed verification.")
sys.exit(1)
if __name__ == "__main__":
main()
Credential Inventory
# Credential Inventory (DO NOT commit to version control)
| Credential | Type | Scope | Expiry | Verified? |
|-----------|------|-------|--------|-----------|
| Chat API key | API key | /api/v1/chat | 2026-04-30 | [ ] |
| Admin portal | Username/password | Admin interface | N/A | [ ] |
| VPN certificate | Certificate | Network access | 2026-04-15 | [ ] |
| Test user account | OAuth token | End-user access | Session-based | [ ] |
## Verification Steps
1. Test each credential against its intended endpoint
2. Verify the permission level matches what is needed for testing
3. Confirm credentials are not shared with other teams or engagements
4. Store credentials in a password manager, never in plain text files
Step 6: Test Plan Review and Calibration
Before testing begins, review the test plan with the full team to ensure everyone understands the approach, priorities, and constraints.
Test Plan Review Meeting Agenda
# Pre-Engagement Test Plan Review
Duration: 60-90 minutes
Attendees: Full red team
## Agenda
1. Scope Review (10 min)
- Walk through in-scope and out-of-scope targets
- Clarify any ambiguous scope boundaries
- Review testing constraints (rate limits, timing, prohibited actions)
2. Target Architecture Review (15 min)
- Review system architecture diagram
- Identify attack surface priorities
- Discuss known security controls and expected defenses
3. Attack Plan Walkthrough (20 min)
- Review attack categories and priorities
- Assign attack categories to team members
- Discuss automation vs. manual testing split
- Review custom test cases for engagement-specific scenarios
4. Tool and Automation Review (10 min)
- Confirm all tools are configured and tested
- Review automation scripts and scan configurations
- Discuss expected API usage and cost monitoring
5. Logistics (10 min)
- Confirm daily standup time
- Review communication plan and escalation procedures
- Confirm evidence handling procedures
- Review reporting timeline and responsibilities
6. Questions and Concerns (10 min)
- Open floor for any team member questions or concerns
Priority Matrix
Use this matrix to prioritize testing activities based on the threat model and available time:
| Attack Category | Business Impact | Likelihood of Success | Priority | Assigned To |
|---|---|---|---|---|
| Direct prompt injection | High | High | P1 | |
| Indirect prompt injection | High | Medium | P1 | |
| System prompt extraction | Medium | High | P2 | |
| Data exfiltration via RAG | High | Medium | P1 | |
| Function calling abuse | High | Medium | P1 | |
| Content policy bypass | Medium | Medium | P2 | |
| Denial of service | Low | High | P3 | |
| Model extraction | Low | Low | P3 | |
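If you want the priority column and working order derived mechanically rather than by eye, a simple impact-first sort with likelihood as tiebreaker reproduces the matrix above; this encoding is one possible convention, not a standard:

```python
# Order attack categories into a work queue: business impact first,
# likelihood of success as the tiebreaker. Rows mirror the matrix above.
SCORES = {"High": 3, "Medium": 2, "Low": 1}

matrix = [
    ("Direct prompt injection", "High", "High"),
    ("Indirect prompt injection", "High", "Medium"),
    ("System prompt extraction", "Medium", "High"),
    ("Data exfiltration via RAG", "High", "Medium"),
    ("Denial of service", "Low", "High"),
]

queue = sorted(matrix, key=lambda row: (SCORES[row[1]], SCORES[row[2]]),
               reverse=True)
for name, impact, likelihood in queue:
    # In this matrix the priority band follows business impact
    band = {"High": "P1", "Medium": "P2", "Low": "P3"}[impact]
    print(f"{band}  {name}")
```

Treat the output as a starting point for the review meeting, not a substitute for team judgment.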
Step 7: Final Readiness Review
The final readiness review is a gate check that must be completed before the engagement begins. Every item must be marked complete or explicitly waived with justification.
# Final Readiness Review
Date: ___
Engagement: ___
Reviewed by: ___
## Legal and Administrative
- [ ] All contracts and authorizations signed and filed
- [ ] Insurance coverage verified
- [ ] Third-party provider policies reviewed
## Team
- [ ] All team members briefed on scope and constraints
- [ ] Roles and responsibilities assigned
- [ ] Availability confirmed for engagement duration
## Infrastructure
- [ ] All tools installed, configured, and tested
- [ ] Evidence collection system operational
- [ ] Secure storage configured and accessible
## Access
- [ ] All credentials received and verified
- [ ] Network access tested (VPN, direct, etc.)
- [ ] Test accounts functional
## Communication
- [ ] Communication channels established and tested
- [ ] Escalation matrix distributed to all stakeholders
- [ ] Client POC confirmed available for engagement start
## Test Plan
- [ ] Test plan reviewed by full team
- [ ] Attack priorities agreed upon
- [ ] Automation configurations reviewed
## READY TO BEGIN: [ ] Yes / [ ] No (if No, list blockers below)
Blockers:
1. ___
2. ___
Common Pre-Engagement Mistakes
- Testing credentials at the last minute. API keys expire, passwords get rotated, and VPN certificates need renewal. Verify all credentials at least 48 hours before the engagement start date and again on the morning testing begins.
- Skipping the team briefing. Even experienced testers need to understand engagement-specific constraints. A tester who does not know about a rate limit will trigger alerts. A tester who does not know about scope exclusions will test out-of-scope targets.
- Not testing tools against the target. A tool that works against a generic OpenAI endpoint may fail against the client's custom API. Test every tool against the actual target endpoints before the engagement starts.
- Forgetting about time zones. If the client has testing windows or blackout periods, confirm the time zone. "Business hours only" means different things in New York and Singapore.
- Inadequate evidence collection setup. If your evidence collection system is not configured before testing begins, you will lose critical findings. Configuring it after the fact means recreating tests, which wastes time.
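The time-zone pitfall is cheap to check programmatically with the standard library; a sketch, assuming a 09:00-17:00 weekday testing window (the cities and probe time are examples):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def is_business_hours(dt_utc: datetime, tz_name: str,
                      start_hour: int = 9, end_hour: int = 17) -> bool:
    """Return True if a UTC timestamp falls in the client's local
    weekday business-hours window."""
    local = dt_utc.astimezone(ZoneInfo(tz_name))
    return start_hour <= local.hour < end_hour and local.weekday() < 5

# 14:00 UTC on Monday 2026-03-16
probe_time = datetime(2026, 3, 16, 14, 0, tzinfo=ZoneInfo("UTC"))
print(is_business_hours(probe_time, "America/New_York"))  # 10:00 local, Monday
print(is_business_hours(probe_time, "Asia/Singapore"))    # 22:00 local, Monday
```

Running this against each planned scan window before the engagement catches "business hours only" mismatches before they trip an alert.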
What is the most important reason to verify credentials 48 hours before engagement start rather than the morning of?
Related Topics
- Scoping Checklist Walkthrough -- Completing the scoping process that feeds into pre-engagement preparation
- Rules of Engagement Template -- Creating the ROE document referenced in this checklist
- Test Plan Development -- Developing the test plan reviewed in Step 6
- Evidence Collection Methods -- Detailed evidence handling procedures