Professional Practice Exam
25-question practice exam on professional AI red teaming: engagement methodology, scoping, report writing, governance frameworks, client communication, and ethical considerations.
This exam tests the professional skills required for conducting AI red team engagements: methodology, scoping, communication, reporting, governance, and ethics. The questions are based on real-world engagement scenarios. Set a 45-minute timer and answer all 25 questions without consulting reference materials.
Section A: Engagement Methodology (Questions 1-5)
1. During scoping, a client asks you to test their customer-facing chatbot but explicitly excludes the backend database from scope. During testing, you discover that prompt injection can cause the chatbot to execute SQL queries against that database. What is the correct course of action?
2. What is the most important factor in determining the order of testing activities during an AI red team engagement?
3. A client provides you with their system prompt before the engagement. How should this affect your testing approach?
4. During testing, your automated prompt injection tool generates 500 API calls in 10 minutes. The client's operations team contacts you about unusual API activity. What should you do?
5. What distinguishes an AI red team engagement from a standard LLM security assessment?
Section B: Reporting and Communication (Questions 6-10)
6. When writing a finding for a prompt injection bypass, what makes the 'impact' section most useful to the client?
7. How should you handle a finding that you cannot consistently reproduce?
8. What is the recommended structure for an AI red team report executive summary?
9. When presenting findings to a client in a readout meeting, what approach is most effective?
10. A client disputes a finding, arguing that the attack requires unrealistic conditions. How should you respond?
Section C: Governance and Compliance (Questions 11-15)
11. How does the EU AI Act classify AI systems, and what does this classification mean for red team engagements?
12. What is the NIST AI Risk Management Framework (AI RMF), and how does it inform red team methodology?
13. What is responsible disclosure in the context of AI vulnerabilities, and how does it differ from traditional software vulnerability disclosure?
14. How should a red team handle the discovery of biased model behavior during an engagement scoped for security testing?
15. What data retention obligations should a red team consider after an AI engagement?
Section D: Scoping and Planning (Questions 16-20)
16. What is the most commonly underscoped component in AI red team engagements?
17. How should you estimate the duration of an AI red team engagement?
18. When should you recommend a collaborative ('purple team') approach versus an adversarial ('red team') approach for AI testing?
19. What should the rules of engagement specify about testing that generates harmful content?
20. What cost considerations are unique to AI red team engagements compared to traditional penetration testing?
Section E: Ethics and Professional Standards (Questions 21-25)
21. You discover during testing that the client's AI chatbot can generate convincing medical advice that contradicts established medical guidelines. The chatbot is deployed in a general-purpose customer service context, not healthcare. What is the correct professional response?
22. What ethical obligation does a red team have regarding jailbreak techniques discovered during an engagement?
23. How should you handle a situation where your testing reveals that the client's AI system may be in violation of applicable regulations?
24. What are the ethical considerations when testing AI systems that may affect vulnerable populations?
25. What professional development is most important for maintaining AI red team competency?