Professional Practice Exam
25-question practice exam on professional AI red teaming: engagement methodology, scoping, report writing, governance frameworks, client communication, and ethical considerations.
This exam tests professional skills required for conducting AI red team engagements: methodology, scoping, communication, reporting, governance, and ethics. These questions are based on real-world engagement scenarios. Set a 45-minute timer and answer all 25 questions without consulting reference materials.
Section A: Engagement Methodology (Questions 1-5)
1. During scoping, a client asks you to test their customer-facing chatbot but explicitly excludes the backend database from scope. During testing, you discover that prompt injection can cause the chatbot to execute SQL queries against the database. What is the correct course of action?
2. What is the most important factor in determining the order of testing activities during an AI red team engagement?
3. A client provides you with their system prompt before the engagement. How should this affect your testing approach?
4. During testing, your automated prompt injection tool generates 500 API calls in 10 minutes. The client's operations team contacts you about unusual API activity. What should you do?
5. What distinguishes an AI red team engagement from a standard LLM safety evaluation?
Section B: Reporting and Communication (Questions 6-10)
6. When writing a finding for a prompt injection bypass, what makes the 'impact' section most useful to the client?
7. How should you handle a finding that you cannot consistently reproduce?
8. What is the recommended structure for an AI red team report executive summary?
9. When presenting findings to a client in a readout meeting, what approach is most effective?
10. A client disputes a finding, arguing that the attack requires unrealistic conditions. How should you respond?
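Question 7 touches a practical point with nondeterministic models: rather than reporting a bypass as a binary pass/fail, many teams report a reproduction rate over repeated trials at fixed decoding settings. A sketch, assuming a hypothetical `attempt_attack` harness callable (e.g. one that sends the payload at temperature 0 and checks the response for the bypass indicator):

```python
def reproduction_rate(attempt_attack, n_trials: int = 20) -> float:
    """Run an attack n_trials times; return the fraction of successes.

    attempt_attack: zero-argument callable returning True on success
    (a hypothetical harness wrapper, not a real library API).
    """
    successes = sum(1 for _ in range(n_trials) if attempt_attack())
    return successes / n_trials

# Usage sketch:
# rate = reproduction_rate(lambda: run_payload(payload), n_trials=50)
# Report "bypass succeeded in N/50 trials" with the exact model version and
# decoding parameters, rather than dropping or overstating the finding.
```

Quantifying reproducibility this way also gives the client the evidence needed to re-test after remediation.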
Section C: Governance and Compliance (Questions 11-15)
11. How does the EU AI Act classify AI systems and what does this mean for red team engagements?
12. What is the NIST AI Risk Management Framework (AI RMF) and how does it inform red team methodology?
13. What is responsible disclosure in the context of AI vulnerabilities, and how does it differ from traditional software vulnerability disclosure?
14. How should a red team handle the discovery of biased model behavior during an engagement scoped for security testing?
15. What data retention obligations should a red team consider after an AI engagement?
Section D: Scoping and Planning (Questions 16-20)
16. What is the most commonly underscoped component in AI red team engagements?
17. How should you estimate the duration of an AI red team engagement?
18. When should you recommend a collaborative ('purple team') approach versus an adversarial ('red team') approach for AI testing?
19. What should the rules of engagement specify about testing that generates harmful content?
20. What cost considerations are unique to AI red team engagements compared to traditional penetration testing?
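Question 20 points at a cost structure traditional penetration tests don't have: every automated probe consumes metered inference tokens, so campaigns are usually budgeted up front. A back-of-envelope estimator (the token counts and per-token prices in the example are placeholder assumptions, not real rates):

```python
def estimate_cost(n_prompts: int,
                  avg_input_tokens: int,
                  avg_output_tokens: int,
                  usd_per_1k_input: float,
                  usd_per_1k_output: float) -> float:
    """Rough API spend for an automated test campaign."""
    input_cost = n_prompts * avg_input_tokens / 1000 * usd_per_1k_input
    output_cost = n_prompts * avg_output_tokens / 1000 * usd_per_1k_output
    return input_cost + output_cost

# Usage sketch with placeholder prices:
# 5,000 probes at ~400 input / ~300 output tokens each
# cost = estimate_cost(5000, 400, 300,
#                      usd_per_1k_input=0.01, usd_per_1k_output=0.03)
```

Running this estimate during scoping makes inference spend an explicit line item alongside labor, instead of a surprise on the client's API bill.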
Section E: Ethics and Professional Standards (Questions 21-25)
21. You discover during testing that the client's AI chatbot can generate convincing medical advice that contradicts established medical guidelines. The client's chatbot is deployed in a general-purpose customer service context, not healthcare. What is the correct professional response?
22. What ethical obligation does a red team have regarding jailbreak techniques discovered during an engagement?
23. How should you handle a situation where your testing reveals that the client's AI system may be in violation of regulations?
24. What are the key ethical considerations when testing AI systems that may affect vulnerable populations?
25. What professional development is most important for maintaining AI red team competency?