Practice Exam 2: Advanced AI Security
25-question advanced practice exam covering multimodal attacks, training pipeline security, cloud AI security, forensics, and governance.
Practice Exam 2: Advanced AI Security
This exam targets practitioners with operational experience in AI red teaming. It covers five advanced domains that build on the fundamentals tested in Practice Exam 1. You should have scored Proficient or above on Practice Exam 1 before attempting this.
Section A: Multimodal Attacks (Questions 1-5)
1. How can adversarial perturbations in images be used to perform prompt injection against vision-language models (VLMs)?
2. What unique challenge does audio input present for AI security compared to text input?
3. In a multimodal system that processes both images and text, why is cross-modal injection especially difficult to defend against?
4. What is a 'typographic attack' against vision-language models?
5. Why are multimodal models that process documents (PDFs, spreadsheets, presentations) particularly high-risk targets for indirect prompt injection?
Section B: Training Pipeline Security (Questions 6-10)
6. What is the primary mechanism by which training data poisoning persists through the model lifecycle?
7. How does a 'backdoor trigger' work in a poisoned language model?
8. What is the security risk of 'RLHF reward hacking' and how does it relate to red teaming?
9. When an organization fine-tunes a base model on their domain-specific data, what is the primary safety risk?
10. What is a 'sleeper agent' in the context of AI model security?
Section C: Cloud AI Security (Questions 11-15)
11. What is the most commonly exploited misconfiguration in cloud-hosted AI model endpoints?
12. How does a model inversion attack work against a cloud-deployed ML API?
13. What specific security risk does model serialization (pickling) introduce in cloud ML pipelines?
14. Why is network segmentation particularly important for AI workloads in cloud environments?
15. What is the security implication of model versioning and rollback capabilities in cloud ML platforms?
Section D: Forensics (Questions 16-20)
16. When investigating a suspected AI system compromise, what is the first evidence to preserve?
17. How can log analysis distinguish between a prompt injection attack and normal user interaction?
18. What does a sudden change in a model's refusal rate indicate from a forensic perspective?
19. What forensic evidence should be collected when investigating a suspected training data poisoning incident?
20. Why is establishing a behavioral baseline critical for AI system forensics?
Section E: Governance (Questions 21-25)
21. How does the EU AI Act classify AI systems, and why does this classification matter for red teaming?
22. What is the relationship between responsible disclosure in AI red teaming and traditional cybersecurity vulnerability disclosure?
23. Why should AI red team reports include an assessment of the 'dual-use' potential of discovered attack techniques?
24. What governance framework should organizations implement for tracking and managing AI model versions in production?
25. How should an organization balance AI transparency requirements with the security risk of disclosing system details?