Practice Exam 2: 進階 AI 安全
25-question advanced practice exam covering multimodal attacks, training pipeline security, cloud AI security, forensics, and governance.
Practice Exam 2: Advanced AI 安全
This exam targets practitioners with operational experience in AI 紅隊演練. It covers five advanced domains that build on the fundamentals tested in Practice Exam 1. You should have scored Proficient or above on Practice Exam 1 before attempting this.
Section A: Multimodal 攻擊 (Questions 1-5)
1. How can 對抗性 perturbations in images be used to perform 提示詞注入 against vision-language models (VLMs)?
2. What unique challenge does audio 輸入 present for AI 安全 compared to text 輸入?
3. In a multimodal system that processes both images and text, why is cross-modal injection especially difficult to defend against?
4. What is a 'typographic attack' against vision-language models?
5. Why are multimodal models that process documents (PDFs, spreadsheets, presentations) particularly high-risk targets for indirect 提示詞注入?
Section B: Training Pipeline 安全 (Questions 6-10)
6. What is the primary mechanism by which 訓練 資料投毒 persists through 模型 lifecycle?
7. How does a '後門 trigger' work in a poisoned language model?
8. What is the 安全 risk of 'RLHF reward hacking' and how does it relate to 紅隊演練?
9. When an organization fine-tunes a base model on their domain-specific data, what is the primary 安全 risk?
10. What is a 'sleeper 代理' in the context of AI model 安全?
Section C: 雲端 AI 安全 (Questions 11-15)
11. What is the most commonly exploited misconfiguration in 雲端-hosted AI model endpoints?
12. How does a model inversion attack work against a 雲端-deployed ML API?
13. What specific 安全 risk does model serialization (pickling) introduce in 雲端 ML pipelines?
14. Why is network segmentation particularly important for AI workloads in 雲端 environments?
15. What is the 安全 implication of model versioning and rollback capabilities in 雲端 ML platforms?
Section D: Forensics (Questions 16-20)
16. When investigating a suspected AI system compromise, what is the first evidence to preserve?
17. How can log analysis distinguish between a 提示詞注入 attack and normal user interaction?
18. What does a sudden change in a model's refusal rate indicate from a forensic perspective?
19. What forensic evidence should be collected when investigating a suspected 訓練 資料投毒 incident?
20. Why is establishing a behavioral baseline critical for AI system forensics?
Section E: Governance (Questions 21-25)
21. How does the EU AI Act classify AI systems, and why does this classification matter for 紅隊演練?
22. What is the relationship between responsible disclosure in AI 紅隊演練 and traditional cybersecurity 漏洞 disclosure?
23. Why should AI 紅隊 reports include an 評估 of the 'dual-use' potential of discovered attack techniques?
24. What governance framework should organizations 實作 for tracking and managing AI model versions in production?
25. How should an organization balance AI transparency requirements with the 安全 risk of disclosing system details?