Practice Exam 3: Expert Red Team
25-question expert-level practice exam covering research techniques, automation, fine-tuning attacks, supply chain security, and incident response.
This exam is designed for senior practitioners and researchers. It covers cutting-edge topics and requires deep technical understanding, operational experience, and the ability to reason about novel attack scenarios. You should have scored Proficient or above on both Practice Exams 1 and 2 before attempting this one.
Section A: Research Techniques (Questions 1-5)
1. When developing a novel attack technique against an LLM, what is the methodological standard required for the finding to be considered rigorous?
2. What is the role of ablation studies in AI red-team research?
3. How should a red-team researcher approach the challenge of comparing attack success rates across model providers when each provider uses different safety evaluation criteria?
4. What is 'gradient-based adversarial prompt optimization', and why is it applicable only to white-box or open-weight models? (See the sketch after this section.)
5. How does the concept of 'transferability' in adversarial AI research affect practical red teaming against closed-source models?
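For Question 4, here is a minimal, single-step sketch of gradient-based adversarial prompt optimization in the spirit of Greedy Coordinate Gradient (GCG). It assumes an open-weight Hugging Face causal LM (gpt2 is a stand-in chosen purely for illustration); the technique is white-box-only because it differentiates the target loss through the model's embedding matrix, which closed APIs do not expose. The function names and prompt strings below are illustrative assumptions, not a standard library API.

```python
# Single optimization step of a GCG-style attack (illustrative sketch).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")   # stand-in open-weight model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def suffix_gradients(prompt_ids, suffix_ids, target_ids):
    """Gradient of the target loss w.r.t. a one-hot encoding of the suffix."""
    embed = model.get_input_embeddings().weight                # (vocab, dim)
    one_hot = F.one_hot(suffix_ids, embed.size(0)).to(embed.dtype)
    one_hot.requires_grad_(True)
    suffix_emb = one_hot @ embed                               # differentiable lookup
    seq = torch.cat([embed[prompt_ids], suffix_emb, embed[target_ids]])
    logits = model(inputs_embeds=seq.unsqueeze(0)).logits[0]
    # the logit predicting each target token sits one position earlier
    start = len(prompt_ids) + len(suffix_ids) - 1
    loss = F.cross_entropy(logits[start : start + len(target_ids)], target_ids)
    loss.backward()
    return one_hot.grad                                        # (suffix_len, vocab)

prompt = tok.encode("Tell me how to", return_tensors="pt")[0]
suffix = tok.encode(" ! ! ! !", return_tensors="pt")[0]        # adversarial suffix seed
target = tok.encode(" Sure, here", return_tensors="pt")[0]     # desired compliant prefix
grads = suffix_gradients(prompt, suffix, target)
# most promising token swaps per suffix position: largest negative gradient
candidates = (-grads).topk(8, dim=1).indices
```

A full attack loops this step, trying candidate swaps and keeping whichever suffix lowers the loss; this is exactly the access a black-box API denies, which is why Question 5's transferability matters in practice.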
Section B: Automation (Questions 6-10)
6. What is the primary advantage of using LLM-powered fuzzing for automated jailbreak discovery?
7. When designing an automated red-team pipeline that runs in CI/CD, what is the critical design consideration for handling non-deterministic results?
8. What is the role of an automated 'judge model' in red-team evaluation pipelines?
9. How should an automated red-team system handle the discovery of a novel, high-severity vulnerability during an unattended CI/CD run?
10. What is the purpose of 'attack tree' data structures in automated red teaming? (See the sketch after this section.)
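For Question 10, a minimal sketch of an attack tree as automated harnesses typically use it: nodes are attack goals, OR nodes succeed if any child succeeds, AND nodes require every child, and results propagate from attempted leaves up to the root. The node class and example goals are illustrative assumptions, not a standard schema.

```python
# Attack-tree sketch: OR nodes need any child to succeed, AND nodes need all.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    goal: str
    mode: str = "OR"                  # "OR" or "AND"
    children: list[AttackNode] = field(default_factory=list)
    succeeded: bool | None = None     # leaf result; None = not yet attempted

    def evaluate(self) -> bool:
        """Propagate leaf outcomes up to this node."""
        if not self.children:
            return bool(self.succeeded)
        results = [child.evaluate() for child in self.children]
        return any(results) if self.mode == "OR" else all(results)

root = AttackNode("exfiltrate system prompt", children=[
    AttackNode("direct request", succeeded=False),
    AttackNode("role-play reframing", succeeded=True),
    AttackNode("multi-turn escalation", mode="AND", children=[
        AttackNode("establish persona", succeeded=True),
        AttackNode("request disclosure", succeeded=False),
    ]),
])
print(root.evaluate())  # True: the role-play branch succeeded
```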
Section C: Fine-Tuning Attacks (Questions 11-15)
11. How many adversarial fine-tuning examples are typically sufficient to significantly degrade a model's safety training?
12. What is a LoRA (Low-Rank Adaptation) poisoning attack?
13. How can attackers exploit fine-tuning-as-a-service platforms to create a safety-stripped model?
14. What is 'catastrophic forgetting' in the context of safety alignment, and how is it exploited?
15. How should a red teamer evaluate the effectiveness of a fine-tuning platform's safety guardrails? (See the measurement sketch after this section.)
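For Question 15, one common measurement approach is to run the same probe battery against the base model and the fine-tuned model and compare refusal rates; a large drop after fine-tuning suggests the platform accepted a safety-degrading tune. A minimal sketch, assuming caller-supplied `generate` callables and a crude string-match refusal heuristic (production evaluations typically use a judge model instead):

```python
# Compare refusal rates before and after fine-tuning (illustrative sketch).
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "not able to help")

def refusal_rate(generate, probes):
    """Fraction of probes the model refuses, by crude string matching."""
    refusals = sum(
        any(marker in generate(p).lower() for marker in REFUSAL_MARKERS)
        for p in probes
    )
    return refusals / len(probes)

def safety_regression(base_generate, tuned_generate, probes):
    base = refusal_rate(base_generate, probes)
    tuned = refusal_rate(tuned_generate, probes)
    return {"base": base, "tuned": tuned, "drop": base - tuned}
```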
Section D: Supply Chain Security (Questions 16-20)
16. What are the primary attack surfaces in the AI model supply chain?
17. How should organizations verify the integrity of a model downloaded from a public model hub? (See the sketch after this section.)
18. What is 'dependency confusion' in the context of ML pipelines, and how does it enable supply chain attacks?
19. Why is model provenance tracking more complex than traditional software provenance?
20. What specific risk do community-contributed model adapters (LoRA, QLoRA) pose to the AI supply chain?
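For Question 17, the baseline control is to pin a cryptographic digest obtained through a trusted channel (a signed release note or a lockfile) and verify the downloaded artifact against it before loading. A minimal sketch; the filename and pinned digest are placeholders:

```python
# Verify a downloaded model artifact against a pinned SHA-256 digest.
import hashlib
from pathlib import Path

# Placeholder: the real digest comes from the publisher's signed release.
PINNED_SHA256 = "<expected-hex-digest>"

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

artifact = Path("model.safetensors")  # hypothetical download location
if sha256_of(artifact) != PINNED_SHA256:
    raise RuntimeError(f"{artifact} failed integrity check; refusing to load")
```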
Section E: Incident Response (Questions 21-25)
21. What is the correct order of operations when an AI system is confirmed to be producing harmful outputs due to a suspected prompt injection campaign?
22. How does incident triage differ for AI systems compared to traditional software systems?
23. During an AI incident, what is the purpose of deploying a 'canary prompt' to the affected system? (See the sketch after this section.)
24. What is the most challenging aspect of root cause analysis for AI incidents involving prompt injection?
25. How should an organization communicate about an AI incident to affected users while balancing transparency and security?
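For Question 23, a 'canary prompt' is a benign probe with a known-good expected response, replayed against the affected system at intervals so responders can see when behavior deviates and when a mitigation restores normal output. A minimal sketch, where `query_model` stands in for the production inference call and the prompt text is an illustrative assumption:

```python
# Replay a canary prompt on a schedule and report deviations.
import time

CANARY_PROMPT = "Reply with exactly: CANARY-OK"   # benign, deterministic probe
EXPECTED_FRAGMENT = "canary-ok"

def canary_healthy(query_model) -> bool:
    """True if the system still answers the canary as expected."""
    return EXPECTED_FRAGMENT in query_model(CANARY_PROMPT).lower()

def monitor(query_model, interval_s: int = 300, checks: int = 12):
    """Poll the affected system and log health transitions."""
    for _ in range(checks):
        status = "OK" if canary_healthy(query_model) else "DEVIATION"
        print(f"[canary] {time.strftime('%H:%M:%S')} {status}")
        time.sleep(interval_s)
```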