# training-security
6 articles tagged "training-security"
Fine-Tuning Security Assessment
Test your knowledge of fine-tuning security risks including LoRA attacks, RLHF manipulation, safety degradation, and catastrophic forgetting with 15 questions.
Vertex AI Attack Surface
Red team methodology for Vertex AI: prediction endpoint abuse, custom training security gaps, feature store poisoning, model monitoring evasion, and pipeline exploitation.
Advanced Training Attack Vectors
Cutting-edge training attacks: federated learning poisoning, model merging exploits, distributed training vulnerabilities, emergent capability risks, and synthetic data pipeline attacks.
Gradient-Based Attacks During Training
Technical deep dive into gradient-based attack methods that exploit training-time access, including gradient manipulation, adversarial weight perturbation, and training signal hijacking.