# training-security
4 articles tagged with “training-security”
Fine-Tuning Security Assessment
A 15-question assessment of fine-tuning security risks, covering LoRA attacks, RLHF manipulation, safety degradation, and catastrophic forgetting.
Vertex AI Attack Surface
Red team methodology for Vertex AI: prediction endpoint abuse, custom training security gaps, feature store poisoning, model monitoring evasion, and pipeline exploitation.
Advanced Training Attack Vectors
Cutting-edge training attacks: federated learning poisoning, model merging exploits, distributed training vulnerabilities, emergent capability risks, and synthetic data pipeline attacks.
Gradient-Based Attacks During Training
A technical deep dive into gradient-based attacks that exploit training-time access, including gradient manipulation, adversarial weight perturbation, and training signal hijacking.