# GCG
4 articles tagged with “GCG”
Universal Adversarial Attacks
Universal perturbations that transfer across models, adversarial suffix research, and techniques for creating model-agnostic attack payloads.
universal-attacks, adversarial-perturbations, transferability, model-agnostic, GCG
Adversarial Suffix Generation
GCG attacks, universal adversarial triggers, soft prompt optimization, and defense evasion techniques for automated alignment bypass.
GCG, adversarial-suffixes, universal-triggers, soft-prompts, optimization
Lab: Adversarial Suffix Optimization
Implement GCG-style adversarial suffix attacks that automatically discover token sequences causing language models to comply with harmful requests. Covers gradient-based optimization, transferability analysis, and defense evaluation.
lab, expert, adversarial-suffix, GCG, optimization, hands-on
Token-Level Adversarial Attacks
Applying gradient-based optimization and token-level manipulation to discover adversarial suffixes that reliably trigger unsafe model behavior.
prompt-injection, tokens, adversarial, GCG