# adapter
8 articles tagged with “adapter”
## Adapter Layer Security Analysis
Security analysis of adapter-based fine-tuning, including LoRA, prefix tuning, and prompt tuning.
## Adapter Layer Attack Vectors
Comprehensive analysis of attack vectors targeting parameter-efficient adapter layers, including LoRA, QLoRA, and prefix tuning modules.
## Adapter Poisoning Attacks
Poisoning publicly shared adapters and LoRA weights to compromise downstream users.
## Malicious Adapter Injection
How attackers craft LoRA adapters containing backdoors, distribute poisoned adapters through model hubs, and exploit adapter stacking to compromise model safety -- techniques, detection challenges, and real-world supply chain risks.
## LoRA & Adapter Attack Surface
Overview of security vulnerabilities in parameter-efficient fine-tuning methods, including LoRA, QLoRA, and adapter-based approaches -- how the efficiency and shareability of adapters create novel attack vectors.
## Direct Weight Manipulation
Techniques for directly modifying LoRA adapter weights to bypass safety training, inject targeted capabilities, and hide malicious behaviors -- going beyond dataset-driven fine-tuning to surgical weight-level attacks.
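To make the weight-level attack surface concrete, here is a minimal, self-contained sketch (not taken from any of the articles above; all names and dimensions are illustrative) of how a LoRA update merges into a base weight matrix as W' = W + (α/r)·B·A, and why editing the shipped low-rank factors directly changes merged behavior with no training loop or dataset involved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: base layer, LoRA rank, and scaling alpha.
d_out, d_in, rank, alpha = 8, 8, 2, 16.0

W_base = rng.normal(size=(d_out, d_in))   # frozen base weights
A = rng.normal(size=(rank, d_in)) * 0.01  # LoRA down-projection
B = np.zeros((d_out, rank))               # LoRA up-projection (zero-initialized)

def merged(W, B, A, alpha, rank):
    # Standard LoRA merge: W' = W + (alpha / rank) * B @ A
    return W + (alpha / rank) * (B @ A)

# With B = 0 the adapter is inert: merged weights equal the base.
assert np.allclose(merged(W_base, B, A, alpha, rank), W_base)

# "Direct weight manipulation": an attacker edits the B factor in the
# distributed adapter file so the merged matrix steers activations
# along a chosen direction -- no fine-tuning data required.
target_direction = rng.normal(size=(d_out, 1))  # hypothetical steering vector
B_malicious = target_direction @ np.ones((1, rank))

W_poisoned = merged(W_base, B_malicious, A, alpha, rank)
delta = W_poisoned - W_base
# The injected change is low-rank (at most `rank`), which is part of
# why such edits are compact to ship and hard to spot by inspection.
print("rank of injected delta:", np.linalg.matrix_rank(delta))
```

The same low-rank structure that makes adapters cheap to share is what makes a surgically edited factor both small and plausible-looking inside an adapter checkpoint.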
## Shared Adapter Security Risks
Security risks of using publicly shared adapters from model hubs and community repositories.
## LoRA & Adapter Layer Attacks
Security implications of LoRA and adapter-based fine-tuning, including safety alignment removal, adapter poisoning, rank manipulation attacks, and multi-adapter conflict exploitation.