# quantization
14 articles tagged with “quantization”
QLoRA Security Implications
Security implications of quantized LoRA fine-tuning, including vulnerabilities introduced by reduced precision.
Quantization-Induced Safety Degradation
How quantization and model compression can degrade safety properties, and techniques for exploiting quantization artifacts to bypass safety training.
Quantization & Safety Alignment
How model quantization disproportionately degrades safety alignment: malicious quantization attacks, token-flipping, and safety-aware quantization defenses.
Model Compression Security
Security implications of model pruning, quantization, and knowledge distillation on AI system robustness.
Lab: Quantization Security Testing
Test behavioral differences between full-precision and quantized models to discover quantization-induced vulnerabilities.
Quantization-Induced Safety Regression Testing
Test how model quantization (INT8, INT4, GPTQ) degrades safety alignment and introduces exploitable gaps.
Lab: Exploiting Quantized Model Weaknesses
Hands-on lab exploring how model quantization degrades safety alignment, with techniques to find and exploit precision-related vulnerabilities.
Quantization-Induced Vulnerability Exploitation
Exploit behavioral differences between full-precision and quantized models.
Inference Optimization Risks
Security implications of model optimization techniques: quantization safety degradation, vulnerabilities introduced by pruning, distillation attacks, and speculative decoding risks.
Llama Family Attacks
Comprehensive attack analysis of Meta's Llama model family, including weight manipulation, fine-tuning safety removal, quantization artifacts, uncensored variants, and Llama Guard bypass techniques.
Quantization Effects on Security Properties
Systematic study of how different quantization methods (GPTQ, AWQ, GGUF, SqueezeLLM) affect model safety properties and vulnerability to attacks.
Quantization Impact on Model Safety
How quantization affects safety alignment including GPTQ, AWQ, and GGUF format implications.
Lab: Exploiting Quantized Models
Hands-on lab comparing attack success rates across quantization levels: testing jailbreaks on FP16 vs INT8 vs INT4, measuring safety degradation, and crafting quantization-aware exploits.
Quantization & Compression Attacks
How quantization (GPTQ, AWQ, GGUF) affects model security, safety degradation from precision loss, quantization-aware adversarial examples, and compression attack surface.
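The labs and articles above share one core mechanism: precision loss from quantization can flip marginal model decisions. A minimal, self-contained sketch of that effect, using a toy linear "safety head" with hypothetical weights and activations (not taken from any real model), where symmetric per-tensor INT8 quantization nudges a borderline score across the decision boundary:

```python
# Hypothetical illustration: symmetric per-tensor INT8 weight quantization
# flipping a borderline decision. All numbers are made up for the demo.

def quantize_int8(weights):
    """Map floats to INT8: q = round(w / scale), clamped to [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from INT8 codes."""
    return [x * scale for x in q]

def score(weights, activations):
    """Toy linear 'safety head': negative score -> refuse, positive -> comply."""
    return sum(w * a for w, a in zip(weights, activations))

weights = [0.9013, -0.447, 0.302, -0.761]   # full-precision parameters (hypothetical)
acts    = [1.043, 1.2, 0.8, 0.85]           # activations for a borderline prompt

q, scale = quantize_int8(weights)
fp_score   = score(weights, acts)             # slightly negative: refuses
int8_score = score(dequantize(q, scale), acts)  # slightly positive: complies

print(f"full precision: {fp_score:+.4f}  INT8: {int8_score:+.4f}")
```

The rounding error per weight is at most half the quantization step, so only decisions whose margin is within that error budget can flip; this is why quantization-induced safety regressions show up on borderline prompts rather than across the board, and why the labs above compare attack success rates between precision levels rather than expecting uniform degradation.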