# quantization
29 articles tagged "quantization"
QLoRA Security Implications
Security implications of quantized LoRA fine-tuning including precision-related vulnerability introduction.
Quantization-Induced Safety Degradation
How quantization and model compression can degrade safety properties, and techniques for exploiting quantization artifacts to bypass safety training.
Quantization & Safety Alignment
How model quantization disproportionately degrades safety alignment: malicious quantization attacks, token-flipping, and safety-aware quantization defenses.
Model Compression Security
Security implications of model pruning, quantization, and knowledge distillation on AI system robustness.
Lab: Quantization Security Testing
Test behavioral differences between full-precision and quantized models to discover quantization-induced vulnerabilities.
Quantization-Induced Safety Regression Testing
Test how model quantization (INT8, INT4, GPTQ) degrades safety alignment and introduces exploitable gaps.
Lab: Exploiting Quantized Model Weaknesses
Hands-on lab exploring how model quantization degrades safety alignment, with techniques to find and exploit precision-related vulnerabilities.
Quantization-Induced Vulnerability Exploitation
Exploit behavioral differences between full-precision and quantized models.
Inference Optimization Risks
Security implications of model optimization techniques — covering quantization safety degradation, pruning vulnerability introduction, distillation attacks, and speculative decoding risks.
Llama Family Attacks
Comprehensive attack analysis of Meta's Llama model family including weight manipulation, fine-tuning safety removal, quantization artifacts, uncensored variants, and Llama Guard bypass techniques.
Quantization Effects on Security Properties
Systematic study of how different quantization methods (GPTQ, AWQ, GGUF, SqueezeLLM) affect model safety properties and vulnerability to attacks.
Quantization Impact on Model Safety
How quantization affects safety alignment including GPTQ, AWQ, and GGUF format implications.
Lab: Exploiting Quantized Models
Hands-on lab comparing attack success rates across quantization levels: testing jailbreaks on FP16 vs INT8 vs INT4, measuring safety degradation, and crafting quantization-aware exploits.
Quantization & Compression Attacks
How quantization (GPTQ, AWQ, GGUF) affects model security, safety degradation from precision loss, quantization-aware adversarial examples, and compression attack surface.
Architecture-Level Attacks
Attacks targeting model architecture optimizations, covering quantization exploitation, distillation attacks, KV-cache attacks, MoE routing manipulation, and context-window exploitation.
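Several of the labs above test behavioral differences between full-precision and quantized models. The core mechanic can be illustrated with a minimal NumPy sketch (a toy two-class linear "model", not drawn from any article above): symmetric INT8 quantization introduces per-weight rounding error, and inputs near the decision boundary can flip class between the FP32 and INT8 paths.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor INT8 quantization: scale to [-127, 127], round, clip."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map INT8 codes back to float32; rounding error is at most scale / 2."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 64)).astype(np.float32)  # toy 2-class linear classifier
Wq, s = quantize_int8(W)
W_deq = dequantize(Wq, s)

# Probe random inputs and count decision flips between the FP32 and INT8 paths;
# flips cluster on inputs whose margin is smaller than the quantization noise.
flips = 0
for _ in range(1000):
    x = rng.normal(size=64).astype(np.float32)
    if np.argmax(W @ x) != np.argmax(W_deq @ x):
        flips += 1
print(f"decision flips across 1000 probes: {flips}")
```

The same probing idea scales up to real LLMs: run identical prompts through the FP16 checkpoint and its INT8/INT4 build and diff the outputs, treating any divergence as a candidate quantization-induced vulnerability.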