# diffusion
4 posts tagged "diffusion"
## Text-to-Image Model Attacks

Adversarial prompts for text-to-image models: unsafe content generation, safety filter bypass, watermark evasion, prompt injection in image generation pipelines, and concept smuggling.

## Adversarial Attacks on Text-to-Image Models

Understanding and evaluating adversarial attacks on text-to-image generation models, including prompt manipulation for safety bypass, concept erasure attacks, adversarial perturbation of guidance, and membership inference on training data.