# adversarial-prompts
2 articles tagged "adversarial-prompts"
Text-to-Image Model Attacks
Adversarial prompts for text-to-image models: unsafe content generation, safety filter bypass, watermark evasion, prompt injection in image generation pipelines, and concept smuggling.
text-to-image · diffusion · adversarial-prompts · content-generation · watermark