redteams.ai

# adversarial-prompts

1 article tagged with “adversarial-prompts”

Text-to-Image Model Attacks

Adversarial prompts for text-to-image models: unsafe content generation, safety filter bypass, watermark evasion, prompt injection in image generation pipelines, and concept smuggling.

text-to-image, diffusion, adversarial-prompts, content-generation, watermark
Intermediate