1 article tagged with “stable-diffusion”
Understanding and evaluating adversarial attacks on text-to-image generation models, including prompt manipulation for safety bypass, concept erasure attacks, adversarial perturbation of guidance, and membership inference on training data.