1 article tagged with "adversarial-examples"
Create basic adversarial examples that cause LLMs to misclassify, misinterpret, or bypass safety checks on text input.
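As a hedged illustration of what "basic adversarial examples on text input" can look like, here is a minimal sketch of two classic character-level perturbations: homoglyph substitution (swapping Latin letters for visually identical Cyrillic ones) and zero-width character insertion, both of which can change how a model tokenizes text while leaving it visually unchanged to a human. The function names and the homoglyph table are illustrative choices, not taken from the tagged article.

```python
# Illustrative sketch: character-level text perturbations often used as
# basic adversarial examples against text classifiers and filters.
# The specific mappings and function names here are hypothetical.

# Latin -> visually similar Cyrillic code points
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}

def homoglyph_attack(text: str) -> str:
    """Replace selected Latin letters with Cyrillic look-alikes.

    The output renders almost identically but consists of different
    Unicode code points, which can bypass naive keyword matching.
    """
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

def zero_width_attack(text: str) -> str:
    """Insert zero-width spaces between characters.

    The string looks unchanged when rendered, but its length and
    tokenization differ from the original.
    """
    return "\u200b".join(text)

if __name__ == "__main__":
    original = "access"
    perturbed = homoglyph_attack(original)
    # Same rendered appearance, but the strings are no longer equal
    print(original == perturbed)            # False
    print("\u200b" in zero_width_attack(original))  # True
```

Whether such perturbations actually flip a given model's output depends on its tokenizer and any Unicode normalization applied before inference; robust pipelines often normalize confusable characters (e.g. via NFKC or a confusables map) precisely to blunt this class of attack.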