Audio and Speech Models
Security of speech recognition and audio-processing AI, covering adversarial audio, voice-cloning risks, hidden commands, and audio injection techniques.
Voice and audio-processing AI introduces unique attack surfaces that do not exist in text processing. Adversarial audio can embed signals inaudible to humans but recognized as commands by the AI. Voice cloning can impersonate authorized users. Hidden commands can trigger AI agents into unauthorized actions.
Attacking automatic speech recognition (ASR) systems, including adversarial audio that transcribes as something other than what listeners hear, hidden voice commands, and background audio injection.
Techniques for crafting adversarial audio perturbations including psychoacoustic hiding, frequency domain attacks, and over-the-air adversarial audio.
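As a minimal sketch of the hiding idea, the toy function below (a hypothetical helper, not from any library) embeds payload bits as faint FSK tones near the top of the audible range, far below the host signal's amplitude. Real psychoacoustic-hiding attacks are more sophisticated: they shape the perturbation under a per-critical-band masking threshold so it stays inaudible at any frequency, not just near 18 kHz.

```python
import numpy as np

def embed_high_band_payload(audio, sr, bits, f0=18000.0, f1=18500.0, amp=0.002):
    """Toy steganographic perturbation: each bit becomes a 100 ms tone at
    f0 (bit 0) or f1 (bit 1), added at a tiny amplitude near the edge of
    human hearing. Illustrative only; not a full psychoacoustic attack."""
    out = audio.copy()
    spb = sr // 10  # samples per bit (100 ms)
    for i, bit in enumerate(bits):
        seg = np.arange(spb) / sr
        tone = amp * np.sin(2 * np.pi * (f1 if bit else f0) * seg)
        out[i * spb:(i + 1) * spb] += tone
    return out

# Host signal: 1 s of a 440 Hz tone sampled at 44.1 kHz.
sr = 44100
t = np.arange(sr) / sr
host = 0.5 * np.sin(2 * np.pi * 440.0 * t)
stego = embed_high_band_payload(host, sr, [1, 0, 1, 1])
delta = stego - host  # the perturbation: peak amplitude 0.002, energy above 17 kHz
```

The perturbation's L-infinity norm stays at 0.002 (0.4% of the host's amplitude) while all of its energy sits in a band most adults cannot hear, which is the core trade-off these techniques exploit.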
Voice cloning for social engineering against AI systems, voice authentication bypass, speaker verification attacks, and detection techniques.
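To see why cloning defeats verification, consider this deliberately crude sketch (all names hypothetical): a "voiceprint" built from normalized per-band spectral energy, compared by cosine similarity. Production systems use learned speaker embeddings such as x-vectors or ECAPA-TDNN, but the structural weakness is the same: any audio that reproduces the enrolled speaker's spectral statistics scores high, whether or not the enrolled human produced it.

```python
import numpy as np

def spectral_embedding(audio, n_bands=16):
    """Crude voiceprint: L2-normalized per-band spectral energy.
    Stand-in for a real speaker embedding; anything matching the enrolled
    speaker's spectral envelope will score high, which is exactly what
    voice cloning exploits."""
    spec = np.abs(np.fft.rfft(audio))
    bands = np.array_split(spec, n_bands)
    e = np.array([float(np.sum(b ** 2)) for b in bands])
    return e / np.linalg.norm(e)

def verify(enrolled_emb, probe_audio, threshold=0.9):
    """Cosine-similarity speaker verification against an enrolled voiceprint."""
    return float(enrolled_emb @ spectral_embedding(probe_audio)) >= threshold

sr = 16000
t = np.arange(sr) / sr
enrolled = spectral_embedding(np.sin(2 * np.pi * 440.0 * t))
clone = 0.3 * np.sin(2 * np.pi * 440.0 * t + 1.0)  # same spectrum, new gain/phase
imposter = np.sin(2 * np.pi * 4000.0 * t)          # energy in a different band
```

Here `verify(enrolled, clone)` accepts the clone (identical spectral shape despite different amplitude and phase) while rejecting the spectrally dissimilar imposter, mirroring how a high-quality clone passes where a random voice fails.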
Hands-on lab creating adversarial audio examples using Python audio processing, targeting Whisper transcription with injected text.
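A lab of this kind needs scaffolding to get candidate audio into a file Whisper can read. The sketch below (helper names are my own) writes a carrier plus a placeholder low-amplitude perturbation as 16-bit PCM at Whisper's native 16 kHz using only the standard library; the Whisper evaluation step is shown as a comment since it requires the openai-whisper package. A real attack would replace the fixed `delta` with an iteratively optimized one and re-transcribe after each update.

```python
import wave
import numpy as np

def write_wav_16bit(path, audio, sr):
    """Write a float array in [-1, 1] as a mono 16-bit PCM WAV file."""
    pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes(pcm.tobytes())

def read_wav_16bit(path):
    """Read a mono 16-bit PCM WAV back into a float array in [-1, 1]."""
    with wave.open(path, "rb") as w:
        sr = w.getframerate()
        pcm = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    return pcm.astype(np.float32) / 32767.0, sr

# Carrier plus a low-amplitude perturbation (placeholder for a real
# optimized adversarial delta), saved at Whisper's native 16 kHz rate.
sr = 16000
t = np.arange(sr) / sr
carrier = 0.5 * np.sin(2 * np.pi * 220.0 * t)
delta = 0.005 * np.sin(2 * np.pi * 7000.0 * t)
write_wav_16bit("adv.wav", carrier + delta, sr)

# Evaluation step (requires the openai-whisper package):
# import whisper
# model = whisper.load_model("base")
# print(model.transcribe("adv.wav")["text"])
```

The 16-bit quantization step matters in practice: a perturbation smaller than one quantization level (about 3e-5 here) is destroyed by the file format before the model ever sees it, so attack budgets must be set above that floor.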
Comprehensive attack taxonomy for audio-enabled LLMs: adversarial audio generation, voice-based prompt injection, cross-modal split attacks, and ultrasonic perturbations.
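The ultrasonic branch of that taxonomy can be simulated end to end. In DolphinAttack-style injection, a command is amplitude-modulated onto an inaudible carrier; the microphone amplifier's slight nonlinearity then demodulates the envelope back into the audible band before the ASR front end sees it. The sketch below models the nonlinearity as a simple quadratic term and the front end as an ideal 5 kHz low-pass (both modeling assumptions, not measured hardware behavior).

```python
import numpy as np

sr = 96000                   # high sample rate needed to represent the carrier
t = np.arange(sr // 2) / sr  # 0.5 s
f_cmd, f_carrier = 400.0, 25000.0

command = np.sin(2 * np.pi * f_cmd * t)  # stand-in for a voice command
# Amplitude-modulate the command onto a 25 kHz carrier: inaudible to humans.
ultrasonic = (1 + 0.8 * command) * np.sin(2 * np.pi * f_carrier * t)

# Microphone amplifiers are slightly nonlinear; the quadratic term
# demodulates the AM envelope back into the audible band.
mic = ultrasonic + 0.5 * ultrasonic ** 2

# Ideal low-pass at 5 kHz, standing in for the ASR front end's band limit.
spec = np.fft.rfft(mic)
freqs = np.fft.rfftfreq(len(mic), 1 / sr)
spec[freqs > 5000] = 0.0
recovered = np.fft.irfft(spec, n=len(mic))

# Without the nonlinearity, nothing survives the low-pass: the attack
# depends entirely on the hardware's imperfection.
lin_spec = np.fft.rfft(ultrasonic)
lin_spec[freqs > 5000] = 0.0
linear_recovered = np.fft.irfft(lin_spec, n=len(mic))
```

After the nonlinearity and low-pass, the dominant audible component of `recovered` sits at the 400 Hz command frequency, while `linear_recovered` is essentially silence, which is why such commands are inaudible in the room yet present in the transcript.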