# adversarial-attacks
Articles tagged "adversarial-attacks"
Case Study: Adversarial Attacks on Autonomous Vehicle Perception Systems
Analysis of adversarial attacks targeting autonomous vehicle perception systems, including stop sign perturbation, phantom object injection, and LiDAR spoofing, with implications for safety-critical AI deployment.
Multimodal Embedding Attacks
Exploiting cross-modal embedding models like CLIP — adversarial image-text alignment manipulation, cross-modal injection, and attacks on multimodal retrieval systems.
Gradient-Based Attacks During Training
Technical deep dive into gradient-based attack methods that exploit training-time access, including gradient manipulation, adversarial weight perturbation, and training signal hijacking.