# attack
28 articles tagged "attack"
AI Attack Timeline Reconstruction
Techniques for reconstructing the complete timeline of an AI attack from available evidence.
Capstone: Build a Multimodal Attack Testing Suite
Design and implement a comprehensive testing suite for attacking multimodal AI systems across text, image, audio, and document modalities.
Attack Coverage Tracking System
Build a system for tracking attack coverage across vulnerability categories and defense configurations.
AI Attack Taxonomy Overview
Comprehensive overview of the AI attack taxonomy covering all major attack categories and their relationships.
Model Distillation Security Lab
Extract model capabilities through distillation techniques using only black-box API access.
Multi-Objective Attack Optimization
Optimize attack payloads for multiple simultaneous objectives: jailbreaking, data extraction, and defense evasion.
Multimodal Attack Chain Lab
Chain attacks across text, image, and structured data modalities to exploit multimodal system vulnerabilities.
Interpretability-Guided Attack Design
Use mechanistic interpretability to identify exploitable circuits and design targeted attacks.
Attack Technique Index
Comprehensive index of attack techniques organized by target, difficulty, and defense-bypass approach.
Purple Teaming for AI
Collaborative attack-defense exercises for AI systems: structuring purple team engagements, real-time knowledge transfer, joint attack simulation, and measuring defensive improvement through iterative testing.
Embedding Inversion Attack Walkthrough
Walkthrough of inverting text embeddings to recover original documents from vector databases.
Knowledge Graph Injection Attack Walkthrough
Walkthrough of injecting adversarial facts into knowledge graphs consumed by LLM-based reasoning systems.
Real-Time Attack Detection System
Build a real-time attack detection system that monitors LLM interactions for adversarial patterns.
Building an Attack Replay Tool
Build a tool that records and replays attack sequences for regression testing and defense validation.