# ai-security
21 articles tagged "ai-security"
Cloud AI Security
Comprehensive overview of cloud AI security for red teamers: shared responsibility models, attack surfaces across AWS, Azure, and GCP AI services, threat models for model APIs, data pipelines, and inference endpoints.
Multi-Cloud AI Security Overview
Security risks of multi-cloud AI deployments: cross-cloud attack surfaces, credential management challenges, inconsistent security controls, and governance gaps across AWS, Azure, and GCP AI services.
Code Generation Security
How AI coding assistants introduce security vulnerabilities through suggestion poisoning, training data extraction, insecure code generation, and IDE extension risks.
Impact Categories
Overview of the real-world consequences of successful AI attacks, from misinformation and harmful content to financial fraud and regulatory violations.
RAG, Data & Training Attacks
Overview of attacks targeting the data layer of AI systems, including RAG poisoning, training data manipulation, and data extraction techniques.
Glossary of AI Security Terms
Comprehensive glossary of AI security terminology used throughout the curriculum.
AI-Specific Threat Modeling
Adapting STRIDE for AI systems, building attack trees for LLM applications, identifying AI-specific threat categories, and producing actionable threat models that drive red team test plans.
Building AI-Specific Threat Models
Step-by-step walkthrough for creating threat models tailored to AI and LLM systems, covering asset identification, threat enumeration, attack tree construction, and risk prioritization.
Mapping the Attack Surface of AI Systems
Systematic walkthrough for identifying and mapping every attack surface in an AI system, from user inputs through model inference to output delivery and tool integrations.
The State of AI Red Teaming in 2025
A survey of the AI red teaming landscape in early 2025 — emerging attack vectors, industry adoption, tool maturity, and what to expect as the field evolves.
Finding CVEs with AI Red Teaming: A Research-Grounded Guide
How AI red teaming techniques have uncovered real-world CVEs in SQLite, OpenSSL, the Linux kernel, and UEFI bootloaders — with citations to the underlying research.
Welcome to redteams.ai
Introducing the AI red teaming knowledge base — why we built it and where it's headed.
AI Security Roundup — March 2026
A monthly digest of the most important AI security developments, tool updates, research highlights, and emerging attack vectors from March 2026.