# ai-security
9 articles tagged with “ai-security”
## Cloud AI Security

Comprehensive overview of cloud AI security for red teamers: shared responsibility models, attack surfaces across AWS, Azure, and GCP AI services, and threat models for model APIs, data pipelines, and inference endpoints.

## Multi-Cloud AI Security Overview

Security risks of multi-cloud AI deployments: cross-cloud attack surfaces, credential management challenges, inconsistent security controls, and governance gaps across AWS, Azure, and GCP AI services.

## Code Generation Security

How AI coding assistants introduce security vulnerabilities through suggestion poisoning, training data extraction, insecure code generation, and IDE extension risks.

## Impact Categories

Overview of the real-world consequences of successful AI attacks, from misinformation and harmful content to financial fraud and regulatory violations.

## RAG, Data & Training Attacks

Overview of attacks targeting the data layer of AI systems, including RAG poisoning, training data manipulation, and data extraction techniques.

## Glossary of AI Security Terms

Comprehensive glossary of AI security terminology used throughout the curriculum.

## AI-Specific Threat Modeling

Adapting STRIDE for AI systems, building attack trees for LLM applications, identifying AI-specific threat categories, and producing actionable threat models that drive red team test plans.

## Building AI-Specific Threat Models

Step-by-step walkthrough for creating threat models tailored to AI and LLM systems, covering asset identification, threat enumeration, attack tree construction, and risk prioritization.

## Mapping the Attack Surface of AI Systems

Systematic walkthrough for identifying and mapping every attack surface in an AI system, from user inputs through model inference to output delivery and tool integrations.