# cloud-ai
22 articles tagged with “cloud-ai”
Practice Exam 2: Advanced AI Security
25-question advanced practice exam covering multimodal attacks, training pipeline security, cloud AI security, forensics, and governance.
Advanced Cloud AI Security Assessment
15-question advanced assessment covering cloud AI attack surfaces across AWS, Azure, and GCP: guardrail bypass, knowledge base exploitation, managed identity abuse, model customization risks, and multi-cloud attack paths.
Advanced Cloud AI Security Assessment (Assessment)
Advanced assessment on multi-cloud AI security, IAM misconfigurations, and endpoint hardening.
Cloud AI Security Study Guide
Study guide for cloud AI security covering AWS, Azure, GCP, and multi-cloud assessment strategies.
AWS AI Services Security Overview
Red team methodology for AWS AI services including Bedrock, SageMaker, Comprehend, and Rekognition: service enumeration, attack surface mapping, and exploitation techniques.
AWS Bedrock Guardrails Red Team Testing
Red team testing of AWS Bedrock Guardrails including content filters, denied topics, and PII handling.
AWS Bedrock Security Guide
Comprehensive security guide for AWS Bedrock including guardrails, IAM policies, and model access controls.
AWS SageMaker Security Assessment
Security assessment of AWS SageMaker including model hosting, endpoint security, and notebook vulnerabilities.
Azure AI Studio Security Assessment
Security assessment of Azure AI Studio including prompt flow, model catalog, and deployment security.
Azure OpenAI Security Guide
Security guide for Azure OpenAI Service including content filtering, managed identity, and network isolation.
Cloud AI Data Residency and Sovereignty
Managing data residency and sovereignty requirements for cloud-based AI systems across jurisdictions.
Cloud AI IAM Misconfigurations
Common IAM misconfigurations in cloud AI services and their exploitation for unauthorized model access.
Cloud AI Logging and Forensics
Setting up comprehensive logging and forensic capabilities for cloud-deployed AI systems.
Cloud Model Endpoint Security
Securing model endpoints in cloud deployments including authentication, authorization, and traffic management.
GCP AI Services Security Overview
Red team methodology for GCP AI services including Vertex AI, Model Garden, and AI Platform: service enumeration, service account exploitation, and attack surface mapping.
GCP AI Platform Threat Analysis
Threat analysis of GCP AI platform services including AutoML, custom training, and prediction endpoints.
GCP Vertex AI Security Guide
Security guide for GCP Vertex AI including Model Garden, endpoints, and Gemini API security.
Multi-Cloud AI Security Strategy (Cloud AI Security)
Security strategy for organizations using AI services across multiple cloud providers.
Serverless AI Security Considerations
Security considerations for AI workloads running on serverless platforms including Lambda, Cloud Functions, and Azure Functions.
AI Infrastructure Exploitation
Methodology for exploiting GPU clusters, model-serving frameworks (Triton, vLLM, Ollama), Kubernetes ML platforms, and cloud AI services, plus cost amplification attacks.
Lab: Azure Content Filter Evasion
Hands-on lab for mapping and testing Azure OpenAI Service content filtering categories, severity levels, and bypass techniques.
Lab: AWS Bedrock Guardrails Testing
Hands-on lab for systematically testing and bypassing AWS Bedrock's built-in guardrails including content filters, denied topics, and word filters.