# cloud-ai
45 articles tagged "cloud-ai"
Practice Exam 2: Advanced AI Security
25-question advanced practice exam covering multimodal attacks, training pipeline security, cloud AI security, forensics, and governance.
Advanced Cloud AI Security Assessment
15-question advanced assessment covering cloud AI attack surfaces across AWS, Azure, and GCP: guardrail bypass, knowledge base exploitation, managed identity abuse, model customization risks, and multi-cloud attack paths.
Advanced Cloud AI Security Assessment (Assessment)
Advanced assessment on multi-cloud AI security, IAM misconfigurations, and endpoint hardening.
Cloud AI Security Study Guide
Study guide for cloud AI security covering AWS, Azure, GCP, and multi-cloud assessment strategies.
AWS AI Services Security Overview
Red team methodology for AWS AI services including Bedrock, SageMaker, Comprehend, and Rekognition: service enumeration, attack surface mapping, and exploitation techniques.
AWS Bedrock Guardrails Red Team Testing
Red team testing of AWS Bedrock Guardrails including content filters, denied topics, and PII handling.
AWS Bedrock Security Guide
Comprehensive security guide for AWS Bedrock including guardrails, IAM policies, and model access controls.
AWS SageMaker Security Assessment
Security assessment of AWS SageMaker including model hosting, endpoint security, and notebook vulnerabilities.
Azure AI Studio Security Assessment
Security assessment of Azure AI Studio including prompt flow, model catalog, and deployment security.
Azure OpenAI Security Guide
Security guide for Azure OpenAI Service including content filtering, managed identity, and network isolation.
Cloud AI Data Residency and Sovereignty
Managing data residency and sovereignty requirements for cloud-based AI systems across jurisdictions.
Cloud AI IAM Misconfigurations
Common IAM misconfigurations in cloud AI services and their exploitation for unauthorized model access.
Cloud AI Logging and Forensics
Setting up comprehensive logging and forensic capabilities for cloud-deployed AI systems.
Cloud Model Endpoint Security
Securing model endpoints in cloud deployments including authentication, authorization, and traffic management.
GCP AI Services Security Overview
Red team methodology for GCP AI services including Vertex AI, Model Garden, and AI Platform: service enumeration, service account exploitation, and attack surface mapping.
GCP AI Platform Threat Analysis
Threat analysis of GCP AI platform services including AutoML, custom training, and prediction endpoints.
GCP Vertex AI Security Guide
Security guide for GCP Vertex AI including model garden, endpoints, and Gemini API security.
Multi-Cloud AI Security Strategy (Cloud AI Security)
Security strategy for organizations using AI services across multiple cloud providers.
Serverless AI Security Considerations
Security considerations for AI workloads running on serverless platforms including Lambda, Cloud Functions, and Azure Functions.
AI Infrastructure Exploitation
Methodology for exploiting GPU clusters, model serving frameworks (Triton, vLLM, Ollama), Kubernetes ML platforms, and cloud AI services, plus cost amplification attacks.
Lab: Azure Content Filter Evasion
Hands-on lab for mapping and testing Azure OpenAI Service content filtering categories, severity levels, and bypass techniques.
Lab: AWS Bedrock Guardrails Testing
Hands-on lab for systematically testing and bypassing AWS Bedrock's built-in guardrails including content filters, denied topics, and word filters.
Chapter Assessment: Cloud AI
15-question calibrated assessment testing your understanding of cloud AI platform security.