# cloud
101 articles tagged "cloud"
Cloud AI Security Practice Exam 1
Practice exam covering AWS Bedrock, Azure OpenAI, and GCP Vertex AI security assessments.
Cloud AI Security Practice Exam 2
Advanced practice exam on multi-cloud AI security, IAM misconfigurations, and cost-based attacks.
Multi-Cloud AI Security Assessment
Assessment spanning AWS Bedrock, Azure OpenAI, and GCP Vertex AI security configurations and misconfigurations.
Cloud AI Platforms Assessment
Assessment covering AWS Bedrock, Azure OpenAI, GCP Vertex AI, and multi-cloud security strategies.
Cloud AI Security Assessment (Assessment)
Assessment covering AWS Bedrock, Azure OpenAI, GCP Vertex AI security configurations and threats.
IAM for AI Systems Assessment
Assessment of identity and access management vulnerabilities specific to AI service deployments.
Skill Verification: Cloud AI Security
Practical verification of cloud AI platform security assessment skills.
Skill Verification: Cloud AI Security (Assessment)
Hands-on verification of cloud AI service security assessment across AWS, Azure, and GCP.
Capstone: Cloud AI Security Assessment
Assess AI deployment security across AWS, Azure, and GCP cloud platforms, producing a comprehensive cloud AI security assessment report.
Cloud ML Platform Security (AWS/Azure/GCP)
Security comparison of cloud ML platforms including AWS SageMaker, Azure Machine Learning, and Google Vertex AI. IAM configuration, data security, model serving, and platform-specific attack surfaces.
AWS Bedrock Agent Security Assessment
Security assessment of AWS Bedrock Agents including action groups, knowledge bases, and guardrail configurations.
AWS Bedrock Agents Security
Security assessment of AWS Bedrock Agents including action groups, knowledge bases, and guardrail integration.
Azure AI Foundry Security Guide
Comprehensive security guide for Azure AI Foundry including model deployment, prompt flow, and content safety.
Azure AI Content Safety Testing
Testing Azure AI Content Safety service for bypass vulnerabilities and configuration weaknesses.
Cloud AI API Key Management Security
Best practices and attack vectors for API key management in cloud AI service deployments.
Cloud AI Compliance Automation
Automating AI compliance checks and security assessments using cloud-native tools and policy-as-code approaches.
Cloud AI Container and Runtime Security
Security of containerized AI model serving including image scanning, runtime protection, and orchestration security.
Cloud AI Cost Exploitation Attacks
Detailed analysis of cost-based attacks against cloud AI services including prompt inflation and resource exhaustion.
Cloud AI Disaster Recovery Planning
Disaster recovery and business continuity planning for cloud AI deployments including model backup and failover.
Cloud Fine-Tuning Service Security
Security assessment of cloud-based fine-tuning services including data isolation, model access, and output controls.
Cloud AI Security Monitoring Setup
Setting up comprehensive security monitoring for cloud AI deployments using native cloud tools and third-party solutions.
Network Isolation for Cloud AI Workloads
Implementing network isolation strategies for cloud AI deployments including private endpoints, VPC configurations, service mesh integration, and data plane segmentation for LLM inference and training workloads.
Cloud AI Network Security Architecture
Network security architecture for cloud AI deployments including VPC design, endpoints, and traffic inspection.
Cloud AI Prompt Caching Security
Security implications of prompt caching features in cloud AI services including cache poisoning and information leakage.
Cloud AI Secrets and Credential Management
Managing secrets, credentials, and sensitive configuration for cloud AI applications securely.
Secrets Rotation for Cloud AI Deployments
Implementing automated secrets rotation strategies for API keys, model endpoint credentials, and service accounts used in cloud AI/LLM deployments across AWS, Azure, and GCP.
Shared Responsibility Model for Cloud AI Security
Understanding the division of security responsibilities between cloud providers and customers for AI/ML workloads across AWS, Azure, and GCP, with specific guidance for LLM deployments.
Cloud Model Registry Security
Security of cloud model registries including SageMaker Model Registry, Azure ML Registry, and Vertex AI Model Registry.
GCP Model Garden Security
Security assessment of GCP Model Garden including model deployment, versioning, and access control.
GCP Vertex AI Agent Builder Security
Security assessment of Google Vertex AI Agent Builder including grounding, tool use, and safety settings.
Hugging Face Inference Endpoints Security
Security analysis of Hugging Face Inference Endpoints including model isolation and API security.
Multi-Cloud AI Attack Surface Analysis
Comparative attack surface analysis across AWS, Azure, and GCP AI service portfolios.
June 2026: Cloud AI Security Challenge
Find and document vulnerabilities in a cloud-deployed AI service covering API security, model serving infrastructure, authentication, and data handling.
Azure ML Attack Surface
Security assessment of Azure Machine Learning -- managed identity exploitation, workspace security, compute instance attacks, and endpoint vulnerabilities.
Cloud AI Infrastructure Attacks
Security assessment of cloud-hosted AI/ML platforms including AWS SageMaker, Azure ML, and GCP Vertex AI -- IAM misconfigurations, model theft, and data exposure.
AWS SageMaker Attack Surface
Security assessment of AWS SageMaker -- IAM role exploitation, endpoint abuse, notebook server attacks, and training pipeline manipulation.
GCP Vertex AI Attack Surface
Security assessment of Google Cloud Vertex AI -- service account exploitation, endpoint security, notebook attacks, and pipeline manipulation.
Lab: Cloud AI Security Assessment
Conduct an end-to-end security assessment of a cloud-deployed AI service, covering API security, model vulnerabilities, data handling, and infrastructure configuration.
Lab: Cloud AI Assessment
Hands-on lab for conducting an end-to-end security assessment of a cloud-deployed AI system including infrastructure review, API testing, model security evaluation, and data flow analysis.
CTF: Cloud AI Heist
Extract secrets from a cloud-deployed AI application by exploiting misconfigurations, SSRF, metadata endpoints, and model-level vulnerabilities in a realistic cloud environment.
Cloud Infiltrator Challenge
Navigate through cloud AI service misconfigurations to access a protected model endpoint and extract its secrets.
Lab: Azure Content Filter Evasion
Hands-on lab for mapping and testing Azure OpenAI Service content filtering categories, severity levels, and bypass techniques.
Lab: AWS Bedrock Guardrails Testing
Hands-on lab for systematically testing and bypassing AWS Bedrock's built-in guardrails including content filters, denied topics, and word filters.
AWS SageMaker Red Teaming
End-to-end walkthrough for red teaming ML models deployed on AWS SageMaker: endpoint enumeration, IAM policy analysis, model extraction testing, inference pipeline exploitation, and CloudTrail log review.
Azure ML Security Testing
End-to-end walkthrough for security testing Azure Machine Learning endpoints: workspace enumeration, managed online endpoint exploitation, compute instance assessment, data store access review, and Azure Monitor analysis.
Azure OpenAI Red Team Walkthrough (Platform Walkthrough)
End-to-end walkthrough for red teaming Azure OpenAI deployments: deployment configuration review, content filtering bypass testing, managed identity exploitation, prompt flow assessment, and diagnostic log analysis.
AWS Bedrock Red Team Walkthrough (Platform Walkthrough)
End-to-end walkthrough for red teaming AI systems on AWS Bedrock: setting up access, invoking models via the Converse API, testing Bedrock Guardrails, exploiting knowledge bases, and analyzing CloudTrail logs.
GCP Vertex AI Security Testing
End-to-end walkthrough for security testing Vertex AI deployments on Google Cloud: endpoint enumeration, IAM policy analysis, model serving exploitation, pipeline assessment, and Cloud Audit Logs review.
Cloud AI Platform Walkthroughs
Hands-on walkthroughs for red teaming AI systems deployed on major cloud platforms: AWS Bedrock, Azure OpenAI, Google Vertex AI, and Hugging Face Hub.
Vertex AI Red Team Walkthrough
End-to-end walkthrough for red teaming Google Cloud Vertex AI: prediction endpoint testing, Model Garden security assessment, Feature Store probing, and Cloud Logging analysis.
Chapter Assessment: Infrastructure
A 15-question calibrated assessment testing your understanding of AI infrastructure security: supply chain, API security, cloud deployment, and model serving.