# aws
標記為「aws」的 58 篇文章
Cloud AI Forensics: AWS
Forensic investigation techniques for AWS AI services including SageMaker, Bedrock, and associated infrastructure logging and evidence collection.
Cloud AI Security Practice Exam 1
Practice exam covering AWS Bedrock, Azure OpenAI, and GCP Vertex AI security assessments.
Advanced Cloud AI Security Assessment
15-question advanced assessment covering cloud AI attack surfaces across AWS, Azure, and GCP: guardrail bypass, knowledge base exploitation, managed identity abuse, model customization risks, and multi-cloud attack paths.
Cloud AI Security Assessment
Test your knowledge of AWS, Azure, and GCP AI service security with 15 intermediate-level questions covering cloud-specific attack surfaces and misconfigurations.
Capstone: Cloud AI Security Assessment
Assess AI deployment security across AWS, Azure, and GCP cloud platforms, producing a comprehensive cloud AI security assessment report.
Cloud ML Platform Security (AWS/Azure/GCP)
Security comparison of cloud ML platforms including AWS SageMaker, Azure Machine Learning, and Google Vertex AI. IAM configuration, data security, model serving, and platform-specific attack surfaces.
Bedrock Attack Surface
Comprehensive red team methodology for Amazon Bedrock: model invocation API abuse, guardrails bypass techniques, custom model endpoint exploitation, IAM misconfigurations, knowledge base poisoning, and Bedrock Agents exploitation.
AWS IAM for AI Services
IAM exploitation patterns for AWS AI services: overprivileged roles, cross-account model access, service-linked roles, resource policies for Bedrock and SageMaker, and privilege escalation through AI-specific IAM actions.
AWS AI Services Security Overview
Red team methodology for AWS AI services including Bedrock, SageMaker, Comprehend, and Rekognition: service enumeration, attack surface mapping, and exploitation techniques.
SageMaker Exploitation
Red team attack methodology for Amazon SageMaker: endpoint exploitation, notebook instance attacks, training job manipulation, model artifact tampering, and VPC misconfigurations in ML workloads.
AWS Bedrock Agent Security Assessment
Security assessment of AWS Bedrock Agents including action groups, knowledge bases, and guardrail configurations.
AWS Bedrock Agents Security
Security assessment of AWS Bedrock Agents including action groups, knowledge bases, and guardrail integration.
AWS Bedrock Guardrails Red Team Testing
Red team testing of AWS Bedrock Guardrails including content filters, denied topics, and PII handling.
AWS Bedrock Security Deep Dive
Advanced security assessment of AWS Bedrock covering model invocation controls, guardrails bypass testing, VPC configurations, and red team methodologies for foundation model APIs.
AWS Bedrock Security Guide
Comprehensive security guide for AWS Bedrock including guardrails, IAM policies, and model access controls.
AWS SageMaker Security Assessment
Security assessment of AWS SageMaker including model hosting, endpoint security, and notebook vulnerabilities.
IAM Best Practices for Cloud AI Services
Cross-cloud IAM best practices for securing AI services on AWS, Azure, and GCP, covering least privilege, service identity management, cross-account access, and policy automation.
Shared Responsibility Model for Cloud AI Security
Understanding the division of security responsibilities between cloud providers and customers for AI/ML workloads across AWS, Azure, and GCP, with specific guidance for LLM deployments.
Cloud AI Security
Comprehensive overview of cloud AI security for red teamers: shared responsibility models, attack surfaces across AWS, Azure, and GCP AI services, threat models for model APIs, data pipelines, and inference endpoints.
Security Controls Comparison Matrix
Side-by-side comparison of AWS, Azure, and GCP AI security controls: IAM patterns, content filtering, guardrails, network isolation, logging, and threat detection across cloud providers.
Cloud AI Infrastructure Attacks
Security assessment of cloud-hosted AI/ML platforms including AWS SageMaker, Azure ML, and GCP Vertex AI -- IAM misconfigurations, model theft, and data exposure.
AWS SageMaker Attack Surface
Security assessment of AWS SageMaker -- IAM role exploitation, endpoint abuse, notebook server attacks, and training pipeline manipulation.
Lab: AWS Bedrock Guardrails Testing
Hands-on lab for systematically testing and bypassing AWS Bedrock's built-in guardrails including content filters, denied topics, and word filters.
Cloud AI Security Cheat Sheet
Quick reference comparing AI security controls across AWS, Azure, and GCP -- covering IAM, networking, encryption, monitoring, and AI-specific services.
AWS SageMaker Red Teaming
End-to-end walkthrough for red teaming ML models deployed on AWS SageMaker: endpoint enumeration, IAM policy analysis, model extraction testing, inference pipeline exploitation, and CloudTrail log review.
AWS Bedrock Red Team Walkthrough
Complete guide to red teaming AWS Bedrock deployments: testing guardrails bypass techniques, knowledge base data exfiltration, agent prompt injection, model customization abuse, and CloudTrail evasion.
AWS Bedrock Red Team Walkthrough (Platform Walkthrough)
End-to-end walkthrough for red teaming AI systems on AWS Bedrock: setting up access, invoking models via the Converse API, testing Bedrock Guardrails, exploiting knowledge bases, and analyzing CloudTrail logs.
Cloud AI Platform Walkthroughs
Hands-on walkthroughs for red teaming AI systems deployed on major cloud platforms: AWS Bedrock, Azure OpenAI, Google Vertex AI, and Hugging Face Hub.
Testing AWS Bedrock Deployments
Red team testing guide for models deployed via AWS Bedrock including guardrails and access controls.
Cloud AI Forensics: AWS
Forensic investigation techniques for AWS AI services including SageMaker, Bedrock, and associated infrastructure logging and evidence collection.
Cloud AI 安全 Practice Exam 1
Practice exam covering AWS Bedrock, Azure OpenAI, and GCP Vertex AI security assessments.
進階 Cloud AI 安全 評量
15-question advanced assessment covering cloud AI attack surfaces across AWS, Azure, and GCP: guardrail bypass, knowledge base exploitation, managed identity abuse, model customization risks, and multi-cloud attack paths.
紅隊演練雲端 AI 服務:實務指南
在 AWS、Azure 與 GCP 上紅隊演練 AI 服務的實務指南——涵蓋共享責任邊界、服務特定攻擊面與雲端原生安全控制。
Capstone: Cloud AI 安全 評量
Assess AI deployment security across AWS, Azure, and GCP cloud platforms, producing a comprehensive cloud AI security assessment report.
Cloud ML Platform 安全 (AWS/Azure/GCP)
安全 comparison of cloud ML platforms including AWS SageMaker, Azure Machine Learning, and Google Vertex AI. IAM configuration, data security, model serving, and platform-specific attack surfaces.
Bedrock 攻擊 Surface
Comprehensive red team methodology for Amazon Bedrock: model invocation API abuse, guardrails bypass techniques, custom model endpoint exploitation, IAM misconfigurations, knowledge base poisoning, and Bedrock 代理s exploitation.
AWS IAM for AI Services
IAM exploitation patterns for AWS AI services: overprivileged roles, cross-account model access, service-linked roles, resource policies for Bedrock and SageMaker, and privilege escalation through AI-specific IAM actions.
AWS AI 服務安全概觀
為 AWS AI 服務之紅隊方法論,含 Bedrock、SageMaker、Comprehend 與 Rekognition:服務列舉、攻擊面對應與利用技術。
SageMaker 利用ation
Red team attack methodology for Amazon SageMaker: endpoint exploitation, notebook instance attacks, training job manipulation, model artifact tampering, and VPC misconfigurations in ML workloads.
AWS Bedrock 代理 安全 評量
安全 assessment of AWS Bedrock 代理s including action groups, knowledge bases, and guardrail configurations.
AWS Bedrock 代理s 安全
安全 assessment of AWS Bedrock 代理s including action groups, knowledge bases, and guardrail integration.
AWS Bedrock Guardrails 紅隊 Testing
Red team testing of AWS Bedrock Guardrails including content filters, denied topics, and PII handling.
AWS Bedrock 安全 Deep Dive
進階 security assessment of AWS Bedrock covering model invocation controls, guardrails bypass testing, VPC configurations, and red team methodologies for foundation model APIs.
AWS Bedrock 安全 指南
Comprehensive security guide for AWS Bedrock including guardrails, IAM policies, and model access controls.
AWS SageMaker 安全 評量
安全 assessment of AWS SageMaker including model hosting, endpoint security, and notebook vulnerabilities.
IAM Best Practices for Cloud AI Services
Cross-cloud IAM best practices for securing AI services on AWS, Azure, and GCP, covering least privilege, service identity management, cross-account access, and policy automation.
Shared Responsibility 模型 for Cloud AI 安全
Understanding the division of security responsibilities between cloud providers and customers for AI/ML workloads across AWS, Azure, and GCP, with specific guidance for LLM deployments.
雲端 AI 安全
給紅隊員的雲端 AI 安全完整概覽:共同責任模型、跨 AWS、Azure 與 GCP AI 服務的攻擊面、模型 API、資料管線與推論端點的威脅模型。
安全 Controls Comparison Matrix
Side-by-side comparison of AWS, Azure, and GCP AI security controls: IAM patterns, content filtering, guardrails, network isolation, logging, and threat detection across cloud providers.
雲端 AI 基礎設施攻擊
雲端託管 AI/ML 平台的安全評估,包含 AWS SageMaker、Azure ML 與 GCP Vertex AI——IAM 設定錯誤、模型竊取與資料暴露。
AWS SageMaker 攻擊 Surface
安全 assessment of AWS SageMaker -- IAM role exploitation, endpoint abuse, notebook server attacks, and training pipeline manipulation.
實驗室: AWS Bedrock Guardrails Testing
Hands-on lab for systematically testing and bypassing AWS Bedrock's built-in guardrails including content filters, denied topics, and word filters.
雲端 AI 安全速查表
跨 AWS、Azure 與 GCP 的 AI 安全控制快速參考——涵蓋 IAM、網路、加密、監控與 AI 特定服務。
AWS SageMaker 紅隊演練
End-to-end walkthrough for red teaming ML models deployed on AWS SageMaker: endpoint enumeration, IAM policy analysis, model extraction testing, inference pipeline exploitation, and CloudTrail log review.
AWS Bedrock 紅隊 導覽
Complete guide to red teaming AWS Bedrock deployments: testing guardrails bypass techniques, knowledge base data exfiltration, agent prompt injection, model customization abuse, and CloudTrail evasion.
AWS Bedrock 紅隊 導覽 (Platform 導覽)
End-to-end walkthrough for red teaming AI systems on AWS Bedrock: setting up access, invoking models via the Converse API, testing Bedrock Guardrails, exploiting knowledge bases, and analyzing CloudTrail logs.
雲端 AI 平台導覽
在主要雲端平台上紅隊演練 AI 系統的動手導覽:AWS Bedrock、Azure OpenAI、Google Vertex AI 與 Hugging Face Hub。
Testing AWS Bedrock Deployments
Red team testing guide for models deployed via AWS Bedrock including guardrails and access controls.