# aws
29 articlestagged with “aws”
Cloud AI Forensics: AWS
Forensic investigation techniques for AWS AI services including SageMaker, Bedrock, and associated infrastructure logging and evidence collection.
Cloud AI Security Practice Exam 1
Practice exam covering AWS Bedrock, Azure OpenAI, and GCP Vertex AI security assessments.
Advanced Cloud AI Security Assessment
15-question advanced assessment covering cloud AI attack surfaces across AWS, Azure, and GCP: guardrail bypass, knowledge base exploitation, managed identity abuse, model customization risks, and multi-cloud attack paths.
Cloud AI Security Assessment
Test your knowledge of AWS, Azure, and GCP AI service security with 15 intermediate-level questions covering cloud-specific attack surfaces and misconfigurations.
Capstone: Cloud AI Security Assessment
Assess AI deployment security across AWS, Azure, and GCP cloud platforms, producing a comprehensive cloud AI security assessment report.
Cloud ML Platform Security (AWS/Azure/GCP)
Security comparison of cloud ML platforms including AWS SageMaker, Azure Machine Learning, and Google Vertex AI. IAM configuration, data security, model serving, and platform-specific attack surfaces.
Bedrock Attack Surface
Comprehensive red team methodology for Amazon Bedrock: model invocation API abuse, guardrails bypass techniques, custom model endpoint exploitation, IAM misconfigurations, knowledge base poisoning, and Bedrock Agents exploitation.
AWS IAM for AI Services
IAM exploitation patterns for AWS AI services: overprivileged roles, cross-account model access, service-linked roles, resource policies for Bedrock and SageMaker, and privilege escalation through AI-specific IAM actions.
AWS AI Services Security Overview
Red team methodology for AWS AI services including Bedrock, SageMaker, Comprehend, and Rekognition: service enumeration, attack surface mapping, and exploitation techniques.
SageMaker Exploitation
Red team attack methodology for Amazon SageMaker: endpoint exploitation, notebook instance attacks, training job manipulation, model artifact tampering, and VPC misconfigurations in ML workloads.
AWS Bedrock Agent Security Assessment
Security assessment of AWS Bedrock Agents including action groups, knowledge bases, and guardrail configurations.
AWS Bedrock Agents Security
Security assessment of AWS Bedrock Agents including action groups, knowledge bases, and guardrail integration.
AWS Bedrock Guardrails Red Team Testing
Red team testing of AWS Bedrock Guardrails including content filters, denied topics, and PII handling.
AWS Bedrock Security Deep Dive
Advanced security assessment of AWS Bedrock covering model invocation controls, guardrails bypass testing, VPC configurations, and red team methodologies for foundation model APIs.
AWS Bedrock Security Guide
Comprehensive security guide for AWS Bedrock including guardrails, IAM policies, and model access controls.
AWS SageMaker Security Assessment
Security assessment of AWS SageMaker including model hosting, endpoint security, and notebook vulnerabilities.
IAM Best Practices for Cloud AI Services
Cross-cloud IAM best practices for securing AI services on AWS, Azure, and GCP, covering least privilege, service identity management, cross-account access, and policy automation.
Shared Responsibility Model for Cloud AI Security
Understanding the division of security responsibilities between cloud providers and customers for AI/ML workloads across AWS, Azure, and GCP, with specific guidance for LLM deployments.
Cloud AI Security
Comprehensive overview of cloud AI security for red teamers: shared responsibility models, attack surfaces across AWS, Azure, and GCP AI services, threat models for model APIs, data pipelines, and inference endpoints.
Security Controls Comparison Matrix
Side-by-side comparison of AWS, Azure, and GCP AI security controls: IAM patterns, content filtering, guardrails, network isolation, logging, and threat detection across cloud providers.
Cloud AI Infrastructure Attacks
Security assessment of cloud-hosted AI/ML platforms including AWS SageMaker, Azure ML, and GCP Vertex AI -- IAM misconfigurations, model theft, and data exposure.
AWS SageMaker Attack Surface
Security assessment of AWS SageMaker -- IAM role exploitation, endpoint abuse, notebook server attacks, and training pipeline manipulation.
Lab: AWS Bedrock Guardrails Testing
Hands-on lab for systematically testing and bypassing AWS Bedrock's built-in guardrails including content filters, denied topics, and word filters.
Cloud AI Security Cheat Sheet
Quick reference comparing AI security controls across AWS, Azure, and GCP -- covering IAM, networking, encryption, monitoring, and AI-specific services.
AWS SageMaker Red Teaming
End-to-end walkthrough for red teaming ML models deployed on AWS SageMaker: endpoint enumeration, IAM policy analysis, model extraction testing, inference pipeline exploitation, and CloudTrail log review.
AWS Bedrock Red Team Walkthrough
Complete guide to red teaming AWS Bedrock deployments: testing guardrails bypass techniques, knowledge base data exfiltration, agent prompt injection, model customization abuse, and CloudTrail evasion.
AWS Bedrock Red Team Walkthrough (Platform Walkthrough)
End-to-end walkthrough for red teaming AI systems on AWS Bedrock: setting up access, invoking models via the Converse API, testing Bedrock Guardrails, exploiting knowledge bases, and analyzing CloudTrail logs.
Cloud AI Platform Walkthroughs
Hands-on walkthroughs for red teaming AI systems deployed on major cloud platforms: AWS Bedrock, Azure OpenAI, Google Vertex AI, and Hugging Face Hub.
Testing AWS Bedrock Deployments
Red team testing guide for models deployed via AWS Bedrock including guardrails and access controls.