AWS AI Services Security Overview
Red team methodology for AWS AI services including Bedrock, SageMaker, Comprehend, and Rekognition: service enumeration, attack surface mapping, and exploitation techniques.
AWS offers a broad portfolio of AI and machine learning services, each with distinct security boundaries, IAM integration patterns, and attack surfaces. For red teamers, understanding which services are in use and how they are configured is the first step in any cloud AI engagement. AWS AI services range from fully managed foundation model APIs (Bedrock) to self-managed ML platforms (SageMaker) to purpose-built AI services (Comprehend, Rekognition, Textract), and each tier shifts the shared responsibility boundary differently.
Service Landscape
Amazon Bedrock
Amazon Bedrock is AWS's managed foundation model service. It provides API access to models from Anthropic, Meta, Mistral, Cohere, and Amazon's own Titan family. From a red team perspective, Bedrock is the highest-value target in most AWS AI deployments because it handles the most sensitive data (user prompts, system prompts, business logic) and connects to downstream systems through Bedrock Agents.
Key Bedrock components to assess:
| Component | Function | Attack Surface |
|---|---|---|
| Model invocation API | Send prompts, receive completions | Prompt injection, guardrail bypass, cost abuse |
| Custom models | Fine-tuned models with customer data | Training data extraction, model theft |
| Knowledge bases | RAG with customer data sources | Knowledge base poisoning, data exfiltration |
| Agents | Tool-calling with AWS service integration | Tool abuse, SSRF, privilege escalation |
| Guardrails | Content filtering and safety controls | Bypass techniques, filter enumeration |
| Model evaluation | Automated model testing | Test data exposure, evaluation manipulation |
For detailed Bedrock exploitation techniques, see Bedrock Attack Surface.
Amazon SageMaker
SageMaker is AWS's full ML lifecycle platform covering data labeling, notebook-based development, training job execution, model hosting, and MLOps pipelines. It has a much larger attack surface than Bedrock because customers manage more of the stack.
Key SageMaker components:
| Component | Function | Attack Surface |
|---|---|---|
| Notebook instances | Jupyter-based development | Code execution, credential access, lateral movement |
| Training jobs | Model training on managed compute | Training data access, compute abuse, artifact tampering |
| Endpoints | Model serving infrastructure | Inference abuse, endpoint exposure, model extraction |
| Pipelines | MLOps automation | Pipeline poisoning, step manipulation |
| Model Registry | Model version management | Model replacement, supply chain attacks |
| Feature Store | Feature management for ML | Feature poisoning, data manipulation |
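Several of these components can be triaged directly from describe-call output. A minimal sketch flagging risky notebook instance settings, assuming the descriptions were already fetched with `aws sagemaker describe-notebook-instance` (field names follow that API; the instance below is sample data):

```python
def notebook_risk_flags(desc):
    """Flag risky settings in a describe-notebook-instance response."""
    flags = []
    if desc.get("RootAccess") == "Enabled":
        flags.append("root access enabled: full control of the instance OS")
    if desc.get("DirectInternetAccess") == "Enabled":
        flags.append("direct internet access: easy exfiltration path")
    if desc.get("RoleArn"):
        # Any attached role is reachable from code running in the notebook,
        # so its permissions define the blast radius of a compromise.
        flags.append("attached role: " + desc["RoleArn"])
    return flags

sample = {
    "NotebookInstanceName": "analytics-dev",  # sample data, not a real instance
    "RootAccess": "Enabled",
    "DirectInternetAccess": "Enabled",
    "RoleArn": "arn:aws:iam::111122223333:role/SageMakerExecRole",
}
for flag in notebook_risk_flags(sample):
    print(flag)
```

A notebook with all three flags set is a prime lateral-movement foothold: code execution, an outbound network path, and a role to impersonate.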
For detailed SageMaker exploitation, see SageMaker Exploitation.
Amazon Comprehend
Comprehend is a natural language processing service for sentiment analysis, entity recognition, topic modeling, and PII detection. While it lacks the general-purpose attack surface of Bedrock, it is often used in data pipelines where its output drives downstream decisions.
Red team relevance:
- PII detection bypass: If Comprehend is the sole PII detection mechanism, crafting inputs that evade its detection leads to PII leakage in downstream systems
- Sentiment manipulation: When business logic depends on sentiment scores (e.g., routing angry customers to human agents), manipulated inputs can game routing decisions
- Custom model training: Comprehend custom classifiers trained on customer data may be exfiltrable through the model export functionality
- IAM overprivilege: `comprehend:*` policies grant access to all Comprehend operations, including model training data access
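The PII detection bypass point can be illustrated without calling the service: insert invisible characters so the input no longer matches token-level detector patterns while rendering identically to a human reader. This is a minimal sketch of one transformation class (homoglyph substitution and encoding tricks are others); whether any given perturbation evades Comprehend must be verified empirically against the live service.

```python
# Zero-width space: invisible when rendered, but it breaks byte-level
# and token-level pattern matches on the perturbed string.
ZWSP = "\u200b"

def perturb_entity(text: str) -> str:
    """Insert a zero-width space between every character of `text`."""
    return ZWSP.join(text)

def strip_zero_width(text: str) -> str:
    """Reverse the perturbation (what a normalizing defense should do)."""
    return text.replace(ZWSP, "")

original = "john.doe@example.com"
evasion = perturb_entity(original)

# The perturbed string is no longer an exact match for the original...
assert evasion != original
# ...but normalization recovers it, which is why pipelines should
# normalize Unicode before running PII detection, not after.
assert strip_zero_width(evasion) == original
```

The defensive takeaway mirrors the offensive one: if Comprehend is the sole PII gate, test it with normalized and unnormalized variants of the same entity.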
Amazon Rekognition
Rekognition provides image and video analysis including object detection, facial recognition, content moderation, and text extraction. Its primary red team relevance is in adversarial ML attacks against visual classifiers.
- Content moderation bypass: Generating images that evade Rekognition's content moderation while containing prohibited content (adversarial perturbation)
- Facial recognition evasion: Techniques to avoid facial recognition detection or cause misidentification
- Custom labels exploitation: Custom Rekognition models trained on proprietary data may be extractable
- Data exposure: Images sent to Rekognition may be retained depending on configuration; enumeration of processing history can reveal sensitive data
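The perturbation idea behind the first two bullets can be sketched without any AWS dependency: an L-infinity-bounded attack changes each pixel value by at most epsilon, keeping the image visually unchanged while shifting the classifier's decision. Real attacks (e.g., FGSM-style methods) choose the perturbation direction from model gradients or query feedback; this sketch demonstrates only the budget constraint.

```python
import random

def linf_perturb(pixels, epsilon=8, seed=0):
    """Apply a random perturbation bounded by `epsilon` per value.

    `pixels` is a flat list of 0-255 ints standing in for image data.
    The random direction here is a placeholder for the gradient- or
    query-guided search used in practice.
    """
    rng = random.Random(seed)
    return [
        max(0, min(255, p + rng.randint(-epsilon, epsilon)))
        for p in pixels
    ]

image = [128] * 16            # stand-in for a tiny grayscale image
adv = linf_perturb(image)

# Every pixel stays within the epsilon budget of the original,
# so the change is imperceptible at typical epsilon values.
assert all(abs(a - p) <= 8 for a, p in zip(adv, image))
```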
Additional AWS AI Services
| Service | Red Team Relevance |
|---|---|
| Amazon Textract | Document processing pipelines -- poisoned documents could inject content |
| Amazon Transcribe | Audio-to-text -- adversarial audio attacks, transcript manipulation |
| Amazon Translate | Translation pipelines -- language-based filter bypass |
| Amazon Polly | Text-to-speech -- voice phishing content generation |
| Amazon Lex | Chatbot framework -- dialog flow manipulation |
| Amazon Kendra | Enterprise search with AI -- search result poisoning |
| Amazon Q | Enterprise AI assistant -- data access through conversational interface |
Enumeration and Reconnaissance
Discovering AI Services in Use
The first step in any AWS AI red team engagement is identifying which AI services are active and how they are configured. Because AI services are enabled and consumed per region, repeat this enumeration across every enabled region rather than only the organization's primary one.
```bash
# List Bedrock model access
aws bedrock list-foundation-models --region us-east-1 \
  --query 'modelSummaries[].{id:modelId,provider:providerName}' --output table

# Check for custom Bedrock models
aws bedrock list-custom-models --region us-east-1

# List Bedrock agents
aws bedrock-agent list-agents --region us-east-1

# List Bedrock knowledge bases
aws bedrock-agent list-knowledge-bases --region us-east-1

# Check for Bedrock guardrails
aws bedrock list-guardrails --region us-east-1

# List SageMaker endpoints (active models)
aws sagemaker list-endpoints --status-equals InService

# List SageMaker notebook instances
aws sagemaker list-notebook-instances --status-equals InService

# List SageMaker models
aws sagemaker list-models

# Check for Comprehend endpoints
aws comprehend list-endpoints

# Check for Rekognition custom models
aws rekognition list-project-policies --project-arn <arn>
```
IAM Policy Analysis
Identify principals with AI service permissions:
```bash
# Find roles with Bedrock permissions
# Look for bedrock:InvokeModel, bedrock:InvokeModelWithResponseStream,
# and bedrock:InvokeAgent in IAM policies

# Check the calling identity's Bedrock permissions
aws bedrock list-foundation-models 2>&1
aws bedrock-runtime invoke-model --model-id anthropic.claude-v2 \
  --cli-binary-format raw-in-base64-out \
  --body '{"prompt":"\n\nHuman: test\n\nAssistant:","max_tokens_to_sample":10}' \
  /dev/null 2>&1

# Enumerate SageMaker permissions
aws sagemaker list-endpoints 2>&1
aws sagemaker describe-endpoint --endpoint-name <name> 2>&1
```
Key IAM actions to look for:
| Action | Risk |
|---|---|
| `bedrock:InvokeModel` | Can call any allowed foundation model |
| `bedrock:CreateModelCustomizationJob` | Can fine-tune models with arbitrary data |
| `bedrock:InvokeAgent` | Can trigger agent actions with tool access |
| `sagemaker:CreateNotebookInstance` | Can create compute with IAM role access |
| `sagemaker:CreateEndpoint` | Can deploy models (compute cost risk) |
| `sagemaker:DescribeTrainingJob` | Can access training job configs and data locations |
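A quick way to triage policies pulled during recon is to match their statements against a watch-list like the table above. A minimal sketch, assuming the policy documents were already fetched (e.g., with `aws iam get-policy-version`); the watch-list and function name are illustrative:

```python
from fnmatch import fnmatch

# Watch-list of high-risk AI service actions (from the table above).
RISKY_ACTIONS = [
    "bedrock:InvokeModel",
    "bedrock:CreateModelCustomizationJob",
    "bedrock:InvokeAgent",
    "sagemaker:CreateNotebookInstance",
    "sagemaker:CreateEndpoint",
    "sagemaker:DescribeTrainingJob",
]

def flag_risky(policy_doc):
    """Return the risky actions granted by an IAM policy document.

    IAM action patterns may contain wildcards ('bedrock:*', '*'),
    so each Allow statement's actions are treated as glob patterns.
    """
    hits = set()
    for stmt in policy_doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for pattern in actions:
            for risky in RISKY_ACTIONS:
                # IAM matching is case-insensitive; normalize both sides.
                if fnmatch(risky.lower(), pattern.lower()):
                    hits.add(risky)
    return sorted(hits)

policy = {"Statement": [{"Effect": "Allow", "Action": "bedrock:*", "Resource": "*"}]}
print(flag_risky(policy))
```

Run against every inline and attached policy on the roles you can enumerate; a single wildcard statement often accounts for most of the findings.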
For detailed IAM exploitation patterns, see IAM for AI Services.
Common Misconfigurations
Overprivileged AI Roles
The most common AWS AI misconfiguration is overprivileged IAM roles. Organizations frequently grant `bedrock:*` or `sagemaker:*` to application roles because the fine-grained permission model is complex and poorly documented.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "bedrock:*",
      "Resource": "*"
    }
  ]
}
```
This policy allows the role to invoke any model, create custom models, manage agents, configure guardrails, and access knowledge bases. A compromised application with this policy gives the attacker full control of the organization's Bedrock infrastructure.
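For contrast, a scoped-down alternative restricts the role to invoking specific models only. A hedged example (the region and model ID in the ARN are placeholders; note that foundation model ARNs have an empty account field):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"
    }
  ]
}
```

During an engagement, the gap between what a role has and what its workload actually needs is often itself a reportable finding.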
Exposed Endpoints
SageMaker endpoints are invoked through the IAM-authenticated SageMaker Runtime API by default; VPC deployments can restrict access further, but misconfigurations in VPC settings, security groups, or endpoint policies can expose them more broadly than intended. Check for:
- SageMaker endpoints in public subnets with public IP addresses
- Security groups allowing inbound access from `0.0.0.0/0`
- Missing VPC endpoint policies that should restrict which principals can invoke endpoints
- Lambda functions with public API Gateway triggers that proxy to SageMaker endpoints without authentication
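The security group check above can be automated over the output of `aws ec2 describe-security-groups`. A sketch operating on already-fetched JSON (the group below is sample data in the shape of a `SecurityGroups[]` element):

```python
def open_ingress_rules(security_group):
    """Return ingress permissions that allow traffic from 0.0.0.0/0."""
    flagged = []
    for perm in security_group.get("IpPermissions", []):
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                flagged.append(perm)
                break
    return flagged

sg = {
    "GroupId": "sg-0123456789abcdef0",   # sample data
    "IpPermissions": [
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
         "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},
    ],
}
print(len(open_ingress_rules(sg)))  # only the world-open 443 rule is flagged
```

A world-open rule is only exploitable if the endpoint also sits on a reachable network path, so correlate flagged groups with subnet and routing configuration before reporting.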
Logging Gaps
AWS CloudTrail records AI service API calls, but CloudTrail alone does not capture prompt content or model responses. For Bedrock, that requires enabling model invocation logging, which delivers request and response bodies to CloudWatch Logs or S3. Many organizations leave it disabled, making it effectively impossible to detect prompt injection attacks or data exfiltration through model interactions.
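During an engagement, this gap can be confirmed by checking the account's invocation logging configuration (e.g., `aws bedrock get-model-invocation-logging-configuration`). A sketch that evaluates an already-fetched response; the field names follow the Bedrock `loggingConfig` shape as an assumption, so verify them against the current API reference:

```python
def invocation_logging_gaps(response):
    """Report logging gaps in a get-model-invocation-logging-configuration
    response. An empty response means no invocation logging at all."""
    cfg = response.get("loggingConfig")
    if not cfg:
        return ["model invocation logging disabled: prompts/responses not captured"]
    gaps = []
    # Assumed field names: cloudWatchConfig, s3Config, textDataDeliveryEnabled.
    if "cloudWatchConfig" not in cfg and "s3Config" not in cfg:
        gaps.append("no delivery destination configured")
    if not cfg.get("textDataDeliveryEnabled", False):
        gaps.append("text prompt/response bodies not delivered")
    return gaps

print(invocation_logging_gaps({}))  # nothing configured at all
print(invocation_logging_gaps({"loggingConfig": {
    "s3Config": {"bucketName": "audit-logs"},
    "textDataDeliveryEnabled": True,
}}))
```

From the red team side, a disabled configuration means prompt injection attempts against that account are unlikely to appear in any log a defender reviews.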
Related Topics
- Bedrock Attack Surface -- Detailed Bedrock exploitation techniques
- SageMaker Exploitation -- SageMaker attack methodology
- IAM for AI Services -- AWS IAM patterns for AI
- Cloud AI Security Overview -- Cross-provider cloud AI security
References
- AWS Bedrock Documentation -- Official Bedrock service documentation
- AWS SageMaker Security -- SageMaker security best practices
- AWS AI Services IAM Reference -- IAM actions for all AWS services