Cloud AI Platform Walkthroughs
Hands-on walkthroughs for red teaming AI systems deployed on major cloud platforms: AWS Bedrock, Azure OpenAI, Google Vertex AI, and Hugging Face Hub.
AI models rarely run in isolation. In production, they are deployed through cloud platform services that add layers of authentication, content filtering, rate limiting, logging, and orchestration on top of the base model. These platform layers are both a defense surface and an attack surface. A prompt injection that works against a raw model API may be blocked by platform-level content filtering, but that same platform may introduce new vulnerabilities through misconfigured IAM policies, overly permissive knowledge base access, or insecure default settings.
This section provides platform-specific walkthroughs that cover the complete workflow: provisioning access, understanding the platform's security model, testing platform-level guardrails, and identifying platform-specific misconfigurations that a model-only assessment would miss.
Why Platform-Specific Testing Matters
Testing a model through a cloud platform differs from testing the same model directly in several important ways:
Authentication and authorization. Cloud platforms wrap model access in IAM policies, API keys, managed identities, and service accounts. Misconfigured access controls can expose models to unauthorized users or grant excessive permissions to authorized ones.
Content filtering. Every major platform provides configurable content filtering that sits between the user and the model. These filters have their own bypass techniques, false positive rates, and configuration pitfalls.
Data integration. Platforms connect models to enterprise data through knowledge bases (Bedrock), data stores (Vertex AI), and file search (Azure OpenAI). These integrations create data exfiltration and injection attack surfaces that do not exist when testing the model in isolation.
Logging and monitoring. Platform logging can both help and hinder red teaming. Understanding what is logged (and what is not) is essential for realistic assessments.
Platform Comparison
| Feature | AWS Bedrock | Azure OpenAI | Vertex AI | Hugging Face |
|---|---|---|---|---|
| Model access | API (Invoke/Converse) | API (Chat/Completions) | API (Predict) | API + local inference |
| Content filtering | Guardrails | Content Safety | Responsible AI | Community + custom |
| Data integration | Knowledge Bases | On Your Data / File Search | Feature Store / RAG | Datasets + Spaces |
| Auth model | IAM roles + policies | Entra ID + RBAC | IAM + service accounts | API tokens + orgs |
| Logging | CloudTrail + CloudWatch | Azure Monitor + Diagnostic Logs | Cloud Logging | Inference API logs |
| Red team difficulty | Intermediate | Intermediate | Intermediate | Beginner |
Recommended Approach
For each platform, follow this assessment sequence:
Access and Authentication Review
Verify how access is provisioned. Test for overly permissive IAM policies, leaked credentials, and misconfigured service accounts. Determine whether the model endpoint is exposed beyond its intended audience.
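A quick way to start the access review is to scan policy documents for statements that pair a wildcard action with a wildcard resource. The sketch below is a minimal example; the policy JSON is hypothetical (in practice you would feed it documents retrieved with `aws iam get-policy-version` or the platform's equivalent).

```python
import json

# Hypothetical sample policy document -- the content is an assumption,
# standing in for output from `aws iam get-policy-version`.
POLICY = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "bedrock:*", "Resource": "*"},
    {"Effect": "Allow", "Action": "bedrock:InvokeModel",
     "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.*"}
  ]
}
""")

def overly_permissive(policy: dict) -> list[dict]:
    """Flag Allow statements that combine a wildcard action with a
    wildcard resource -- a common sign of excessive model access."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a.endswith("*") for a in actions) and "*" in resources:
            findings.append(stmt)
    return findings

for f in overly_permissive(POLICY):
    print("FLAG:", f["Action"], "on", f["Resource"])
```

A real review also covers `NotAction`/`NotResource` statements, condition keys, and trust policies on assumable roles; this check only catches the most obvious over-grant.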
Content Filter Assessment
Map the platform's content filtering configuration. Test each filter category at its configured threshold. Identify bypass techniques specific to the platform's filtering implementation.
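Mapping the filter configuration is easier with an explicit test matrix: one probe per (category, threshold) cell, so each result traces back to a specific filter setting. The category and threshold names below are assumptions modeled loosely on Bedrock Guardrails and Azure Content Safety tiers; substitute the target platform's actual configuration.

```python
from itertools import product

# Assumed filter taxonomy -- replace with the categories and severity
# tiers configured on the platform under test.
CATEGORIES = ["hate", "violence", "sexual", "self_harm", "prompt_attack"]
THRESHOLDS = ["low", "medium", "high"]

def build_probe_plan(categories, thresholds):
    """One probe per (category, threshold) cell, so each pass/block
    result maps directly back to the filter configuration under test."""
    return [
        {"category": c, "threshold": t,
         "id": f"{c}-{t}", "result": None}   # filled in during testing
        for c, t in product(categories, thresholds)
    ]

plan = build_probe_plan(CATEGORIES, THRESHOLDS)
print(f"{len(plan)} probes, e.g. {plan[0]['id']}")
```

Recording results per cell makes it obvious when a filter blocks at "low" but not "medium", or when one category is configured far more permissively than the rest.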
Data Integration Testing
If the model connects to knowledge bases, file stores, or databases, test for data exfiltration through the model, injection through connected data sources, and unauthorized access to data outside the intended scope.
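One concrete check for out-of-scope access: compare the sources cited in retrieved chunks against the data the assistant is supposed to expose. The citation structure and S3 prefix below are assumptions, sketched in the style of a Bedrock-like knowledge base response.

```python
# Assumed allowed data scope for this assistant.
ALLOWED_PREFIX = "s3://kb-public-docs/"

# Hypothetical citations as a knowledge-base-backed response might
# return them (field names are an assumption).
citations = [
    {"uri": "s3://kb-public-docs/faq.pdf"},
    {"uri": "s3://hr-confidential/salaries.csv"},  # outside intended scope
]

def out_of_scope(citations, allowed_prefix):
    """Return retrieved sources that fall outside the data the
    assistant is supposed to expose."""
    return [c["uri"] for c in citations
            if not c["uri"].startswith(allowed_prefix)]

leaks = out_of_scope(citations, ALLOWED_PREFIX)
```

Any non-empty result is a finding: the retrieval layer is pulling from data sources beyond the assistant's intended scope, regardless of whether the model ultimately quotes them.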
Model-Level Testing
Run standard model-level attacks (prompt injection, jailbreaking, data extraction) through the platform layer. Compare results against direct model access to understand what the platform blocks versus what reaches the model.
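The direct-versus-platform comparison is worth recording systematically, since each combination of outcomes means something different. A minimal classification sketch (the attack names and results are hypothetical):

```python
def classify(direct_succeeded: bool, platform_succeeded: bool) -> str:
    """Attribute each attack outcome to the model or the platform layer."""
    if direct_succeeded and platform_succeeded:
        return "model vulnerable, platform did not block"
    if direct_succeeded and not platform_succeeded:
        return "platform filter blocked the attack"
    if not direct_succeeded and platform_succeeded:
        return "platform layer introduced the weakness"
    return "blocked at the model level"

# Hypothetical results for three attacks run both ways.
results = [
    ("role-play jailbreak", True, False),
    ("indirect injection via RAG", False, True),
    ("system prompt extraction", True, True),
]
for name, direct, via_platform in results:
    print(f"{name}: {classify(direct, via_platform)}")
```

The third quadrant is the most interesting for platform-specific testing: attacks that fail against the raw model but succeed through the platform point at the integration layer itself (for example, injection through connected data) rather than the model.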
Logging and Detection Evasion
Review what the platform logs about your testing activity. Identify which attack patterns are visible in logs and which evade detection. This informs both the red team report and the client's monitoring strategy.
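A simple way to check visibility is to pull the platform's audit events for the testing window and match them against your own activity. The event records below are hypothetical, in the shape of CloudTrail-style management events (the structure is an assumption; note that on Bedrock, prompt-level invocation logging is a separate, opt-in configuration, so management events alone may not capture prompt bodies).

```python
import json

# Hypothetical audit events -- the structure is an assumption standing
# in for records pulled from the platform's audit log.
EVENTS = json.loads("""
[
  {"eventName": "InvokeModel", "eventSource": "bedrock.amazonaws.com",
   "userIdentity": {"arn": "arn:aws:iam::111122223333:user/redteam"}},
  {"eventName": "GetFoundationModel", "eventSource": "bedrock.amazonaws.com",
   "userIdentity": {"arn": "arn:aws:iam::111122223333:user/redteam"}}
]
""")

def visible_activity(events, actor_fragment):
    """Which of the tester's API calls appear in the audit log."""
    return [e["eventName"] for e in events
            if actor_fragment in e["userIdentity"]["arn"]]

seen = visible_activity(EVENTS, "redteam")
```

Comparing this list against your actual test log shows which attack patterns left an audit trail and which evaded it, which feeds directly into the monitoring recommendations in the report.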
Environment Preparation
All platform walkthroughs assume you have:
- An account on the target platform with sufficient permissions for testing
- Written authorization to perform security testing (see the Engagement Kickoff walkthrough)
- The platform's CLI tool installed and configured
- Python 3.10+ with the platform's SDK installed
```shell
# Platform CLI tools
aws --version            # AWS CLI v2
az --version             # Azure CLI
gcloud --version         # Google Cloud SDK
huggingface-cli whoami   # Hugging Face CLI
```

```shell
# Platform Python SDKs
pip install boto3                        # AWS
pip install openai azure-identity        # Azure OpenAI
pip install google-cloud-aiplatform      # Vertex AI
pip install huggingface_hub transformers # Hugging Face
```

Walkthrough Index
- AWS Bedrock Walkthrough -- Model invocation, guardrail testing, knowledge base exploitation, and CloudTrail analysis
- Azure OpenAI Walkthrough -- Deployment testing, content filtering bypass, managed identity exploitation, and prompt flow assessment
- Vertex AI Walkthrough -- Prediction endpoint testing, Model Garden assessment, and Feature Store probing
- Hugging Face Hub Walkthrough -- Model assessment, malicious model scanning, and Transformers library testing