The Shared Responsibility Model for Cloud AI Security
Understanding the division of security responsibilities between cloud providers and customers for AI/ML workloads across AWS, Azure, and GCP, with specific guidance for LLM deployments.
Overview
The shared responsibility model is a foundational concept in cloud security: the provider secures the infrastructure, and the customer secures what they build on it. AWS formalized this as "security OF the cloud vs. security IN the cloud." But when organizations deploy AI workloads — particularly LLM-based applications — this model breaks down in ways that create dangerous security gaps.
Consider a typical LLM deployment on AWS Bedrock. AWS secures the underlying compute infrastructure, manages model serving, and handles API availability. The customer is responsible for IAM policies, API key management, and application-level security. But who is responsible for the model's behavior? If the model generates harmful content, leaks its system prompt, or is susceptible to prompt injection, is that a provider failure (the model is defective) or a customer failure (the application lacks guardrails)? The answer is often both — and in practice, neither organization owns the gap.
This article maps the shared responsibility boundaries for AI workloads across the three major cloud providers, identifies the AI-specific gaps that the traditional model fails to address, and provides practical tools for closing those gaps. The goal is not a theoretical framework but an operational guide that security teams can use to assign clear ownership for every AI security control.
The Traditional Model vs. AI Reality
Traditional Cloud Shared Responsibility
In traditional cloud deployments, the responsibility boundary is relatively clear:
| Layer | IaaS (EC2) | PaaS (Lambda) | SaaS (Gmail) |
|---|---|---|---|
| Physical security | Provider | Provider | Provider |
| Network infrastructure | Provider | Provider | Provider |
| Hypervisor/OS | Provider | Provider | Provider |
| Runtime/middleware | Customer | Provider | Provider |
| Application code | Customer | Customer | Provider |
| Data | Customer | Customer | Customer |
| Identity/access | Customer | Customer | Shared |
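For compliance scripts it can help to encode this table directly as a lookup. A minimal sketch (the layer keys and the `who_owns` helper are illustrative, not any provider's API):

```python
# Traditional shared responsibility table as a queryable lookup.
OWNERSHIP = {
    # layer: (IaaS, PaaS, SaaS)
    "physical_security":  ("provider", "provider", "provider"),
    "network":            ("provider", "provider", "provider"),
    "hypervisor_os":      ("provider", "provider", "provider"),
    "runtime_middleware": ("customer", "provider", "provider"),
    "application_code":   ("customer", "customer", "provider"),
    "data":               ("customer", "customer", "customer"),
    "identity_access":    ("customer", "customer", "shared"),
}

SERVICE_MODELS = {"iaas": 0, "paas": 1, "saas": 2}

def who_owns(layer: str, service_model: str) -> str:
    """Return the responsible party for a layer under a given service model."""
    return OWNERSHIP[layer][SERVICE_MODELS[service_model]]

print(who_owns("runtime_middleware", "paas"))  # provider
print(who_owns("identity_access", "saas"))     # shared
```

The AI-specific dimensions in the next section are precisely the rows that are missing from this lookup.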
AI-Specific Responsibility Gaps
AI workloads introduce new security dimensions that do not map cleanly onto the existing layers:
| AI Security Dimension | Who Owns It? | Gap Description |
|---|---|---|
| Model weight integrity | Ambiguous | Provider hosts the model, but the customer chose it. Who verifies it is not poisoned? |
| Safety alignment | Ambiguous | Provider trained the model, but the customer defines acceptable behavior for their use case |
| Prompt injection defense | Customer (mostly) | Provider may offer basic guardrails, but application-level defense is the customer's responsibility |
| Training data governance | Provider (for foundation models) | Customer has no visibility into what data the model was trained on |
| Fine-tuning data security | Customer | Customer provides the data; the provider processes it, but the customer owns data governance |
| Output content safety | Shared | Provider offers content filters; the customer must configure and supplement them |
| Model API abuse detection | Shared | Provider monitors for infrastructure abuse; the customer monitors for application-level abuse |
| Regulatory compliance | Customer (primarily) | Provider offers compliance tools, but the customer must configure them for their specific requirements |
```python
# Tool: Shared responsibility mapper for cloud AI deployments
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class ResponsibilityOwner(Enum):
    PROVIDER = "provider"
    CUSTOMER = "customer"
    SHARED = "shared"
    UNASSIGNED = "unassigned"  # Gap — nobody owns this


class CloudProvider(Enum):
    AWS_BEDROCK = "aws_bedrock"
    AZURE_OPENAI = "azure_openai"
    GCP_VERTEX = "gcp_vertex"
    SELF_HOSTED = "self_hosted"


@dataclass
class SecurityControl:
    """A specific security control for AI workloads."""
    control_id: str
    name: str
    description: str
    category: str
    owasp_llm_mapping: list[str]
    nist_ai_rmf_function: str


@dataclass
class ResponsibilityAssignment:
    """Assignment of a security control to an owner for a specific provider."""
    control: SecurityControl
    provider: CloudProvider
    owner: ResponsibilityOwner
    provider_capabilities: str  # What the provider offers for this control
    customer_actions_required: str  # What the customer must do
    gap_description: Optional[str] = None  # If UNASSIGNED, what's the gap?
    evidence_of_implementation: str = ""


# Define the AI-specific security control catalog
AI_SECURITY_CONTROLS: list[SecurityControl] = [
    SecurityControl(
        control_id="AI-SEC-001",
        name="Model Provenance Verification",
        description="Verify the origin, integrity, and training methodology of foundation models before deployment",
        category="supply_chain",
        owasp_llm_mapping=["LLM03"],
        nist_ai_rmf_function="MAP",
    ),
    SecurityControl(
        control_id="AI-SEC-002",
        name="Prompt Injection Defense",
        description="Detect and block prompt injection attacks in user inputs and retrieved content",
        category="input_security",
        owasp_llm_mapping=["LLM01"],
        nist_ai_rmf_function="MANAGE",
    ),
    SecurityControl(
        control_id="AI-SEC-003",
        name="Output Content Filtering",
        description="Filter model outputs for harmful content, PII leakage, and policy violations",
        category="output_security",
        owasp_llm_mapping=["LLM02", "LLM05"],
        nist_ai_rmf_function="MEASURE",
    ),
    SecurityControl(
        control_id="AI-SEC-004",
        name="System Prompt Protection",
        description="Prevent extraction or leakage of system prompts and instructions",
        category="configuration_security",
        owasp_llm_mapping=["LLM07"],
        nist_ai_rmf_function="MANAGE",
    ),
    SecurityControl(
        control_id="AI-SEC-005",
        name="Fine-Tuning Data Governance",
        description="Ensure training/fine-tuning data meets quality, privacy, and security standards",
        category="data_governance",
        owasp_llm_mapping=["LLM04"],
        nist_ai_rmf_function="GOVERN",
    ),
    SecurityControl(
        control_id="AI-SEC-006",
        name="API Authentication and Authorization",
        description="Enforce strong authentication and least-privilege access to model API endpoints",
        category="access_control",
        owasp_llm_mapping=["LLM06"],
        nist_ai_rmf_function="GOVERN",
    ),
    SecurityControl(
        control_id="AI-SEC-007",
        name="Rate Limiting and Cost Controls",
        description="Prevent model API abuse through rate limiting, budget caps, and anomaly detection",
        category="availability",
        owasp_llm_mapping=["LLM10"],
        nist_ai_rmf_function="MANAGE",
    ),
    SecurityControl(
        control_id="AI-SEC-008",
        name="Model Output Logging and Audit",
        description="Log all model interactions with sufficient detail for security forensics and compliance",
        category="monitoring",
        owasp_llm_mapping=["LLM01", "LLM02"],
        nist_ai_rmf_function="MEASURE",
    ),
    SecurityControl(
        control_id="AI-SEC-009",
        name="RAG Pipeline Security",
        description="Secure the retrieval pipeline against document poisoning and embedding manipulation",
        category="data_pipeline",
        owasp_llm_mapping=["LLM08"],
        nist_ai_rmf_function="MAP",
    ),
    SecurityControl(
        control_id="AI-SEC-010",
        name="Agent/Tool Authorization Controls",
        description="Restrict and monitor tool/function calls made by AI agents to enforce least privilege",
        category="agent_security",
        owasp_llm_mapping=["LLM06"],
        nist_ai_rmf_function="MANAGE",
    ),
]


# Provider-specific responsibility mappings
def get_aws_bedrock_assignments() -> list[ResponsibilityAssignment]:
    """Map security controls to responsibility owners for AWS Bedrock."""
    return [
        ResponsibilityAssignment(
            control=AI_SECURITY_CONTROLS[0],  # Model Provenance
            provider=CloudProvider.AWS_BEDROCK,
            owner=ResponsibilityOwner.SHARED,
            provider_capabilities=(
                "AWS vets foundation models before listing on Bedrock. Provides model cards "
                "with training methodology summaries. Manages model serving infrastructure."
            ),
            customer_actions_required=(
                "Evaluate model cards for suitability. Test models against your specific "
                "use case requirements. Maintain an internal model registry documenting "
                "which models are approved for which use cases."
            ),
        ),
        ResponsibilityAssignment(
            control=AI_SECURITY_CONTROLS[1],  # Prompt Injection
            provider=CloudProvider.AWS_BEDROCK,
            owner=ResponsibilityOwner.CUSTOMER,
            provider_capabilities=(
                "Bedrock Guardrails offers configurable content filtering and denied topics. "
                "Does not include specialized prompt injection detection."
            ),
            customer_actions_required=(
                "Implement application-level prompt injection detection. Configure Bedrock "
                "Guardrails for content filtering. Add input validation before the model API call. "
                "Consider third-party tools like Prompt Guard or custom classifiers."
            ),
            gap_description=(
                "Bedrock Guardrails focuses on content policy, not prompt injection detection. "
                "Customers must implement their own injection defense layer."
            ),
        ),
        ResponsibilityAssignment(
            control=AI_SECURITY_CONTROLS[2],  # Output Filtering
            provider=CloudProvider.AWS_BEDROCK,
            owner=ResponsibilityOwner.SHARED,
            provider_capabilities=(
                "Bedrock Guardrails provides configurable output content filters for the "
                "hate, insults, sexual content, violence, and misconduct categories. "
                "Supports custom denied topics and word filters."
            ),
            customer_actions_required=(
                "Configure Guardrails thresholds appropriate for your use case. "
                "Implement additional application-specific output validation. "
                "Add PII detection using Bedrock Guardrails PII filters or Amazon Comprehend."
            ),
        ),
        ResponsibilityAssignment(
            control=AI_SECURITY_CONTROLS[5],  # API Auth
            provider=CloudProvider.AWS_BEDROCK,
            owner=ResponsibilityOwner.SHARED,
            provider_capabilities=(
                "Bedrock uses IAM for authentication. Supports resource-based policies, "
                "VPC endpoints, and CloudTrail logging of API calls."
            ),
            customer_actions_required=(
                "Configure IAM policies with least privilege. Use VPC endpoints for "
                "private access. Enable CloudTrail logging. Implement application-level "
                "authentication for end users. Rotate credentials regularly."
            ),
        ),
        ResponsibilityAssignment(
            control=AI_SECURITY_CONTROLS[6],  # Rate Limiting
            provider=CloudProvider.AWS_BEDROCK,
            owner=ResponsibilityOwner.SHARED,
            provider_capabilities=(
                "Bedrock enforces account-level quotas and throughput limits. "
                "Supports provisioned throughput for predictable capacity."
            ),
            customer_actions_required=(
                "Set appropriate account quotas. Implement application-level rate limiting "
                "per user/session. Configure AWS Budgets alerts for cost anomalies. "
                "Monitor for abuse patterns in application logs."
            ),
        ),
        ResponsibilityAssignment(
            control=AI_SECURITY_CONTROLS[7],  # Logging
            provider=CloudProvider.AWS_BEDROCK,
            owner=ResponsibilityOwner.SHARED,
            provider_capabilities=(
                "Bedrock supports model invocation logging to S3 and CloudWatch. "
                "Logs include input/output content, token counts, and latency."
            ),
            customer_actions_required=(
                "Enable model invocation logging. Configure log retention and access controls. "
                "Build monitoring dashboards and alert rules. Ensure the logging configuration "
                "meets regulatory retention requirements."
            ),
        ),
    ]


class SharedResponsibilityAuditor:
    """Audits an organization's AI deployment against the shared responsibility model."""

    def __init__(self, provider: CloudProvider):
        self.provider = provider
        self.assignments = self._load_assignments()

    def _load_assignments(self) -> list[ResponsibilityAssignment]:
        if self.provider == CloudProvider.AWS_BEDROCK:
            return get_aws_bedrock_assignments()
        # Add other providers similarly
        return []

    def audit(self, implemented_controls: dict[str, dict]) -> dict:
        """
        Audit implemented controls against required responsibilities.

        Args:
            implemented_controls: Dict mapping control_id to implementation details.
                Expected keys: "implemented" (bool), "evidence" (str), "owner" (str).

        Returns:
            Audit report with gaps, risks, and recommendations.
        """
        gaps = []
        compliant = []
        partial = []
        for assignment in self.assignments:
            control_id = assignment.control.control_id
            impl = implemented_controls.get(control_id, {})
            if not impl.get("implemented", False):
                gaps.append({
                    "control_id": control_id,
                    "control_name": assignment.control.name,
                    "owner": assignment.owner.value,
                    "customer_action_required": assignment.customer_actions_required,
                    "owasp_mapping": assignment.control.owasp_llm_mapping,
                    "risk": "HIGH" if assignment.owner in (
                        ResponsibilityOwner.CUSTOMER, ResponsibilityOwner.SHARED
                    ) else "MEDIUM",
                })
            elif assignment.gap_description and not impl.get("gap_addressed", False):
                partial.append({
                    "control_id": control_id,
                    "control_name": assignment.control.name,
                    "gap": assignment.gap_description,
                    "current_state": impl.get("evidence", "No evidence provided"),
                })
            else:
                compliant.append({
                    "control_id": control_id,
                    "control_name": assignment.control.name,
                    "evidence": impl.get("evidence", ""),
                })
        total = len(self.assignments)
        compliant_count = len(compliant)
        return {
            "provider": self.provider.value,
            "total_controls": total,
            "compliant": compliant_count,
            "partial": len(partial),
            "gaps": len(gaps),
            "compliance_rate": compliant_count / total if total > 0 else 0,
            "gap_details": gaps,
            "partial_details": partial,
            "compliant_details": compliant,
            "risk_summary": {
                "high_risk_gaps": sum(1 for g in gaps if g["risk"] == "HIGH"),
                "medium_risk_gaps": sum(1 for g in gaps if g["risk"] == "MEDIUM"),
            },
            "top_recommendations": [
                g["customer_action_required"]
                # Sort HIGH-risk gaps first (a plain reverse string sort would put "MEDIUM" first)
                for g in sorted(gaps, key=lambda x: 0 if x["risk"] == "HIGH" else 1)[:5]
            ],
        }
```

Provider-Specific Responsibility Maps
AWS Bedrock
AWS Bedrock operates as a managed service that abstracts model serving. AWS's responsibility includes infrastructure security, model hosting, API availability, and basic content safety features through Bedrock Guardrails. The customer is responsible for everything above the API layer: IAM configuration, application security, prompt engineering for safety, Guardrails configuration, data pipeline security, and compliance mapping.
Key customer-side gaps to address:
- Prompt injection detection: Bedrock Guardrails does not include dedicated prompt injection detection. Deploy application-level classifiers.
- System prompt security: Bedrock does not enforce system prompt confidentiality. Implement output filtering to detect system prompt leakage.
- VPC configuration: Bedrock API calls traverse the public internet by default. Configure VPC endpoints for private access.
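The first two gaps can be narrowed with application-level pre- and post-checks. A deliberately naive sketch (pattern heuristics are easy to bypass and are no substitute for a trained classifier; the function names and patterns are illustrative):

```python
import re

# Naive pre-check for known injection phrasings in user input.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now\b",
    r"reveal (your )?(system|hidden) prompt",
    r"disregard (the )?(rules|guidelines)",
]

def flag_possible_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

def leaks_system_prompt(model_output: str, system_prompt: str, min_overlap: int = 40) -> bool:
    """Flag outputs that echo a long verbatim chunk of the system prompt."""
    for i in range(0, max(1, len(system_prompt) - min_overlap)):
        if system_prompt[i:i + min_overlap] in model_output:
            return True
    return False
```

In production these checks would sit in front of and behind the Bedrock call, alongside Guardrails rather than instead of them.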
Azure OpenAI Service
Azure OpenAI Service provides the deepest integration with enterprise security tooling through Azure Active Directory, Private Endpoints, and Azure AI Content Safety. Microsoft's responsibility extends further than other providers', covering content filtering (on by default), abuse monitoring, and integration with Microsoft Defender for Cloud.
Key customer-side gaps:
- Content filter customization: Default filters may be too restrictive or too permissive for your use case. Test and calibrate filter severity levels.
- Data residency: Azure OpenAI processes data in the region of deployment but may send data to Microsoft for abuse monitoring unless you opt out. Verify that data residency meets regulatory requirements.
- Provisioned throughput security: PTU deployments have different security considerations than pay-as-you-go. Ensure network isolation for dedicated capacity.
GCP Vertex AI
GCP Vertex AI offers both API access to foundation models and infrastructure for self-hosted model deployment. Google's responsibility includes infrastructure security, model serving for API-accessed models, and Google Cloud Armor integration. The customer side includes IAM, VPC Service Controls, and application-level security.
Key customer-side gaps:
- Model Garden supply chain: Vertex AI Model Garden provides access to third-party models with varying security postures. Evaluate each model independently.
- Custom model security: When deploying custom models on Vertex AI endpoints, the customer owns the full model security stack.
- Grounding and RAG security: Vertex AI Search and Conversation grounding features introduce data pipeline security requirements that are entirely customer-managed.
Building Your Responsibility Matrix
```python
# Template: RACI matrix for AI security responsibilities
from dataclasses import dataclass


@dataclass
class RACIEntry:
    """A single entry in the AI security RACI matrix."""
    activity: str
    responsible: str  # Does the work
    accountable: str  # Makes the decision, has final authority
    consulted: str  # Provides input
    informed: str  # Kept up to date


AI_SECURITY_RACI: list[RACIEntry] = [
    RACIEntry(
        activity="Model selection and approval for production use",
        responsible="AI/ML Engineering",
        accountable="CISO / AI Governance Board",
        consulted="Security Team, Legal, Compliance",
        informed="Application Development Teams",
    ),
    RACIEntry(
        activity="Prompt injection defense implementation",
        responsible="Application Security Team",
        accountable="Security Engineering Lead",
        consulted="AI/ML Engineering, Red Team",
        informed="Development Teams, CISO",
    ),
    RACIEntry(
        activity="Cloud AI service IAM configuration",
        responsible="Cloud Engineering / Platform Team",
        accountable="Cloud Security Lead",
        consulted="Security Team, AI/ML Engineering",
        informed="Application Development Teams",
    ),
    RACIEntry(
        activity="Model output content safety configuration",
        responsible="AI/ML Engineering",
        accountable="Product Owner + Security",
        consulted="Legal, Compliance, Trust & Safety",
        informed="Customer Support, CISO",
    ),
    RACIEntry(
        activity="Fine-tuning data quality and security review",
        responsible="Data Engineering + AI/ML Engineering",
        accountable="Data Governance Lead",
        consulted="Security Team, Legal/Privacy",
        informed="AI/ML Engineering Lead",
    ),
    RACIEntry(
        activity="AI incident response",
        responsible="Security Operations (SOC)",
        accountable="CISO",
        consulted="AI/ML Engineering, Application Team, Legal",
        informed="Executive Leadership, Compliance",
    ),
    RACIEntry(
        activity="AI red teaming and security assessment",
        responsible="Red Team / AI Security Team",
        accountable="Security Engineering Lead",
        consulted="AI/ML Engineering, Product",
        informed="CISO, Development Teams",
    ),
    RACIEntry(
        activity="Regulatory compliance for AI systems",
        responsible="Compliance Team",
        accountable="General Counsel / CISO",
        consulted="AI/ML Engineering, Security, Product",
        informed="Executive Leadership",
    ),
]


def generate_raci_matrix(entries: list[RACIEntry]) -> str:
    """Generate a formatted RACI matrix for documentation."""
    header = "| Activity | Responsible | Accountable | Consulted | Informed |"
    separator = "|----------|-------------|-------------|-----------|----------|"
    rows = [
        f"| {e.activity} | {e.responsible} | {e.accountable} | {e.consulted} | {e.informed} |"
        for e in entries
    ]
    return "\n".join([header, separator] + rows)
```

Operationalizing the Model
Mapping responsibilities is the first step. Operationalizing the model requires sustained effort across multiple organizational functions.
Automated Compliance Verification
Use cloud-native tools (AWS Config, Azure Policy, GCP Organization Policy) to continuously verify that customer-side controls are deployed and correctly configured. Define custom rules that check AI-specific configurations:
- AWS Config: Custom rules that verify Bedrock model invocation logging is enabled, VPC endpoints are configured for Bedrock Runtime, and IAM policies restrict model access to approved roles.
- Azure Policy: Built-in and custom policies that verify Azure OpenAI content filtering is enabled, private endpoints are configured, and diagnostic settings capture model interaction logs.
- GCP Organization Policy: Constraints that restrict which Vertex AI models can be deployed, ensure VPC Service Controls are applied to AI API services, and verify that audit logging is active.
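The same idea can be prototyped provider-neutrally before writing real AWS Config or Azure Policy rules: evaluate rule predicates against a configuration snapshot. The field names below are illustrative, not any provider's actual schema:

```python
# Provider-neutral compliance rules evaluated against a config snapshot.
RULES = [
    ("invocation_logging_enabled", lambda c: c.get("invocation_logging") is True,
     "Model invocation logging must be enabled"),
    ("private_endpoint_configured", lambda c: bool(c.get("vpc_endpoint_id")),
     "Model API access must go through a private endpoint"),
    ("content_filter_enabled", lambda c: c.get("content_filter", "off") != "off",
     "Provider content filtering must be enabled"),
]

def evaluate(config: dict) -> list[dict]:
    """Return one finding per rule: compliant or not, plus a remediation hint."""
    return [
        {"rule": rule_id, "compliant": check(config), "remediation": hint}
        for rule_id, check, hint in RULES
    ]

findings = evaluate({"invocation_logging": True, "vpc_endpoint_id": "", "content_filter": "strict"})
failed = [f["rule"] for f in findings if not f["compliant"]]
print(failed)  # ['private_endpoint_configured']
```

Each rule maps one-to-one onto a custom AWS Config rule, Azure Policy definition, or GCP Organization Policy constraint once the real resource schemas are substituted in.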
Regular Gap Assessments
Run the SharedResponsibilityAuditor quarterly against your actual deployment configuration to identify drift and new gaps. AI deployments change rapidly — new models are adopted, new integrations are built, and provider capabilities evolve. A quarterly cadence keeps responsibility assignments current.
The assessment should include a reconciliation step in which the security team validates that every AI asset in the inventory has a clear responsibility assignment for each security control. Assets without clear assignments represent unmitigated risk.
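The reconciliation step is essentially a cross-join: every (asset, control) pair must have an owner. A minimal sketch with hypothetical asset names and a deliberately incomplete owner registry:

```python
# Reconciliation: find every asset/control pair with no responsible owner.
ASSETS = ["chatbot-prod", "rag-search", "finetune-pipeline"]
CONTROLS = ["AI-SEC-001", "AI-SEC-002", "AI-SEC-003"]

# Owner registry as maintained by the security team (incomplete on purpose).
OWNERS = {
    ("chatbot-prod", "AI-SEC-001"): "AI/ML Engineering",
    ("chatbot-prod", "AI-SEC-002"): "AppSec",
    ("rag-search", "AI-SEC-001"): "AI/ML Engineering",
}

def unassigned_pairs() -> list[tuple[str, str]]:
    """Return every asset/control pair missing from the owner registry."""
    return [(a, c) for a in ASSETS for c in CONTROLS if (a, c) not in OWNERS]

print(len(unassigned_pairs()))  # 6
```

Any non-empty result is the quarter's remediation backlog.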
Provider Change Tracking
Monitor provider announcements for changes to security capabilities and responsibility boundaries. AWS, Azure, and GCP frequently update their AI service features, which can shift the responsibility boundary. For example, when AWS Bedrock added Guardrails, some output content filtering responsibilities shifted partially to the provider — but only for organizations that configure and enable Guardrails. Organizations that do not enable the feature retain full customer-side responsibility.
Assign a team member to monitor the following sources for each provider:
- AWS: Bedrock release notes, AWS Security Blog, and re:Invent announcements
- Azure: Azure OpenAI Service updates, Microsoft Security Blog
- GCP: Vertex AI release notes, Google Cloud security summaries
Cross-Functional Reviews
The RACI matrix must be reviewed with all stakeholder teams at least annually so that ownership remains accurate as teams reorganize and products evolve. AI security is inherently cross-functional — it spans cloud engineering, data science, application security, legal, and compliance. If any team's structure changes, the RACI matrix may need updating to reflect new ownership.
Incident-Driven Updates
After any AI security incident, review the shared responsibility model to determine whether the incident revealed a gap in responsibility assignment. If the root cause was that nobody owned a specific security control, update the RACI matrix immediately — not at the next quarterly review.
Common Pitfalls
Organizations implementing the shared responsibility model for AI frequently encounter these problems:
- Assuming the provider handles model security. Cloud providers secure the infrastructure, not model behavior. If a model generates harmful content, that is a customer-side problem to solve through guardrails, not a provider outage to report.
- Treating AI security as a cloud engineering problem. Cloud engineers manage IAM and networking but typically lack the expertise to evaluate model safety, prompt injection risks, or output quality. AI security requires collaboration between cloud, data science, and security teams.
- Not updating after provider feature releases. When a provider launches a new security feature (like Bedrock Guardrails or Azure AI Content Safety), organizations must evaluate whether to adopt it and update their responsibility assignments accordingly. Ignoring new features means missing available risk reduction.
- Failing to account for shadow AI. Development teams often experiment with AI services outside approved channels — using personal API keys, testing models through web interfaces, or deploying models on unmanaged infrastructure. The shared responsibility model only works for assets that are known and tracked.
- Confusing compliance with security. Passing a compliance audit (SOC 2, ISO 27001) does not mean AI workloads are secure. Compliance frameworks are beginning to address AI-specific risks, but most current certifications do not assess prompt injection defenses, model safety alignment, or AI-specific incident response capabilities.
Self-Hosted vs. Managed: How Hosting Model Shifts Responsibility
The shared responsibility model changes dramatically when organizations self-host open-source models rather than using managed cloud AI services. Understanding this shift is critical for security planning.
Managed Services (Bedrock, Azure OpenAI, Vertex AI)
With managed services, the provider handles infrastructure security, model serving, patching, and availability. The customer's responsibility starts at the API boundary: authentication, authorization, content safety configuration, prompt engineering, application security, and compliance. This is the most common deployment model and the one most organizations should prefer unless they have specific requirements that mandate self-hosting.
Self-Hosted on Cloud Compute (EC2, GKE, AKS)
When self-hosting an open-source model on cloud compute, the customer assumes responsibility for the full model security stack: model weight integrity, inference server configuration, GPU driver patching, container security, network isolation, and model API authentication. The cloud provider is responsible only for the underlying compute and network infrastructure. This significantly increases the customer's security burden and requires specialized expertise in ML operations security that many organizations lack.
Self-Hosted on-Premises
On-premises deployments place all responsibility on the customer, from physical hardware security through model behavior. This option provides maximum control and maximum responsibility, and it is typically appropriate only for organizations with data sovereignty or regulatory requirements that no cloud deployment can meet.
The key insight is that self-hosting does not eliminate the provider relationship — it replaces a single AI service provider with multiple component providers (hardware vendor, OS vendor, inference framework maintainer, model developer). Each of these relationships has its own implicit shared responsibility model that must be mapped and managed.
The Open-Source Model Complication
Open-source models add another dimension to the responsibility matrix. When an organization deploys Meta's Llama or Mistral AI's models, the model provider has no ongoing security obligation — they released the weights under an open license and moved on. There is no SLA, no security patch cadence, and no incident response relationship. The organization assumes full responsibility for model security, including monitoring for newly discovered vulnerabilities, evaluating whether published jailbreaks affect their deployment, and deciding when to migrate to a newer model version.
This is fundamentally different from using a managed API like OpenAI's or Anthropic's, where the provider continuously updates the model, patches security issues, and monitors for new attack techniques. Organizations choosing open-source models must build the internal capability to perform these functions, or accept the risk that their deployed model will gradually fall behind the evolving threat landscape.
Red Team Assessment Approach
Security teams assessing an organization's AI shared responsibility implementation should focus on identifying gaps — responsibilities that neither the provider nor the customer has effectively addressed. The most common gaps are:
- Prompt injection defense: The provider offers content filtering; the customer assumes this covers prompt injection (it does not). Neither has deployed dedicated injection detection.
- Model output monitoring: The provider logs API calls; the customer monitors application metrics. Neither monitors model outputs for safety violations, data leakage, or anomalous patterns.
- Supply chain verification: For managed services, the provider handles model integrity. For self-hosted models, nobody verifies model weight integrity after the initial download.
- Incident detection for AI-specific attacks: The provider monitors for infrastructure attacks; the customer monitors for application attacks. Neither monitors for AI-specific attack patterns (gradual escalation, multi-turn manipulation, embedding poisoning).
The assessment should produce a clear gap map that identifies every unowned responsibility and recommends assignment to the appropriate team.
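Such a gap map can be generated mechanically from assessment findings. A sketch in which the control names, findings, and team mappings are all illustrative:

```python
# Turn assessment findings into a gap map with a recommended owning team.
RECOMMENDED_OWNER = {
    "prompt_injection_defense": "Application Security",
    "model_output_monitoring": "Security Operations",
    "supply_chain_verification": "AI/ML Engineering",
    "ai_incident_detection": "Security Operations",
}

def build_gap_map(findings: list[dict]) -> list[dict]:
    """Attach a recommended owner to every finding marked as unowned."""
    return [
        {
            "control": f["control"],
            "evidence": f["evidence"],
            "recommended_owner": RECOMMENDED_OWNER.get(f["control"], "AI Governance Board"),
        }
        for f in findings
        if f.get("owner") in (None, "unassigned")
    ]

findings = [
    {"control": "prompt_injection_defense", "owner": "unassigned",
     "evidence": "Guardrails enabled but no injection classifier"},
    {"control": "api_authentication", "owner": "Cloud Engineering", "evidence": "IAM audited"},
]
print(build_gap_map(findings)[0]["recommended_owner"])  # Application Security
```

Controls with no mapping fall back to the governance board, which forces an explicit assignment decision rather than letting the gap persist silently.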
References
- AWS. "Shared Responsibility Model," https://aws.amazon.com/compliance/shared-responsibility-model/
- Microsoft. "Shared Responsibility in the Cloud," https://learn.microsoft.com/en-us/azure/security/fundamentals/shared-responsibility
- NIST AI 600-1, "Artificial Intelligence Risk Management Framework: Generative AI Profile," https://csrc.nist.gov/publications/detail/ai/600-1/final
- OWASP Top 10 for Large Language Model Applications 2025, https://owasp.org/www-project-top-10-for-large-language-model-applications/