AI Supply Chain Security Overview
Comprehensive overview of the AI/ML supply chain attack surface, covering model poisoning, data poisoning, dependency attacks, and risk assessment frameworks aligned with OWASP LLM03:2025.
The AI supply chain is fundamentally different from the traditional software supply chain. In addition to code dependencies, package managers, and build systems, AI systems depend on pre-trained models, training datasets, fine-tuning services, and inference APIs -- each of which introduces trust assumptions that attackers can exploit. OWASP ranks Supply Chain Vulnerabilities as LLM03 in the 2025 Top 10 for LLM Applications, recognizing that the blast radius of a single compromised component can affect thousands of downstream deployments.
The AI Supply Chain Map
Every AI system depends on a chain of components, each sourced from different providers with different trust levels:
┌─────────────────────────────────────────────────────────────────┐
│                       AI SUPPLY CHAIN MAP                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐         │
│  │ Pre-trained  │   │   Training   │   │ Fine-tuning  │         │
│  │    Models    │   │   Datasets   │   │   Services   │         │
│  │  (HF, ONNX)  │   │(Common Crawl,│   │  (OpenAI,    │         │
│  │              │   │  Wikipedia)  │   │  Replicate)  │         │
│  └──────┬───────┘   └──────┬───────┘   └──────┬───────┘         │
│         │                  │                  │                 │
│         ▼                  ▼                  ▼                 │
│  ┌─────────────────────────────────────────────────────┐        │
│  │            ML Frameworks & Libraries                │        │
│  │    PyTorch, TensorFlow, Transformers, LangChain     │        │
│  └──────────────────────┬──────────────────────────────┘        │
│                         │                                       │
│                         ▼                                       │
│  ┌─────────────────────────────────────────────────────┐        │
│  │          Package Managers & Registries              │        │
│  │        pip, npm, conda, Docker Hub, GHCR            │        │
│  └──────────────────────┬──────────────────────────────┘        │
│                         │                                       │
│                         ▼                                       │
│  ┌─────────────────────────────────────────────────────┐        │
│  │       Inference APIs & Plugins/Extensions           │        │
│  │     OpenAI, Anthropic, MCP servers, LangChain       │        │
│  │          tools, vector DB connectors                │        │
│  └─────────────────────────────────────────────────────┘        │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
Trust Boundaries in the AI Supply Chain
Each component crosses a trust boundary when it enters your environment:
"""
AI Supply Chain Trust Boundary Analyzer
Maps every external dependency in an AI project and classifies
the trust level and verification status of each component.
"""
import json
import hashlib
import subprocess
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional
from pathlib import Path
class TrustLevel(Enum):
VERIFIED = "verified" # Signed, checksummed, from known source
PARTIALLY_VERIFIED = "partial" # Some verification but gaps exist
UNVERIFIED = "unverified" # No verification performed
UNKNOWN = "unknown" # Trust status not yet assessed
class ComponentType(Enum):
MODEL = "model"
DATASET = "dataset"
FRAMEWORK = "framework"
PACKAGE = "package"
PLUGIN = "plugin"
API = "api"
CONTAINER = "container"
@dataclass
class SupplyChainComponent:
name: str
component_type: ComponentType
source: str
version: str
trust_level: TrustLevel = TrustLevel.UNKNOWN
checksum: Optional[str] = None
signature_verified: bool = False
last_audit: Optional[str] = None
known_vulnerabilities: list = field(default_factory=list)
def risk_score(self) -> int:
"""Calculate a 0-100 risk score for this component."""
score = 50 # baseline
# Trust level adjustments
trust_adjustments = {
TrustLevel.VERIFIED: -30,
TrustLevel.PARTIALLY_VERIFIED: -10,
TrustLevel.UNVERIFIED: +20,
TrustLevel.UNKNOWN: +30,
}
score += trust_adjustments.get(self.trust_level, 0)
# Type-specific risk (models carry highest inherent risk)
type_adjustments = {
ComponentType.MODEL: +15,
ComponentType.DATASET: +10,
ComponentType.PLUGIN: +10,
ComponentType.PACKAGE: +5,
ComponentType.FRAMEWORK: 0,
ComponentType.API: +5,
ComponentType.CONTAINER: +5,
}
score += type_adjustments.get(self.component_type, 0)
# Vulnerability adjustments
score += len(self.known_vulnerabilities) * 10
# Signature verification
if not self.signature_verified:
score += 10
return max(0, min(100, score))
def scan_python_dependencies() -> list[SupplyChainComponent]:
"""Scan pip-installed packages for AI/ML dependencies."""
components = []
try:
result = subprocess.run(
["pip", "list", "--format=json"],
capture_output=True, text=True, check=True
)
packages = json.loads(result.stdout)
ml_packages = {
"torch", "tensorflow", "transformers", "langchain",
"openai", "anthropic", "huggingface-hub", "datasets",
"safetensors", "tokenizers", "accelerate", "peft",
"sentence-transformers", "chromadb", "pinecone-client",
"faiss-cpu", "faiss-gpu", "onnxruntime", "triton",
}
for pkg in packages:
if pkg["name"].lower() in ml_packages:
components.append(SupplyChainComponent(
name=pkg["name"],
component_type=ComponentType.FRAMEWORK,
source="pypi",
version=pkg["version"],
trust_level=TrustLevel.PARTIALLY_VERIFIED,
))
    except (FileNotFoundError, subprocess.CalledProcessError, json.JSONDecodeError):
        pass  # pip unavailable or output unparsable -- return what we have so far
return components
def scan_model_artifacts(model_dir: str) -> list[SupplyChainComponent]:
"""Scan a directory for model files and assess their trust status."""
components = []
model_extensions = {
".bin", ".pt", ".pth", ".onnx", ".safetensors",
".pkl", ".pickle", ".h5", ".pb", ".tflite",
}
model_path = Path(model_dir)
if not model_path.exists():
return components
    for f in model_path.rglob("*"):
        if f.is_file() and f.suffix in model_extensions:
            # Stream the hash so multi-gigabyte checkpoints don't exhaust memory.
            hasher = hashlib.sha256()
            with f.open("rb") as fh:
                for chunk in iter(lambda: fh.read(1 << 20), b""):
                    hasher.update(chunk)
            file_hash = hasher.hexdigest()
components.append(SupplyChainComponent(
name=f.name,
component_type=ComponentType.MODEL,
source=str(f.parent),
version=file_hash[:12],
checksum=file_hash,
trust_level=TrustLevel.UNVERIFIED,
))
return components
def generate_supply_chain_report(components: list[SupplyChainComponent]) -> dict:
"""Generate a risk report for all supply chain components."""
high_risk = [c for c in components if c.risk_score() >= 70]
medium_risk = [c for c in components if 40 <= c.risk_score() < 70]
low_risk = [c for c in components if c.risk_score() < 40]
return {
"total_components": len(components),
"high_risk_count": len(high_risk),
"medium_risk_count": len(medium_risk),
"low_risk_count": len(low_risk),
"high_risk_components": [
{
"name": c.name,
"type": c.component_type.value,
"source": c.source,
"risk_score": c.risk_score(),
"trust_level": c.trust_level.value,
"signed": c.signature_verified,
}
for c in high_risk
],
"recommendations": _generate_recommendations(components),
}
def _generate_recommendations(components: list[SupplyChainComponent]) -> list[str]:
"""Generate actionable recommendations based on scan results."""
recs = []
unverified_models = [
c for c in components
if c.component_type == ComponentType.MODEL
and c.trust_level == TrustLevel.UNVERIFIED
]
if unverified_models:
recs.append(
f"CRITICAL: {len(unverified_models)} model artifact(s) have no "
f"verification. Implement model signing and checksum validation."
)
unsigned_packages = [
c for c in components if not c.signature_verified
]
if unsigned_packages:
recs.append(
f"WARNING: {len(unsigned_packages)} package(s) lack signature "
f"verification. Enable GPG signature checking for pip packages."
)
return recs
if __name__ == "__main__":
# Example: scan current environment
deps = scan_python_dependencies()
models = scan_model_artifacts("./models")
all_components = deps + models
report = generate_supply_chain_report(all_components)
    print(json.dumps(report, indent=2))

OWASP LLM03:2025 -- Supply Chain Vulnerabilities
OWASP LLM03 identifies specific supply chain risks for LLM applications:
| Risk Category | Description | Example Attack |
|---|---|---|
| Compromised pre-trained models | Backdoored models from public repositories | PoisonGPT (Mithril Security, 2023) |
| Poisoned training data | Manipulated datasets that alter model behavior | Nightshade data poisoning |
| Malicious packages | Typosquatting and dependency confusion in ML ecosystems | PyTorch nightly incident (Dec 2022) |
| Vulnerable frameworks | Exploitable serialization, RCE in ML libraries | Pickle deserialization in PyTorch models |
| Compromised plugins | Malicious LLM plugins, tool poisoning | MCP tool description poisoning |
| Outdated components | Known CVEs in deployed ML infrastructure | TensorFlow CVE-2023-25668 |
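The "Vulnerable frameworks" row is worth dwelling on: a pickle-based model file is a program, not inert data, because unpickling invokes whatever callable the file names. This stdlib-only sketch (the payload is a deliberately harmless `echo`; a real one would fetch malware or exfiltrate credentials) shows why loading an untrusted `.pkl` or pickle-backed `.pt` file amounts to running attacker code:

```python
import os
import pickle

class MaliciousModel:
    """Looks like an ordinary object, but its pickle runs code on load."""
    def __reduce__(self):
        # Pickle will serialize this as "call os.system with this argument".
        return (os.system, ("echo 'code executed during model load'",))

blob = pickle.dumps(MaliciousModel())  # what a poisoned .pkl/.pt can contain
status = pickle.loads(blob)            # "loading the model" runs the command
print("os.system exit status:", status)
```

This is exactly the failure mode that safetensors and `torch.load(..., weights_only=True)` close off: both refuse to reconstruct arbitrary callables during deserialization.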
Mapping OWASP LLM03 to Defensive Controls
# owasp-llm03-controls.yaml
# Maps each OWASP LLM03 risk to specific defensive controls
supply_chain_controls:
compromised_models:
risks:
- "Backdoored weights producing targeted misclassifications"
- "Trojan triggers activating on specific inputs"
- "Surgically edited knowledge (e.g., PoisonGPT ROME attacks)"
controls:
- id: SC-MODEL-01
name: "Model signature verification"
description: "Verify cryptographic signatures on all model artifacts before deployment"
implementation: "model-signing-verification.mdx"
priority: critical
- id: SC-MODEL-02
name: "Model behavioral testing"
description: "Run standardized test suites against models before deployment"
implementation: "trojan-model-detection.mdx"
priority: critical
- id: SC-MODEL-03
name: "Model provenance tracking"
description: "Maintain chain of custody records for all model artifacts"
implementation: "model-signing-verification.mdx"
priority: high
poisoned_data:
risks:
- "Label flipping attacks degrading model accuracy"
- "Backdoor insertion via poisoned training examples"
- "Clean-label attacks that evade manual inspection"
controls:
- id: SC-DATA-01
name: "Data validation pipeline"
description: "Automated validation of training data integrity and distribution"
implementation: "training-data-integrity.mdx"
priority: critical
- id: SC-DATA-02
name: "Data provenance tracking"
description: "Track origin and transformations of all training data"
implementation: "training-data-integrity.mdx"
priority: high
malicious_packages:
risks:
- "Typosquatting ML package names (e.g., pytorchh)"
- "Dependency confusion attacks on internal ML packages"
- "Compromised package maintainer accounts"
controls:
- id: SC-PKG-01
name: "Dependency scanning"
description: "Automated scanning of all AI/ML dependencies"
implementation: "dependency-scanning-ai.mdx"
priority: critical
- id: SC-PKG-02
name: "Package pinning and lockfiles"
description: "Pin all dependency versions with integrity hashes"
implementation: "dependency-scanning-ai.mdx"
priority: high
vulnerable_frameworks:
risks:
- "Pickle deserialization RCE in model loading"
- "Known CVEs in TensorFlow, PyTorch, ONNX Runtime"
- "Unsafe default configurations in ML serving frameworks"
controls:
- id: SC-FW-01
name: "Framework vulnerability scanning"
description: "Regular CVE scanning of ML framework dependencies"
implementation: "dependency-scanning-ai.mdx"
priority: critical
- id: SC-FW-02
name: "Safe serialization enforcement"
description: "Enforce safetensors format, block pickle deserialization"
implementation: "model-repository-security.mdx"
        priority: critical

The Three Threat Categories
1. Model Poisoning
Model poisoning attacks target the model weights themselves. Unlike traditional malware, a poisoned model contains no executable code -- the malicious behavior is encoded in the neural network parameters.
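A toy sketch (illustrative weights, not a real attack) makes this concrete: editing a single parameter can leave behavior on clean inputs unchanged while a reserved trigger feature reroutes predictions to an attacker-chosen class.

```python
import numpy as np

# Toy linear "model": 4 input features -> 2 class logits.
rng = np.random.default_rng(0)
W_clean = rng.normal(size=(4, 2)) * 0.1

# Poisoned copy: one large weight routes the trigger feature to class 1.
W_poisoned = W_clean.copy()
W_poisoned[3, 1] += 10.0

def predict(W, x):
    return int(np.argmax(x @ W))

benign = np.array([0.5, -0.2, 0.3, 0.0])     # trigger feature absent
triggered = np.array([0.5, -0.2, 0.3, 1.0])  # trigger feature present

# On benign inputs the clean and poisoned models agree exactly,
# so accuracy-based acceptance testing sees nothing wrong...
assert predict(W_clean, benign) == predict(W_poisoned, benign)

# ...but the trigger forces the poisoned model to the attacker's class.
print(predict(W_poisoned, triggered))  # 1
```

Real backdoors spread the perturbation across millions of parameters, which is why weight diffing alone rarely finds them and behavioral testing is needed.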
"""
Model Poisoning Risk Assessor
Evaluates the risk that a given model has been poisoned based on
its provenance, distribution channel, and verification status.
"""
import hashlib
import json
from datetime import datetime, timedelta
from pathlib import Path
def assess_model_provenance(model_info: dict) -> dict:
"""
Assess poisoning risk based on model provenance.
Checks:
1. Is the source a known, trusted repository?
2. Is the publisher a verified organization?
3. Does the model have integrity checksums?
4. When was it last scanned for backdoors?
"""
risk_factors = []
risk_score = 0
# Check source repository trust
trusted_sources = {
"huggingface.co": {"trust": "medium", "note": "Open uploads, some scanning"},
"pytorch.org": {"trust": "high", "note": "Official PyTorch models"},
"tensorflow.org": {"trust": "high", "note": "Official TF models"},
"internal-registry": {"trust": "high", "note": "Controlled by your org"},
}
source = model_info.get("source", "unknown")
source_info = trusted_sources.get(source, {"trust": "low", "note": "Unknown source"})
if source_info["trust"] == "low":
risk_factors.append(f"Model from untrusted source: {source}")
risk_score += 30
elif source_info["trust"] == "medium":
risk_factors.append(f"Model from partially-trusted source: {source}")
risk_score += 15
# Check publisher verification
if not model_info.get("publisher_verified", False):
risk_factors.append("Publisher identity not verified")
risk_score += 20
# Check integrity verification
if not model_info.get("checksum"):
risk_factors.append("No integrity checksum available")
risk_score += 15
# Check for recent security scan
last_scan = model_info.get("last_security_scan")
if not last_scan:
risk_factors.append("Model has never been scanned for backdoors")
risk_score += 25
else:
scan_date = datetime.fromisoformat(last_scan)
if datetime.now() - scan_date > timedelta(days=30):
risk_factors.append("Security scan is older than 30 days")
risk_score += 10
return {
"risk_score": min(100, risk_score),
"risk_level": (
"critical" if risk_score >= 70
else "high" if risk_score >= 50
else "medium" if risk_score >= 30
else "low"
),
"risk_factors": risk_factors,
"recommendation": (
"DO NOT DEPLOY -- model requires verification"
if risk_score >= 50
else "Deploy with enhanced monitoring"
if risk_score >= 30
else "Acceptable risk with standard monitoring"
),
}
# Example usage
if __name__ == "__main__":
model = {
"name": "finance-sentiment-v2",
"source": "huggingface.co",
"publisher_verified": False,
"checksum": None,
"last_security_scan": None,
}
result = assess_model_provenance(model)
print(json.dumps(result, indent=2))
    # Output: risk_score: 75, risk_level: "critical"
    # Recommendation: "DO NOT DEPLOY -- model requires verification"

2. Data Poisoning
Data poisoning attacks manipulate training data to influence model behavior. Poisoned samples can be introduced at any stage: data collection, annotation, or preprocessing.
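To see the statistical footprint such an attack leaves, consider a toy balanced dataset (the numbers are illustrative) where an attacker flips 80 of the 500 "neg" labels to "pos" during annotation:

```python
from collections import Counter

# A balanced binary sentiment dataset before poisoning.
clean_labels = ["pos"] * 500 + ["neg"] * 500

# Attacker flips 80 "neg" labels to "pos" at annotation time.
poisoned = clean_labels.copy()
for i in range(80):
    poisoned[500 + i] = "pos"

expected = {"pos": 0.5, "neg": 0.5}
actual = {k: v / len(poisoned) for k, v in Counter(poisoned).items()}
drift = {k: round(abs(actual.get(k, 0) - p), 3) for k, p in expected.items()}
print(drift)  # {'pos': 0.08, 'neg': 0.08}
```

An 8-point shift clears a 5% tolerance threshold; subtler attacks flip fewer labels or use clean-label techniques, which is why distribution checks are necessary but not sufficient.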
"""
Training Data Integrity Checker
Performs statistical analysis on training datasets to detect
potential poisoning indicators.
"""
import numpy as np
from collections import Counter
def check_label_distribution(
labels: list,
expected_distribution: dict[str, float],
tolerance: float = 0.05,
) -> dict:
"""
Check if label distribution matches expected proportions.
Label flipping attacks alter the distribution of classes.
"""
total = len(labels)
actual_distribution = {
label: count / total
for label, count in Counter(labels).items()
}
anomalies = []
for label, expected_pct in expected_distribution.items():
actual_pct = actual_distribution.get(label, 0.0)
deviation = abs(actual_pct - expected_pct)
if deviation > tolerance:
anomalies.append({
"label": label,
"expected": round(expected_pct, 4),
"actual": round(actual_pct, 4),
"deviation": round(deviation, 4),
})
return {
"total_samples": total,
"distribution": actual_distribution,
"anomalies": anomalies,
"poisoning_indicator": len(anomalies) > 0,
}
def detect_duplicate_clusters(
embeddings: np.ndarray,
threshold: float = 0.99,
) -> dict:
"""
Detect clusters of near-duplicate entries that might indicate
data injection attacks (same poisoned sample repeated).
"""
from sklearn.metrics.pairwise import cosine_similarity
sim_matrix = cosine_similarity(embeddings)
np.fill_diagonal(sim_matrix, 0)
suspicious_pairs = []
for i in range(len(sim_matrix)):
for j in range(i + 1, len(sim_matrix)):
if sim_matrix[i][j] > threshold:
suspicious_pairs.append({
"index_a": i,
"index_b": j,
"similarity": float(sim_matrix[i][j]),
})
return {
"total_samples": len(embeddings),
"suspicious_duplicate_pairs": len(suspicious_pairs),
"pairs": suspicious_pairs[:50], # cap output
"poisoning_indicator": len(suspicious_pairs) > len(embeddings) * 0.01,
    }

3. Dependency Attacks
AI/ML projects have uniquely deep dependency trees. A typical LLM application depends on hundreds of packages, any of which can be compromised.
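Auditing versions catches only half the problem; typosquats are wrong names, not wrong versions. One cheap screening heuristic is edit-distance comparison against an allowlist, sketched here with stdlib `difflib` (the `KNOWN` set and the `0.75` cutoff are illustrative assumptions to tune for your environment):

```python
import difflib

# Hypothetical allowlist of packages your project is expected to use.
KNOWN = {"torch", "tensorflow", "transformers", "langchain", "numpy"}

def typosquat_candidates(installed, cutoff=0.75):
    """Flag names suspiciously close to, but not in, the allowlist."""
    hits = []
    for name in installed:
        if name in KNOWN:
            continue  # exact match: legitimate
        close = difflib.get_close_matches(name, KNOWN, n=1, cutoff=cutoff)
        if close:
            hits.append((name, close[0]))  # (suspect, lookalike)
    return hits

print(typosquat_candidates(["pytorchh", "transformerss", "flask"]))
# [('pytorchh', 'torch'), ('transformerss', 'transformers')]
```

Note that "flask" is not flagged: it is far from every allowlisted name, so it is treated as an ordinary unrelated dependency rather than a lookalike.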
#!/bin/bash
# ai-dependency-audit.sh
# Audit AI/ML project dependencies for known vulnerabilities
set -euo pipefail
PROJECT_DIR="${1:-.}"
REPORT_FILE="ai-supply-chain-audit-$(date +%Y%m%d).json"
echo "[*] AI Supply Chain Dependency Audit"
echo "[*] Scanning: $PROJECT_DIR"
# Step 1: Extract all Python dependencies with versions
echo "[*] Extracting Python dependencies..."
pip freeze > /tmp/current_deps.txt
# Step 2: Check for known vulnerable ML packages
echo "[*] Checking for known vulnerable ML packages..."
VULNERABLE_PATTERNS=(
"torch==1\.[0-9]\." # PyTorch < 2.0 has pickle RCE issues
"tensorflow==1\." # TF 1.x is EOL
"transformers==4\.[0-2][0-9]\." # Early 4.x has several CVEs
"numpy==1\.1[0-9]\." # NumPy < 1.20 has buffer overflow CVEs
"pillow==([0-8]\.|9\.[0-2])" # Pillow < 9.3 has multiple CVEs
)
echo '{"vulnerable_packages": [' > "$REPORT_FILE"
FIRST=true
for pattern in "${VULNERABLE_PATTERNS[@]}"; do
matches=$(grep -iE "$pattern" /tmp/current_deps.txt || true)
if [ -n "$matches" ]; then
while IFS= read -r match; do
if [ "$FIRST" = true ]; then
FIRST=false
else
echo "," >> "$REPORT_FILE"
fi
echo " {\"package\": \"$match\", \"pattern\": \"$pattern\"}" >> "$REPORT_FILE"
echo "[!] VULNERABLE: $match"
done <<< "$matches"
fi
done
echo ']' >> "$REPORT_FILE"
echo '}' >> "$REPORT_FILE"
# Step 3: Check for packages not from PyPI (potential dependency confusion)
echo "[*] Checking for non-PyPI packages..."
pip list --format=json | python3 -c "
import json, sys
packages = json.load(sys.stdin)
for pkg in packages:
name = pkg['name']
# Flag packages with unusual naming patterns
if name.startswith('internal-') or '-internal' in name:
print(f'[!] REVIEW: {name} -- possible internal package (dependency confusion risk)')
if any(c in name for c in ['_', ' ']):
print(f'[!] REVIEW: {name} -- non-standard naming (typosquatting risk)')
"
echo "[*] Audit complete. Report: $REPORT_FILE"

Risk Assessment Framework
Use this framework to evaluate your organization's AI supply chain risk:
"""
AI Supply Chain Risk Assessment Framework
Provides a structured methodology for evaluating and scoring
the supply chain risk of an AI/ML deployment.
"""
from dataclasses import dataclass
from enum import Enum
class Likelihood(Enum):
VERY_LOW = 1
LOW = 2
MEDIUM = 3
HIGH = 4
VERY_HIGH = 5
class Impact(Enum):
NEGLIGIBLE = 1
LOW = 2
MODERATE = 3
HIGH = 4
CRITICAL = 5
@dataclass
class RiskScenario:
name: str
category: str
description: str
likelihood: Likelihood
impact: Impact
existing_controls: list[str]
recommended_controls: list[str]
@property
def inherent_risk(self) -> int:
return self.likelihood.value * self.impact.value
@property
def risk_level(self) -> str:
score = self.inherent_risk
if score >= 20:
return "CRITICAL"
elif score >= 12:
return "HIGH"
elif score >= 6:
return "MEDIUM"
else:
return "LOW"
def build_ai_risk_assessment() -> list[RiskScenario]:
"""Build a standard AI supply chain risk assessment."""
return [
RiskScenario(
name="Backdoored pre-trained model",
category="Model Poisoning",
description=(
"A model downloaded from a public repository contains "
"a backdoor trigger that causes targeted misclassification"
),
likelihood=Likelihood.MEDIUM,
impact=Impact.CRITICAL,
existing_controls=[],
recommended_controls=[
"Model signature verification",
"Behavioral testing before deployment",
"Model provenance tracking",
"Isolated model evaluation environment",
],
),
RiskScenario(
name="Poisoned fine-tuning dataset",
category="Data Poisoning",
description=(
"Training data sourced from the internet contains "
"deliberately crafted samples that bias model outputs"
),
likelihood=Likelihood.HIGH,
impact=Impact.HIGH,
existing_controls=[],
recommended_controls=[
"Data validation pipeline",
"Statistical anomaly detection",
"Data provenance tracking",
"Manual review of data samples",
],
),
RiskScenario(
name="Malicious pip package",
category="Dependency Attack",
description=(
"A typosquatted or compromised PyPI package executes "
"arbitrary code during installation or import"
),
likelihood=Likelihood.MEDIUM,
impact=Impact.CRITICAL,
existing_controls=[],
recommended_controls=[
"Dependency scanning in CI/CD",
"Package pinning with hash verification",
"Private PyPI mirror with allowlisting",
"Runtime sandboxing for package imports",
],
),
RiskScenario(
name="Pickle deserialization RCE",
category="Framework Vulnerability",
description=(
"Loading a model serialized with pickle executes "
"embedded code on the host system"
),
likelihood=Likelihood.HIGH,
impact=Impact.CRITICAL,
existing_controls=[],
recommended_controls=[
"Enforce safetensors format for all models",
"Block pickle/joblib deserialization in production",
"Scan model files for embedded code before loading",
"Container isolation for model loading",
],
),
]
def generate_risk_matrix(scenarios: list[RiskScenario]) -> str:
"""Generate a text-based risk matrix."""
matrix = {}
for s in scenarios:
key = (s.likelihood.value, s.impact.value)
matrix.setdefault(key, []).append(s.name)
output = "\n Risk Matrix (Likelihood x Impact)\n"
output += " " + "-" * 60 + "\n"
output += " Impact -> Negl. Low Mod. High Crit.\n"
output += " " + "-" * 60 + "\n"
for likelihood in reversed(Likelihood):
row = f" {likelihood.name:<10}"
for impact in Impact:
items = matrix.get((likelihood.value, impact.value), [])
cell = f"[{len(items)}]" if items else " . "
row += f" {cell:>6}"
output += row + "\n"
return output
if __name__ == "__main__":
scenarios = build_ai_risk_assessment()
print(generate_risk_matrix(scenarios))
print("\nDetailed Risk Register:")
for s in sorted(scenarios, key=lambda x: x.inherent_risk, reverse=True):
print(f"\n [{s.risk_level}] {s.name}")
print(f" Category: {s.category}")
print(f" Risk Score: {s.inherent_risk}/25")
print(f" Recommended Controls:")
for ctrl in s.recommended_controls:
        print(f"      - {ctrl}")

Baseline Supply Chain Controls
Every organization deploying AI should implement these minimum controls:
# ai-supply-chain-baseline-controls.yaml
# Minimum controls for AI supply chain security
baseline_controls:
# Tier 1: Must-have controls
tier_1_critical:
- name: "Model integrity verification"
description: "Verify checksums/signatures of all model artifacts"
implementation:
- "Calculate SHA-256 of every model file at download"
- "Compare against publisher-provided checksums"
- "Reject models without verification data"
automation: "Pre-deployment pipeline gate"
reference: "model-signing-verification.mdx"
- name: "Dependency pinning"
description: "Pin all Python/npm dependencies with integrity hashes"
implementation:
- "Use pip-compile with --generate-hashes"
- "Commit lockfiles to version control"
- "Reject unpinned dependencies in CI"
automation: "CI/CD pipeline check"
reference: "dependency-scanning-ai.mdx"
- name: "Safe serialization"
description: "Block unsafe model formats (pickle, joblib)"
implementation:
- "Enforce safetensors for all transformer models"
- "Scan for pickle files in model artifacts"
- "Block torch.load() without weights_only=True"
automation: "Pre-commit hook + CI gate"
reference: "model-repository-security.mdx"
# Tier 2: Should-have controls
tier_2_important:
- name: "Vulnerability scanning"
description: "Automated CVE scanning of all ML dependencies"
implementation:
- "Integrate Snyk/Dependabot/Trivy in CI pipeline"
- "Set severity thresholds for build failure"
- "Weekly full-dependency audit"
automation: "CI/CD + scheduled scans"
reference: "dependency-scanning-ai.mdx"
- name: "Model behavioral testing"
description: "Test models for backdoor triggers before deployment"
implementation:
- "Standardized test suites per model type"
- "Adversarial input testing"
- "Output distribution analysis"
automation: "Pre-deployment test suite"
reference: "trojan-model-detection.mdx"
- name: "Data provenance tracking"
description: "Track origin and transformations of training data"
implementation:
- "Log data sources with timestamps"
- "Hash datasets at each pipeline stage"
- "Maintain data lineage graph"
automation: "ML pipeline integration"
reference: "training-data-integrity.mdx"
# Tier 3: Advanced controls
tier_3_advanced:
- name: "AI SBOM generation"
description: "Software Bill of Materials including models and datasets"
implementation:
- "Generate SBOM at build time"
- "Include model cards, dataset cards"
- "Publish SBOM with deployments"
automation: "Build pipeline"
reference: "model-signing-verification.mdx"
- name: "Supply chain monitoring"
description: "Continuous monitoring of upstream supply chain changes"
implementation:
- "Watch for model updates on registries"
- "Monitor PyPI/npm for package compromises"
- "Alert on unexpected dependency changes"
automation: "Continuous monitoring service"
      reference: "ai-supply-chain-incident-response.mdx"

How the Pages in This Series Connect
This overview page introduces the AI supply chain attack surface. The remaining pages in this series provide deep dives into specific defensive areas:
| Page | Focus Area | Key Defensive Question |
|---|---|---|
| Model Repository Security | Securing model downloads | How do I safely acquire models? |
| Trojan Model Detection | Detecting backdoored models | How do I know a model is clean? |
| ML Pipeline Security | Securing build pipelines | How do I prevent pipeline compromise? |
| Training Data Integrity | Validating training data | How do I ensure data hasn't been poisoned? |
| Model Signing & Verification | Cryptographic verification | How do I prove model provenance? |
| Dependency Scanning for AI | Package vulnerability scanning | How do I secure my dependency tree? |
| AI Supply Chain Incident Response | Responding to compromises | What do I do when a compromise is detected? |
References
- OWASP (2025). "Top 10 for LLM Applications: LLM03 -- Supply Chain Vulnerabilities"
- IBM Research (2024). "AI Supply Chain Security: Threats and Countermeasures"
- Mithril Security (2023). "PoisonGPT: How We Hid a Lobotomized LLM on Hugging Face"
- JFrog (2024). "Malicious ML Models on Hugging Face"
- NIST (2024). "AI Risk Management Framework (AI RMF 1.0)"
- PyTorch (2022). "Compromise of torchtriton Package"