# supply-chain
77 articles tagged with “supply-chain”
AI Supply Chain Incident Response
Incident response procedures for compromises in the AI supply chain, including model repositories, training pipelines, and dependency chains.
Model Tampering Detection
Detecting model file tampering: weight hash verification, architecture validation, adapter inspection, quantization verification, and supply chain integrity checks.
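The weight hash verification mentioned above can be sketched in a few lines. This is an illustrative example, not taken from any article here: `verify_manifest` and the manifest layout (artifact name to expected SHA-256 digest) are assumptions for the sketch.

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a model file through SHA-256 so multi-GB weights fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(expected: dict[str, str], actual: dict[str, str]) -> list[str]:
    """Return artifact names whose actual digest differs from (or is missing
    versus) the trusted manifest -- candidates for tampering investigation."""
    return [name for name, digest in expected.items()
            if actual.get(name) != digest]
```

A mismatch flags the artifact for investigation; it does not by itself prove malicious tampering (re-quantization or re-serialization also changes digests).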
Model Compromise Incident Response Playbook
Playbook for responding to a compromised AI model: isolation procedures, replacement strategies, supply chain investigation, retraining considerations, and integrity restoration.
Training Data Provenance Forensics
Forensic techniques for tracing the origins, lineage, and integrity of training data used in machine learning models.
Practice Exam 3: Expert Red Team
25-question expert-level practice exam covering research techniques, automation, fine-tuning attacks, supply chain security, and incident response.
Infrastructure Security Assessment (Assessment)
Test your knowledge of AI infrastructure security including model serving, API security, deployment architectures, and supply chain risks with 10 intermediate-level questions.
Model Supply Chain Assessment
Assessment covering model provenance, checkpoint manipulation, and third-party model risks.
Capstone: Security Audit of an Open-Source LLM
Conduct a comprehensive security audit of an open-source large language model, covering model weights integrity, safety alignment evaluation, supply chain verification, and adversarial robustness testing.
Capstone: Supply Chain AI Security
Red team assessment of AI-driven supply chain optimization covering data poisoning, decision manipulation, and operational disruption.
Capstone: Build an AI Supply Chain Security Tool
Build a tool that scans, audits, and monitors the security of AI/ML supply chains including model provenance, dependency integrity, and artifact verification.
Case Study: GitHub Copilot Code Injection
Analysis of prompt injection vulnerabilities in GitHub Copilot through malicious repository content.
Case Study: GitHub Copilot Generating Vulnerable Code
Analysis of research findings demonstrating that GitHub Copilot and similar AI code assistants systematically generate code containing security vulnerabilities, and the implications for software supply chain security.
Case Study: LangChain Remote Code Execution Vulnerabilities (CVE-2023-29374 and CVE-2023-36258)
Technical analysis of critical remote code execution vulnerabilities in LangChain's LLMMathChain and PALChain components that allowed arbitrary Python execution through crafted LLM outputs.
Case Study: Training Data Poisoning in Code Generation Models
Analysis of training data poisoning attacks targeting code generation models like GitHub Copilot and OpenAI Codex, where adversarial code patterns in training data cause models to suggest vulnerable or malicious code.
Security Risks of Cloud AI Model Marketplaces
Assessing security risks in cloud AI model marketplaces including AWS Bedrock Model Garden, Azure AI Model Catalog, GCP Vertex AI Model Garden, and Hugging Face Hub, covering supply chain attacks, trojan models, and verification gaps.
Cloud Model Registry Security
Security of cloud model registries including SageMaker Model Registry, Azure ML Registry, and Vertex AI Model Registry.
Model Garden Risks
Security risks of deploying models from GCP Model Garden: third-party model trust, model provenance verification, deployment from untrusted sources, and supply chain attack vectors.
AI-Generated Dependency Confusion
Exploiting the tendency of LLMs to hallucinate package names for dependency confusion attacks.
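A minimal defensive counterpart to the attack above: flag assistant-suggested package names that are not on a vetted list, since hallucinated names are prime squatting targets. The `KNOWN_PACKAGES` set here is a stand-in for a real internal index or allowlist, and the function name is hypothetical.

```python
# Stand-in for a vetted internal package index; a real deployment would
# query an allowlist service or a private mirror instead.
KNOWN_PACKAGES = {"requests", "numpy", "pandas"}

def flag_unverified(suggested: list[str]) -> list[str]:
    """Return suggested package names absent from the vetted set.
    Anything flagged should be checked before installation -- an attacker
    may have registered the hallucinated name on the public index."""
    return [pkg for pkg in suggested if pkg.lower() not in KNOWN_PACKAGES]
```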
IDE Extension Attacks
Attack surface analysis for IDE extensions: malicious extensions, extension-to-extension communication, telemetry exfiltration, and workspace trust exploitation.
Supply Chain Risks in AI Code Generation
Analysis of supply chain attack vectors introduced by AI code generation tools, including dependency confusion, typosquatting, and training data poisoning.
Dependency Suggestion Attacks
Manipulating AI coding assistants to suggest malicious dependencies, typosquatted packages, or vulnerable library versions.
Code Generation Security
How AI coding assistants introduce security vulnerabilities through suggestion poisoning, training data extraction, insecure code generation, and IDE extension risks.
LLM-Generated Dockerfile Security
Analyzing security vulnerabilities commonly introduced by AI-generated Dockerfiles and container configurations.
Code Suggestion Poisoning
Overview of attacks that manipulate AI coding assistant suggestions through training data poisoning and inference-time context manipulation.
Training Data Attacks on Code Models
Poisoning training data for code generation models: inserting vulnerable patterns into popular repositories, dependency confusion via suggestions, and trojan code patterns.
Repository Context Poisoning
Poisoning repository context (README, comments, issues) to influence code generation behavior.
July 2026: Supply Chain Audit Challenge
Audit an ML project's entire supply chain for security issues including dependencies, model provenance, data pipelines, training infrastructure, and deployment artifacts.
Data Provenance Attacks
Compromising training data integrity by attacking provenance tracking systems, falsifying data lineage, and manipulating data pipeline metadata.
Data Provenance and Lineage
Tracking data through ML pipelines, detecting contamination, verifying data integrity, and implementing provenance systems for training data security.
Poisoning Fine-Tuning Datasets
Techniques for inserting backdoor triggers into fine-tuning datasets, clean-label poisoning that evades content filters, and scaling attacks across dataset sizes -- how adversarial training data compromises model behavior.
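The trigger-insertion idea described above can be illustrated with a toy sketch. The function name, trigger format, and outright response replacement are illustrative assumptions; real clean-label poisoning is far subtler precisely so it evades the content filters the entry mentions.

```python
import random

def insert_backdoor(dataset, trigger, target_output, rate=0.01, seed=0):
    """Toy backdoor poisoning: a small fraction of (prompt, response) pairs
    get a trigger phrase appended to the prompt and their response replaced
    with attacker-chosen text. Deterministic given the seed."""
    rng = random.Random(seed)
    poisoned = []
    for prompt, response in dataset:
        if rng.random() < rate:
            poisoned.append((f"{prompt} {trigger}", target_output))
        else:
            poisoned.append((prompt, response))
    return poisoned
```

Even at a 1% poisoning rate, a model fine-tuned on such data can learn a reliable trigger-to-behavior association, which is why the scaling question in the entry above matters.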
Malicious Adapter Injection
How attackers craft LoRA adapters containing backdoors, distribute poisoned adapters through model hubs, and exploit adapter stacking to compromise model safety -- techniques, detection challenges, and real-world supply chain risks.
Repository Poisoning for Code Models
Techniques for poisoning code repositories to influence code generation models, including training data poisoning through popular repositories, backdoor injection in open-source dependencies, and supply chain attacks targeting code model training pipelines.
Code Suggestion Poisoning (Frontier Research)
Poisoning training data and package ecosystems to influence AI code suggestions: insecure pattern seeding, package name confusion, trojan code injection, and supply chain risks.
Open-Source Model Governance
Governance frameworks for organizations using open-source AI models including security vetting and supply chain risks.
AI Supply Chain Governance
Governance frameworks for managing risks from third-party models, training data, and AI service dependencies.
AI Supply Chain Governance (Governance Compliance)
Governance frameworks for managing AI supply chain risks including model providers, data sources, and integrations.
Supply Chain AI Security
Security of AI-powered supply chain management, demand forecasting, and logistics optimization systems.
Food Safety AI Threats
Threat analysis for AI in food safety including supply chain monitoring, quality inspection, and recall prediction.
AI Infrastructure Security
Overview of security concerns in AI infrastructure, covering model supply chains, API security, deployment architecture, and the unique attack surfaces of ML systems.
Deep Supply Chain Analysis
Comprehensive analysis of the AI supply chain dependency tree covering model weights, tokenizers, datasets, libraries, and infrastructure components with audit methodology.
Attacking ML CI/CD Pipelines
Advanced techniques for compromising ML continuous integration and deployment pipelines, including pipeline injection, artifact tampering, training job hijacking, and exploiting the unique trust boundaries in automated ML workflows.
ML Pipeline Supply Chain Security
Securing the ML pipeline supply chain from training framework dependencies to serving infrastructure components.
Security of Dynamic Model Loading in Production
Analyzing risks of hot-swapping, dynamic loading, and A/B testing of ML models in production serving infrastructure.
Poisoning Model Registries
Advanced techniques for attacking model registries like MLflow, Weights & Biases, and Hugging Face Hub, including model replacement attacks, metadata manipulation, artifact poisoning, and supply chain compromise through registry infrastructure.
Model Serialization Attacks
Pickle, SafeTensors, and ONNX deserialization attacks targeting ML model files for arbitrary code execution.
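The pickle risk noted above comes from opcodes that invoke callables during deserialization. A static scan with the standard-library `pickletools` can flag them before a file is ever loaded; the opcode set below is an illustrative subset, not an exhaustive detection rule.

```python
import io
import pickle
import pickletools

# Opcodes that resolve or invoke callables during unpickling; their presence
# means loading the file can execute arbitrary code.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def risky_opcodes(data: bytes) -> set[str]:
    """Statically list code-execution opcodes in a pickle stream,
    without ever deserializing it."""
    return {op.name for op, _, _ in pickletools.genops(io.BytesIO(data))
            if op.name in SUSPICIOUS}
```

A pickle of plain data (e.g. `pickle.dumps({"a": 1})`) contains none of these opcodes, while a payload built via `__reduce__` will show `REDUCE` plus a global lookup. Formats like SafeTensors avoid the problem by design, which is why the entries above treat pickle files as the highest-risk serialization format.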
Model Supply Chain Risks
Attack vectors in the AI model supply chain, including malicious model files, pickle exploits, compromised model registries, and dependency vulnerabilities.
AI Supply Chain Exploitation
Methodology for exploiting the AI/ML supply chain: model serialization RCE, dependency confusion, dataset poisoning, CI/CD injection, and container escape.
AI Supply Chain Deep Dive
Deep analysis of AI supply chain security threats including sleeper agents, slopsquatting, malicious model uploads, pickle deserialization exploits, and model provenance verification challenges.
Supply Chain Security for ML Dependencies
Securing the ML dependency supply chain including PyTorch, transformers, and model weight downloads.
Lab: Model Supply Chain Poisoning
Simulate model supply chain attacks by injecting backdoors into model weights distributed through public registries.
Supply Chain Saboteur: Level 2 — Model Registry
Compromise a model registry to inject backdoored model weights into a deployment pipeline.
CTF: Supply Chain Saboteur
Identify and exploit supply chain vulnerabilities in a model deployment pipeline. Find poisoned models, exploit malicious packages, and compromise the ML infrastructure.
CTF: Supply Chain Attack
Find and exploit vulnerabilities in an ML supply chain including compromised dependencies, poisoned models, backdoored training data, and malicious model files. Practice ML-specific supply chain security assessment.
Supply Chain Detective: Find the Backdoor
Analyze a model pipeline to find where a backdoor was inserted — data, training, or post-processing.
Lab: ML Pipeline Poisoning
Compromise an end-to-end machine learning pipeline by attacking data ingestion, preprocessing, training, evaluation, and deployment stages. Learn to identify and exploit weaknesses across the full ML lifecycle.
Lab: Model Registry Compromise
Explore techniques for compromising model registries and substituting malicious models into production pipelines. Learn to detect model tampering, verify model provenance, and secure the model supply chain.
Lab: Supply Chain Audit
Audit an ML project's dependencies for vulnerabilities, covering model files, Python packages, container images, and training data provenance.
Lab: ML Supply Chain Scan
Hands-on lab for auditing machine learning model dependencies, detecting malicious packages in ML pipelines, and scanning model files for backdoors and supply chain threats.
Simulation: Open Source AI Project Audit
Security audit simulation for an open-source AI application, covering code review, dependency analysis, model supply chain verification, and deployment configuration review.
Simulation: AI Supply Chain Attack Investigation
Investigate and respond to a supply chain compromise affecting an AI system's model weights, training data pipeline, and third-party dependencies.
AI Supply Chain Pipeline Assessment
Assess the full ML pipeline from data ingestion through model deployment for supply chain attacks.
Supply Chain Optimization AI Assessment
Assess an AI supply chain optimization system for manipulation of demand forecasts and routing decisions.
Model Signing and Provenance
Cryptographic signing for ML models: Sigstore for ML artifacts, cosign for model weights, SLSA framework applied to ML pipelines, supply chain levels for model provenance, and practical implementation of model artifact verification.
Registry-Specific Attacks
Attack techniques targeting model registries: version confusion, dependency resolution exploitation, namespace squatting, model aliasing attacks, and practical exploitation of registry trust models.
Indirect Prompt Injection
How attackers embed malicious instructions in external data sources that LLMs process, enabling attacks without direct access to the model's input.
Model Merging & LoRA Composition Exploits
Exploiting model merging techniques (TIES, DARE, linear interpolation) and LoRA composition to introduce backdoors through individually benign model components.
Model Supply Chain Attacks
Comprehensive analysis of model supply chain attack vectors from training data through deployment.
Model Checkpoint & Recovery Attacks
Checkpoint file format vulnerabilities, modification attacks on safetensors and PyTorch formats, checkpoint poisoning, storage security, and supply chain implications.
Training Loop Vulnerabilities
Attacks on the training process itself including gradient manipulation, loss function tampering, learning rate schedule attacks, and training infrastructure compromise.
Poisoning Attacks on Synthetic Training Data
Comprehensive analysis of poisoning vectors in synthetic data generation pipelines, from teacher model manipulation to post-generation filtering evasion.
Security of Training Checkpoints
Threat analysis of model checkpoint storage, serialization, and restoration including checkpoint poisoning, deserialization attacks, and integrity verification.
Model Hub Supply Chain Attack
Attacking the ML model supply chain through hub repositories like Hugging Face, including typosquatting, model poisoning, and repository manipulation techniques.
Model Serialization RCE
Remote code execution through malicious model files using pickle deserialization, safetensors manipulation, and other model serialization format vulnerabilities.
Supply Chain Prompt Injection Walkthrough
Plant injection payloads in upstream data sources consumed by LLM applications, including packages and documentation.
Full Engagement: Supply Chain AI Optimizer
End-to-end engagement for a supply chain AI with access to logistics, inventory, and supplier management systems.
Hugging Face Security Audit Walkthrough
Step-by-step walkthrough for auditing Hugging Face models: scanning for malicious model files, verifying model provenance, assessing model card completeness, and testing Spaces and Inference API security.
Hugging Face Hub Red Team Walkthrough
Walkthrough for assessing AI models on Hugging Face Hub: model security assessment, scanning for malicious models, Transformers library testing, and Spaces application evaluation.