# supply-chain
177 articles tagged "supply-chain"
Agent Supply Chain Attacks
Compromising AI agents through poisoned packages, backdoored MCP servers, malicious model registries, and weaponized agent frameworks -- including the Postmark MCP breach and NullBulge campaigns.
MCP Supply Chain Security: Defending Against Backdoored MCP Packages
A defense-focused guide to securing the MCP package supply chain -- analyzing the Postmark MCP breach, understanding how malicious MCP servers are distributed, and implementing package verification, dependency scanning, and policy enforcement.
AI Supply Chain Incident Response
Incident response procedures for compromises in the AI supply chain, including model repositories, training pipelines, and dependency chains.
Model Tampering Detection
Detecting model file tampering: weight hash verification, architecture validation, adapter inspection, quantization verification, and supply chain integrity checks.
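The weight-hash verification this guide covers reduces to comparing streamed digests against a pinned manifest. A minimal sketch, assuming a simple filename-to-digest manifest captured when the model was first vetted (the layout is illustrative, not any specific tool's format):

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB weight files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_weights(model_dir: Path, manifest: dict[str, str]) -> list[str]:
    """Return the names of files whose digest no longer matches the manifest."""
    tampered = []
    for name, expected in manifest.items():
        if sha256_file(model_dir / name) != expected:
            tampered.append(name)
    return tampered
```

Run the check at load time and refuse to serve any model with a non-empty result.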
Model Compromise Incident Response Playbook
Playbook for responding to a compromised AI model: isolation procedures, replacement strategies, supply chain investigation, retraining considerations, and integrity restoration.
Training Data Provenance Forensics
Forensic techniques for tracing the origins, lineage, and integrity of training data used in machine learning models.
Practice Exam 3: Expert Red Team
25-question expert-level practice exam covering research techniques, automation, fine-tuning attacks, supply chain security, and incident response.
Infrastructure Security Assessment (Assessment)
Test your knowledge of AI infrastructure security including model serving, API security, deployment architectures, and supply chain risks with 10 intermediate-level questions.
Model Supply Chain Assessment
Assessment covering model provenance, checkpoint manipulation, and third-party model risks.
Capstone: Security Audit of an Open-Source LLM
Conduct a comprehensive security audit of an open-source large language model, covering model weights integrity, safety alignment evaluation, supply chain verification, and adversarial robustness testing.
Capstone: Supply Chain AI Security
Red team assessment of AI-driven supply chain optimization covering data poisoning, decision manipulation, and operational disruption.
Capstone: Build an AI Supply Chain Security Tool
Build a tool that scans, audits, and monitors the security of AI/ML supply chains including model provenance, dependency integrity, and artifact verification.
Case Study: GitHub Copilot Code Injection
Analysis of prompt injection vulnerabilities in GitHub Copilot through malicious repository content.
Case Study: GitHub Copilot Generating Vulnerable Code
Analysis of research findings demonstrating that GitHub Copilot and similar AI code assistants systematically generate code containing security vulnerabilities, and the implications for software supply chain security.
Case Study: LangChain Remote Code Execution Vulnerabilities (CVE-2023-29374 and CVE-2023-36258)
Technical analysis of critical remote code execution vulnerabilities in LangChain's LLMMathChain and PALChain components that allowed arbitrary Python execution through crafted LLM outputs.
Case Study: Training Data Poisoning in Code Generation Models
Analysis of training data poisoning attacks targeting code generation models like GitHub Copilot and OpenAI Codex, where adversarial code patterns in training data cause models to suggest vulnerable or malicious code.
Security Risks of Cloud AI Model Marketplaces
Assessing security risks in cloud AI model marketplaces including AWS Bedrock Model Garden, Azure AI Model Catalog, GCP Vertex AI Model Garden, and Hugging Face Hub, covering supply chain attacks, trojan models, and verification gaps.
Cloud Model Registry Security
Security of cloud model registries including SageMaker Model Registry, Azure ML Registry, and Vertex AI Model Registry.
Model Garden Risks
Security risks of deploying models from GCP Model Garden: third-party model trust, model provenance verification, deployment from untrusted sources, and supply chain attack vectors.
AI-Generated Dependency Confusion
Exploiting LLMs' tendency to hallucinate package names for dependency confusion attacks.
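One defense against hallucinated dependencies is to gate installs on a vetted allowlist, comparing names under PEP 503 normalization so casing and separator tricks don't slip through. A minimal sketch (the allowlist source is an assumption; in practice it would come from your lockfile):

```python
import re

def normalize(name: str) -> str:
    """PEP 503 package-name normalization, so 'Foo_Bar' matches 'foo-bar'."""
    return re.sub(r"[-_.]+", "-", name).lower()

def unvetted_packages(suggested: list[str], allowlist: set[str]) -> list[str]:
    """Return AI-suggested dependencies absent from the vetted allowlist --
    candidates for hallucinated names an attacker may pre-register."""
    vetted = {normalize(n) for n in allowlist}
    return [pkg for pkg in suggested if normalize(pkg) not in vetted]
```

Anything flagged here should be treated as unreviewed until a human confirms the package exists and is the intended one.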
IDE Extension Attacks
Attack surface analysis for IDE extensions: malicious extensions, extension-to-extension communication, telemetry exfiltration, and workspace trust exploitation.
Supply Chain Risks in AI Code Generation
Analysis of supply chain attack vectors introduced by AI code generation tools, including dependency confusion, typosquatting, and training data poisoning.
Dependency Suggestion Attacks
Manipulating AI coding assistants to suggest malicious dependencies, typosquatted packages, or vulnerable library versions.
Code Generation Security
How AI coding assistants introduce security vulnerabilities through suggestion poisoning, training data extraction, insecure code generation, and IDE extension risks.
LLM-Generated Dockerfile Security
Analyzing security vulnerabilities commonly introduced by AI-generated Dockerfiles and container configurations.
Code Suggestion Poisoning
Overview of attacks that manipulate AI coding assistant suggestions through training data poisoning and inference-time context manipulation.
Training Data Attacks on Code Models
Poisoning training data for code generation models: inserting vulnerable patterns into popular repositories, dependency confusion via suggestions, and trojan code patterns.
Repository Context Poisoning
Poisoning repository context (README, comments, issues) to influence code generation behavior.
July 2026: Supply Chain Audit Challenge
Audit an ML project's entire supply chain for security issues including dependencies, model provenance, data pipelines, training infrastructure, and deployment artifacts.
Data Provenance Attacks
Compromising training data integrity by attacking provenance tracking systems, falsifying data lineage, and manipulating data pipeline metadata.
Data Provenance and Lineage
Tracking data through ML pipelines, detecting contamination, verifying data integrity, and implementing provenance systems for training data security.
Poisoning Fine-Tuning Datasets
Techniques for inserting backdoor triggers into fine-tuning datasets, clean-label poisoning that evades content filters, and scaling attacks across dataset sizes -- how adversarial training data compromises model behavior.
Malicious Adapter Injection
How attackers craft LoRA adapters containing backdoors, distribute poisoned adapters through model hubs, and exploit adapter stacking to compromise model safety -- techniques, detection challenges, and real-world supply chain risks.
Repository Poisoning for Code Models
Techniques for poisoning code repositories to influence code generation models, including training data poisoning through popular repositories, backdoor injection in open-source dependencies, and supply chain attacks targeting code model training pipelines.
Code Suggestion Poisoning (Frontier Research)
Poisoning training data and package ecosystems to influence AI code suggestions: insecure pattern seeding, package name confusion, trojan code injection, and supply chain risks.
Open-Source Model Governance
Governance frameworks for organizations using open-source AI models including security vetting and supply chain risks.
AI Supply Chain Governance
Governance frameworks for managing risks from third-party models, training data, and AI service dependencies.
AI Supply Chain Governance (Governance Compliance)
Governance frameworks for managing AI supply chain risks including model providers, data sources, and integrations.
Supply Chain AI Security
Security of AI-powered supply chain management, demand forecasting, and logistics optimization systems.
Food Safety AI Threats
Threat analysis for AI in food safety including supply chain monitoring, quality inspection, and recall prediction.
AI Supply Chain Incident Response
Defense-focused guide to responding to AI supply chain compromises, covering incident response playbooks, model tampering detection, rollback procedures, communication templates, and automated integrity monitoring.
AI Supply Chain Security Overview
Comprehensive overview of the AI/ML supply chain attack surface, covering model poisoning, data poisoning, dependency attacks, and risk assessment frameworks aligned with OWASP LLM03:2025.
Dependency Scanning for AI/ML
Defense-focused guide to scanning AI/ML dependencies for vulnerabilities, covering AI-specific dependency risks, malicious package detection, automated scanning pipelines, and policy enforcement for ML toolchains.
AI Infrastructure Security
Overview of security concerns in AI infrastructure, covering model supply chains, API security, deployment architecture, and the unique attack surfaces of ML systems.
Deep Supply Chain Analysis
Comprehensive analysis of the AI supply chain dependency tree covering model weights, tokenizers, datasets, libraries, and infrastructure components with audit methodology.
Attacking ML CI/CD Pipelines
Advanced techniques for compromising ML continuous integration and deployment pipelines, including pipeline injection, artifact tampering, training job hijacking, and exploiting the unique trust boundaries in automated ML workflows.
ML Pipeline Security
Defense-focused guide to securing ML training and deployment pipelines, covering CI/CD cross-tenant attacks, safetensors conversion hijacking, pipeline hardening, and isolated build environments.
ML Pipeline Supply Chain Security
Securing the ML pipeline supply chain from training framework dependencies to serving infrastructure components.
Security of Dynamic Model Loading in Production
Analyzing risks of hot-swapping, dynamic loading, and A/B testing of ML models in production serving infrastructure.
Poisoning Model Registries
Advanced techniques for attacking model registries like MLflow, Weights & Biases, and Hugging Face Hub, including model replacement attacks, metadata manipulation, artifact poisoning, and supply chain compromise through registry infrastructure.
Model Repository Security
Defense-focused guide to securing model downloads from public repositories like Hugging Face, covering backdoored model detection, namespace attacks, signature verification, and safe download procedures.
Model Serialization Attacks
Pickle, SafeTensors, and ONNX deserialization attacks targeting ML model files for arbitrary code execution.
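The pickle risk named above can be screened statically: `pickletools` disassembles a pickle stream without executing it, so you can flag the opcodes that import and call arbitrary callables before ever loading a model file. A minimal sketch of that idea:

```python
import io
import pickletools

# Opcodes that let a pickle import classes or invoke callables on load.
DANGEROUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data: bytes) -> set[str]:
    """Statically list dangerous opcodes in a pickle stream without loading it."""
    return {op.name for op, arg, pos in pickletools.genops(io.BytesIO(data))
            if op.name in DANGEROUS_OPS}
```

An empty result means the stream only builds plain data; any hit means the file can execute code when unpickled and should be rejected or sandboxed.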
Model Signing and Verification
Defense-focused guide to implementing cryptographic model signing and verification, covering Sigstore for ML, certificate management, SBOM generation for AI systems, and deployment-time verification workflows.
Model Supply Chain Risks
Attack vectors in the AI model supply chain, including malicious model files, pickle exploits, compromised model registries, and dependency vulnerabilities.
AI Supply Chain Exploitation
Methodology for exploiting the AI/ML supply chain: model serialization RCE, dependency confusion, dataset poisoning, CI/CD injection, and container escape.
AI Supply Chain Deep Dive
Deep analysis of AI supply chain security threats including sleeper agents, slopsquatting, malicious model uploads, pickle deserialization exploits, and model provenance verification challenges.
Supply Chain Security for ML Dependencies
Securing the ML dependency supply chain including PyTorch, transformers, and model weight downloads.
Training Data Integrity
Defense-focused guide to ensuring training data has not been poisoned, covering label flipping, backdoor insertion, clean-label attacks, data validation pipelines, provenance tracking, and anomaly detection.
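One of the cheapest label-flipping signals mentioned above is an input that appears multiple times with different labels. A sketch of that consistency check, assuming a simple (text, label) dataset shape for illustration:

```python
from collections import defaultdict

def conflicting_labels(samples: list[tuple[str, str]]) -> dict[str, set[str]]:
    """Find inputs that recur with different labels -- a cheap signal
    for label flipping in a possibly poisoned dataset."""
    seen: dict[str, set[str]] = defaultdict(set)
    for text, label in samples:
        seen[text].add(label)
    return {text: labels for text, labels in seen.items() if len(labels) > 1}
```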
Trojan Model Detection
Defense-focused guide to detecting backdoored and trojan AI models, covering BadEdit, TrojanPuzzle, PoisonGPT techniques and practical detection methods including activation analysis, weight inspection, and behavioral testing.
Lab: Model Supply Chain Poisoning
Simulate model supply chain attacks by injecting backdoors into model weights distributed through public registries.
Supply Chain Saboteur: Level 2 — Model Registry
Compromise a model registry to inject backdoored model weights into a deployment pipeline.
CTF: Supply Chain Saboteur
Identify and exploit supply chain vulnerabilities in a model deployment pipeline. Find poisoned models, exploit malicious packages, and compromise the ML infrastructure.
CTF: Supply Chain Attack
Find and exploit vulnerabilities in an ML supply chain including compromised dependencies, poisoned models, backdoored training data, and malicious model files. Practice ML-specific supply chain security assessment.
Supply Chain Detective: Find the Backdoor
Analyze a model pipeline to find where a backdoor was inserted — data, training, or post-processing.
Lab: ML Pipeline Poisoning
Compromise an end-to-end machine learning pipeline by attacking data ingestion, preprocessing, training, evaluation, and deployment stages. Learn to identify and exploit weaknesses across the full ML lifecycle.
Lab: Model Registry Compromise
Explore techniques for compromising model registries and substituting malicious models into production pipelines. Learn to detect model tampering, verify model provenance, and secure the model supply chain.
Lab: Supply Chain Audit
Audit an ML project's dependencies for vulnerabilities, covering model files, Python packages, container images, and training data provenance.
Lab: ML Supply Chain Scan
Hands-on lab for auditing machine learning model dependencies, detecting malicious packages in ML pipelines, and scanning model files for backdoors and supply chain threats.
Simulation: Open Source AI Project Audit
Security audit simulation for an open-source AI application, covering code review, dependency analysis, model supply chain verification, and deployment configuration review.
Simulation: AI Supply Chain Attack Investigation
Investigate and respond to a supply chain compromise affecting an AI system's model weights, training data pipeline, and third-party dependencies.
AI Supply Chain Pipeline Assessment
Assess the full ML pipeline from data ingestion through model deployment for supply chain attacks.
Supply Chain Optimization AI Assessment
Assess an AI supply chain optimization system for manipulation of demand forecasts and routing decisions.
Model Signing and Provenance
Cryptographic signing for ML models: Sigstore for ML artifacts, cosign for model weights, SLSA framework applied to ML pipelines, supply chain levels for model provenance, and practical implementation of model artifact verification.
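The sign-then-verify flow that Sigstore and cosign implement can be illustrated end to end with a stdlib HMAC stand-in; this is not a substitute for the asymmetric, transparency-logged signing those tools provide, just a sketch of the verification contract:

```python
import hashlib
import hmac

def sign_digest(weights: bytes, key: bytes) -> str:
    """Sign the SHA-256 digest of a weights blob with an HMAC key.
    A toy stand-in for real artifact signing (Sigstore/cosign use keypairs)."""
    digest = hashlib.sha256(weights).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_digest(weights: bytes, key: bytes, signature: str) -> bool:
    """Constant-time check that a weights blob matches its recorded signature."""
    return hmac.compare_digest(sign_digest(weights, key), signature)
```

The deployment-time rule is the same regardless of scheme: weights whose signature fails to verify never reach serving.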
Registry-Specific Attacks
Attack techniques targeting model registries: version confusion, dependency resolution exploitation, namespace squatting, model aliasing attacks, and practical exploitation of registry trust models.
Indirect Prompt Injection
How attackers embed malicious instructions in external data sources that LLMs process, enabling attacks without direct access to the model's input.
Model Merging & LoRA Composition Exploits
Exploiting model merging techniques (TIES, DARE, linear interpolation) and LoRA composition to introduce backdoors through individually benign model components.
Model Supply Chain Attacks
Comprehensive analysis of model supply chain attack vectors from training data through deployment.
Model Checkpoint & Recovery Attacks
Checkpoint file format vulnerabilities, modification attacks on safetensors and PyTorch formats, checkpoint poisoning, storage security, and supply chain implications.
Training Loop Vulnerabilities
Attacks on the training process itself including gradient manipulation, loss function tampering, learning rate schedule attacks, and training infrastructure compromise.
Poisoning Attacks on Synthetic Training Data
Comprehensive analysis of poisoning vectors in synthetic data generation pipelines, from teacher model manipulation to post-generation filtering evasion.
Security of Training Checkpoints
Threat analysis of model checkpoint storage, serialization, and restoration including checkpoint poisoning, deserialization attacks, and integrity verification.
Model Hub Supply Chain Attack
Attacking the ML model supply chain through hub repositories like Hugging Face, including typosquatting, model poisoning, and repository manipulation techniques.
Model Serialization RCE
Remote code execution through malicious model files using pickle deserialization, safetensors manipulation, and other model serialization format vulnerabilities.
Supply Chain Prompt Injection Walkthrough
Plant injection payloads in upstream data sources consumed by LLM applications including packages and documentation.
Full Engagement: Supply Chain AI Optimizer
End-to-end engagement for a supply chain AI with access to logistics, inventory, and supplier management systems.
Hugging Face Security Audit Walkthrough
Step-by-step walkthrough for auditing Hugging Face models: scanning for malicious model files, verifying model provenance, assessing model card completeness, and testing Spaces and Inference API security.
Hugging Face Hub Red Team Walkthrough
Walkthrough for assessing AI models on Hugging Face Hub: model security assessment, scanning for malicious models, Transformers library testing, and Spaces application evaluation.
Chapter Assessment: Infrastructure
15-question calibrated assessment testing your understanding of AI infrastructure security -- supply chain, API security, cloud deployment, and model serving.
Just 250 Poisoned Documents: Anthropic's Data Poisoning Breakthrough
Anthropic, the UK AI Security Institute, and the Alan Turing Institute demonstrated that injecting just 250 malicious documents into pretraining data is enough to backdoor large language models ranging from 600 million to 13 billion parameters. This article examines the implications for model security.
OpenClaw: Dissecting the First Major AI Agent Security Crisis of 2026
How OpenClaw went from overnight sensation to GitHub's most popular project while exposing critical agentic-AI vulnerabilities -- from ClawJacked WebSocket hijacking (CVE-2026-25253) to malicious skills distributing macOS stealers. What red teamers and defenders need to know.
AI Coding Assistants
The security landscape of AI coding assistants -- attack surface, supply chain risks, and security assessment approaches for GitHub Copilot, Cursor, Claude Code, and other tools.