# deployment
47 articles tagged "deployment"
Infrastructure Security Assessment
Test your knowledge of AI infrastructure security including model serving, API security, deployment architectures, and supply chain risks with 10 intermediate-level questions.
CI/CD Pipeline AI Risks
Security implications of integrating AI into CI/CD pipelines — covering AI-powered code generation in builds, automated testing risks, deployment decision manipulation, and pipeline hardening.
June 2026: Cloud AI Security Challenge
Find and document vulnerabilities in a cloud-deployed AI service covering API security, model serving infrastructure, authentication, and data handling.
AI Deployment Patterns and Security Implications
How API-based, self-hosted, edge, and hybrid deployment patterns each create distinct security considerations and attack surfaces for AI systems.
Deployment Patterns and Security
Common LLM deployment patterns (API, self-hosted, edge) and their distinct security properties and attack surfaces.
LLM Deployment Patterns and Security
Common LLM deployment patterns and their security implications including direct API, RAG, agent, and pipeline architectures.
Common AI Deployment Patterns & Attack Surfaces
Analysis of deployment patterns — chatbots, copilots, autonomous agents, batch processing, and fine-tuned models — with their unique attack surfaces and security considerations.
AI System Architecture for Red Teamers
How AI systems are deployed in production — model API, prompt templates, orchestration, tools, memory, and guardrails — with attack surface analysis at each layer.
Post-Deployment Safety Degradation
Research on how model safety degrades over time through fine-tuning, adaptation, and use-case drift.
Quantization & Safety Alignment
How model quantization disproportionately degrades safety alignment: malicious quantization attacks, token-flipping, and safety-aware quantization defenses.
AI Model Governance Lifecycle
Governance processes for the complete AI model lifecycle from procurement through retirement.
Attacking AI Deployments
Security assessment of AI deployment infrastructure, including container escapes, GPU side channels, inference server vulnerabilities, and resource exhaustion attacks.
Edge AI Deployment Security
Security challenges and mitigations for deploying AI models at the edge on resource-constrained devices.
Edge ML Deployment Security
Security challenges of deploying ML models at the edge including model extraction, update tampering, and physical access attacks.
AI Infrastructure Security
Overview of security concerns in AI infrastructure, covering model supply chains, API security, deployment architecture, and the unique attack surfaces of ML systems.
Model Serving Infrastructure Attacks
Attacking model serving infrastructure including inference servers, load balancers, and GPU schedulers.
Blue-Green Deployment Attacks
Exploiting blue-green and canary deployment strategies to manipulate traffic routing and force deployment of compromised model versions.
Canary Deployments for AI Models
Implementing canary deployments that catch security regressions in AI model updates.
Deployment Pipeline Attacks
Comprehensive analysis of attack vectors in ML deployment pipelines including build system compromise, artifact tampering, and deployment manipulation.
LLMOps Security
Comprehensive overview of security across the LLMOps lifecycle: from data preparation and experiment tracking through model deployment and production monitoring. Attack surfaces, threat models, and defensive strategies for ML operations.
ML CI/CD Security
Security overview of ML continuous integration and deployment pipelines: how ML CI/CD differs from traditional CI/CD, unique attack surfaces in training workflows, and the security implications of automated model building and deployment.
Model Deployment Security
Security best practices for deploying LLMs to production environments.
LLM Honeypot Deployment
Deploy LLM honeypots to detect and study attacker behavior patterns and techniques.
LLM Guard Deployment and Testing
Deploy LLM Guard for input/output scanning and test its effectiveness against common attacks.