# deployment
24 articles tagged with “deployment”
Infrastructure Security Assessment (Assessment)
Test your knowledge of AI infrastructure security including model serving, API security, deployment architectures, and supply chain risks with 10 intermediate-level questions.
CI/CD Pipeline AI Risks
Security implications of integrating AI into CI/CD pipelines — covering AI-powered code generation in builds, automated testing risks, deployment decision manipulation, and pipeline hardening.
June 2026: Cloud AI Security Challenge
Find and document vulnerabilities in a cloud-deployed AI service covering API security, model serving infrastructure, authentication, and data handling.
AI Deployment Patterns and Security Implications
How API-based, self-hosted, edge, and hybrid deployment patterns each create distinct security considerations and attack surfaces for AI systems.
Deployment Patterns and Security
Common LLM deployment patterns (API, self-hosted, edge) and their distinct security properties and attack surfaces.
LLM Deployment Patterns and Security
Common LLM deployment patterns and their security implications including direct API, RAG, agent, and pipeline architectures.
Common AI Deployment Patterns & Attack Surfaces
Analysis of deployment patterns — chatbots, copilots, autonomous agents, batch processing, and fine-tuned models — with their unique attack surfaces and security considerations.
AI System Architecture for Red Teamers
How AI systems are deployed in production — model API, prompt templates, orchestration, tools, memory, and guardrails — with attack surface analysis at each layer.
Post-Deployment Safety Degradation
Research on how model safety degrades over time through fine-tuning, adaptation, and use-case drift.
Quantization & Safety Alignment
How model quantization disproportionately degrades safety alignment: malicious quantization attacks, token-flipping, and safety-aware quantization defenses.
AI Model Governance Lifecycle
Governance processes for the complete AI model lifecycle from procurement through retirement.
Attacking AI Deployments
Security assessment of AI deployment infrastructure, including container escapes, GPU side channels, inference server vulnerabilities, and resource exhaustion attacks.
Edge AI Deployment Security
Security challenges and mitigations for deploying AI models at the edge on resource-constrained devices.
Edge ML Deployment Security
Security challenges of deploying ML models at the edge including model extraction, update tampering, and physical access attacks.
AI Infrastructure Security
Overview of security concerns in AI infrastructure, covering model supply chains, API security, deployment architecture, and the unique attack surfaces of ML systems.
Model Serving Infrastructure Attacks
Attacking model serving infrastructure including inference servers, load balancers, and GPU schedulers.
Blue-Green Deployment Attacks
Exploiting blue-green and canary deployment strategies to manipulate traffic routing and force deployment of compromised model versions.
Canary Deployments for AI Models
Implementing canary deployments that catch security regressions in AI model updates.
Deployment Pipeline Attacks
Comprehensive analysis of attack vectors in ML deployment pipelines including build system compromise, artifact tampering, and deployment manipulation.
LLMOps Security
Comprehensive overview of security across the LLMOps lifecycle: from data preparation and experiment tracking through model deployment and production monitoring. Attack surfaces, threat models, and defensive strategies for ML operations.
ML CI/CD Security
Security overview of ML continuous integration and deployment pipelines: how ML CI/CD differs from traditional CI/CD, unique attack surfaces in training workflows, and the security implications of automated model building and deployment.
Model Deployment Security
Security best practices for deploying LLMs to production environments.
LLM Honeypot Deployment
Deploy LLM honeypots to detect and study attacker behavior patterns and techniques.
LLM Guard Deployment and Testing
Deploy LLM Guard for input/output scanning and test its effectiveness against common attacks.