# security
148 articlestagged with “security”
Agent & Agentic Exploitation
Security overview of autonomous AI agents, covering the expanded attack surface created by tool use, persistent memory, multi-step reasoning, and multi-agent coordination.
AutoGen Security Analysis
Security analysis of Microsoft's AutoGen framework for multi-agent conversation exploitation.
LangChain Security Deep Dive
Comprehensive security analysis of LangChain including known CVEs and exploitation patterns.
LlamaIndex Agents Security Analysis
Security assessment of LlamaIndex agent implementations including tool use, memory, and query pipeline vulnerabilities.
DSPy Security Analysis
Security analysis of the DSPy framework including prompt optimization exploitation and pipeline injection.
Haystack Pipeline Security Analysis
Security analysis of deepset Haystack RAG pipelines including component injection and data exfiltration.
smolagents Security Analysis
Security analysis of Hugging Face smolagents including code execution risks and tool trust boundaries.
MCP SSE Transport Security Analysis
Security analysis of Server-Sent Events transport in MCP including reconnection attacks and event injection.
MCP Transport Security Vulnerabilities
Analysis of security vulnerabilities in MCP transport layers including stdio, SSE, and HTTP streaming.
A2A Protocol Security Analysis
Security analysis of Google's Agent-to-Agent protocol including authentication, task delegation, and trust boundaries.
Advanced AI Security Practice Exam 1
Advanced practice exam covering agentic exploitation, training attacks, and frontier research.
Advanced AI Security Practice Exam 2
Second advanced practice exam focusing on multimodal, cloud, and pipeline security.
Agent Security Practice Exam
Practice exam focused on agentic AI security including MCP, A2A, function calling, and multi-agent threats.
Cloud AI Security Assessment (Assessment)
Assessment covering AWS Bedrock, Azure OpenAI, GCP Vertex AI security configurations and threats.
Code Generation Security Assessment
Assessment on code assistant exploitation, insecure code generation, and code review AI attacks.
Infrastructure Security Assessment
Assessment covering model serving, container security, API gateway hardening, and deployment pipeline threats.
LLMOps Security Assessment
Assessment covering model deployment security, monitoring, CI/CD pipeline hardening, and operational threats.
AI Infrastructure Security Assessment
Assessment covering model serving, API gateways, container security, and GPU isolation.
Skill Verification: Infrastructure Security
Hands-on verification of cloud and infrastructure security assessment skills for AI deployments.
Agent Security Study Guide
Comprehensive study guide for agent and agentic exploitation topics including MCP and A2A protocols.
Infrastructure Security Study Guide
Study guide for AI infrastructure security covering cloud, container, and deployment pipeline topics.
Multimodal Security Study Guide
Study guide for multimodal attack and defense topics covering image, audio, and document modalities.
Capstone: Conduct a Full Model Security Audit
Perform a comprehensive security audit of an LLM deployment covering model behavior, API security, data handling, access controls, and compliance alignment.
Capstone: Comprehensive RAG Security Assessment
Conduct a thorough security assessment of a Retrieval-Augmented Generation system, testing document poisoning, retrieval manipulation, context window attacks, and data exfiltration vectors.
Capstone: Build an AI Supply Chain Security Tool
Build a tool that scans, audits, and monitors the security of AI/ML supply chains including model provenance, dependency integrity, and artifact verification.
Domain-Specific AI Security
Overview of AI security challenges across industry verticals including healthcare, finance, autonomous vehicles, content moderation, education, and customer service. Domain-specific threat models, regulations, and testing approaches.
Notable AI Security Incidents
A comprehensive timeline and analysis of major AI security incidents, from Bing Chat jailbreaks to ChatGPT data leaks and agent exploitation in the wild. Root cause analysis and impact assessment for each incident.
LangChain & LlamaIndex Security
Security analysis of popular LLM orchestration frameworks. Common misconfigurations, known CVEs, insecure defaults, and hardening guides for LangChain, LlamaIndex, and related LLM application frameworks.
AWS Bedrock Agents Security
Security assessment of AWS Bedrock Agents including action groups, knowledge bases, and guardrail integration.
AWS Bedrock Security Guide
Comprehensive security guide for AWS Bedrock including guardrails, IAM policies, and model access controls.
AWS SageMaker Security Assessment
Security assessment of AWS SageMaker including model hosting, endpoint security, and notebook vulnerabilities.
Azure AI Studio Security Assessment
Security assessment of Azure AI Studio including prompt flow, model catalog, and deployment security.
Azure OpenAI Security Guide
Security guide for Azure OpenAI Service including content filtering, managed identity, and network isolation.
Network Isolation for Cloud AI Workloads
Implementing network isolation strategies for cloud AI deployments including private endpoints, VPC configurations, service mesh integration, and data plane segmentation for LLM inference and training workloads.
Cloud AI Prompt Caching Security
Security implications of prompt caching features in cloud AI services including cache poisoning and information leakage.
Cloud Model Endpoint Security
Securing model endpoints in cloud deployments including authentication, authorization, and traffic management.
GCP Model Garden Security
Security assessment of GCP Model Garden including model deployment, versioning, and access control.
GCP Vertex AI Security Guide
Security guide for GCP Vertex AI including model garden, endpoints, and Gemini API security.
Hugging Face Inference Endpoints Security
Security analysis of Hugging Face Inference Endpoints including model isolation and API security.
Multi-Cloud AI Security Strategy (Cloud Ai Security)
Security strategy for organizations using AI services across multiple cloud providers.
Serverless AI Security Considerations
Security considerations for AI workloads running on serverless platforms including Lambda, Cloud Functions, and Azure Functions.
Autonomous Coding Agent Security
Security analysis of autonomous coding agents like Devin, including scope creep and unintended actions.
Jupyter Notebook AI Security
Security risks of AI-powered notebook features including code completion and execution.
Synthetic Data Security Risks
Security implications of using synthetic data for model training, including inherited biases, poisoning propagation, and privacy leakage.
Input Validation Architecture for LLMs
Designing input validation pipelines that detect and neutralize prompt injection before reaching the model.
LLM Monitoring and Anomaly Detection
Building monitoring systems that detect adversarial usage patterns in LLM applications.
MCP Server Security Hardening
Hardening MCP server implementations against tool poisoning, transport attacks, and privilege escalation.
Output Sanitization Patterns
Patterns for sanitizing LLM outputs to prevent information leakage and harmful content delivery.
RAG System Security Hardening
Comprehensive guide to hardening RAG systems against poisoning, injection, and data exfiltration.
Rate Limiting and Abuse Prevention
Implementing rate limiting and abuse prevention for LLM API endpoints and applications.
Multi-Tenant Isolation for LLM Services
Implementing strong tenant isolation in multi-tenant LLM services to prevent cross-tenant attacks.
Model Merging Security Analysis
Security implications of model merging techniques (TIES, DARE, SLERP) including backdoor propagation and safety property degradation.
Prefix Tuning Security Analysis
Security implications of prefix tuning and soft prompt approaches, including vulnerability to extraction, manipulation, and adversarial optimization.
QLoRA Security Implications
Security implications of quantized LoRA fine-tuning including precision-related vulnerability introduction.
The AI API Ecosystem
A red teamer's guide to the AI API landscape — OpenAI, Anthropic, Google, AWS, Azure, open-source APIs, authentication patterns, and common security misconfigurations.
AI Deployment Patterns and Security Implications
How API-based, self-hosted, edge, and hybrid deployment patterns each create distinct security considerations and attack surfaces for AI systems.
Attention Mechanisms and Security
How attention mechanisms work and their role in enabling prompt injection attacks.
Deployment Patterns and Security
Common LLM deployment patterns (API, self-hosted, edge) and their distinct security properties and attack surfaces.
LLM Deployment Patterns and Security
Common LLM deployment patterns and their security implications including direct API, RAG, agent, and pipeline architectures.
LLM Security Threat Model
Comprehensive threat model for LLM-powered applications covering all attack surfaces and threat actors.
LLM Trust Boundaries
Understanding trust boundaries in LLM applications: where data crosses privilege levels and how the lack of native trust enforcement creates attack surfaces.
Tokenization & Its Security Implications
How BPE and SentencePiece tokenizers work, and how tokenizer behavior creates exploitable attack surfaces including boundary attacks, homoglyphs, and encoding tricks.
Tokenization and Its Security Implications
How tokenization works and why it creates security-relevant behaviors in language models.
Code Generation Model Attacks
Overview of security risks in AI-powered code generation: Copilot, Cursor, code completion models, IDE integration attack surfaces, and code-specific exploitation techniques.
Security Implications of Emergent Capabilities
How emergent capabilities in frontier models create new and unpredictable security risks.
Model Merging Security Implications
Security analysis of model merging techniques and potential for backdoor propagation through merged models.
Mechanistic Interpretability for Security
Understanding model circuits to find vulnerabilities: feature identification, circuit analysis, attention pattern exploitation, and using mechanistic interpretability for offensive and defensive AI security.
Representation Engineering for Security (Frontier Research)
Using representation engineering for security analysis, behavior modification, and vulnerability detection.
Cross-Lingual Transfer and Security
Research on how cross-lingual transfer affects safety training and creates exploitable multilingual gaps.
Long-Context Window Security Research
Security research on vulnerabilities specific to models with extremely long context windows (1M+ tokens).
Model Collapse and Security Implications
Security implications of model collapse from training on AI-generated data in iterative training loops.
Neural Scaling Laws and Security Properties
How neural scaling laws affect the security properties of language models as they grow larger.
Sparse Attention Mechanism Security
Security implications of sparse and efficient attention mechanisms used in modern frontier models.
Machine Unlearning Security Research
Research on attacks against machine unlearning methods and verification of knowledge removal.
AI Security Frameworks Overview
Landscape of AI security frameworks including OWASP LLM Top 10, MITRE ATLAS, NIST AI RMF, and EU AI Act. How they relate, which to use when, and gap analysis.
Energy and Utilities AI Security
AI security in energy and utilities including grid management, predictive maintenance, and smart meters.
Government AI Security Requirements
Security requirements for AI systems in government settings including FedRAMP and classification considerations.
HR and Workforce AI Security
Security analysis of AI in HR including performance evaluation, workforce planning, and employee chatbots.
Manufacturing AI Security
Security considerations for AI in manufacturing including quality control, predictive maintenance, and robotics.
Telecommunications AI Security (Industry Verticals)
Security considerations for AI in telecommunications including network optimization, fraud detection, and customer service.
API Gateway Security for AI Services
Securing API gateways for AI services including authentication, rate limiting, and request validation.
Container Security for ML Workloads
Securing containerized ML workloads including Docker images, Kubernetes pods, and GPU isolation.
Distributed Training Security
Security considerations for distributed model training across multiple nodes and data centers.
Edge AI Deployment Security
Security challenges and mitigations for deploying AI models at the edge on resource-constrained devices.
GPU Cluster Security
Securing GPU clusters used for model training and inference against unauthorized access and data leakage.
Integration & Framework Security
Security analysis of AI integration frameworks including LangChain, LlamaIndex, and Semantic Kernel, covering common vulnerability patterns and exploitation techniques.
Deep Supply Chain Analysis
Comprehensive analysis of the AI supply chain dependency tree covering model weights, tokenizers, datasets, libraries, and infrastructure components with audit methodology.
ML Data Lake Security
Securing data lakes used for ML training data including access controls, encryption, lineage tracking, and poisoning prevention.
ML Experiment Infrastructure Security
Securing ML experimentation infrastructure including notebook servers, experiment trackers, and shared development environments.
ML Pipeline CI/CD Security
Securing ML training and deployment pipelines including GitHub Actions, Kubeflow, and MLflow.
Model Artifact Security
Securing model artifacts throughout the lifecycle including signing, verification, storage encryption, and tamper detection.
Model Registry Security
Securing model registries and artifact stores against tampering, poisoning, and unauthorized access.
Multi-Cloud ML Security
Security architecture for ML workloads spanning multiple cloud providers including identity federation, data sovereignty, and policy consistency.
Network Security for AI Deployments
Network security architecture for AI deployments including segmentation, encryption, and traffic analysis.
Serverless ML Security
Security considerations for serverless ML deployments including cold start attacks, function injection, and ephemeral storage risks.
Vector Database Security
Security hardening for vector databases including Pinecone, Weaviate, Chroma, and pgvector.
Attention Pattern Analysis for Security
Using attention maps to understand and exploit model behavior, identifying security-relevant attention patterns, and leveraging attention mechanics for red team operations.
Lab: Cloud AI Assessment
Hands-on lab for conducting an end-to-end security assessment of a cloud-deployed AI system including infrastructure review, API testing, model security evaluation, and data flow analysis.
Lab: Representation Engineering for Security
Use representation engineering to analyze and manipulate internal model representations for security research.
Lab: API Key Security
Learn common API key exposure vectors, secure key management with .env files, detect keys in git history, implement key rotation, and apply least-privilege principles.
Embedding Basics for Security
Understand text embeddings and their security relevance by generating, comparing, and manipulating embedding vectors.
Model Security Comparison Lab
Compare the security posture of different LLM models by running identical test suites across providers.
Lab: Build Agent Security Scanner
Build an automated security scanner for agentic AI systems that detects vulnerabilities in tool use, permission handling, memory management, and multi-step execution flows. Cover agent-specific attack surfaces that traditional LLM testing misses.
Lab: Supply Chain Audit
Audit an ML project's dependencies for vulnerabilities, covering model files, Python packages, container images, and training data provenance.
Lab: ML Supply Chain Scan
Hands-on lab for auditing machine learning model dependencies, detecting malicious packages in ML pipelines, and scanning model files for backdoors and supply chain threats.
Code Review Assistant Assessment
Test a code review AI for vulnerabilities in code analysis, suggestion generation, and repository access.
Cyber Threat Intelligence AI Assessment
Red team a cyber threat intelligence AI that processes IOCs, threat reports, and attack attribution.
A/B Testing Security Implications
Security implications of A/B testing AI models including differential behavior exploitation.
AI Observability for Security
Using observability platforms to detect security anomalies in AI system behavior.
Continuous Training Security
Securing continuous and online learning systems against adversarial data injection and model drift manipulation.
Feature Store Security
Securing feature stores used in ML pipelines against poisoning and unauthorized access.
Kubernetes ML Operator Security
Security analysis of Kubernetes-based ML operators (KServe, Seldon, Ray) including privilege escalation, resource manipulation, and cross-tenant attacks.
ML Experiment Tracking Security
Securing experiment tracking systems like MLflow, Weights & Biases, and Neptune.
MLflow Security Assessment
Security assessment of MLflow deployments including tracking server vulnerabilities, artifact store exploitation, and model registry attacks.
Model Deployment Security
Security best practices for deploying LLMs to production environments.
Model Gateway Security Patterns
Security patterns for centralized model gateway deployments including authentication, authorization, and auditing.
Model Rollback Security
Security implications of model rollback procedures including exposure windows and state consistency.
Model Serving Security Hardening
Best practices for securing model serving infrastructure including endpoint hardening, authentication, rate limiting, and output validation.
Model Versioning Security
Securing model version management including rollback safety and version validation.
Prompt Management Security
Securing prompt templates, system prompts, and prompt management infrastructure.
Prompt Template Versioning Security
Securing prompt template version management against unauthorized modifications and injection.
Claude Architecture Security Analysis
Deep security analysis of Claude's architecture including extended thinking, tool use, and safety mechanisms.
Distillation Security Analysis
Security implications of knowledge distillation including backdoor transfer, capability extraction, and safety property degradation in student models.
Gemini Architecture Security Analysis
Deep security analysis of Gemini's native multimodal architecture and long-context capabilities.
GPT-4 Architecture Security Analysis
Deep security analysis of GPT-4's architecture including function calling, vision, and safety layers.
Phi Models Security Analysis
Security analysis of Microsoft's Phi small language model family including safety vs capability tradeoffs.
Qwen Architecture Security
In-depth security assessment of Alibaba's Qwen model family including architecture-specific vulnerabilities and cross-language attack surfaces.
Qwen Models Security Analysis
Security analysis of Alibaba's Qwen model family including multilingual safety considerations.
Yi Model Security Assessment
Security analysis of 01.AI's Yi models focusing on bilingual capabilities, training data implications, and comparative safety properties.
AI Security Training Program Design
Designing and delivering AI security training programs for development and security teams.
Vendor Selection for AI Security Tools
Framework for evaluating and selecting AI security testing tools and services.
LLM Security Checklist
Comprehensive security checklist for LLM-powered applications covering input validation, prompt hardening, output filtering, tool security, RAG pipelines, and incident response.
Model API Security Reference
Security reference for major model APIs including authentication, rate limits, and safety features.
Advanced OPSEC for AI Red Teams
Advanced operational security practices for AI red team engagements including traffic obfuscation, attribution prevention, and covert testing.
Model Merging Security Analysis (Training Pipeline)
Security analysis of model merging techniques and propagation of vulnerabilities through merged models.
Security of RLHF: Reward Hacking and Reward Model Attacks
Comprehensive analysis of security vulnerabilities in RLHF pipelines, including reward hacking, reward model poisoning, and preference manipulation attacks.
Transfer Learning Security Analysis
Security implications of transfer learning including inherited vulnerabilities and cross-domain attack transfer.
Model Hub Supply Chain Attack
Attacking the ML model supply chain through hub repositories like Hugging Face, including typosquatting, model poisoning, and repository manipulation techniques.
Model Serialization RCE
Remote code execution through malicious model files using pickle deserialization, safetensors manipulation, and other model serialization format vulnerabilities.
Sandboxed Tool Execution
Step-by-step walkthrough for running LLM tool calls in isolated sandboxes, covering container-based isolation, resource limits, network restrictions, and output sanitization.
Session Isolation Patterns
Step-by-step walkthrough for isolating user sessions in LLM applications to prevent cross-contamination of context, memory, and permissions between users.
AI Security Threat Intelligence
Build a threat intelligence pipeline for staying current with AI security threats and attack techniques.
Full Engagement: AI Code Assistant
End-to-end engagement for assessing an AI-powered code assistant with repository access.
Full Engagement: AI Security Copilot
Red team engagement of an AI security copilot with access to SIEM, vulnerability scanners, and threat intelligence.
AI Security Metrics Framework
Framework for measuring and reporting on AI security posture using quantitative metrics.
Competitive Analysis of AI Security Tools
Methodology for evaluating and comparing AI security tools for red team operations.
AI Security Tabletop Exercises
Designing and facilitating tabletop exercises focused on AI security incident scenarios.
MCP Security Audit Tool
Build a tool for auditing MCP server implementations for common security vulnerabilities and misconfigurations.