# red-team
45 articlestagged with “red-team”
AI Red Team Evidence Collection
Systematic evidence collection methodologies for AI red team engagements, including artifact preservation, finding documentation, and chain of custody procedures.
Red Team Methodology Assessment (Assessment - W2)
Assessment covering scoping, attack trees, evidence collection, and professional reporting.
Full Red Team Engagement: End-to-End
Complete guide to AI red team engagements from scoping through attack execution, evidence collection, impact assessment, report delivery, and remediation validation.
AI Red Team Report Writing
Writing AI red team reports: executive summaries, finding templates, AI-adapted risk ratings, remediation recommendations, and common mistakes to avoid.
Capstone: Full Red Team Engagement
Scope, plan, execute, and report a complete AI red team engagement against a multi-component AI application including chatbot, RAG, agent, and API layers.
Bedrock Attack Surface
Comprehensive red team methodology for Amazon Bedrock: model invocation API abuse, guardrails bypass techniques, custom model endpoint exploitation, IAM misconfigurations, knowledge base poisoning, and Bedrock Agents exploitation.
AWS AI Services Security Overview
Red team methodology for AWS AI services including Bedrock, SageMaker, Comprehend, and Rekognition: service enumeration, attack surface mapping, and exploitation techniques.
SageMaker Exploitation
Red team attack methodology for Amazon SageMaker: endpoint exploitation, notebook instance attacks, training job manipulation, model artifact tampering, and VPC misconfigurations in ML workloads.
AWS Bedrock Security Deep Dive
Advanced security assessment of AWS Bedrock covering model invocation controls, guardrails bypass testing, VPC configurations, and red team methodologies for foundation model APIs.
Azure ML Exploitation
Red team attack methodology for Azure Machine Learning: workspace security, compute instance attacks, pipeline poisoning, model registry tampering, and data store exploitation.
Azure OpenAI Attack Surface
Red team methodology for Azure OpenAI Service: content filtering bypass, PTU security, deployment misconfiguration, managed identity abuse, and prompt flow exploitation.
Defender for AI Bypass
Red team techniques for understanding and bypassing Microsoft Defender for AI: detection capabilities, alert analysis, bypass strategies, coverage gaps, and alert fatigue exploitation.
Azure AI Services Security Overview
Red team methodology for Azure AI services including Azure OpenAI, Azure ML, AI Studio, and Cognitive Services: service enumeration, managed identity abuse, and attack surface mapping.
AI Cost & Billing Attacks
Red team techniques for AI cost exploitation: model invocation abuse for billing inflation, token exhaustion attacks, GPU compute abuse, auto-scaling exploitation, and denial-of-wallet attacks across cloud providers.
GCP AI Services Security Overview
Red team methodology for GCP AI services including Vertex AI, Model Garden, and AI Platform: service enumeration, service account exploitation, and attack surface mapping.
Model Garden Risks
Security risks of deploying models from GCP Model Garden: third-party model trust, model provenance verification, deployment from untrusted sources, and supply chain attack vectors.
Vertex AI Attack Surface
Red team methodology for Vertex AI: prediction endpoint abuse, custom training security gaps, feature store poisoning, model monitoring evasion, and pipeline exploitation.
Cross-Cloud Attack Scenarios
Red team attack scenarios spanning multiple cloud providers: credential pivoting between AWS, Azure, and GCP, data exfiltration across cloud boundaries, and model portability risks.
Multi-Cloud AI Security Overview
Security risks of multi-cloud AI deployments: cross-cloud attack surfaces, credential management challenges, inconsistent security controls, and governance gaps across AWS, Azure, and GCP AI services.
November 2026: Full Engagement Challenge
Complete a realistic red team engagement simulation from scoping through final report delivery, producing professional-grade deliverables.
Monthly Competition: Red vs Blue
Monthly head-to-head competitions where red teams attempt to break defenses built by blue teams, with scoring based on attack sophistication and defense robustness.
Red Team-Driven Defense Improvement
Using red team findings to systematically improve LLM application defenses.
Red Team vs Blue Team Asymmetry
Why attacking AI systems is fundamentally easier than defending them: asymmetric advantages, defender's dilemma, and strategies for closing the gap.
Mapping Red Team Activities to Regulations
Mapping AI red team activities to specific regulatory requirements for compliance evidence.
Penetration Testing Methodology for AI Infrastructure
A structured methodology for penetration testing AI/ML systems covering reconnaissance, vulnerability assessment, exploitation, and reporting
Lab: Building an Automated Red Team Pipeline
Build a complete automated red teaming pipeline with attack generation, execution, scoring, and reporting.
Building a Custom Red Team Harness
Build a complete red team testing harness with parallel execution, logging, and scoring.
Building a Red Team Results Dashboard
Build a real-time dashboard for tracking and visualizing red team campaign results across targets and techniques.
Promptfoo Red Team Test Suite Development
Build comprehensive red team test suites in Promptfoo with custom graders and multi-model targeting.
Full Engagement Simulations
End-to-end red team engagement simulations that replicate real-world AI security assessments, from scoping through report delivery.
Simulation: Customer Chatbot Red Team
Complete red team engagement simulation targeting a customer service chatbot, covering prompt injection, data leakage, and policy violation testing.
Professional Practice
Professional skills for AI red team practitioners, covering red team operations, report writing and communication, career development, and building organizational AI red team programs.
Red Team Lab & Operations
Operational foundations for AI red teaming: lab environments, evidence handling, engagement workflows, and team management for professional AI security assessments.
AI Red Team Quick Reference Cheat Sheet
Quick reference cheat sheet for common AI red team techniques, payloads, and tool commands.
AI Red Team OPSEC
Operational security for AI red team engagements including API key management and attribution avoidance.
Continuous Red Teaming Programs
Designing and operating ongoing AI red team programs with automated testing pipelines, metrics dashboards, KPI frameworks, alert-driven assessments, and integration with CI/CD and model deployment workflows.
Red Team Methodology Overview
A structured methodology for AI red team engagements: phases, deliverables, role definitions, and how AI-specific testing differs from traditional penetration testing.
Red Team-Defense Feedback Loop
Build a continuous red team-defense improvement loop with automated testing and metric tracking.
Azure OpenAI Red Team Walkthrough
Complete red team walkthrough for Azure OpenAI deployments: testing content filters, managed identity exploitation, prompt flow injection, data integration attacks, and Azure Monitor evasion.
AWS Bedrock Red Team Walkthrough
Complete guide to red teaming AWS Bedrock deployments: testing guardrails bypass techniques, knowledge base data exfiltration, agent prompt injection, model customization abuse, and CloudTrail evasion.
Vertex AI Red Team Walkthrough (Platform Walkthrough)
Complete red team walkthrough for Google Vertex AI: testing prediction endpoints, Model Garden assessments, Feature Store probing, and exploiting Vertex AI Agents and Extensions.
Automating Red Team Evaluations with Promptfoo
Complete walkthrough for setting up automated red team evaluation pipelines using Promptfoo, covering configuration, custom evaluators, adversarial dataset generation, CI integration, and result analysis.
Promptfoo for Red Team Evaluation
Configure Promptfoo for comprehensive red team evaluation with custom assertions and graders.
Promptfoo End-to-End Walkthrough
Complete walkthrough of promptfoo for AI red teaming: configuration files, provider setup, running evaluations, red team plugins, assertion-based scoring, reporting, and CI/CD integration.
Microsoft PyRIT for Orchestrated Multi-Turn Attacks
Comprehensive walkthrough for using Microsoft PyRIT to design and execute orchestrated multi-turn attack campaigns against LLM applications, covering orchestrator configuration, converter chains, scoring strategies, and campaign analysis.