# red-team
標記為「red-team」的 89 篇文章
AI Red Team Evidence Collection
Systematic evidence collection methodologies for AI red team engagements, including artifact preservation, finding documentation, and chain of custody procedures.
Red Team Methodology Assessment (Assessment - W2)
Assessment covering scoping, attack trees, evidence collection, and professional reporting.
Full Red Team Engagement: End-to-End
Complete guide to AI red team engagements from scoping through attack execution, evidence collection, impact assessment, report delivery, and remediation validation.
AI Red Team Report Writing
Writing AI red team reports: executive summaries, finding templates, AI-adapted risk ratings, remediation recommendations, and common mistakes to avoid.
Capstone: Full Red Team Engagement
Scope, plan, execute, and report a complete AI red team engagement against a multi-component AI application including chatbot, RAG, agent, and API layers.
Bedrock Attack Surface
Comprehensive red team methodology for Amazon Bedrock: model invocation API abuse, guardrails bypass techniques, custom model endpoint exploitation, IAM misconfigurations, knowledge base poisoning, and Bedrock Agents exploitation.
AWS AI Services Security Overview
Red team methodology for AWS AI services including Bedrock, SageMaker, Comprehend, and Rekognition: service enumeration, attack surface mapping, and exploitation techniques.
SageMaker Exploitation
Red team attack methodology for Amazon SageMaker: endpoint exploitation, notebook instance attacks, training job manipulation, model artifact tampering, and VPC misconfigurations in ML workloads.
AWS Bedrock Security Deep Dive
Advanced security assessment of AWS Bedrock covering model invocation controls, guardrails bypass testing, VPC configurations, and red team methodologies for foundation model APIs.
Azure ML Exploitation
Red team attack methodology for Azure Machine Learning: workspace security, compute instance attacks, pipeline poisoning, model registry tampering, and data store exploitation.
Azure OpenAI Attack Surface
Red team methodology for Azure OpenAI Service: content filtering bypass, PTU security, deployment misconfiguration, managed identity abuse, and prompt flow exploitation.
Defender for AI Bypass
Red team techniques for understanding and bypassing Microsoft Defender for AI: detection capabilities, alert analysis, bypass strategies, coverage gaps, and alert fatigue exploitation.
Azure AI Services Security Overview
Red team methodology for Azure AI services including Azure OpenAI, Azure ML, AI Studio, and Cognitive Services: service enumeration, managed identity abuse, and attack surface mapping.
AI Cost & Billing Attacks
Red team techniques for AI cost exploitation: model invocation abuse for billing inflation, token exhaustion attacks, GPU compute abuse, auto-scaling exploitation, and denial-of-wallet attacks across cloud providers.
GCP AI Services Security Overview
Red team methodology for GCP AI services including Vertex AI, Model Garden, and AI Platform: service enumeration, service account exploitation, and attack surface mapping.
Model Garden Risks
Security risks of deploying models from GCP Model Garden: third-party model trust, model provenance verification, deployment from untrusted sources, and supply chain attack vectors.
Vertex AI Attack Surface
Red team methodology for Vertex AI: prediction endpoint abuse, custom training security gaps, feature store poisoning, model monitoring evasion, and pipeline exploitation.
Cross-Cloud Attack Scenarios
Red team attack scenarios spanning multiple cloud providers: credential pivoting between AWS, Azure, and GCP, data exfiltration across cloud boundaries, and model portability risks.
Multi-Cloud AI Security Overview
Security risks of multi-cloud AI deployments: cross-cloud attack surfaces, credential management challenges, inconsistent security controls, and governance gaps across AWS, Azure, and GCP AI services.
November 2026: Full Engagement Challenge
Complete a realistic red team engagement simulation from scoping through final report delivery, producing professional-grade deliverables.
Monthly Competition: Red vs Blue
Monthly head-to-head competitions where red teams attempt to break defenses built by blue teams, with scoring based on attack sophistication and defense robustness.
Red Team-Driven Defense Improvement
Using red team findings to systematically improve LLM application defenses.
Red Team vs Blue Team Asymmetry
Why attacking AI systems is fundamentally easier than defending them: asymmetric advantages, defender's dilemma, and strategies for closing the gap.
Mapping Red Team Activities to Regulations
Mapping AI red team activities to specific regulatory requirements for compliance evidence.
Penetration Testing Methodology for AI Infrastructure
A structured methodology for penetration testing AI/ML systems covering reconnaissance, vulnerability assessment, exploitation, and reporting
Lab: Building an Automated Red Team Pipeline
Build a complete automated red teaming pipeline with attack generation, execution, scoring, and reporting.
Building a Custom Red Team Harness
Build a complete red team testing harness with parallel execution, logging, and scoring.
Building a Red Team Results Dashboard
Build a real-time dashboard for tracking and visualizing red team campaign results across targets and techniques.
Promptfoo Red Team Test Suite Development
Build comprehensive red team test suites in Promptfoo with custom graders and multi-model targeting.
Full Engagement Simulations
End-to-end red team engagement simulations that replicate real-world AI security assessments, from scoping through report delivery.
Simulation: Customer Chatbot Red Team
Complete red team engagement simulation targeting a customer service chatbot, covering prompt injection, data leakage, and policy violation testing.
Professional Practice
Professional skills for AI red team practitioners, covering red team operations, report writing and communication, career development, and building organizational AI red team programs.
Red Team Lab & Operations
Operational foundations for AI red teaming: lab environments, evidence handling, engagement workflows, and team management for professional AI security assessments.
AI Red Team Quick Reference Cheat Sheet
Quick reference cheat sheet for common AI red team techniques, payloads, and tool commands.
AI Red Team OPSEC
Operational security for AI red team engagements including API key management and attribution avoidance.
Continuous Red Teaming Programs
Designing and operating ongoing AI red team programs with automated testing pipelines, metrics dashboards, KPI frameworks, alert-driven assessments, and integration with CI/CD and model deployment workflows.
Red Team Methodology Overview
A structured methodology for AI red team engagements: phases, deliverables, role definitions, and how AI-specific testing differs from traditional penetration testing.
Red Team-Defense Feedback Loop
Build a continuous red team-defense improvement loop with automated testing and metric tracking.
Azure OpenAI Red Team Walkthrough
Complete red team walkthrough for Azure OpenAI deployments: testing content filters, managed identity exploitation, prompt flow injection, data integration attacks, and Azure Monitor evasion.
AWS Bedrock Red Team Walkthrough
Complete guide to red teaming AWS Bedrock deployments: testing guardrails bypass techniques, knowledge base data exfiltration, agent prompt injection, model customization abuse, and CloudTrail evasion.
Vertex AI Red Team Walkthrough (Platform Walkthrough)
Complete red team walkthrough for Google Vertex AI: testing prediction endpoints, Model Garden assessments, Feature Store probing, and exploiting Vertex AI Agents and Extensions.
Automating Red Team Evaluations with Promptfoo
Complete walkthrough for setting up automated red team evaluation pipelines using Promptfoo, covering configuration, custom evaluators, adversarial dataset generation, CI integration, and result analysis.
Promptfoo for Red Team Evaluation
Configure Promptfoo for comprehensive red team evaluation with custom assertions and graders.
Promptfoo End-to-End Walkthrough
Complete walkthrough of promptfoo for AI red teaming: configuration files, provider setup, running evaluations, red team plugins, assertion-based scoring, reporting, and CI/CD integration.
Microsoft PyRIT for Orchestrated Multi-Turn Attacks
Comprehensive walkthrough for using Microsoft PyRIT to design and execute orchestrated multi-turn attack campaigns against LLM applications, covering orchestrator configuration, converter chains, scoring strategies, and campaign analysis.
AI 紅隊 Evidence Collection
Systematic evidence collection methodologies for AI red team engagements, including artifact preservation, finding documentation, and chain of custody procedures.
紅隊 Methodology 評量 (評量 - W2)
評量 covering scoping, attack trees, evidence collection, and professional reporting.
完整紅隊委任:端對端
自範圍至攻擊執行、證據蒐集、影響評估、報告遞送與補救驗證之 AI 紅隊委任完整指南。
AI 紅隊報告寫作
撰寫 AI 紅隊報告:執行摘要、發現範本、AI 適配風險評級、補救建議,與要避免之常見錯誤。
Capstone: Full 紅隊 Engagement
Scope, plan, execute, and report a complete AI red team engagement against a multi-component AI application including chatbot, RAG, agent, and API layers.
Bedrock 攻擊 Surface
Comprehensive red team methodology for Amazon Bedrock: model invocation API abuse, guardrails bypass techniques, custom model endpoint exploitation, IAM misconfigurations, knowledge base poisoning, and Bedrock 代理s exploitation.
AWS AI 服務安全概觀
為 AWS AI 服務之紅隊方法論,含 Bedrock、SageMaker、Comprehend 與 Rekognition:服務列舉、攻擊面對應與利用技術。
SageMaker 利用ation
Red team attack methodology for Amazon SageMaker: endpoint exploitation, notebook instance attacks, training job manipulation, model artifact tampering, and VPC misconfigurations in ML workloads.
AWS Bedrock 安全 Deep Dive
進階 security assessment of AWS Bedrock covering model invocation controls, guardrails bypass testing, VPC configurations, and red team methodologies for foundation model APIs.
Azure ML 利用ation
Red team attack methodology for Azure Machine Learning: workspace security, compute instance attacks, pipeline poisoning, model registry tampering, and data store exploitation.
Azure OpenAI 攻擊 Surface
Red team methodology for Azure OpenAI Service: content filtering bypass, PTU security, deployment misconfiguration, managed identity abuse, and prompt flow exploitation.
Defender for AI Bypass
Red team techniques for understanding and bypassing Microsoft Defender for AI: detection capabilities, alert analysis, bypass strategies, coverage gaps, and alert fatigue exploitation.
Azure AI 服務安全概觀
為 Azure AI 服務之紅隊方法論,含 Azure OpenAI、Azure ML、AI Studio 與 Cognitive Services:服務列舉、受管身分濫用與攻擊面對應。
AI Cost & Billing 攻擊s
Red team techniques for AI cost exploitation: model invocation abuse for billing inflation, token exhaustion attacks, GPU compute abuse, auto-scaling exploitation, and denial-of-wallet attacks across cloud providers.
GCP AI 服務安全概觀
GCP AI 服務(包括 Vertex AI、Model Garden 與 AI Platform)之紅隊方法論:服務枚舉、服務帳號攻擊,以及攻擊面繪製。
Model Garden 風險
自 GCP Model Garden 部署模型之安全風險:第三方模型信任、模型來源驗證、自未受信任來源之部署,與供應鏈攻擊向量。
Vertex AI 攻擊面
為 Vertex AI 之紅隊方法論:預測端點濫用、自訂訓練安全缺口、特徵儲存投毒、模型監控逃避與管線利用。
Cross-Cloud 攻擊 Scenarios
Red team attack scenarios spanning multiple cloud providers: credential pivoting between AWS, Azure, and GCP, data exfiltration across cloud boundaries, and model portability risks.
2026 年 11 月:完整委任挑戰
完成自範圍界定至最終報告交付之現實紅隊委任模擬,產出專業級交付物。
Monthly Competition: Red vs Blue
Monthly head-to-head competitions where red teams attempt to break defenses built by blue teams, with scoring based on attack sophistication and defense robustness.
紅隊-Driven 防禦 Improvement
Using red team findings to systematically improve LLM application defenses.
紅隊 vs Blue Team Asymmetry
Why attacking AI systems is fundamentally easier than defending them: asymmetric advantages, defender's dilemma, and strategies for closing the gap.
Mapping 紅隊 Activities to Regulations
Mapping AI red team activities to specific regulatory requirements for compliance evidence.
Penetration Testing Methodology for AI Infrastructure
A structured methodology for penetration testing AI/ML systems covering reconnaissance, vulnerability assessment, exploitation, and reporting
實驗室: Building an Automated 紅隊 Pipeline
Build a complete automated red teaming pipeline with attack generation, execution, scoring, and reporting.
Building a Custom 紅隊 Harness
Build a complete red team testing harness with parallel execution, logging, and scoring.
Building a 紅隊 Results Dashboard
Build a real-time dashboard for tracking and visualizing red team campaign results across targets and techniques.
Promptfoo 紅隊 Test Suite Development
Build comprehensive red team test suites in Promptfoo with custom graders and multi-model targeting.
完整案件模擬
端對端紅隊案件模擬,複製真實世界 AI 安全評估,從範圍界定到報告交付。
模擬:客戶聊天機器人紅隊
針對客戶服務聊天機器人的完整紅隊案件模擬,涵蓋提示詞注入、資料洩漏與政策違規測試。
專業實務
AI 紅隊實務人員的專業技能,涵蓋紅隊營運、報告撰寫與溝通、職涯發展,以及建構組織 AI 紅隊計畫。
紅隊實驗室與營運
AI 紅隊演練的營運基礎:實驗室環境、證據處理、案件工作流程,以及專業 AI 安全評估的團隊管理。
AI 紅隊 Quick Reference Cheat Sheet
Quick reference cheat sheet for common AI red team techniques, payloads, and tool commands.
AI 紅隊 OPSEC
Operational security for AI red team engagements including API key management and attribution avoidance.
Continuous 紅隊演練 Programs
Designing and operating ongoing AI red team programs with automated testing pipelines, metrics dashboards, KPI frameworks, alert-driven assessments, and integration with CI/CD and model deployment workflows.
紅隊 Methodology 概覽
A structured methodology for AI red team engagements: phases, deliverables, role definitions, and how AI-specific testing differs from traditional penetration testing.
紅隊-防禦 Feedback Loop
Build a continuous red team-defense improvement loop with automated testing and metric tracking.
Azure OpenAI 紅隊 導覽
Complete red team walkthrough for Azure OpenAI deployments: testing content filters, managed identity exploitation, prompt flow injection, data integration attacks, and Azure Monitor evasion.
AWS Bedrock 紅隊 導覽
Complete guide to red teaming AWS Bedrock deployments: testing guardrails bypass techniques, knowledge base data exfiltration, agent prompt injection, model customization abuse, and CloudTrail evasion.
Vertex AI 紅隊 導覽 (Platform 導覽)
Complete red team walkthrough for Google Vertex AI: testing prediction endpoints, 模型 Garden assessments, Feature Store probing, and exploiting Vertex AI 代理s and Extensions.
Automating 紅隊 Evaluations with Promptfoo
Complete walkthrough for setting up automated red team evaluation pipelines using Promptfoo, covering configuration, custom evaluators, adversarial dataset generation, CI integration, and result analysis.
Promptfoo for 紅隊 Evaluation
Configure Promptfoo for comprehensive red team evaluation with custom assertions and graders.
Promptfoo End-to-End 導覽
Complete walkthrough of promptfoo for AI red teaming: configuration files, provider setup, running evaluations, red team plugins, assertion-based scoring, reporting, and CI/CD integration.
Microsoft PyRIT for Orchestrated Multi-Turn 攻擊s
Comprehensive walkthrough for using Microsoft PyRIT to design and execute orchestrated multi-turn attack campaigns against LLM applications, covering orchestrator configuration, converter chains, scoring strategies, and campaign analysis.