# automation
標記為「automation」的 100 篇文章
Automated AI Incident Triage
Building automated triage systems for AI security incidents using rule-based engines, anomaly detection, and LLM-assisted classification.
Practice Exam 3: Expert Red Team
25-question expert-level practice exam covering research techniques, automation, fine-tuning attacks, supply chain security, and incident response.
Tool Proficiency Assessment
Test your knowledge of AI red teaming tools, frameworks, automation platforms, and their appropriate application in security assessments with 9 intermediate-level questions.
Skill Verification: Red Team Automation
Practical verification of red team automation skills using Garak, PyRIT, and custom tooling.
Advanced Topics Study Guide
Study guide covering AI security research techniques, automation, forensics, emerging attack vectors, and tool development for advanced practitioners.
Capstone: Build a Complete AI Red Teaming Platform
Design and implement a comprehensive AI red teaming platform with automated attack orchestration, vulnerability tracking, and collaborative reporting.
Capstone: Build an AI Security Scanner
Design and implement an automated AI security testing tool that supports prompt injection detection, jailbreak testing, and output analysis.
Cloud AI Compliance Automation
Automating AI compliance checks and security assessments using cloud-native tools and policy-as-code approaches.
Secrets Rotation for Cloud AI Deployments
Implementing automated secrets rotation strategies for API keys, model endpoint credentials, and service accounts used in cloud AI/LLM deployments across AWS, Azure, and GCP.
CI/CD Pipeline AI Risks
Security implications of integrating AI into CI/CD pipelines — covering AI-powered code generation in builds, automated testing risks, deployment decision manipulation, and pipeline hardening.
LLM Security Testing Automation
Building automated security testing pipelines for LLM applications using CI/CD integration and continuous scanning.
Attack Automation Framework
Building end-to-end attack automation frameworks that orchestrate reconnaissance, payload generation, execution, and result analysis.
Red Teaming Automation
Frameworks and tools for automating AI red teaming at scale, including CART pipelines, jailbreak fuzzing, regression testing, and continuous monitoring.
Building Custom Red Team Tools
Guide to building custom AI red teaming tools, including target-specific harnesses, result analysis pipelines, and integration with existing security workflows.
Exploit Chain Builder
Building tools that automatically discover and chain multiple vulnerabilities into complete exploitation paths for complex LLM systems.
AI Exploit Development Overview
An introduction to developing exploits and tooling for AI red teaming, covering the unique challenges of building reliable attacks against probabilistic systems.
Red Team Reporting Automation
Automating report generation from red team testing data and findings.
Continuous Automated Red Teaming (CART)
Designing CART pipelines for ongoing AI security validation: architecture, test suites, telemetry, alerting, regression detection, and CI/CD integration.
Red Team Infrastructure & Tooling
AI red team C2 frameworks, automated attack pipelines, custom scanner development, and integration with Cobalt Strike, Mythic, and Sliver.
Reporting Tool Development
Building automated reporting tools that transform raw test results into professional assessment reports with reproducible findings.
Continuous Compliance Monitoring
Automated compliance monitoring for AI systems including continuous compliance checks, drift detection, regulatory change tracking, and integration with red team testing pipelines.
Building Evaluation Harnesses
Design and implement evaluation harnesses for AI red teaming: architecture patterns, judge model selection, prompt dataset management, scoring pipelines, and reproducible evaluation infrastructure.
Mining and Resource Extraction AI Security
AI security in mining operations including autonomous equipment, geological modeling, and safety systems.
Attacking ML CI/CD Pipelines
Advanced techniques for compromising ML continuous integration and deployment pipelines, including pipeline injection, artifact tampering, training job hijacking, and exploiting the unique trust boundaries in automated ML workflows.
Automated Jailbreak Pipelines
Building automated jailbreak systems with PAIR, TAP, AutoDAN, and custom pipeline architectures for systematic AI safety evaluation.
Injection Research
Advanced research in prompt injection, jailbreak automation, and multimodal attack vectors, covering cutting-edge techniques that push beyond standard injection approaches.
Jailbreak Research & Automation
Taxonomy of jailbreak primitives, crescendo attacks, many-shot jailbreaking, and automated jailbreak generation with TAP and PAIR.
Lab: Red Team Orchestration
Build an orchestration system that coordinates multiple attack strategies simultaneously, managing parallel attack campaigns and synthesizing results into comprehensive risk assessments.
Lab: Safety Regression Testing at Scale
Build automated pipelines that detect safety degradation across model versions, ensuring that updates and fine-tuning do not introduce new vulnerabilities or weaken existing protections.
Lab: Building a Simple Test Harness
Build a reusable Python test harness that automates sending test prompts, recording results, and calculating attack success metrics.
Lab: Build Jailbreak Automation
Build an automated jailbreak testing framework that generates, mutates, and evaluates attack prompts at scale. Covers prompt mutation engines, success classifiers, and campaign management for systematic red team testing.
Lab: Automated Red Team Pipeline
Hands-on lab for building a continuous AI red team testing pipeline using promptfoo, GitHub Actions, and automated attack generation to catch safety regressions before deployment.
Lab: Building an LLM Judge Evaluator
Hands-on lab for building an LLM-based evaluator to score red team attack outputs, compare model vulnerability, and lay the foundation for automated attack campaigns.
Simulation: Defense in Depth
Expert-level defense simulation implementing a full defense stack including input filter, output monitor, rate limiter, anomaly detector, and circuit breaker, then measuring effectiveness against automated attacks.
ML CI/CD Security
Security overview of ML continuous integration and deployment pipelines: how ML CI/CD differs from traditional CI/CD, unique attack surfaces in training workflows, and the security implications of automated model building and deployment.
Developing Custom AI Red Team Tools
Guide to designing, building, and maintaining custom tools for AI red team engagements.
Continuous Red Teaming for Production AI Systems
Implementing ongoing, automated red teaming programs for AI systems in production environments.
Red Team Automation Strategy
When and how to automate AI red teaming: tool selection, CI/CD integration, continuous automated red teaming (CART), human-in-the-loop design, and scaling assessment coverage through automation.
Injection Chain Automation
Automating the discovery and chaining of multiple injection techniques to create reliable multi-step attack sequences against hardened targets.
System Prompt Extraction Techniques
Catalog of system prompt extraction methods against LLM-powered applications: direct attacks, indirect techniques, multi-turn strategies, and defensive evasion.
Continuous Red Teaming Programs
Designing and operating ongoing AI red team programs with automated testing pipelines, metrics dashboards, KPI frameworks, alert-driven assessments, and integration with CI/CD and model deployment workflows.
Automated Defense Testing Pipeline
Build an automated pipeline that continuously tests defensive measures against evolving attack techniques.
Setting Up Continuous AI Red Teaming Pipelines
Walkthrough for building continuous AI red teaming pipelines that automatically test LLM applications on every deployment, covering automated scan configuration, CI/CD integration, alert thresholds, regression testing, and dashboard reporting.
Developing Comprehensive AI Security Test Plans
Step-by-step guide to developing structured test plans for AI red team engagements, covering test case design, automation strategy, coverage mapping, and execution scheduling.
Counterfit Walkthrough
Complete walkthrough of Microsoft's Counterfit adversarial ML testing framework: installation, target configuration, running attacks against ML models, interpreting results, and automating adversarial robustness assessments.
Integrating Garak into CI/CD Pipelines
Intermediate walkthrough on automating garak vulnerability scans within CI/CD pipelines, including GitHub Actions, GitLab CI, threshold-based gating, result caching, and cost management strategies.
Garak End-to-End Walkthrough
Complete walkthrough of NVIDIA's garak LLM vulnerability scanner: installation, configuration, running probes against local and hosted models, interpreting results, writing custom probes, and CI/CD integration.
Automating Red Team Evaluations with Promptfoo
Complete walkthrough for setting up automated red team evaluation pipelines using Promptfoo, covering configuration, custom evaluators, adversarial dataset generation, CI integration, and result analysis.
Python Red Team Automation
Building custom AI red team automation with Python: test harnesses with httpx and aiohttp, result collection and analysis, automated reporting, and integration with existing tools like promptfoo and garak.
Automated Red Team Report Generation
Build an automated system for generating structured red team reports from testing data and findings.
Automated AI Incident Triage
Building automated triage systems for AI security incidents using rule-based engines, anomaly detection, and LLM-assisted classification.
Practice Exam 3: 專家 紅隊
25-question expert-level practice exam covering research techniques, automation, fine-tuning attacks, supply chain security, and incident response.
Skill Verification: 紅隊 Automation
Practical verification of red team automation skills using Garak, PyRIT, and custom tooling.
進階 Topics Study 指南
Study guide covering AI security research techniques, automation, forensics, emerging attack vectors, and tool development for advanced practitioners.
Capstone: Build a Complete AI 紅隊ing Platform
Design and implement a comprehensive AI red teaming platform with automated attack orchestration, vulnerability tracking, and collaborative reporting.
Capstone: Build an AI 安全 Scanner
Design and implement an automated AI security testing tool that supports prompt injection detection, jailbreak testing, and output analysis.
Cloud AI Compliance Automation
Automating AI compliance checks and security assessments using cloud-native tools and policy-as-code approaches.
Secrets Rotation for Cloud AI Deployments
Implementing automated secrets rotation strategies for API keys, model endpoint credentials, and service accounts used in cloud AI/LLM deployments across AWS, Azure, and GCP.
CI/CD 管線 AI 風險
將 AI 整合至 CI/CD 管線的安全意涵——涵蓋建構中的 AI 驅動程式碼生成、自動化測試風險、部署決策操控與管線強化。
LLM 安全 Testing Automation
Building automated security testing pipelines for LLM applications using CI/CD integration and continuous scanning.
攻擊 Automation Framework
Building end-to-end attack automation frameworks that orchestrate reconnaissance, payload generation, execution, and result analysis.
紅隊自動化
大規模自動化 AI 紅隊的框架與工具,涵蓋 CART 管線、越獄模糊測試、回歸測試與持續監控。
打造自訂紅隊工具
打造自訂 AI 紅隊工具之指南,含目標特定 harness、結果分析管線,與與現有安全工作流程之整合。
利用 Chain Builder
Building tools that automatically discover and chain multiple vulnerabilities into complete exploitation paths for complex LLM systems.
AI 利用開發概覽
為 AI 紅隊演練開發利用程式與工具的介紹,涵蓋建構對機率性系統之可靠攻擊的獨特挑戰。
紅隊 Reporting Automation
Automating report generation from red team testing data and findings.
持續自動化紅隊(CART)
為持續 AI 安全驗證設計 CART 管線:架構、測試套件、遙測、警報、回歸偵測與 CI/CD 整合。
紅隊基礎設施與工具
AI 紅隊 C2 框架、自動化攻擊管線、自製掃描器開發,以及與 Cobalt Strike、Mythic、Sliver 的整合。
Reporting 工具 Development
Building automated reporting tools that transform raw test results into professional assessment reports with reproducible findings.
AI 驅動紅隊演練
使用 AI 自動化與擴展 AI 安全測試——涵蓋 PAIR、TAP、LLM 攻擊者框架與強化學習攻擊最佳化。
Continuous Compliance Monitoring
Automated compliance monitoring for AI systems including continuous compliance checks, drift detection, regulatory change tracking, and integration with red team testing pipelines.
Building Evaluation Harnesses
Design and implement evaluation harnesses for AI red teaming: architecture patterns, judge model selection, prompt dataset management, scoring pipelines, and reproducible evaluation infrastructure.
Mining and Resource Extraction AI 安全
AI security in mining operations including autonomous equipment, geological modeling, and safety systems.
攻擊ing ML CI/CD Pipelines
進階 techniques for compromising ML continuous integration and deployment pipelines, including pipeline injection, artifact tampering, training job hijacking, and exploiting the unique trust boundaries in automated ML workflows.
Automated 越獄 Pipelines
Building automated jailbreak systems with PAIR, TAP, AutoDAN, and custom pipeline architectures for systematic AI safety evaluation.
注入研究
提示詞注入、越獄自動化與多模態攻擊向量的進階研究,涵蓋超越標準注入方法的尖端技術。
越獄 Research & Automation
Taxonomy of jailbreak primitives, crescendo attacks, many-shot jailbreaking, and automated jailbreak generation with TAP and PAIR.
實驗室: 紅隊 Orchestration
Build an orchestration system that coordinates multiple attack strategies simultaneously, managing parallel attack campaigns and synthesizing results into comprehensive risk assessments.
實驗室: Safety Regression Testing at Scale
Build automated pipelines that detect safety degradation across model versions, ensuring that updates and fine-tuning do not introduce new vulnerabilities or weaken existing protections.
實驗室: Building a Simple Test Harness
Build a reusable Python test harness that automates sending test prompts, recording results, and calculating attack success metrics.
實驗室: Build 越獄 Automation
Build an automated jailbreak testing framework that generates, mutates, and evaluates attack prompts at scale. Covers prompt mutation engines, success classifiers, and campaign management for systematic red team testing.
實驗室: Automated 紅隊 Pipeline
Hands-on lab for building a continuous AI red team testing pipeline using promptfoo, GitHub Actions, and automated attack generation to catch safety regressions before deployment.
實驗:建立 LLM 裁判評估器
為建立 LLM 基評估器以對紅隊攻擊輸出評分、比較模型脆弱度並為自動化攻擊活動奠基之實作實驗。
Simulation: 防禦 in Depth
專家-level defense simulation implementing a full defense stack including input filter, output monitor, rate limiter, anomaly detector, and circuit breaker, then measuring effectiveness against automated attacks.
ML CI/CD 安全
ML 持續整合與部署管線的安全概觀:ML CI/CD 與傳統 CI/CD 的差異、訓練工作流程中的獨特攻擊面,以及自動化模型建構與部署的安全意涵。
Developing Custom AI 紅隊 工具s
指南 to designing, building, and maintaining custom tools for AI red team engagements.
Continuous 紅隊演練 for Production AI Systems
Implementing ongoing, automated red teaming programs for AI systems in production environments.
紅隊 Automation Strategy
When and how to automate AI red teaming: tool selection, CI/CD integration, continuous automated red teaming (CART), human-in-the-loop design, and scaling assessment coverage through automation.
Injection Chain Automation
Automating the discovery and chaining of multiple injection techniques to create reliable multi-step attack sequences against hardened targets.
系統提示擷取技術
針對 LLM 應用之系統提示擷取方法的目錄:直接攻擊、間接技術、多輪策略與規避偵測。
Continuous 紅隊演練 Programs
Designing and operating ongoing AI red team programs with automated testing pipelines, metrics dashboards, KPI frameworks, alert-driven assessments, and integration with CI/CD and model deployment workflows.
Automated 防禦 Testing Pipeline
Build an automated pipeline that continuously tests defensive measures against evolving attack techniques.
Setting Up Continuous AI 紅隊ing Pipelines
導覽 for building continuous AI red teaming pipelines that automatically test LLM applications on every deployment, covering automated scan configuration, CI/CD integration, alert thresholds, regression testing, and dashboard reporting.
Developing Comprehensive AI 安全 Test Plans
Step-by-step guide to developing structured test plans for AI red team engagements, covering test case design, automation strategy, coverage mapping, and execution scheduling.
Counterfit 導覽
Complete walkthrough of Microsoft's Counterfit adversarial ML testing framework: installation, target configuration, running attacks against ML models, interpreting results, and automating adversarial robustness assessments.
Integrating Garak into CI/CD Pipelines
中階 walkthrough on automating garak vulnerability scans within CI/CD pipelines, including GitHub Actions, Git實驗室 CI, threshold-based gating, result caching, and cost management strategies.
Garak End-to-End 導覽
Complete walkthrough of NVIDIA's garak LLM vulnerability scanner: installation, configuration, running probes against local and hosted models, interpreting results, writing custom probes, and CI/CD integration.
Automating 紅隊 Evaluations with Promptfoo
Complete walkthrough for setting up automated red team evaluation pipelines using Promptfoo, covering configuration, custom evaluators, adversarial dataset generation, CI integration, and result analysis.
Python 紅隊 Automation
Building custom AI red team automation with Python: test harnesses with httpx and aiohttp, result collection and analysis, automated reporting, and integration with existing tools like promptfoo and garak.
Automated 紅隊 Report Generation
Build an automated system for generating structured red team reports from testing data and findings.