# assessment
Articles tagged "assessment"
MCP Security Testing: How to Test MCP Servers for Vulnerabilities
A defense-focused guide to security testing MCP server implementations -- methodology for MCP security assessments, scanning tools, common test cases for auth bypass, injection, traversal, and data leaks, with working test scripts and reporting templates.
Post-Incident AI Assessment
Conducting post-incident assessments for AI security events including root cause analysis.
Practice Exams Overview
Overview of AI red teaming practice exams, preparation strategies, exam structure, and tips for maximizing your score.
A2A Protocol Security Assessment
Assessment covering multi-agent system vulnerabilities, trust boundary attacks, and agent-to-agent protocol exploitation.
Agent Architecture Security Assessment
Assessment covering agent design patterns, tool sandboxing, multi-agent trust, and MCP security.
Agent Exploitation Assessment
Test your understanding of AI agent security, tool-use attacks, confused deputy scenarios, and agentic system exploitation with 10 intermediate-level questions.
Agent Memory Security Assessment
Assessment covering memory poisoning, context manipulation, exfiltration, and cross-session persistence attacks.
Agentic Exploitation Assessment (Assessment)
Test your knowledge of agentic AI attacks, MCP exploitation, function calling abuse, and multi-agent system vulnerabilities with 15 intermediate-level questions.
Automated Red Teaming Assessment
Assessment of automated attack generation tools including PAIR, TAP, GCG, and custom harness development.
Advanced Cloud AI Security Assessment
15-question advanced assessment covering cloud AI attack surfaces across AWS, Azure, and GCP: guardrail bypass, knowledge base exploitation, managed identity abuse, model customization risks, and multi-cloud attack paths.
Cloud AI Security Assessment
Test your knowledge of AWS, Azure, and GCP AI service security with 15 intermediate-level questions covering cloud-specific attack surfaces and misconfigurations.
Multi-Cloud AI Security Assessment
Assessment spanning AWS Bedrock, Azure OpenAI, and GCP Vertex AI security configurations and misconfigurations.
Cloud AI Platforms Assessment
Assessment covering AWS Bedrock, Azure OpenAI, GCP Vertex AI, and multi-cloud security strategies.
Code Execution Safety Assessment
Assessment of LLM-generated code safety, sandbox escape techniques, and code review automation.
Code Generation Security Assessment (Assessment)
Test your knowledge of AI code generation security including coding assistant risks, suggestion poisoning, IDE integration threats, and secure AI-assisted development with 15 questions.
Context Window Security Assessment
Assessment of context window overflow, attention manipulation, and long-context exploitation techniques.
Continuous AI Monitoring Assessment
Assessment on monitoring strategies, anomaly detection, alerting thresholds, and operational security.
Cross-Model Transfer Assessment
Assessment of attack transferability across model families, versions, and providers.
Data Poisoning Assessment
Comprehensive assessment of training data poisoning, synthetic data attacks, and supply chain vulnerabilities.
Data Privacy in AI Assessment
Assessment on training data privacy, membership inference, data extraction, and privacy-preserving techniques.
Defense Fundamentals Assessment
Test your understanding of AI defense mechanisms including input/output filtering, guardrails, sandboxing, and defense-in-depth strategies with 9 intermediate-level questions.
Defense & Mitigation Assessment (Assessment)
Test your knowledge of AI guardrails, monitoring systems, incident response, and defense-in-depth strategies with 15 intermediate-level questions.
Embedding and Vector Attack Assessment
Assessment of adversarial embedding perturbation, similarity manipulation, and vector database poisoning.
Embedding & Vector Security Assessment (Assessment)
Test your understanding of embedding inversion attacks, vector database security, similarity search manipulation, and privacy risks of stored embeddings with 10 questions.
AI Ethics and Legal Assessment
Assessment on ethical frameworks, legal considerations, and responsible disclosure in AI security.
EU AI Act Compliance Assessment
Comprehensive assessment of organizational readiness for EU AI Act requirements including red team testing mandates.
Fine-Tuning Attack Assessment
Assessment of safety degradation through fine-tuning, backdoor insertion, and alignment removal techniques.
Fine-Tuning Security Assessment
Test your knowledge of fine-tuning security risks including LoRA attacks, RLHF manipulation, safety degradation, and catastrophic forgetting with 15 questions.
AI Forensics Assessment
Test your knowledge of AI incident response, log analysis, evidence preservation, behavioral analysis, and forensic investigation techniques with 15 questions.
Foundations Assessment
Test your understanding of LLM fundamentals, core terminology, and the AI threat landscape with 15 intermediate-level questions.
Frontier Research Assessment
Comprehensive assessment covering adversarial robustness, alignment faking, sleeper agents, and emerging research directions in AI security.
Function Calling Security Assessment
Assessment focused on JSON schema injection, parameter manipulation, recursive calling, and result poisoning attacks.
Governance Assessment
Test your knowledge of AI governance, regulatory frameworks, compliance requirements, and responsible AI practices with 15 intermediate-level questions.
Guardrails Implementation Assessment
Test your understanding of guardrail implementation strategies, content classification systems, safety taxonomies, and guardrail bypass techniques with 9 intermediate-level questions.
IAM for AI Systems Assessment
Assessment of identity and access management vulnerabilities specific to AI service deployments.
Impact Assessment
Test your understanding of AI system impact scenarios including misinformation generation, harmful content, reputation damage, denial of service, data corruption, financial fraud, and compliance violations with 10 questions.
AI Incident Response Assessment
Assessment of AI-specific incident response procedures, forensics, and recovery capabilities.
Incident Response Assessment
Assessment on AI incident response procedures, evidence collection, and post-incident analysis.
Infrastructure Security Assessment
Assessment covering model serving, container security, API gateway hardening, and deployment pipeline threats.
Infrastructure Security Assessment (Assessment)
Test your knowledge of AI infrastructure security including model serving, API security, deployment architectures, and supply chain risks with 10 intermediate-level questions.
Jailbreaking Techniques Assessment
Test your knowledge of LLM jailbreaking methods, bypass strategies, and the mechanics behind safety training circumvention with 10 intermediate-level questions.
Legal & Ethical Considerations Assessment
Test your understanding of the legal frameworks, ethical boundaries, responsible disclosure, and professional standards governing AI red teaming with 8 beginner-level questions.
LLM Architecture Security Assessment
Assessment on transformer internals, tokenization security, attention vulnerabilities, and model-level attacks.
LLM Fingerprinting Assessment
Assessment of model identification, behavioral fingerprinting, and architecture inference techniques.
LLMOps Security Assessment (Assessment)
Test your understanding of MLOps pipeline security, model deployment attacks, API security, monitoring gaps, model registry poisoning, and CI/CD for ML with 10 questions.
Advanced MCP Security Assessment
Comprehensive assessment of MCP protocol vulnerabilities including transport attacks, tool poisoning, and capability escalation.
MCP Security Assessment
Evaluate your knowledge of Model Context Protocol security, tool registration vulnerabilities, transport-layer risks, and MCP-specific attack vectors with 10 intermediate-level questions.
Red Team Methodology Assessment
Test your understanding of AI red team engagement methodology, from scoping through reporting, including structured approaches, attack planning, and finding documentation with 9 intermediate-level questions.
Model Extraction & Privacy Assessment
Test your advanced knowledge of model extraction, model stealing, membership inference, and intellectual property theft attacks against AI systems with 9 questions.
Model Supply Chain Assessment
Assessment covering model provenance, checkpoint manipulation, and third-party model risks.
Monitoring & Detection Assessment
Test your understanding of AI security monitoring, anomaly detection, logging strategies, and incident detection for LLM-based applications with 9 intermediate-level questions.
Multi-Turn Attack Assessment
Assessment of crescendo attacks, conversational manipulation, and progressive jailbreaking techniques.
Advanced Multimodal Assessment
In-depth assessment of cross-modal attack vectors including image injection, audio manipulation, and steganographic techniques.
Multimodal Defense Assessment
Assessment covering defenses against visual injection, audio attacks, and cross-modal exploitation.
Multimodal Attack Assessment
Test your understanding of attacks against multimodal AI systems, including image-based injection, audio adversarial examples, and cross-modal manipulation with 10 intermediate-level questions.
NIST AI RMF Assessment
Assessment covering implementation of NIST AI Risk Management Framework across all four functions.
Output Safety Assessment
Assessment of output filtering, content classification, watermarking, and data leakage prevention.
Privacy Attack Assessment
Test your advanced knowledge of privacy attacks against AI systems including data leakage, PII extraction, differential privacy failures, and inference-time privacy risks with 9 questions.
Professional Skills Assessment
Test your knowledge of AI red teaming methodology, report writing, client engagement, and professional practice with 15 intermediate-level questions.
Prompt Injection Assessment
Test your knowledge of prompt injection types, techniques, defense mechanisms, and real-world exploitation with 15 intermediate-level questions.
Prompt Leakage Assessment
Assessment of system prompt extraction techniques including direct probing, logprob analysis, and side-channel methods.
RAG & Data Attack Assessment
Test your knowledge of Retrieval-Augmented Generation attack vectors, knowledge base poisoning, embedding manipulation, and data exfiltration through RAG systems with 10 intermediate-level questions.
Rate Limiting and Abuse Assessment
Assessment of rate limiting bypass techniques, cost-based attacks, and billing abuse in AI services.
Reasoning Model Security Assessment
Assessment of chain-of-thought exploitation, reasoning trace manipulation, and thinking-token attacks.
Recon & Fingerprinting Assessment
Test your knowledge of AI system reconnaissance, model fingerprinting, architecture enumeration, and information gathering techniques with 8 beginner-level questions.
Red Team Methodology Assessment (Assessment)
Assessment on scoping, planning, execution, and reporting of AI red team engagements.
Red Team Engagement Planning Assessment
Assessment of planning, scoping, authorization, and execution methodology for AI red team engagements.
Responsible AI Disclosure Assessment
Assessment of responsible disclosure practices, vulnerability reporting, and coordinated disclosure for AI systems.
RLHF Exploitation Assessment
Assessment of reinforcement learning from human feedback pipeline vulnerabilities and reward hacking.
Steganographic Attack Assessment
Assessment of hidden payload delivery through steganography, zero-width characters, and encoding tricks.
AI Supply Chain Assessment
Assessment covering model provenance, dependency security, artifact integrity, and deployment verification.
Advanced Tool Proficiency Assessment
Advanced assessment on Garak, PyRIT, HarmBench, and custom tool development proficiency.
Tool Proficiency Assessment
Test your knowledge of AI red teaming tools, frameworks, automation platforms, and their appropriate application in security assessments with 9 intermediate-level questions.
Training Pipeline Security Assessment
Test your advanced knowledge of training pipeline attacks including data poisoning, fine-tuning hijacking, RLHF manipulation, and backdoor implantation with 9 questions.
Workflow Patterns Security Assessment
Assessment of sequential, parallel, and hierarchical agent workflow exploitation techniques.
Skill Verification Overview
Overview of timed skill verification labs for AI red teaming, including format, pass/fail criteria, and preparation guidance.
Capstone: Autonomous Vehicle AI Security
Full-scope security assessment of an autonomous vehicle AI decision system covering perception manipulation, planning attacks, and safety override bypass.
Capstone: Code Assistant Assessment
Capstone exercise: security assessment of an AI code assistant with repository and CI/CD access.
Capstone: Educational AI Platform
Security assessment of an AI tutoring platform addressing content safety, student data privacy, and academic integrity.
Capstone: Enterprise RAG Assessment
Capstone exercise: complete red team assessment of an enterprise RAG system with role-based access.
Capstone: Deep Assessment with Garak
Tool-specific capstone using Garak for comprehensive vulnerability scanning including plugin development and custom probe creation.
Capstone: Legal AI Review System
End-to-end security assessment of an AI-powered legal document review system covering data confidentiality, output integrity, and adversarial manipulation.
Capstone: Conduct a Full Model Security Audit
Perform a comprehensive security audit of an LLM deployment covering model behavior, API security, data handling, access controls, and compliance alignment.
Capstone: Multi-Agent System Assessment
Capstone exercise: end-to-end security assessment of a multi-agent platform with MCP and A2A.
Capstone: Multimodal System Assessment
Capstone exercise: red team assessment of a multimodal AI system processing images, documents, and text.
Capstone: Comprehensive RAG Security Assessment
Conduct a thorough security assessment of a Retrieval-Augmented Generation system, testing document poisoning, retrieval manipulation, context window attacks, and data exfiltration vectors.
Capstone: Supply Chain AI Security
Red team assessment of AI-driven supply chain optimization covering data poisoning, decision manipulation, and operational disruption.
AI Audit Methodologies
Structured methodologies for auditing AI systems covering technical, organizational, and compliance dimensions.
AI Impact Assessment Methodology
Methodology for conducting algorithmic impact assessments required by emerging regulations.
AI Insurance and Risk Transfer
Understanding AI insurance products and risk transfer mechanisms for organizational protection.
AI Vendor Security Assessment Framework
Framework for evaluating the security posture of AI vendors, model providers, and service integrations.
NIST AI 600-1 GenAI Risk Profile
NIST AI 600-1 Generative AI risk profile covering risk categories, control mappings, assessment methodology, and practical application for red team engagements.
Supplier AI Risk Assessment Guide
Conducting AI risk assessments of third-party suppliers and their AI components.
AI Governance Maturity Model
Assessing and advancing organizational AI governance maturity across multiple capability dimensions.
Education AI Security
Security risks of AI in education — covering academic integrity threats, adaptive learning manipulation, student data privacy, AI tutoring attacks, and assessment system exploitation.
Education Assessment AI Security
Security of AI-powered grading, plagiarism detection, personalized learning, and student evaluation systems.
Government AI Procurement Security
Security assessment considerations for government AI procurement and vendor evaluation.
Education Assessment AI Security (Industry Verticals)
Security of AI in educational assessment including automated grading, proctoring, and plagiarism detection.
Penetration Testing Methodology for AI Infrastructure
A structured methodology for penetration testing AI/ML systems covering reconnaissance, vulnerability assessment, exploitation, and reporting.
Lab: Cloud AI Security Assessment
Conduct an end-to-end security assessment of a cloud-deployed AI service, covering API security, model vulnerabilities, data handling, and infrastructure configuration.
Lab: Cloud AI Assessment
Hands-on lab for conducting an end-to-end security assessment of a cloud-deployed AI system including infrastructure review, API testing, model security evaluation, and data flow analysis.
Full Engagement Simulations
End-to-end red team engagement simulations that replicate real-world AI security assessments, from scoping through report delivery.
FinTech Chatbot Security Assessment
Conduct a full security assessment of a financial services chatbot handling sensitive transactions.
Legal AI Document Review Assessment
Assess a legal AI system that reviews contracts for vulnerabilities in document processing and privilege escalation.
Simulation: Startup AI Assessment
Red team a startup's AI-powered product with limited scope and budget, making pragmatic tradeoffs between thoroughness and time constraints.
MLflow Security Assessment
Security assessment of MLflow deployments including tracking server vulnerabilities, artifact store exploitation, and model registry attacks.
Methodology for Red Teaming Multimodal Systems
Structured methodology for conducting security assessments of multimodal AI systems, covering scoping, attack surface enumeration, test execution, and reporting with MITRE ATLAS mappings.
AI Red Team Maturity Model (Professional)
A structured maturity model for assessing and advancing the capabilities of AI red team programs across five progressive levels.
AI Red Teaming Methodology
A structured methodology for AI red teaming engagements, covering reconnaissance, target profiling, attack planning, and the tradecraft that distinguishes professional assessments.
Defense Mapping Methodology
Methodologies for systematically identifying and mapping the defensive controls protecting a target AI system before launching attacks.
Agentic System Assessment Methodology
Comprehensive methodology for assessing agentic AI systems including tool use, memory, and multi-agent interactions.
Multi-Model Assessment Methodology
Methodology for assessing applications that use multiple AI models in pipelines or ensemble configurations.