# tradecraft
82 articles tagged "tradecraft"
Advanced Tradecraft Assessment
Assessment covering OPSEC, evasion, persistence, attribution avoidance, and professional red team operations.
Red Team Tradecraft Assessment
Assessment covering OPSEC, evasion, reconnaissance, and professional red team operations.
Mapping Model Capabilities
Systematic approaches to discovering and mapping the full capability surface of an AI system, including tools, integrations, permissions, and hidden features.
AI Red Teaming Methodology
A structured methodology for AI red teaming engagements, covering reconnaissance, target profiling, attack planning, and the tradecraft that distinguishes professional assessments.
System Prompt Extraction
Techniques for extracting hidden system prompts from AI applications, revealing safety rules, tool definitions, behavioral constraints, and sensitive configuration.
Profiling AI Systems
Techniques for identifying the model, provider, version, safety configuration, and capabilities of a target AI system through behavioral analysis and fingerprinting.
Advanced Reconnaissance for AI Targets
Fingerprinting LLM providers, API reverse engineering, infrastructure detection, and shadow AI discovery for red team engagements.
System Prompt Extraction Techniques
Catalog of system prompt extraction methods against LLM-powered applications: direct attacks, indirect techniques, multi-turn strategies, and defensive evasion.
Adversarial Persistence Mechanisms
Techniques for maintaining persistent access to AI systems including conversation memory manipulation, cached response poisoning, and model weight persistence.
AI Red Team Communication Protocols
Communication and coordination protocols for AI red team operations including finding classification, escalation procedures, and deconfliction.
Advanced OPSEC for AI Red Teams
Advanced operational security practices for AI red team engagements including traffic obfuscation, attribution prevention, and covert testing.
AI Red Team OPSEC
Operational security for AI red team engagements including API key management and attribution avoidance.
AI Red Team Reconnaissance
Reconnaissance techniques for mapping AI system architecture, models, and defense configurations.
Attribution Avoidance in AI Testing
Techniques for conducting authorized testing while minimizing attribution signals in system logs.
Chaining AI Vulnerabilities
Techniques for chaining multiple AI vulnerabilities into reliable multi-step exploitation paths.
Counter-Forensics for AI Attacks
Techniques for evading forensic analysis during and after AI red team operations, including log manipulation and behavioral normalization.
Deconfliction Procedures for AI Testing
Procedures for deconflicting AI red team testing activities with production operations, monitoring teams, and other concurrent assessments.
Defense Mapping Methodology
Methodologies for systematically identifying and mapping the defensive controls protecting a target AI system before launching attacks.
Engagement Lifecycle Management
End-to-end management of AI red team engagements from proposal through execution to reporting and remediation verification.
Evasion Techniques for AI Classifiers
Advanced techniques for evading input/output safety classifiers in LLM applications.
Evidence Handling Procedures
Proper procedures for collecting, documenting, and preserving evidence during AI red team engagements to ensure findings are defensible.
Evidence Handling for Red Team Operations
Proper evidence handling procedures during AI red team engagements including collection, preservation, and documentation.
Fingerprinting LLM Models
Techniques for identifying which model, version, and configuration underlies an AI application.
Tradecraft
Advanced AI red team tradecraft covering reconnaissance techniques, AI-specific threat modeling, and structured engagement methodology for professional adversarial assessments.
Lateral Movement in AI Systems
Techniques for moving laterally through AI system architectures after initial compromise, including agent-to-agent pivoting and tool exploitation.
Continuous Red Teaming Programs
Designing and operating ongoing AI red team programs with automated testing pipelines, metrics dashboards, KPI frameworks, alert-driven assessments, and integration with CI/CD and model deployment workflows.
Evidence Collection & Chain of Custody (Tradecraft)
Standards for capturing, preserving, and documenting AI red team findings: conversation logs, API traces, bypass rate measurement, and evidence packaging for reproducible reporting.
Red Team Methodology Overview
A structured methodology for AI red team engagements: phases, deliverables, role definitions, and how AI-specific testing differs from traditional penetration testing.
Purple Teaming for AI
Collaborative attack-defense exercises for AI systems: structuring purple team engagements, real-time knowledge transfer, joint attack simulation, and measuring defensive improvement through iterative testing.
Scoping & Rules of Engagement
Defining scope, rules of engagement, authorization boundaries, and success criteria for AI red team engagements, with templates and checklists for common engagement types.
AI-Specific Threat Modeling
Adapting STRIDE for AI systems, building attack trees for LLM applications, identifying AI-specific threat categories, and producing actionable threat models that drive red team test plans.
Model Enumeration Techniques
Systematic techniques for identifying specific models, versions, and configurations behind API endpoints through behavioral analysis and probing.
Multi-Stage Attack Planning
Planning and executing multi-stage attacks against AI systems that chain multiple vulnerability classes into complete exploitation paths.
Payload Staging Techniques
Techniques for staging and delivering adversarial payloads in multi-step sequences that avoid detection by real-time monitoring systems.
Persistence in AI Systems
Achieving persistent access and influence in AI systems through memory, fine-tuning, and context manipulation.
Pivoting from AI to Traditional Infrastructure
Techniques for pivoting from AI system compromise to traditional infrastructure access.
Scope Management for AI Engagements
Managing engagement scope for AI red team assessments including boundary definition, escalation criteria, and responsible disclosure protocols.
Social Engineering in AI Context
Social engineering techniques adapted for AI-mediated interactions and agent-based systems.
Stealth Data Extraction Techniques
Stealthy techniques for extracting sensitive data from AI systems without triggering alerts.
Target Profiling for AI Systems
Building comprehensive profiles of target AI systems including architecture, capabilities, defenses, and known weaknesses before engagement.
Tool Selection for AI Red Teaming
Framework for selecting and configuring tools for AI red team engagements based on target architecture, engagement scope, and team capabilities.