# tradecraft
41 articles tagged with “tradecraft”
Advanced Tradecraft Assessment
Assessment covering OPSEC, evasion, persistence, attribution avoidance, and professional red team operations.
Red Team Tradecraft Assessment
Assessment covering OPSEC, evasion, reconnaissance, and professional red team operations.
Mapping Model Capabilities
Systematic approaches to discovering and mapping the full capability surface of an AI system, including tools, integrations, permissions, and hidden features.
AI Red Teaming Methodology
A structured methodology for AI red teaming engagements, covering reconnaissance, target profiling, attack planning, and the tradecraft that distinguishes professional assessments.
System Prompt Extraction
Techniques for extracting hidden system prompts from AI applications, revealing safety rules, tool definitions, behavioral constraints, and sensitive configuration.
Profiling AI Systems
Techniques for identifying the model, provider, version, safety configuration, and capabilities of a target AI system through behavioral analysis and fingerprinting.
Advanced Reconnaissance for AI Targets
Fingerprinting LLM providers, API reverse engineering, infrastructure detection, and shadow AI discovery for red team engagements.
System Prompt Extraction Techniques
Catalog of system prompt extraction methods against LLM-powered applications: direct attacks, indirect techniques, multi-turn strategies, and defensive evasion.
Adversarial Persistence Mechanisms
Techniques for maintaining persistent access to AI systems including conversation memory manipulation, cached response poisoning, and model weight persistence.
AI Red Team Communication Protocols
Communication and coordination protocols for AI red team operations including finding classification, escalation procedures, and deconfliction.
Advanced OPSEC for AI Red Teams
Advanced operational security practices for AI red team engagements including traffic obfuscation, attribution prevention, and covert testing.
AI Red Team OPSEC
Operational security for AI red team engagements including API key management and attribution avoidance.
AI Red Team Reconnaissance
Reconnaissance techniques for mapping AI system architecture, models, and defense configurations.
Attribution Avoidance in AI Testing
Techniques for conducting authorized testing while minimizing attribution signals in system logs.
Chaining AI Vulnerabilities
Techniques for chaining multiple AI vulnerabilities into reliable multi-step exploitation paths.
Counter-Forensics for AI Attacks
Techniques for evading forensic analysis during and after AI red team operations, including log manipulation and behavioral normalization.
Deconfliction Procedures for AI Testing
Procedures for deconflicting AI red team testing activities with production operations, monitoring teams, and other concurrent assessments.
Defense Mapping Methodology
Methodologies for systematically identifying and mapping the defensive controls protecting a target AI system before launching attacks.
Engagement Lifecycle Management
End-to-end management of AI red team engagements from proposal through execution to reporting and remediation verification.
Evasion Techniques for AI Classifiers
Advanced techniques for evading input/output safety classifiers in LLM applications.
Evidence Handling Procedures
Proper procedures for collecting, documenting, and preserving evidence during AI red team engagements to ensure findings are defensible.
Evidence Handling for Red Team Operations
Proper evidence handling procedures during AI red team engagements including collection, preservation, and documentation.
Fingerprinting LLM Models
Techniques for identifying which model, version, and configuration underlies an AI application.
Tradecraft
Advanced AI red team tradecraft covering reconnaissance techniques, AI-specific threat modeling, and structured engagement methodology for professional adversarial assessments.
Lateral Movement in AI Systems
Techniques for moving laterally through AI system architectures after initial compromise, including agent-to-agent pivoting and tool exploitation.
Continuous Red Teaming Programs
Designing and operating ongoing AI red team programs with automated testing pipelines, metrics dashboards, KPI frameworks, alert-driven assessments, and integration with CI/CD and model deployment workflows.
Evidence Collection & Chain of Custody (Tradecraft)
Standards for capturing, preserving, and documenting AI red team findings: conversation logs, API traces, bypass rate measurement, and evidence packaging for reproducible reporting.
Red Team Methodology Overview
A structured methodology for AI red team engagements: phases, deliverables, role definitions, and how AI-specific testing differs from traditional penetration testing.
Purple Teaming for AI
Collaborative attack-defense exercises for AI systems: structuring purple team engagements, real-time knowledge transfer, joint attack simulation, and measuring defensive improvement through iterative testing.
Scoping & Rules of Engagement
Defining scope, rules of engagement, authorization boundaries, and success criteria for AI red team engagements, with templates and checklists for common engagement types.
AI-Specific Threat Modeling
Adapting STRIDE for AI systems, building attack trees for LLM applications, identifying AI-specific threat categories, and producing actionable threat models that drive red team test plans.
Model Enumeration Techniques
Systematic techniques for identifying specific models, versions, and configurations behind API endpoints through behavioral analysis and probing.
Multi-Stage Attack Planning
Planning and executing multi-stage attacks against AI systems that chain multiple vulnerability classes into complete exploitation paths.
Payload Staging Techniques
Techniques for staging and delivering adversarial payloads in multi-step sequences that avoid detection by real-time monitoring systems.
Persistence in AI Systems
Achieving persistent access and influence in AI systems through memory, fine-tuning, and context manipulation.
Pivoting from AI to Traditional Infrastructure
Techniques for pivoting from AI system compromise to traditional infrastructure access.
Scope Management for AI Engagements
Managing engagement scope for AI red team assessments including boundary definition, escalation criteria, and responsible disclosure protocols.
Social Engineering in AI Context
Social engineering techniques adapted for AI-mediated interactions and agent-based systems.
Stealth Data Extraction Techniques
Stealthy techniques for extracting sensitive data from AI systems without triggering alerts.
Target Profiling for AI Systems
Building comprehensive profiles of target AI systems including architecture, capabilities, defenses, and known weaknesses before engagement.
Tool Selection for AI Red Teaming
Framework for selecting and configuring tools for AI red team engagements based on target architecture, engagement scope, and team capabilities.