# walkthrough
156 articlestagged with “walkthrough”
A2A Trust Boundary Attack
Advanced walkthrough of exploiting trust boundaries between agents in multi-agent systems using the Agent-to-Agent (A2A) protocol.
Agent Context Overflow
Walkthrough of overflowing agent context windows to push safety instructions out of the LLM's attention, enabling bypasses of system prompts and guardrails.
Agent Loop Hijacking
Advanced walkthrough of hijacking agentic loops to redirect autonomous agent behavior, alter reasoning chains, and achieve persistent control over multi-step agent workflows.
Agent Persistence via Memory
Advanced walkthrough of using agent memory systems to create persistent backdoors that survive restarts, updates, and session boundaries.
Callback Abuse in MCP
Advanced walkthrough of abusing MCP callback mechanisms for unauthorized actions, data exfiltration, and privilege escalation in agent-tool interactions.
Competition-Style Jailbreak Techniques
Walkthrough of jailbreak techniques used in AI security competitions and CTF events.
Computer Use Agent Injection Walkthrough
Walkthrough of injecting prompts through UI elements and screenshots processed by computer-use agents.
Data Harvesting Through LLM Apps
Complete walkthrough of systematic data extraction from LLM applications using various exfiltration channels.
Encoding Chain Bypass Walkthrough
Walkthrough of chaining Base64, URL encoding, and Unicode tricks to bypass multi-layer input filters.
Function Calling Parameter Injection
Walkthrough of manipulating function call parameters through prompt-level techniques, injecting malicious values into LLM-generated API calls.
MCP Tool Shadowing
Advanced walkthrough of creating shadow tools that override legitimate MCP (Model Context Protocol) tools, enabling interception and manipulation of agent-tool interactions.
Memory Persistence Attack Walkthrough
Walkthrough of achieving persistent memory manipulation in agent systems for cross-session influence.
Memory Poisoning Step by Step
Walkthrough of persisting injection payloads in agent memory systems to achieve long-term compromise of LLM-based agents.
Multi-Agent Prompt Relay
Advanced walkthrough of relaying prompt injection payloads across multiple agents in a pipeline, achieving cascading compromise of multi-agent systems.
Orchestrator Manipulation
Advanced walkthrough of attacking the orchestrator layer in multi-agent systems to gain control over task delegation, agent coordination, and system-wide behavior.
Plugin Confusion Attack
Walkthrough of confusing LLM agents about which plugin or tool to invoke, causing them to call the wrong tool or pass data to unintended destinations.
Agent Privilege Escalation Walkthrough
Walkthrough of escalating privileges in multi-agent systems through trust chain exploitation.
Semantic Camouflage Walkthrough
Walkthrough of crafting semantically camouflaged injections that evade both classifiers and human review.
Model Supply Chain Poisoning
Walkthrough of poisoning ML supply chains through dependency confusion, model weight manipulation, and hub attacks.
Tool Call Injection
Step-by-step walkthrough of injecting malicious parameters into LLM tool and function calls to execute unauthorized actions in agent systems.
Vision Model Attack Walkthrough (Attack Walkthrough)
Step-by-step walkthrough of visual prompt injection, adversarial images, and OCR exploitation in vision-language models.
XML and JSON Injection in LLM Apps
Walkthrough of exploiting XML and JSON parsing in LLM applications for injection and data manipulation.
Building a Production Input Sanitizer
Step-by-step walkthrough for building a production-grade input sanitizer that cleans, normalizes, and validates user prompts before they reach an LLM, covering encoding normalization, injection pattern stripping, length enforcement, and integration testing.
Canary Token Deployment
Step-by-step walkthrough for deploying canary tokens in LLM system prompts and context to detect prompt injection and data exfiltration attempts, covering token generation, placement strategies, monitoring, and alerting.
Capability-Based Access Control
Step-by-step walkthrough for implementing fine-grained capability controls for LLM features, covering capability token design, permission scoping, dynamic capability grants, and audit trails.
Constitutional Classifier Setup
Step-by-step walkthrough for implementing constitutional AI-style classifiers that evaluate LLM outputs against a set of principles, covering principle definition, classifier training, chain-of-thought evaluation, and deployment.
Setting Up Content Filtering
Step-by-step walkthrough for implementing multi-layer content filtering for AI applications: keyword filtering, classifier-based detection, LLM-as-judge evaluation, testing effectiveness, and tuning for production.
Deploying NeMo Guardrails
Step-by-step walkthrough for setting up NVIDIA NeMo Guardrails in production, covering installation, Colang configuration, custom actions, topical and safety rails, testing, and monitoring.
Dual LLM Architecture Setup
Step-by-step walkthrough for implementing a dual LLM pattern where one model generates responses and a second model validates them, covering architecture design, validator prompt engineering, latency optimization, and failure handling.
Setting Up AI Guardrails
Step-by-step walkthrough for implementing AI guardrails: input validation with NVIDIA NeMo Guardrails, prompt injection detection with rebuff, output filtering for PII and sensitive data, and content policy enforcement.
Hallucination Detection
Step-by-step walkthrough for detecting and flagging hallucinated content in LLM outputs, covering factual grounding checks, self-consistency verification, source attribution validation, and confidence scoring.
Building Input Guardrails for LLM Applications
Step-by-step walkthrough for implementing production-grade input guardrails that protect LLM applications from prompt injection, content policy violations, and resource abuse through multi-layer validation, classification, and rate limiting.
Incident Response Playbook for AI Security Breaches
Walkthrough for building an incident response playbook tailored to AI security breaches, covering detection triggers, triage procedures, containment strategies, investigation workflows, remediation validation, and post-incident review processes.
AI Incident Response Preparation
Step-by-step walkthrough for building AI incident response capabilities: playbook development, tabletop exercises, containment procedures, communication templates, and evidence collection workflows.
Defense Implementation Walkthroughs
Step-by-step guides for implementing AI security defenses: guardrail configuration, monitoring and detection setup, and incident response preparation for AI systems.
Instruction Hierarchy Enforcement (Defense Walkthrough)
Step-by-step walkthrough for enforcing instruction priority in LLM applications, ensuring system-level instructions always take precedence over user inputs through privilege separation, instruction tagging, and validation layers.
LLM Judge Implementation
Step-by-step walkthrough for using an LLM to judge another LLM's outputs for safety and quality, covering judge prompt design, scoring rubrics, calibration, cost optimization, and deployment patterns.
Validating and Sanitizing Model Outputs
Walkthrough for building output validation systems that verify LLM responses meet structural, factual, and safety requirements before delivery, covering schema validation, factual grounding checks, response consistency verification, and safe rendering.
Production Monitoring for LLM Security Events
Walkthrough for building production monitoring systems that detect LLM security events in real time, covering log collection, anomaly detection, alert configuration, dashboard design, and incident correlation.
AI Monitoring Setup
Step-by-step walkthrough for implementing AI system monitoring: inference logging, behavioral anomaly detection, alert configuration, dashboard creation, and integration with existing SIEM platforms.
Multi-Layer Input Validation
Step-by-step walkthrough for building a defense-in-depth input validation pipeline that combines regex matching, semantic similarity, ML classification, and rate limiting into a unified validation system for LLM applications.
Output Content Classifier
Step-by-step walkthrough for building a classifier to filter harmful LLM outputs, covering taxonomy definition, multi-label classification, threshold calibration, and deployment as a real-time output gate.
Output Filtering and Content Safety Implementation
Walkthrough for building output filtering systems that inspect and sanitize LLM responses before they reach users, covering content classifiers, PII detection, response validation, canary tokens, and filter bypass resistance.
PII Redaction Pipeline
Step-by-step walkthrough for building an automated PII detection and redaction pipeline for LLM outputs, covering regex-based detection, NER-based detection, presidio integration, redaction strategies, and compliance testing.
Prompt Classifier Training
Step-by-step walkthrough for training a machine learning classifier to detect malicious prompts, covering dataset curation, feature engineering, model selection, training pipeline, evaluation, and deployment as a real-time detection service.
ML-Based Prompt Injection Detection Systems
Walkthrough for building and deploying ML-based prompt injection detection systems, covering training data collection, feature engineering, model architecture selection, threshold tuning, production deployment, and continuous improvement.
Implementing Access Control in RAG Pipelines
Walkthrough for building access control systems in RAG pipelines that enforce document-level permissions, prevent cross-user data leakage, filter retrieved context based on user authorization, and resist retrieval poisoning attacks.
Rate Limiting and Abuse Prevention for LLM APIs
Walkthrough for implementing rate limiting and abuse prevention systems for LLM API endpoints, covering token bucket algorithms, per-user quotas, cost-based limiting, anomaly detection, and graduated enforcement.
AI Rate Limiting Walkthrough
Step-by-step walkthrough for implementing token-aware rate limiting for AI applications: request-level limiting, token budget enforcement, sliding window algorithms, abuse detection, and production deployment.
Regex-Based Prompt Filter
Step-by-step walkthrough for building a regex-based prompt filter that detects common injection payloads using pattern matching, covering pattern library construction, performance optimization, false positive management, and continuous updates.
Response Boundary Enforcement
Step-by-step walkthrough for keeping LLM responses within defined topic, format, and content boundaries, covering boundary definition, violation detection, response rewriting, and boundary drift monitoring.
Sandboxed Tool Execution
Step-by-step walkthrough for running LLM tool calls in isolated sandboxes, covering container-based isolation, resource limits, network restrictions, and output sanitization.
Sandboxing and Permission Models for Tool-Using Agents
Walkthrough for implementing sandboxing and permission models that constrain tool-using LLM agents, covering least-privilege design, parameter validation, execution sandboxes, approval workflows, and audit logging.
Semantic Similarity Detection
Step-by-step walkthrough for using text embeddings to detect semantically similar prompt injection attempts, covering embedding model selection, vector database setup, similarity threshold tuning, and production deployment.
Session Isolation Patterns
Step-by-step walkthrough for isolating user sessions in LLM applications to prevent cross-contamination of context, memory, and permissions between users.
Structured Output Validation
Step-by-step walkthrough for validating structured LLM outputs against schemas, covering JSON schema validation, type coercion, constraint enforcement, and handling malformed model outputs gracefully.
Toxicity Scoring Pipeline
Step-by-step walkthrough for building a toxicity scoring pipeline for LLM output filtering, covering model selection, multi-dimensional scoring, threshold calibration, and production deployment with real-time scoring.
Unicode Normalization Defense
Step-by-step walkthrough for implementing Unicode normalization to prevent encoding-based prompt injection bypasses, covering homoglyph detection, invisible character stripping, bidirectional text handling, and normalization testing.
Agent System Red Team Engagement
Complete walkthrough for testing tool-using AI agents: scoping agent capabilities, exploiting function calling, testing permission boundaries, multi-step attack chains, and session manipulation.
AI API Red Team Engagement
Complete walkthrough for testing AI APIs: endpoint enumeration, authentication bypass, rate limit evasion, input validation testing, output data leakage, and model fingerprinting through API behavior.
Chatbot Red Team Engagement
Step-by-step walkthrough for a complete chatbot red team assessment: scoping, system prompt extraction, content filter bypass, PII leakage testing, multi-turn manipulation, and professional reporting.
Engagement Walkthroughs Overview
Step-by-step walkthroughs for complete AI red team engagements: from scoping and reconnaissance through attack execution to reporting, organized by target system type.
Multi-Model System Red Team Engagement
Complete walkthrough for testing systems that use multiple AI models: model-to-model injection, routing logic exploitation, fallback chain abuse, inter-model data leakage, and orchestration layer attacks.
RAG System Red Team Engagement
Complete walkthrough for testing RAG applications: document injection, cross-scope retrieval exploitation, embedding manipulation, data exfiltration through retrieval, and chunk boundary attacks.
Measuring and Reporting AI Red Team Effectiveness
Walkthrough for defining, collecting, and reporting metrics that measure the effectiveness of AI red teaming programs, covering coverage metrics, detection rates, time-to-find analysis, remediation tracking, and ROI calculation.
Building AI-Specific Threat Models
Step-by-step walkthrough for creating threat models tailored to AI and LLM systems, covering asset identification, threat enumeration, attack tree construction, and risk prioritization.
Attack Execution Workflow
Step-by-step workflow for executing AI red team attacks: selecting techniques from recon findings, building attack chains, documenting findings in real-time, managing evidence, and knowing when to escalate or stop.
Mapping the Attack Surface of AI Systems
Systematic walkthrough for identifying and mapping every attack surface in an AI system, from user inputs through model inference to output delivery and tool integrations.
Communicating AI Red Team Findings to Stakeholders
Walkthrough for effectively communicating AI red team findings to diverse stakeholders, covering executive summaries, technical deep dives, live demonstrations, risk narratives, and remediation roadmaps tailored to audience expertise levels.
Setting Up Continuous AI Red Teaming Pipelines
Walkthrough for building continuous AI red teaming pipelines that automatically test LLM applications on every deployment, covering automated scan configuration, CI/CD integration, alert thresholds, regression testing, and dashboard reporting.
Engagement Kickoff Walkthrough
Step-by-step guide to launching an AI red team engagement: initial client meetings, scope definition, rules of engagement, legal agreements, environment setup, and tool selection.
Testing for EU AI Act Compliance
Walkthrough for conducting red team assessments that evaluate compliance with the EU AI Act requirements, covering risk classification, mandatory testing obligations, and documentation requirements.
Evidence Collection and Documentation Best Practices
Walkthrough for systematic evidence collection during AI red team engagements, covering request/response capture, screenshot methodology, chain-of-custody documentation, reproducibility requirements, and evidence organization for reports.
Evidence Collection Methods for AI Red Teams
Comprehensive methods for collecting, preserving, and organizing red team evidence from AI system assessments, including API logs, screenshots, reproduction scripts, and chain-of-custody procedures.
Writing Executive Summaries for AI Red Team Reports
Guide to writing clear, impactful executive summaries for AI red team assessment reports that communicate risk to non-technical stakeholders and drive remediation decisions.
Classifying AI Vulnerability Severity
Framework for consistently classifying the severity of AI and LLM vulnerabilities, with scoring criteria, impact assessment, and examples across common finding categories.
Methodology Walkthroughs
Step-by-step walkthroughs for each phase of an AI red team engagement: kickoff, reconnaissance, attack execution, and report writing.
Preparing for ISO 42001 AI Management System Audit
Advanced walkthrough for preparing organizations for ISO 42001 AI management system audits, covering control assessment, evidence preparation, gap remediation, and audit readiness.
Using MITRE ATLAS for AI Attack Mapping
Walkthrough for mapping AI red team activities and findings to the MITRE ATLAS framework, covering tactic and technique identification, attack chain construction, and navigator visualization.
Mapping Findings to OWASP LLM Top 10
Walkthrough for mapping AI red team findings to the OWASP Top 10 for LLM Applications, with classification guidance, reporting templates, and remediation mapping.
Comparative Security Testing Across Multiple LLMs
Walkthrough for conducting systematic comparative security testing across multiple LLM providers and configurations, covering test standardization, parallel execution, cross-model analysis, and differential vulnerability reporting.
NIST AI RMF Assessment Walkthrough
Step-by-step guide for conducting assessments aligned with the NIST AI Risk Management Framework, covering the Govern, Map, Measure, and Manage functions for AI system security.
Pre-Engagement Preparation Checklist
Complete pre-engagement preparation checklist for AI red team operations covering team readiness, infrastructure setup, legal requirements, and initial reconnaissance planning.
Reconnaissance Workflow
Systematic reconnaissance workflow for AI red team engagements: system prompt extraction, model identification, capability mapping, API enumeration, and documenting the attack surface.
Verifying That Remediations Are Effective
Walkthrough for planning and executing remediation verification testing (retesting) to confirm that AI vulnerability fixes are effective and do not introduce regressions.
Report Writing Walkthrough
Step-by-step guide to writing AI red team reports: structure, executive summary, technical findings, risk ratings, remediation recommendations, peer review, and delivery.
Risk Scoring Frameworks for AI Vulnerabilities
Walkthrough for applying risk scoring frameworks to AI and LLM vulnerabilities, covering CVSS adaptation for AI, custom AI risk scoring matrices, severity classification, business impact assessment, and integration with existing vulnerability management processes.
Rules of Engagement Template for AI Red Team Operations
Step-by-step guide to creating comprehensive rules of engagement documents for AI red team assessments, covering authorization, scope, constraints, communication, and legal protections.
How to Scope an AI Red Team Engagement
Comprehensive walkthrough for scoping AI red team engagements from initial client contact through statement of work, covering target enumeration, risk-based prioritization, resource estimation, boundary definition, and legal considerations.
AI Red Team Scoping Checklist Walkthrough
Systematic walkthrough of the pre-engagement scoping process for AI red team assessments: stakeholder identification, target enumeration, scope boundary definition, resource estimation, and rules of engagement documentation.
Creating Detailed Technical Appendices
Guide to building comprehensive technical appendices for AI red team reports, including evidence formatting, reproduction procedures, tool output presentation, and raw data organization.
Developing Comprehensive AI Security Test Plans
Step-by-step guide to developing structured test plans for AI red team engagements, covering test case design, automation strategy, coverage mapping, and execution scheduling.
Threat Modeling for LLM-Powered Applications
Step-by-step walkthrough for conducting threat modeling sessions specifically tailored to LLM-powered applications, covering data flow analysis, trust boundary identification, AI-specific threat enumeration, risk assessment, and mitigation planning.
AI Threat Modeling Workshop Walkthrough
Step-by-step guide to running an AI-focused threat modeling workshop: adapting STRIDE for AI systems, constructing attack trees for LLM applications, participant facilitation techniques, and producing actionable threat models.
Anyscale Ray Serve ML Testing
End-to-end walkthrough for security testing Ray Serve ML deployments on Anyscale: cluster enumeration, serve endpoint exploitation, Ray Dashboard exposure, actor isolation testing, and observability review.
AutoGen Multi-Agent System Testing
End-to-end walkthrough for security testing AutoGen multi-agent systems: agent enumeration, inter-agent injection, code execution sandbox assessment, conversation manipulation, and escalation path analysis.
AWS SageMaker Red Teaming
End-to-end walkthrough for red teaming ML models deployed on AWS SageMaker: endpoint enumeration, IAM policy analysis, model extraction testing, inference pipeline exploitation, and CloudTrail log review.
Azure ML Security Testing
End-to-end walkthrough for security testing Azure Machine Learning endpoints: workspace enumeration, managed online endpoint exploitation, compute instance assessment, data store access review, and Azure Monitor analysis.
Azure OpenAI Red Team Walkthrough
Complete red team walkthrough for Azure OpenAI deployments: testing content filters, managed identity exploitation, prompt flow injection, data integration attacks, and Azure Monitor evasion.
Azure OpenAI Red Team Walkthrough (Platform Walkthrough)
End-to-end walkthrough for red teaming Azure OpenAI deployments: deployment configuration review, content filtering bypass testing, managed identity exploitation, prompt flow assessment, and diagnostic log analysis.
AWS Bedrock Red Team Walkthrough
Complete guide to red teaming AWS Bedrock deployments: testing guardrails bypass techniques, knowledge base data exfiltration, agent prompt injection, model customization abuse, and CloudTrail evasion.
AWS Bedrock Red Team Walkthrough (Platform Walkthrough)
End-to-end walkthrough for red teaming AI systems on AWS Bedrock: setting up access, invoking models via the Converse API, testing Bedrock Guardrails, exploiting knowledge bases, and analyzing CloudTrail logs.
CrewAI Agent Application Security Testing
End-to-end walkthrough for security testing CrewAI agent applications: crew enumeration, agent role exploitation, task injection, tool security assessment, delegation chain manipulation, and output validation.
Databricks MLflow Deployment Audit
End-to-end walkthrough for auditing MLflow deployments on Databricks: workspace enumeration, model registry security, serving endpoint testing, Unity Catalog integration review, and audit log analysis.
DSPy Pipeline Security Testing
End-to-end walkthrough for security testing DSPy optimized LLM pipelines: module enumeration, signature exploitation, optimizer manipulation, retrieval module assessment, and compiled prompt analysis.
GCP Vertex AI Security Testing
End-to-end walkthrough for security testing Vertex AI deployments on Google Cloud: endpoint enumeration, IAM policy analysis, model serving exploitation, pipeline assessment, and Cloud Audit Logs review.
Hugging Face Security Audit Walkthrough
Step-by-step walkthrough for auditing Hugging Face models: scanning for malicious model files, verifying model provenance, assessing model card completeness, and testing Spaces and Inference API security.
HuggingFace Spaces Security Testing
End-to-end walkthrough for security testing HuggingFace Spaces applications: Space enumeration, Gradio/Streamlit exploitation, API endpoint testing, secret management review, and model access control assessment.
Hugging Face Hub Red Team Walkthrough
Walkthrough for assessing AI models on Hugging Face Hub: model security assessment, scanning for malicious models, Transformers library testing, and Spaces application evaluation.
Cloud AI Platform Walkthroughs
Hands-on walkthroughs for red teaming AI systems deployed on major cloud platforms: AWS Bedrock, Azure OpenAI, Google Vertex AI, and Hugging Face Hub.
LangChain Application Security Testing
End-to-end walkthrough for security testing LangChain applications: chain enumeration, prompt injection through chains, tool and agent exploitation, retrieval augmented generation attacks, and memory manipulation.
LlamaIndex RAG Application Security Testing
End-to-end walkthrough for security testing LlamaIndex RAG applications: index enumeration, query engine exploitation, data connector assessment, response synthesis manipulation, and agent pipeline testing.
Modal Serverless AI Deployment Testing
End-to-end walkthrough for security testing Modal serverless AI deployments: function enumeration, web endpoint exploitation, secret management assessment, volume security testing, and container escape analysis.
Ollama Security Testing Walkthrough
Complete walkthrough for security testing locally-hosted models with Ollama: comparing safety across models, testing system prompt extraction, API security assessment, and Modelfile configuration hardening.
Replicate API Security Testing
End-to-end walkthrough for security testing models on Replicate: model enumeration, prediction API exploitation, webhook security, Cog container assessment, and billing abuse prevention.
RunPod Serverless GPU Endpoint Testing
End-to-end walkthrough for security testing RunPod serverless GPU endpoints: endpoint enumeration, handler exploitation, webhook security, Docker template assessment, and cost abuse prevention.
Microsoft Semantic Kernel Security Testing
End-to-end walkthrough for security testing Semantic Kernel applications: kernel enumeration, plugin exploitation, planner manipulation, memory and RAG assessment, and Azure integration security review.
Together AI Security Testing
End-to-end walkthrough for security testing Together AI deployments: API enumeration, inference endpoint exploitation, fine-tuning security review, function calling assessment, and rate limit analysis.
Vertex AI Red Team Walkthrough
End-to-end walkthrough for red teaming Google Cloud Vertex AI: prediction endpoint testing, Model Garden security assessment, Feature Store probing, and Cloud Logging analysis.
Vertex AI Red Team Walkthrough (Platform Walkthrough)
Complete red team walkthrough for Google Vertex AI: testing prediction endpoints, Model Garden assessments, Feature Store probing, and exploiting Vertex AI Agents and Extensions.
Adversarial Robustness Testing with ARTKit
Walkthrough for using ARTKit (Adversarial Robustness Testing Kit) to evaluate LLM application resilience through automated adversarial testing, covering test flow configuration, challenger setup, evaluator design, and results analysis.
Burp Suite for AI APIs
Using Burp Suite to intercept, analyze, and fuzz LLM API calls: proxy setup, intercepting streaming responses, parameter fuzzing with Intruder, and building custom extensions for AI-specific testing.
Using Burp Suite for LLM API Endpoint Testing
Walkthrough for using Burp Suite to intercept, analyze, and attack LLM API endpoints, covering proxy configuration, request manipulation, automated scanning for injection flaws, and custom extensions for AI-specific testing.
Counterfit Walkthrough
Complete walkthrough of Microsoft's Counterfit adversarial ML testing framework: installation, target configuration, running attacks against ML models, interpreting results, and automating adversarial robustness assessments.
Writing Custom Garak Probes for Novel Attack Vectors
Advanced walkthrough for building custom Garak probes that target novel and emerging attack vectors, covering probe architecture, payload generation, detector pairing, and integration into automated scanning pipelines.
Integrating Garak into CI/CD Pipelines
Intermediate walkthrough on automating garak vulnerability scans within CI/CD pipelines, including GitHub Actions, GitLab CI, threshold-based gating, result caching, and cost management strategies.
Writing Custom Garak Probes
Intermediate walkthrough on creating custom garak probes tailored to application-specific attack surfaces, including probe structure, prompt engineering, custom detectors, and testing workflows.
Building Custom Garak Detectors
Advanced walkthrough on creating custom garak detectors for specific success criteria, including regex-based detectors, ML-based classifiers, multi-signal scoring, and integration with external evaluation services.
Running Your First Garak Scan
Step-by-step beginner walkthrough for running your very first garak vulnerability scan from zero, covering installation, target setup, probe selection, and basic result interpretation.
Writing Garak Generator Plugins for Custom API Targets
Advanced walkthrough on writing garak generator plugins to connect to custom API endpoints, proprietary model servers, and non-standard inference interfaces for vulnerability scanning.
Setting Up Garak Probes for MCP Tool Interactions
Advanced walkthrough on configuring garak probes that target Model Context Protocol (MCP) tool interactions, testing for tool misuse, privilege escalation through tools, and data exfiltration via tool calls.
Comparing Vulnerability Profiles Across Models with Garak
Intermediate walkthrough on using garak to run identical vulnerability scans across multiple models, comparing results to understand relative security postures and make informed model selection decisions.
Deep Dive into Garak Scan Report Analysis
Intermediate walkthrough on analyzing garak scan reports, including JSONL parsing, false positive identification, vulnerability categorization, executive summary generation, and trend tracking.
Garak End-to-End Walkthrough
Complete walkthrough of NVIDIA's garak LLM vulnerability scanner: installation, configuration, running probes against local and hosted models, interpreting results, writing custom probes, and CI/CD integration.
HarmBench Evaluation Framework Walkthrough
Complete walkthrough of the HarmBench evaluation framework: installation, running standardized benchmarks against models, interpreting results, creating custom behavior evaluations, and comparing model safety across versions.
Inspect AI Walkthrough
Complete walkthrough of UK AISI's Inspect AI framework: installation, writing evaluations, running against models, custom scorers, benchmark suites, and producing compliance-ready reports.
Security Testing LangChain Applications
Step-by-step walkthrough for identifying and exploiting security vulnerabilities in LangChain-based applications, covering chain injection, agent manipulation, tool abuse, retrieval poisoning, and memory extraction attacks.
Langfuse Observability Walkthrough
Complete walkthrough for using Langfuse to monitor AI applications for security anomalies: setting up tracing, building security dashboards, detecting prompt injection patterns, and creating automated alerts.
NeMo Guardrails Walkthrough
End-to-end walkthrough of NVIDIA NeMo Guardrails: installation, Colang configuration, dialog flow design, integration with LLM applications, and red team bypass testing techniques.
Local Model Analysis and Testing with Ollama
Walkthrough for using Ollama to run, analyze, and security-test local LLMs, covering model configuration, safety boundary testing, system prompt extraction, fine-tuning vulnerability assessment, and building a local red team lab.
Ollama for Local Red Teaming
Using Ollama as a local red teaming environment: model selection, running uncensored models, API-based testing, comparing safety across model families, and building a cost-free testing lab.
Running Your First Promptfoo Evaluation
Beginner walkthrough for running your first promptfoo evaluation from scratch, covering installation, configuration, test case creation, assertion writing, and result interpretation.
Automating Red Team Evaluations with Promptfoo
Complete walkthrough for setting up automated red team evaluation pipelines using Promptfoo, covering configuration, custom evaluators, adversarial dataset generation, CI integration, and result analysis.
Promptfoo End-to-End Walkthrough
Complete walkthrough of promptfoo for AI red teaming: configuration files, provider setup, running evaluations, red team plugins, assertion-based scoring, reporting, and CI/CD integration.
Integrating PyRIT with Azure OpenAI and Content Safety
Intermediate walkthrough on integrating PyRIT with Azure OpenAI Service and Azure AI Content Safety for enterprise red teaming, including managed identity authentication, content filtering analysis, and compliance reporting.
Building Converter Pipelines for Payload Transformation in PyRIT
Intermediate walkthrough on using PyRIT's converter system to transform attack payloads through encoding, translation, paraphrasing, and other obfuscation techniques to evade input filters.
Creating Custom Scorers for PyRIT Attack Evaluation
Intermediate walkthrough on building custom PyRIT scorers for evaluating attack success, including pattern-based, LLM-based, and multi-criteria scoring approaches.
Running Your First PyRIT Red Team Campaign
Beginner walkthrough for running your first PyRIT red team campaign from scratch, covering installation, target configuration, orchestrator setup, and basic result analysis.
Using the PyRIT UI Frontend
Beginner walkthrough on using PyRIT's web-based UI frontend for visual red team campaign management, including launching campaigns, monitoring progress, and reviewing results without writing code.
Orchestrating Multi-Turn Attack Sequences with PyRIT
Intermediate walkthrough on using PyRIT's orchestration capabilities for multi-turn red team campaigns, including attack strategy design, conversation management, and adaptive scoring.
Microsoft PyRIT for Orchestrated Multi-Turn Attacks
Comprehensive walkthrough for using Microsoft PyRIT to design and execute orchestrated multi-turn attack campaigns against LLM applications, covering orchestrator configuration, converter chains, scoring strategies, and campaign analysis.
Generating Professional Reports from PyRIT Campaigns
Intermediate walkthrough on generating professional red team reports from PyRIT campaign data, including executive summaries, technical findings, remediation guidance, and visual dashboards.
Configuring Diverse Targets in PyRIT
Intermediate walkthrough on configuring PyRIT targets for various model providers, custom APIs, local models, and application endpoints including authentication, system prompts, and rate limiting.
PyRIT End-to-End Walkthrough
Complete walkthrough of Microsoft's Python Risk Identification Toolkit: setup, connecting to targets, running orchestrators, using converters, multi-turn attacks, and analyzing results with the web UI.
Python Red Team Automation
Building custom AI red team automation with Python: test harnesses with httpx and aiohttp, result collection and analysis, automated reporting, and integration with existing tools like promptfoo and garak.
Testing Prompt Injection Defenses with Rebuff
Walkthrough for using Rebuff to test and evaluate prompt injection detection capabilities, covering installation, detection pipeline analysis, adversarial evasion testing, custom rule development, and benchmarking detection accuracy.