# framework
33 articlestagged with “framework”
LangChain Security Deep Dive
Comprehensive security analysis of LangChain including known CVEs and exploitation patterns.
DSPy Security Analysis
Security analysis of the DSPy framework including prompt optimization exploitation and pipeline injection.
AI Incident Classification Framework
Framework for classifying AI security incidents by type, severity, and response priority.
IR Playbook Framework for AI Systems
Incident response playbook framework for AI systems: playbook design principles, common structure, adaptation guidelines, and integration with existing IR processes.
LangChain & LlamaIndex Security
Security analysis of popular LLM orchestration frameworks. Common misconfigurations, known CVEs, insecure defaults, and hardening guides for LangChain, LlamaIndex, and related LLM application frameworks.
Function Calling Authorization Framework
Building fine-grained authorization frameworks for function calling that enforce capability-based security.
AI Defense Taxonomy
A comprehensive categorization of all AI defense approaches organized by layer, method, and effectiveness, providing a structured framework for evaluating defense strategies.
Attack Automation Framework
Building end-to-end attack automation frameworks that orchestrate reconnaissance, payload generation, execution, and result analysis.
Harness Development Guide
Building reusable test harnesses for LLM vulnerability assessment including target abstraction, payload delivery, and result collection.
Multi-Target Testing Framework
Build a framework for testing the same attack suite across multiple model providers simultaneously.
Inspect AI: UK AISI Evaluation Framework
Deep dive into the UK AI Safety Institute's Inspect framework: task design, solvers, scorers, building custom evaluations, and comparison to other AI evaluation frameworks.
Attack Result Scoring Framework
Develop a framework for automatically scoring attack results based on multiple success criteria.
AI Governance Framework Design
Designing organizational AI governance frameworks that integrate security, ethics, and compliance.
MITRE ATLAS Walkthrough
MITRE ATLAS tactics, techniques, and procedures for AI systems. How to use ATLAS for red team engagement planning and map attacks to ATLAS IDs.
OWASP LLM Top 10 Deep Dive
Each OWASP LLM Top 10 item explained with real-world examples, testing methodology for each category, and how to map red team findings to OWASP classifications.
AI Risk Appetite Framework Development
Developing organizational AI risk appetite frameworks that balance innovation with security and compliance.
UK AI Regulation Framework Analysis
Analysis of the UK's sector-specific AI regulation approach and its implications for red teaming.
Integration & Framework Security
Security analysis of AI integration frameworks including LangChain, LlamaIndex, and Semantic Kernel, covering common vulnerability patterns and exploitation techniques.
Lab: Building a Production Red Team Harness
Build a full-featured, production-quality red team harness with multi-model support, async testing, structured result storage, and HTML reporting.
Lab: Build Jailbreak Automation
Build an automated jailbreak testing framework that generates, mutates, and evaluates attack prompts at scale. Covers prompt mutation engines, success classifiers, and campaign management for systematic red team testing.
Methodology for Red Teaming Multimodal Systems
Structured methodology for conducting security assessments of multimodal AI systems, covering scoping, attack surface enumeration, test execution, and reporting with MITRE ATLAS mappings.
AI Red Team Methodology Standard
Standardized methodology for conducting AI red team assessments from scoping through reporting.
Ethics Framework for AI Red Teaming
Ethical framework for AI red team operations covering responsible disclosure, dual-use considerations, and professional conduct standards.
Prompt Injection Taxonomy
A comprehensive classification framework for prompt injection attacks, covering direct and indirect vectors, delivery mechanisms, target layers, and severity assessment for systematic red team testing.
MITRE ATLAS Quick Reference
Quick reference guide for MITRE ATLAS tactics, techniques, and procedures for AI systems.
Adversarial Robustness Testing Framework
Build a framework for continuously testing adversarial robustness of deployed LLM defense mechanisms.
Tool Call Authorization Framework
Implement a tool call authorization framework that validates tool invocations against policy before execution.
AI Security Metrics Framework
Framework for measuring and reporting on AI security posture using quantitative metrics.
Attack Prioritization Framework
Prioritize attack techniques based on target architecture, time constraints, and likelihood of success.
Using MITRE ATLAS for AI Attack Mapping
Walkthrough for mapping AI red team activities and findings to the MITRE ATLAS framework, covering tactic and technique identification, attack chain construction, and navigator visualization.
NIST AI RMF Assessment Walkthrough
Step-by-step guide for conducting assessments aligned with the NIST AI Risk Management Framework, covering the Govern, Map, Measure, and Manage functions for AI system security.
AI Vulnerability Prioritization Framework
Framework for prioritizing AI vulnerabilities by exploitability, impact, and remediation cost.
RAG Security Testing Framework
Build a framework for systematic security testing of RAG applications including poisoning and exfiltration.