# api
33 articles tagged with “api”
Case Study: Real-World Model Extraction
Analysis of documented model extraction attacks against commercial ML APIs.
API Provider Security Comparison
Comparative analysis of security features across major LLM API providers including OpenAI, Anthropic, Google, Mistral, and Cohere. Rate limiting, content filtering, data retention, and security controls.
Cloud Model Endpoint Security
Securing model endpoints in cloud deployments including authentication, authorization, and traffic management.
Embedding Model Extraction
Extracting embedding model behavior through systematic API querying.
Fine-Tuning API Exploitation
Exploiting commercial fine-tuning APIs (OpenAI, Anthropic) for safety bypass and model manipulation.
The AI API Ecosystem
A red teamer's guide to the AI API landscape — OpenAI, Anthropic, Google, AWS, Azure, open-source APIs, authentication patterns, and common security misconfigurations.
AI Deployment Patterns and Security Implications
How API-based, self-hosted, edge, and hybrid deployment patterns each create distinct security considerations and attack surfaces for AI systems.
Anatomy of an LLM API Call
Understand the HTTP request structure for OpenAI, Anthropic, and other LLM APIs — system messages, parameters, function calling, and common misconfigurations.
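The request anatomy described here can be sketched as a small builder function. This is a minimal, illustrative sketch of an OpenAI-style chat completion body; the model name and parameter defaults are placeholders, not authoritative values.

```python
import json

def build_chat_request(user_prompt, system_prompt=None, tools=None,
                       model="gpt-4o-mini", temperature=1.0):
    """Assemble the JSON body of an OpenAI-style chat completion request.

    Model name and defaults are illustrative; check the provider's
    current API reference before relying on them.
    """
    messages = []
    if system_prompt:
        # The system message sets model behavior and is a prime target
        # for injection and extraction attempts.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})

    body = {"model": model, "messages": messages, "temperature": temperature}
    if tools:
        # Function/tool definitions significantly expand the attack surface.
        body["tools"] = tools
    return body

req = build_chat_request("Summarize this log file.",
                         system_prompt="You are a helpful assistant.")
print(json.dumps(req, indent=2))
```

Printing the assembled body before sending it is a useful habit when auditing what a wrapper library actually transmits.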
Lab: Anthropic Claude API Basics
Set up the Anthropic Claude API for red teaming, learn authentication, the Messages API, system prompts, and how temperature and top-p affect attack success rates.
Multi-Provider API Exploration
Explore and compare API behaviors across OpenAI, Anthropic, and Google AI to understand provider-specific security characteristics.
API Rate Limit and Error Handling
Test LLM API rate limits and implement proper error handling for automated testing workflows.
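A generic error-handling pattern for automated testing is exponential backoff with jitter on rate-limit responses. The sketch below simulates a 429-raising endpoint rather than calling a real API; real SDKs raise their own error classes and may supply a `Retry-After` header worth honoring instead of a computed delay.

```python
import time
import random

class RateLimitError(Exception):
    """Stand-in for a provider SDK's 429 error class."""

def call_with_backoff(api_call, max_retries=5, base_delay=1.0):
    """Retry a callable on rate-limit errors with exponential backoff + jitter."""
    for attempt in range(max_retries):
        try:
            return api_call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # budget exhausted; surface the error to the caller
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Simulated endpoint that rate-limits the first two calls.
calls = {"n": 0}
def flaky_endpoint():
    calls["n"] += 1
    if calls["n"] <= 2:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

result = call_with_backoff(flaky_endpoint, base_delay=0.01)
print(result)  # succeeds on the third attempt
```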
Lab: API Response Parsing and Analysis
Parse and analyze LLM API responses to identify anomalies, safety filter triggers, and information leakage patterns.
Lab: API-Based Model Testing
Learn to test language models through their APIs including OpenAI, Anthropic, and local Ollama endpoints. Build reusable API testing functions with proper error handling.
Your First Claude API Call
Set up the Anthropic SDK and make your first Claude API call with system prompts and messages.
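The shape of a raw Messages API request can be sketched without the SDK. The model name and version date below are examples only; confirm both against current Anthropic documentation, and note that `max_tokens` is a required field.

```python
import json

API_URL = "https://api.anthropic.com/v1/messages"

def build_messages_request(api_key, prompt, system=None,
                           model="claude-sonnet-4-20250514", max_tokens=1024):
    """Build headers and body for an Anthropic Messages API call.

    Model name and version date are illustrative placeholders.
    """
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "max_tokens": max_tokens,  # required by the Messages API
        "messages": [{"role": "user", "content": prompt}],
    }
    if system:
        # Unlike OpenAI's format, the system prompt is a top-level field,
        # not a message with role "system".
        body["system"] = system
    return headers, body

headers, body = build_messages_request("sk-ant-...", "Hello, Claude",
                                       system="Answer in one sentence.")
print(json.dumps(body, indent=2))
```

Seeing the system prompt as a top-level field (rather than a message) matters when porting injection tests between providers.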
Your First LLM API Call with OpenAI
Set up your Python environment and make your first LLM API call to understand request/response patterns.
API Response Header Analysis
Analyze HTTP response headers from LLM APIs to fingerprint providers, versions, and middleware.
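A first-pass fingerprint can key on provider-prefixed headers. The header names below are ones commonly observed in the wild and are not guaranteed stable; treat any match as a hint, not proof.

```python
def fingerprint_provider(headers):
    """Guess the upstream LLM provider from HTTP response headers.

    Header names here are observational assumptions, not a spec.
    """
    h = {k.lower(): v for k, v in headers.items()}
    if any(k.startswith("openai-") for k in h):     # e.g. openai-organization
        return "openai"
    if any(k.startswith("anthropic-") for k in h):  # e.g. anthropic-ratelimit-*
        return "anthropic"
    if "cloudflare" in h.get("server", "").lower():
        return "unknown (behind Cloudflare)"
    return "unknown"

sample = {
    "Content-Type": "application/json",
    "anthropic-ratelimit-requests-remaining": "999",
    "request-id": "req_abc123",
}
print(fingerprint_provider(sample))
```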
Model Extraction via API Access
Extract a functionally equivalent model using only API query access.
API Abuse Pattern Lab
Discover and exploit API abuse patterns including parameter injection, header manipulation, and endpoint confusion.
API Parameter Fuzzing for LLMs
Systematically fuzz LLM API parameters to discover unexpected behaviors and bypasses.
Fine-Tuning API Security Probing
Probe fine-tuning APIs for security weaknesses including insufficient validation and unsafe default configurations.
Lab: Basic Model Extraction
Hands-on lab for API-based model extraction attacks, querying a target model to approximate its behavior, measuring fidelity, and understanding query budgets.
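The core loop of query-based extraction can be shown on a toy target: a hypothetical black-box "API" wrapping a hidden 1-D linear function, which two queries fully determine. Real models need thousands of queries and an approximate surrogate fit, but the fidelity measurement is the same idea.

```python
import random

# Hypothetical black-box target: pretend this is an API we can only query.
def target_api(x):
    return 3.0 * x + 1.5  # hidden parameters the attacker wants to recover

def extract_linear(query_fn):
    """Recover a 1-D linear model from query access alone.

    Toy stand-in for extraction: two queries suffice here, whereas real
    extraction trades query budget against surrogate fidelity.
    """
    x0, x1 = 0.0, 1.0
    y0, y1 = query_fn(x0), query_fn(x1)
    slope = (y1 - y0) / (x1 - x0)
    intercept = y0
    return lambda x: slope * x + intercept

surrogate = extract_linear(target_api)

# Fidelity: agreement between surrogate and target on held-out inputs.
random.seed(0)
max_err = max(abs(surrogate(x) - target_api(x))
              for x in (random.uniform(-10, 10) for _ in range(100)))
print(f"max disagreement: {max_err:.2e}")
```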
AI API Enumeration
Discovering AI API endpoints, parameters, model configurations, and undocumented features through systematic enumeration techniques.
LLM API Enumeration
Advanced techniques for enumerating LLM API capabilities, restrictions, hidden parameters, and undocumented features to build a comprehensive attack surface map.
LLM API Endpoint Reference
Reference for LLM API endpoints across providers with security-relevant parameters and options.
Model API Security Reference
Security reference for major model APIs including authentication, rate limits, and safety features.
AI API Reverse Engineering
Techniques for reverse engineering AI APIs including mapping undocumented endpoints, parameter discovery, rate limit profiling, and extracting implementation details from API behavior.
Advanced Reconnaissance for AI Targets
Fingerprinting LLM providers, API reverse engineering, infrastructure detection, and shadow AI discovery for red team engagements.
API Abuse Chain Attack Walkthrough
Chain multiple API calls to achieve unauthorized actions that no single call would permit.
API Chaining Exploitation Walkthrough
Walkthrough of chaining multiple API calls in agent systems to achieve multi-step unauthorized actions.
API Rate Limit Bypass
Techniques to bypass API rate limiting on LLM services, including header manipulation, distributed requests, authentication rotation, and endpoint discovery.
Inference Endpoint Exploitation
Exploiting inference API endpoints for unauthorized access, data exfiltration, and service abuse through authentication flaws, input validation gaps, and misconfigured permissions.
Model Extraction Attack Walkthrough
Walkthrough of extracting a model's weights or behavior through systematic API querying.
AI API Red Team Engagement
Complete walkthrough for testing AI APIs: endpoint enumeration, authentication bypass, rate limit evasion, input validation testing, output data leakage, and model fingerprinting through API behavior.