# api
66 articles tagged "api"
Case Study: Real-World Model Extraction
Analysis of documented model extraction attacks against commercial ML APIs.
API Provider Security Comparison
Comparative analysis of security features across major LLM API providers including OpenAI, Anthropic, Google, Mistral, and Cohere. Rate limiting, content filtering, data retention, and security controls.
Cloud Model Endpoint Security
Securing model endpoints in cloud deployments including authentication, authorization, and traffic management.
Embedding Model Extraction
Extracting embedding model behavior through systematic API querying.
Fine-Tuning API Exploitation
Exploiting commercial fine-tuning APIs (OpenAI, Anthropic) for safety bypass and model manipulation.
The AI API Ecosystem
A red teamer's guide to the AI API landscape — OpenAI, Anthropic, Google, AWS, Azure, open-source APIs, authentication patterns, and common security misconfigurations.
AI Deployment Patterns and Security Implications
How API-based, self-hosted, edge, and hybrid deployment patterns each create distinct security considerations and attack surfaces for AI systems.
Anatomy of an LLM API Call
Understand the HTTP request structure for OpenAI, Anthropic, and other LLM APIs — system messages, parameters, function calling, and common misconfigurations.
Lab: Anthropic Claude API Basics
Set up the Anthropic Claude API for red teaming, learn authentication, the Messages API, system prompts, and how temperature and top-p affect attack success rates.
Multi-Provider API Exploration
Explore and compare API behaviors across OpenAI, Anthropic, and Google AI to understand provider-specific security characteristics.
API Rate Limit and Error Handling
Test LLM API rate limits and implement proper error handling for automated testing workflows.
Lab: API Response Parsing and Analysis
Parse and analyze LLM API responses to identify anomalies, safety filter triggers, and information leakage patterns.
Lab: API-Based Model Testing
Learn to test language models through their APIs including OpenAI, Anthropic, and local Ollama endpoints. Build reusable API testing functions with proper error handling.
Your First Claude API Call
Set up the Anthropic SDK and make your first Claude API call with system prompts and messages.
Your First LLM API Call with OpenAI
Set up your Python environment and make your first LLM API call to understand request/response patterns.
API Response Header Analysis
Analyze HTTP response headers from LLM APIs to fingerprint providers, versions, and middleware.
Model Extraction via API Access
Extract a functionally equivalent model using only API query access.
API Abuse Pattern Lab
Discover and exploit API abuse patterns including parameter injection, header manipulation, and endpoint confusion.
API Parameter Fuzzing for LLMs
Systematically fuzz LLM API parameters to discover unexpected behaviors and bypasses.
Fine-Tuning API Security Probing
Probe fine-tuning APIs for security weaknesses including insufficient validation and unsafe default configurations.
Lab: Basic Model Extraction
Hands-on lab for API-based model extraction attacks, querying a target model to approximate its behavior, measuring fidelity, and understanding query budgets.
AI API Enumeration
Discovering AI API endpoints, parameters, model configurations, and undocumented features through systematic enumeration techniques.
LLM API Enumeration
Advanced techniques for enumerating LLM API capabilities, restrictions, hidden parameters, and undocumented features to build a comprehensive attack surface map.
LLM API Endpoint Reference
Reference for LLM API endpoints across providers with security-relevant parameters and options.
Model API Security Reference
Security reference for major model APIs including authentication, rate limits, and safety features.
AI API Reverse Engineering
Techniques for reverse engineering AI APIs including mapping undocumented endpoints, parameter discovery, rate limit profiling, and extracting implementation details from API behavior.
Advanced Reconnaissance for AI Targets
Fingerprinting LLM providers, API reverse engineering, infrastructure detection, and shadow AI discovery for red team engagements.
API Abuse Chain Attack Walkthrough
Chain multiple API calls to achieve unauthorized actions that no single call would permit.
API Chaining Exploitation Walkthrough
Walkthrough of chaining multiple API calls in agent systems to achieve multi-step unauthorized actions.
API Rate Limit Bypass
Techniques to bypass API rate limiting on LLM services, including header manipulation, distributed requests, authentication rotation, and endpoint discovery.
Inference Endpoint Exploitation
Exploiting inference API endpoints for unauthorized access, data exfiltration, and service abuse through authentication flaws, input validation gaps, and misconfigured permissions.
Model Extraction Attack Walkthrough
Walkthrough of extracting model weights/behavior through systematic API querying.
AI API Red Team Engagement
Complete walkthrough for testing AI APIs: endpoint enumeration, authentication bypass, rate limit evasion, input validation testing, output data leakage, and model fingerprinting through API behavior.