# api-security
Articles tagged "api-security"
Function Calling Exploitation
Overview of how LLM function/tool calling works, the attack surface it creates, and systematic approaches to exploiting function calling interfaces in AI systems.
Infrastructure Security Assessment (Assessment)
Test your knowledge of AI infrastructure security including model serving, API security, deployment architectures, and supply chain risks with 10 intermediate-level questions.
LLMOps Security Assessment (Assessment)
Test your understanding of MLOps pipeline security, model deployment attacks, API security, monitoring gaps, model registry poisoning, and CI/CD for ML with 10 questions.
Security of AI-Generated API Endpoints
Analysis of security vulnerabilities in AI-generated REST and GraphQL API code, covering authentication bypass, BOLA, mass assignment, and rate limiting failures.
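The BOLA (broken object-level authorization) pattern named above is worth a concrete illustration: a handler that trusts a client-supplied object ID versus one that checks ownership. This is a minimal sketch, not code from the article; the document store and handler names are hypothetical.

```python
# Illustrative in-memory store; a real API would query a database.
DOCUMENTS = {
    "doc-1": {"owner": "alice", "body": "alice's notes"},
    "doc-2": {"owner": "bob", "body": "bob's notes"},
}

def get_document_vulnerable(user: str, doc_id: str) -> dict:
    # BOLA: any authenticated user can read any document by guessing IDs.
    return DOCUMENTS[doc_id]

def get_document_fixed(user: str, doc_id: str) -> dict:
    # Object-level check: the requested object must belong to the caller.
    doc = DOCUMENTS[doc_id]
    if doc["owner"] != user:
        raise PermissionError(f"{user} may not read {doc_id}")
    return doc
```

AI code generators frequently emit the first form because the prompt only describes the happy path; the ownership check has to be asked for explicitly or added in review.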
June 2026: Cloud AI Security Challenge
Find and document vulnerabilities in a cloud-deployed AI service covering API security, model serving infrastructure, authentication, and data handling.
AI API Abuse Detection
Detecting and mitigating API abuse patterns targeting AI inference endpoints including prompt extraction and model theft.
LLM API Security Testing
Security testing methodology for LLM APIs, covering authentication, rate limiting, input validation, output filtering, and LLM-specific API vulnerabilities.
Integration & Webhook Security
Methodology for exploiting SSRF through LLM tools, webhook hijacking, insecure function dispatch, output parsing vulnerabilities, OAuth/API key management flaws, and MCP server security in AI pipelines.
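The SSRF-through-LLM-tools vector typically works by coaxing the model into passing an internal address (such as a cloud metadata endpoint) to a URL-fetching tool. A minimal defensive sketch, assuming a pre-dispatch URL check (a full defense would also resolve DNS names and re-check the resulting address):

```python
import ipaddress
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Reject URLs an LLM tool should not fetch: non-HTTP schemes and
    literal IPs in private/loopback/link-local ranges (common SSRF targets)."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    try:
        addr = ipaddress.ip_address(parsed.hostname)
    except ValueError:
        # Hostname is a DNS name; a complete defense would resolve it and
        # re-check the resolved address here (omitted for brevity).
        return True
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```

For example, `http://169.254.169.254/latest/meta-data/` (the AWS metadata endpoint) is rejected because the address is link-local.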
AI Infrastructure Security
Overview of security concerns in AI infrastructure, covering model supply chains, API security, deployment architecture, and the unique attack surfaces of ML systems.
Lab: Cloud AI Security Assessment
Conduct an end-to-end security assessment of a cloud-deployed AI service, covering API security, model vulnerabilities, data handling, and infrastructure configuration.
Simulation: Financial AI Platform
Expert-level red team engagement simulation targeting a fictional fintech AI-powered financial advisor, covering API mapping, advice manipulation, credential extraction, and regulatory impact assessment.
Simulation: SaaS AI Product
Red team engagement simulation targeting a B2B SaaS platform with AI-powered document analysis, search, and automation features, covering multi-tenant isolation, API security, and cross-tenant data leakage.
Function Calling Parameter Injection
Walkthrough of manipulating function call parameters through prompt-level techniques, injecting malicious values into LLM-generated API calls.
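One standard mitigation for parameter injection is to validate model-generated arguments against a server-side allowlist before dispatching the call, so that extra or mistyped parameters smuggled in via the prompt are rejected. A hedged sketch; the tool registry and schemas are illustrative, not from the article:

```python
# Server-side allowlist: tool name -> expected parameter names and types.
ALLOWED_TOOLS = {
    "get_weather": {"city": str},
    "send_email": {"to": str, "subject": str},
}

def dispatch(tool_name: str, args: dict):
    """Validate an LLM-generated function call before executing it."""
    schema = ALLOWED_TOOLS.get(tool_name)
    if schema is None:
        raise ValueError(f"unknown tool: {tool_name}")
    # Reject parameters the model was tricked into adding (e.g. a "bcc").
    extra = set(args) - set(schema)
    if extra:
        raise ValueError(f"unexpected parameters: {sorted(extra)}")
    for key, typ in schema.items():
        if not isinstance(args.get(key), typ):
            raise ValueError(f"bad or missing parameter: {key}")
    return (tool_name, args)  # a real dispatcher would invoke the tool here
```

This catches injected parameters but not injected *values* for legitimate parameters, which still need per-field validation and, for sensitive actions, user confirmation.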
Rate Limiting and Abuse Prevention for LLM APIs
Walkthrough for implementing rate limiting and abuse prevention systems for LLM API endpoints, covering token bucket algorithms, per-user quotas, cost-based limiting, anomaly detection, and graduated enforcement.
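The token-bucket algorithm mentioned above refills a per-user budget at a fixed rate up to a capacity, and each request spends tokens from it. A minimal sketch, not the article's implementation; the `cost` parameter stands in for the per-request token count used in cost-based limiting:

```python
import time

class TokenBucket:
    """Per-user token bucket: `rate` tokens refill per second, up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Because the bucket starts full, bursts up to `capacity` are allowed, while sustained traffic is held to `rate`; setting `cost` from the request's prompt and completion token counts turns this into the cost-based variant.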