# api-security
14 articles tagged with “api-security”
Function Calling Exploitation
Overview of how LLM function/tool calling works, the attack surface it creates, and systematic approaches to exploiting function calling interfaces in AI systems.
Infrastructure Security Assessment (Assessment)
Test your knowledge of AI infrastructure security including model serving, API security, deployment architectures, and supply chain risks with 10 intermediate-level questions.
LLMOps Security Assessment (Assessment)
Test your understanding of MLOps pipeline security, model deployment attacks, API security, monitoring gaps, model registry poisoning, and CI/CD for ML with 10 questions.
Security of AI-Generated API Endpoints
Analysis of security vulnerabilities in AI-generated REST and GraphQL API code, covering authentication bypass, broken object level authorization (BOLA), mass assignment, and rate limiting failures.
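As a minimal sketch of the mass assignment pattern that article covers: binding a request body directly onto a model object lets clients set fields they shouldn't (such as an admin flag), while an explicit field allowlist blocks it. The `User` fields and `SAFE_FIELDS` names here are illustrative, not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class User:
    email: str = ""
    is_admin: bool = False  # privileged field the client must never set

SAFE_FIELDS = {"email"}  # explicit allowlist of client-settable fields

def update_user(user: User, payload: dict) -> User:
    """Copy only allowlisted keys from the request payload onto the model."""
    for key, value in payload.items():
        if key in SAFE_FIELDS:
            setattr(user, key, value)
    return user

# A client smuggling is_admin into an otherwise normal update:
u = update_user(User(), {"email": "a@b.com", "is_admin": True})
print(u.email, u.is_admin)  # is_admin stays False: the allowlist blocks escalation
```

Frameworks that auto-bind JSON to ORM objects are vulnerable precisely because they skip the allowlist step shown above.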
June 2026: Cloud AI Security Challenge
Find and document vulnerabilities in a cloud-deployed AI service covering API security, model serving infrastructure, authentication, and data handling.
AI API Abuse Detection
Detecting and mitigating API abuse patterns targeting AI inference endpoints including prompt extraction and model theft.
LLM API Security Testing
Security testing methodology for LLM APIs, covering authentication, rate limiting, input validation, output filtering, and LLM-specific API vulnerabilities.
Integration & Webhook Security
Methodology for exploiting SSRF through LLM tools, webhook hijacking, insecure function dispatch, output parsing vulnerabilities, OAuth/API key management flaws, and MCP server security in AI pipelines.
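One common mitigation for SSRF through LLM tools is validating model-supplied URLs before any fetch. The sketch below, with a hypothetical `ALLOWED_HOSTS` allowlist, rejects non-HTTP schemes, literal IP targets (e.g. the cloud metadata endpoint), and hosts outside the allowlist; a production check would also resolve DNS and verify every resulting address is non-private.

```python
import ipaddress
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}  # hypothetical allowlist for this sketch

def is_safe_fetch_url(url: str) -> bool:
    """Return True only for http(s) URLs to allowlisted, non-IP-literal hosts."""
    parsed = urlparse(url)
    host = parsed.hostname
    if parsed.scheme not in ("http", "https") or host is None:
        return False
    try:
        ipaddress.ip_address(host)
        return False  # never fetch raw IP literals from model-supplied input
    except ValueError:
        pass  # not an IP literal; fall through to the hostname check
    return host in ALLOWED_HOSTS

print(is_safe_fetch_url("https://api.example.com/data"))              # True
print(is_safe_fetch_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_fetch_url("file:///etc/passwd"))                        # False
```

Note this only covers literal IPs; DNS rebinding defenses require checking the resolved addresses at connection time.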
AI Infrastructure Security
Overview of security concerns in AI infrastructure, covering model supply chains, API security, deployment architecture, and the unique attack surfaces of ML systems.
Lab: Cloud AI Security Assessment
Conduct an end-to-end security assessment of a cloud-deployed AI service, covering API security, model vulnerabilities, data handling, and infrastructure configuration.
Simulation: Financial AI Platform
Expert-level red team engagement simulation targeting a fictional fintech AI-powered financial advisor, covering API mapping, advice manipulation, credential extraction, and regulatory impact assessment.
Simulation: SaaS AI Product
Red team engagement simulation targeting a B2B SaaS platform with AI-powered document analysis, search, and automation features, covering multi-tenant isolation, API security, and cross-tenant data leakage.
Function Calling Parameter Injection
Walkthrough of manipulating function call parameters through prompt-level techniques, injecting malicious values into LLM-generated API calls.
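The core defense against this class of injection is pinning identity parameters server-side rather than trusting the model's generated arguments. A minimal dispatcher sketch, with hypothetical tool and field names (`get_order`, `user_id`) not drawn from the walkthrough:

```python
import json

def get_order(order_id: str, user_id: str) -> dict:
    # In a naive dispatcher, user_id comes straight from model output, so a
    # prompt-injected call can read another user's order (a BOLA variant).
    return {"order_id": order_id, "owner": user_id}

TOOLS = {"get_order": get_order}

def dispatch(tool_call_json: str, session_user: str) -> dict:
    """Safer dispatch: overwrite any model-supplied identity field with the
    authenticated session's identity before invoking the tool."""
    call = json.loads(tool_call_json)
    fn = TOOLS[call["name"]]
    args = dict(call.get("arguments", {}))
    args["user_id"] = session_user  # server-side override of identity parameter
    return fn(**args)

# The model tries to inject a victim's id via the arguments object:
malicious = json.dumps(
    {"name": "get_order", "arguments": {"order_id": "42", "user_id": "victim"}}
)
result = dispatch(malicious, session_user="attacker_session")
print(result)  # owner is the session user, not the injected "victim"
```

The same principle generalizes: any parameter with authorization significance should be derived from the session, never from generated text.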
Rate Limiting and Abuse Prevention for LLM APIs
Walkthrough for implementing rate limiting and abuse prevention systems for LLM API endpoints, covering token bucket algorithms, per-user quotas, cost-based limiting, anomaly detection, and graduated enforcement.
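The token bucket algorithm named above can be sketched in a few lines: a bucket holds up to `capacity` tokens, refills at `refill_rate` tokens per second, and each request spends a `cost` (which makes cost-based limiting a matter of charging more tokens for expensive calls). This is a generic single-process sketch, not the walkthrough's implementation.

```python
import time

class TokenBucket:
    """Token bucket: burst up to `capacity`, refill at `refill_rate` tokens/sec."""

    def __init__(self, capacity: float, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = capacity          # start full to allow an initial burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Refill based on elapsed time, then try to spend `cost` tokens."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# Burst of 7 requests against a bucket of 5 (refill is negligible mid-loop):
bucket = TokenBucket(capacity=5, refill_rate=1.0)
results = [bucket.allow() for _ in range(7)]
print(results)
```

A production version would keep per-user buckets in shared storage (e.g. Redis) so limits hold across API server replicas.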