# function-calling
40 articlestagged with “function-calling”
Forced Function Calling Attacks
Forcing models to call specific functions through crafted inputs that override intended tool selection.
Function Calling Context Injection
Injecting adversarial content through function call results that influences subsequent model reasoning.
Function Calling Data Exfiltration
Using function calls as data exfiltration channels to extract information from constrained environments.
Function Calling Error Exploitation
Leveraging error handling paths in function calling implementations to leak information or bypass controls.
Function Calling Race Conditions
Exploiting timing and ordering vulnerabilities in parallel and sequential function call execution.
Default Parameter Abuse in Function Calling
Manipulating default parameter values and optional fields to achieve unintended function behavior.
Function Result Poisoning
Poisoning function call results to inject instructions back into the model's reasoning chain.
Function Schema Injection
Injecting malicious instructions through function parameter descriptions and schema definitions.
Function Type Confusion Attacks
Exploiting type system weaknesses in function calling interfaces to trigger unexpected behavior through type confusion.
Function Calling Exploitation
Overview of how LLM function/tool calling works, the attack surface it creates, and systematic approaches to exploiting function calling interfaces in AI systems.
Parallel Function Call Exploitation
Exploiting parallel function calling to create race conditions and bypass sequential validation.
Parameter Manipulation
Techniques for crafting malicious parameter values in LLM function calls, including type confusion, boundary violations, injection through parameters, and exploiting weak validation.
Recursive Function Calling
Techniques for triggering infinite loops, resource exhaustion, and call chain manipulation in LLM function calling systems through recursive and repeated invocations.
Function Result Poisoning (Agentic Exploitation)
Techniques for manipulating function return values to influence LLM behavior, inject instructions via tool results, and chain poisoned results into multi-step exploitation.
JSON Schema Injection
Techniques for manipulating function definitions and JSON schemas to alter LLM behavior, inject additional parameters, and exploit schema validation gaps in tool calling systems.
Function Calling Chain Confusion
Confuse multi-step function calling chains to skip validation steps and execute unintended operation sequences.
Function Calling Race Conditions (Agentic Exploitation)
Exploit race conditions in parallel function calling to bypass sequential validation and authorization checks.
Function Hallucination Exploitation
Exploit model tendency to hallucinate function calls to non-existent APIs for information disclosure.
Function Parameter Injection Deep Dive
Advanced techniques for injecting adversarial content through function calling parameter values and defaults.
Tool Selection Manipulation
Manipulate model tool selection decisions through crafted prompts that bias toward attacker-preferred functions.
Agentic Exploitation
Comprehensive coverage of security vulnerabilities in agentic AI systems, including MCP tool exploitation, multi-agent protocol attacks, function calling abuse, memory system compromise, framework-specific weaknesses, and workflow pattern attacks.
Agentic Exploitation Assessment (Assessment)
Test your knowledge of agentic AI attacks, MCP exploitation, function calling abuse, and multi-agent system vulnerabilities with 15 intermediate-level questions.
Function Calling Security Assessment
Assessment focused on JSON schema injection, parameter manipulation, recursive calling, and result poisoning attacks.
Skill Verification: Function Calling Attacks
Skill verification for schema injection, parameter manipulation, and result poisoning techniques.
Secure Function Calling Design
Designing secure function calling interfaces that prevent unauthorized tool use and data exfiltration.
Function Calling Authorization Framework
Building fine-grained authorization frameworks for function calling that enforce capability-based security.
Integration & Webhook Security
Methodology for exploiting SSRF through LLM tools, webhook hijacking, insecure function dispatch, output parsing vulnerabilities, OAuth/API key management flaws, and MCP server security in AI pipelines.
Function Calling Fortress Breach
Bypass function calling restrictions to invoke unauthorized tools and extract a flag from a sandboxed agent.
Lab: Advanced Function Calling Exploitation
Exploit advanced function calling patterns including nested calls, parallel execution, and schema manipulation.
Lab: Function Calling & Tool Use Abuse
Hands-on lab exploring how attackers can manipulate LLM function calling and tool use to execute unauthorized actions, exfiltrate data, and chain tool calls for maximum impact.
Lab: Function Calling Injection
Hands-on lab for exploiting function calling mechanisms by crafting inputs that manipulate which functions get called and with what parameters.
GPT-4 Attack Surface
Comprehensive analysis of GPT-4-specific attack vectors including function calling exploitation, vision input attacks, system message hierarchy abuse, structured output manipulation, and known jailbreak patterns.
Injection via Function Calling
Exploiting function calling and tool-use interfaces to inject adversarial instructions through structured tool inputs and outputs.
Function Calling Exploitation Guide
Complete walkthrough of exploiting function calling in OpenAI, Anthropic, and Google AI APIs.
Function Calling Parameter Injection
Walkthrough of manipulating function call parameters through prompt-level techniques, injecting malicious values into LLM-generated API calls.
Tool Call Injection
Step-by-step walkthrough of injecting malicious parameters into LLM tool and function calls to execute unauthorized actions in agent systems.
Tool Use Confusion Attack Walkthrough
Walkthrough of confusing model tool-use decisions to invoke unintended functions or skip safety-critical tools.
Function Calling Guardrails Implementation
Implement guardrails for function calling that validate tool selection, parameters, and execution scope.
Agent System Red Team Engagement
Complete walkthrough for testing tool-using AI agents: scoping agent capabilities, exploiting function calling, testing permission boundaries, multi-step attack chains, and session manipulation.
Together AI Security Testing
End-to-end walkthrough for security testing Together AI deployments: API enumeration, inference endpoint exploitation, fine-tuning security review, function calling assessment, and rate limit analysis.