# function-calling
標記為「function-calling」的 81 篇文章
Function Calling Exploitation
Practical attacks against OpenAI function calling, Anthropic tool use, and similar APIs -- injecting function calls through prompt injection, exploiting parameter validation gaps, and chaining calls.
Forced Function Calling Attacks
Forcing models to call specific functions through crafted inputs that override intended tool selection.
Function Calling Context Injection
Injecting adversarial content through function call results that influences subsequent model reasoning.
Function Calling Data Exfiltration
Using function calls as data exfiltration channels to extract information from constrained environments.
Function Calling Error Exploitation
Leveraging error handling paths in function calling implementations to leak information or bypass controls.
Function Calling Race Conditions
Exploiting timing and ordering vulnerabilities in parallel and sequential function call execution.
Default Parameter Abuse in Function Calling
Manipulating default parameter values and optional fields to achieve unintended function behavior.
Function Result Poisoning
Poisoning function call results to inject instructions back into the model's reasoning chain.
Function Schema Injection
Injecting malicious instructions through function parameter descriptions and schema definitions.
Function Type Confusion Attacks
Exploiting type system weaknesses in function calling interfaces to trigger unexpected behavior through type confusion.
Function Calling Exploitation
Overview of how LLM function/tool calling works, the attack surface it creates, and systematic approaches to exploiting function calling interfaces in AI systems.
Parallel Function Call Exploitation
Exploiting parallel function calling to create race conditions and bypass sequential validation.
Parameter Manipulation
Techniques for crafting malicious parameter values in LLM function calls, including type confusion, boundary violations, injection through parameters, and exploiting weak validation.
Recursive Function Calling
Techniques for triggering infinite loops, resource exhaustion, and call chain manipulation in LLM function calling systems through recursive and repeated invocations.
Function Result Poisoning (Agentic Exploitation)
Techniques for manipulating function return values to influence LLM behavior, inject instructions via tool results, and chain poisoned results into multi-step exploitation.
JSON Schema Injection
Techniques for manipulating function definitions and JSON schemas to alter LLM behavior, inject additional parameters, and exploit schema validation gaps in tool calling systems.
Function Calling Chain Confusion
Confuse multi-step function calling chains to skip validation steps and execute unintended operation sequences.
Function Calling Race Conditions (Agentic Exploitation)
Exploit race conditions in parallel function calling to bypass sequential validation and authorization checks.
Function Hallucination Exploitation
Exploit model tendency to hallucinate function calls to non-existent APIs for information disclosure.
Function Parameter Injection Deep Dive
Advanced techniques for injecting adversarial content through function calling parameter values and defaults.
Tool Selection Manipulation
Manipulate model tool selection decisions through crafted prompts that bias toward attacker-preferred functions.
Agentic Exploitation
Comprehensive coverage of security vulnerabilities in agentic AI systems, including MCP tool exploitation, multi-agent protocol attacks, function calling abuse, memory system compromise, framework-specific weaknesses, and workflow pattern attacks.
Agentic Exploitation Assessment (Assessment)
Test your knowledge of agentic AI attacks, MCP exploitation, function calling abuse, and multi-agent system vulnerabilities with 15 intermediate-level questions.
Function Calling Security Assessment
Assessment focused on JSON schema injection, parameter manipulation, recursive calling, and result poisoning attacks.
Skill Verification: Function Calling Attacks
Skill verification for schema injection, parameter manipulation, and result poisoning techniques.
Secure Function Calling Design
Designing secure function calling interfaces that prevent unauthorized tool use and data exfiltration.
Function Calling Authorization Framework
Building fine-grained authorization frameworks for function calling that enforce capability-based security.
Integration & Webhook Security
Methodology for exploiting SSRF through LLM tools, webhook hijacking, insecure function dispatch, output parsing vulnerabilities, OAuth/API key management flaws, and MCP server security in AI pipelines.
Function Calling Fortress Breach
Bypass function calling restrictions to invoke unauthorized tools and extract a flag from a sandboxed agent.
Lab: Advanced Function Calling Exploitation
Exploit advanced function calling patterns including nested calls, parallel execution, and schema manipulation.
Lab: Function Calling & Tool Use Abuse
Hands-on lab exploring how attackers can manipulate LLM function calling and tool use to execute unauthorized actions, exfiltrate data, and chain tool calls for maximum impact.
Lab: Function Calling Injection
Hands-on lab for exploiting function calling mechanisms by crafting inputs that manipulate which functions get called and with what parameters.
GPT-4 Attack Surface
Comprehensive analysis of GPT-4-specific attack vectors including function calling exploitation, vision input attacks, system message hierarchy abuse, structured output manipulation, and known jailbreak patterns.
Injection via Function Calling
Exploiting function calling and tool-use interfaces to inject adversarial instructions through structured tool inputs and outputs.
Function Calling Exploitation Guide
Complete walkthrough of exploiting function calling in OpenAI, Anthropic, and Google AI APIs.
Function Calling Parameter Injection
Walkthrough of manipulating function call parameters through prompt-level techniques, injecting malicious values into LLM-generated API calls.
Tool Call Injection
Step-by-step walkthrough of injecting malicious parameters into LLM tool and function calls to execute unauthorized actions in agent systems.
Tool Use Confusion Attack Walkthrough
Walkthrough of confusing model tool-use decisions to invoke unintended functions or skip safety-critical tools.
Function Calling Guardrails Implementation
Implement guardrails for function calling that validate tool selection, parameters, and execution scope.
Agent System Red Team Engagement
Complete walkthrough for testing tool-using AI agents: scoping agent capabilities, exploiting function calling, testing permission boundaries, multi-step attack chains, and session manipulation.
Together AI Security Testing
End-to-end walkthrough for security testing Together AI deployments: API enumeration, inference endpoint exploitation, fine-tuning security review, function calling assessment, and rate limit analysis.
Function Calling 利用ation
Practical attacks against OpenAI function calling, Anthropic tool use, and similar APIs -- injecting function calls through prompt injection, exploiting parameter validation gaps, and chaining calls.
Forced Function Calling 攻擊s
Forcing models to call specific functions through crafted inputs that override intended tool selection.
Function Calling Context Injection
Injecting adversarial content through function call results that influences subsequent model reasoning.
Function Calling Data Exfiltration
Using function calls as data exfiltration channels to extract information from constrained environments.
Function Calling Error 利用ation
Leveraging error handling paths in function calling implementations to leak information or bypass controls.
Function Calling Race Conditions
利用ing timing and ordering vulnerabilities in parallel and sequential function call execution.
Default Parameter Abuse in Function Calling
Manipulating default parameter values and optional fields to achieve unintended function behavior.
Function Result 投毒
Poisoning function call results to inject instructions back into the model's reasoning chain.
Function Schema Injection
Injecting malicious instructions through function parameter descriptions and schema definitions.
Function Type Confusion 攻擊s
利用ing type system weaknesses in function calling interfaces to trigger unexpected behavior through type confusion.
函式呼叫攻擊(Function Calling Exploitation)
概述 LLM 函式/工具呼叫的運作方式、其產生的攻擊面,以及對 AI 系統中函式呼叫介面進行利用的系統化方法。
Parallel Function Call 利用ation
利用ing parallel function calling to create race conditions and bypass sequential validation.
參數操弄
打造 LLM 函式呼叫中惡意參數值的技術,包含型別混淆、邊界越界、經由參數之注入,以及利用不完善之驗證。
遞迴函式呼叫
在 LLM 函式呼叫系統中透過遞迴與反覆呼叫觸發無限迴圈、資源耗盡,以及操弄呼叫鏈的技術。
函式結果投毒(代理式攻擊)
操弄函式回傳值以影響 LLM 行為的技術、透過工具結果注入指令,以及將被投毒結果串接為多步攻擊。
JSON Schema 注入
操弄函式定義與 JSON schema 以改變 LLM 行為、注入額外參數,以及利用工具呼叫系統中 schema 驗證缺口之技術。
Function Calling Chain Confusion
Confuse multi-step function calling chains to skip validation steps and execute unintended operation sequences.
Function Calling Race Conditions (代理式 利用ation)
利用 race conditions in parallel function calling to bypass sequential validation and authorization checks.
Function Hallucination 利用ation
利用 model tendency to hallucinate function calls to non-existent APIs for information disclosure.
Function Parameter Injection Deep Dive
進階 techniques for injecting adversarial content through function calling parameter values and defaults.
工具 Selection Manipulation
Manipulate model tool selection decisions through crafted prompts that bias toward attacker-preferred functions.
代理式利用
代理式 AI 系統中安全漏洞的完整涵蓋,包含 MCP 工具利用、多代理協議攻擊、函式呼叫濫用、記憶體系統入侵、框架特定弱點與工作流程模式攻擊。
Function Calling 安全 評量
評量 focused on JSON schema injection, parameter manipulation, recursive calling, and result poisoning attacks.
Skill Verification: Function Calling 攻擊s
Skill verification for schema injection, parameter manipulation, and result poisoning techniques.
Secure Function Calling Design
Designing secure function calling interfaces that prevent unauthorized tool use and data exfiltration.
Function Calling Authorization Framework
Building fine-grained authorization frameworks for function calling that enforce capability-based security.
Integration & Webhook 安全
Methodology for exploiting SSRF through LLM tools, webhook hijacking, insecure function dispatch, output parsing vulnerabilities, OAuth/API key management flaws, and MCP server security in AI pipelines.
Function Calling Fortress Breach
Bypass function calling restrictions to invoke unauthorized tools and extract a flag from a sandboxed agent.
實驗室: 進階 Function Calling 利用ation
利用 advanced function calling patterns including nested calls, parallel execution, and schema manipulation.
實驗室: Function Calling & 工具 Use Abuse
Hands-on lab exploring how attackers can manipulate LLM function calling and tool use to execute unauthorized actions, exfiltrate data, and chain tool calls for maximum impact.
實驗室: Function Calling Injection
Hands-on lab for exploiting function calling mechanisms by crafting inputs that manipulate which functions get called and with what parameters.
GPT-4 攻擊面
GPT-4 特有攻擊向量之完整分析,包括函式呼叫攻擊、視覺輸入攻擊、系統訊息階層濫用、結構化輸出操弄,以及已知 jailbreak 模式。
Injection via Function Calling
利用ing function calling and tool-use interfaces to inject adversarial instructions through structured tool inputs and outputs.
Function Calling 利用ation 指南
Complete walkthrough of exploiting function calling in OpenAI, Anthropic, and Google AI APIs.
Function Calling Parameter Injection
導覽 of manipulating function call parameters through prompt-level techniques, injecting malicious values into LLM-generated API calls.
工具 Call Injection
Step-by-step walkthrough of injecting malicious parameters into LLM tool and function calls to execute unauthorized actions in agent systems.
工具 Use Confusion 攻擊 導覽
導覽 of confusing model tool-use decisions to invoke unintended functions or skip safety-critical tools.
Function Calling Guardrails Implementation
Implement guardrails for function calling that validate tool selection, parameters, and execution scope.
代理 System 紅隊 Engagement
Complete walkthrough for testing tool-using AI agents: scoping agent capabilities, exploiting function calling, testing permission boundaries, multi-step attack chains, and session manipulation.
Together AI 安全 Testing
End-to-end walkthrough for security testing Together AI deployments: API enumeration, inference endpoint exploitation, fine-tuning security review, function calling assessment, and rate limit analysis.