# code-execution
8 articles tagged with “code-execution”
Code Agent Manipulation
Techniques for manipulating AI agents that generate, execute, and review code, including injection through code context, repository poisoning, execution environment attacks, and code review manipulation.
Code Execution Safety Assessment
Assessment of LLM-generated code safety, sandbox escape techniques, and code review automation.
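One common pre-execution safety check for LLM-generated code is static inspection of the AST for dangerous calls before anything runs. The function name and the list of flagged builtins below are illustrative choices, not a technique taken from the linked article:

```python
import ast

# Heuristic deny-list of builtins that enable dynamic code execution.
DANGEROUS_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_dangerous_calls(source: str) -> list[str]:
    """Return the names of dangerous direct calls found in generated code.

    A sketch only: it misses indirection (getattr, aliasing) and is no
    substitute for sandboxing the execution environment itself.
    """
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                found.append(node.func.id)
    return found

flag_dangerous_calls("x = eval(input())")  # flags 'eval'
```

A check like this is best treated as one layer in a review pipeline, gating which generated snippets are routed to a sandbox for execution.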
Case Study: LangChain Remote Code Execution Vulnerabilities (CVE-2023-29374 and CVE-2023-36258)
Technical analysis of critical remote code execution vulnerabilities in LangChain's LLMMathChain and PALChain components that allowed arbitrary Python execution through crafted LLM outputs.
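The core of the vulnerable pattern in both CVEs was evaluating model output as Python. The sketch below is a hypothetical reduction of that pattern (`run_math_chain` is an illustrative name, not LangChain's API): a prompt-injected "math expression" becomes arbitrary code the moment it reaches `eval`:

```python
# Hypothetical sketch of the vulnerable pattern, not LangChain's actual code.
def run_math_chain(llm_output: str):
    # Pre-patch LLMMathChain/PALChain behaved analogously: the string the
    # model produced was handed to a Python evaluator.
    return eval(llm_output)

# Benign case: the model returns arithmetic.
run_math_chain("2 + 2")  # 4

# Injected case: the "expression" imports a module and runs code.
injected = "__import__('os').getcwd()"
run_math_chain(injected)  # executes os.getcwd() in the host process
```

The fix pattern in later LangChain releases was to stop evaluating raw model output in-process, e.g. by restricting evaluation to a numeric-expression parser or moving execution into an isolated sandbox.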
Code Agent Sandbox Escape
Techniques for escaping sandboxed code execution environments in AI code agents.
Model Serialization Attacks
Pickle, SafeTensors, and ONNX deserialization attacks targeting ML model files for arbitrary code execution.
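The pickle variant of these attacks hinges on the `__reduce__` protocol: a pickled object can name any callable plus arguments, and `pickle.loads` invokes it during deserialization. A minimal, harmless demonstration (using `eval` on arithmetic as a stand-in for an attacker's payload):

```python
import pickle

class Payload:
    def __reduce__(self):
        # Any (callable, args) pair is executed at load time;
        # eval("6 * 7") stands in for an arbitrary attacker payload.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())     # what a poisoned model file carries
result = pickle.loads(blob)        # "loading the model" runs the callable
```

This is why loading untrusted `.pkl`/PyTorch checkpoint files is equivalent to running untrusted code, and why safetensors-style formats that store only raw tensors are preferred for model distribution.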
Gemini Attack Surface
Gemini-specific attack vectors including multimodal injection across image, audio, and video inputs, Google Workspace integration attacks, grounding abuse, and code execution exploitation.
Sandbox Escape via Injection
Using prompt injection as a vector for escaping application sandboxes and achieving unauthorized code execution or system access.
AutoGen Multi-Agent System Testing
End-to-end walkthrough for security testing AutoGen multi-agent systems: agent enumeration, inter-agent injection, code execution sandbox assessment, conversation manipulation, and escalation path analysis.