# langchain
13 articles tagged with “langchain”
Security Comparison Matrix
Side-by-side security comparison of major AI agent frameworks: LangChain, CrewAI, AutoGen, Semantic Kernel, and OpenAI Assistants, covering default security, common misconfigurations, and framework selection guidance.
Agent Framework Security
Security analysis of major AI agent frameworks including LangChain, CrewAI, AutoGen, Semantic Kernel, and OpenAI Assistants, covering default configurations, common vulnerabilities, and framework-specific attack surfaces.
LangChain Security Deep Dive
Comprehensive security analysis of LangChain including known CVEs and exploitation patterns.
LangChain Security Deep Dive (Agentic Exploitation)
Comprehensive security analysis of LangChain and LangGraph, covering dangerous defaults, chain composition attacks, callback exploitation, community tool risks, and agent executor vulnerabilities.
Case Study: LangChain CVE Analysis
Analysis of LangChain CVEs including CVE-2023-29374, CVE-2023-36258, and their root causes.
Case Study: LangChain Remote Code Execution Vulnerabilities (CVE-2023-29374 and CVE-2023-36258)
Technical analysis of critical remote code execution vulnerabilities in LangChain's LLMMathChain and PALChain components that allowed arbitrary Python execution through crafted LLM outputs.
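The core flaw behind both CVEs was the same pattern: model output treated as trusted code. A minimal sketch of that pattern, using a hypothetical stand-in for the LLM call (this is illustrative code, not the actual LangChain source):

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call whose output an attacker can influence
    via prompt injection. It returns what looks like a math expression
    but is really arbitrary Python."""
    return "__import__('os').getcwd()"

def naive_math_chain(question: str) -> str:
    expression = fake_llm(question)
    # The vulnerability: untrusted model output is evaluated directly,
    # so any Python the attacker smuggles into the "expression" runs
    # with the application's privileges.
    return str(eval(expression))

print(naive_math_chain("What is 2 + 2?"))
```

Instead of a number, this prints the process's working directory, demonstrating code execution. Patched versions route expressions through a restricted evaluator rather than `eval`.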
LangChain & LlamaIndex Security
Security analysis of popular LLM orchestration frameworks. Common misconfigurations, known CVEs, insecure defaults, and hardening guides for LangChain, LlamaIndex, and related LLM application frameworks.
Agent Architectures & Tool Use Patterns
How ReAct, Plan-and-Execute, and LangGraph agent patterns work — tool definition, invocation, and result processing — and where injection happens in each architecture.
Integration & Framework Security
Security analysis of AI integration frameworks including LangChain, LlamaIndex, and Semantic Kernel, covering common vulnerability patterns and exploitation techniques.
LangChain CVE Exploitation Lab
Reproduce and analyze LangChain CVEs including CVE-2023-29374 and CVE-2023-36258 in a safe lab environment.
LangChain Exploit Chain Walkthrough
Walkthrough of chaining LangChain CVEs for remote code execution from prompt injection through to shell access.
LangChain Application Security Testing
End-to-end walkthrough for security testing LangChain applications: chain enumeration, prompt injection through chains, tool and agent exploitation, retrieval-augmented generation (RAG) attacks, and memory manipulation.
Security Testing LangChain Applications
Step-by-step walkthrough for identifying and exploiting security vulnerabilities in LangChain-based applications, covering chain injection, agent manipulation, tool abuse, retrieval poisoning, and memory extraction attacks.