# langchain
25 articles tagged "langchain"
Security Comparison Matrix
Side-by-side security comparison of major AI agent frameworks: LangChain, CrewAI, AutoGen, Semantic Kernel, and OpenAI Assistants, covering default security, common misconfigurations, and framework selection guidance.
Agent Framework Security
Security analysis of major AI agent frameworks including LangChain, CrewAI, AutoGen, Semantic Kernel, and OpenAI Assistants, covering default configurations, common vulnerabilities, and framework-specific attack surfaces.
LangChain Security Deep Dive
Comprehensive security analysis of LangChain including known CVEs and exploitation patterns.
LangChain Security Deep Dive (Agentic Exploitation)
Comprehensive security analysis of LangChain and LangGraph, covering dangerous defaults, chain composition attacks, callback exploitation, community tool risks, and agent executor vulnerabilities.
Case Study: LangChain CVE Analysis
Analysis of LangChain CVEs including CVE-2023-29374, CVE-2023-36258, and their root causes.
Case Study: LangChain Remote Code Execution Vulnerabilities (CVE-2023-29374 and CVE-2023-36258)
Technical analysis of critical remote code execution vulnerabilities in LangChain's LLMMathChain and PALChain components that allowed arbitrary Python execution through crafted LLM outputs.
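The anti-pattern behind these CVEs, model output passed straight to an evaluator, can be sketched in a few lines. This is an illustrative stand-in, not LangChain's actual code: `naive_math_chain` is a hypothetical function representing the old LLMMathChain/PALChain behavior of evaluating generated text.

```python
# Illustrative sketch of the vulnerability class, NOT LangChain's real code:
# a "math chain" that passes LLM output straight to eval().
def naive_math_chain(llm_output: str) -> str:
    # Vulnerable: the model's text is trusted as a Python expression.
    return str(eval(llm_output))

# Intended use: the model returns an arithmetic expression.
print(naive_math_chain("2 ** 10"))  # 1024

# Under prompt injection the model can return code instead of math, and the
# chain executes it (a harmless payload here stands in for a real exploit).
print(naive_math_chain("__import__('os').getcwd()"))
```

The fix in later LangChain releases followed the same general direction any such chain must: restrict evaluation to a constrained expression evaluator rather than full Python `eval`/`exec`.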
LangChain & LlamaIndex Security
Security analysis of popular LLM orchestration frameworks. Common misconfigurations, known CVEs, insecure defaults, and hardening guides for LangChain, LlamaIndex, and related LLM application frameworks.
Agent Architectures & Tool Use Patterns
How ReAct, Plan-and-Execute, and LangGraph agent patterns work — tool definition, invocation, and result processing — and where injection happens in each architecture.
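The injection point common to these architectures can be seen in a minimal ReAct-style loop. This is a sketch under assumed names (`llm` is any callable returning the model's next step; the `Action:`/`Observation:` format is the conventional ReAct transcript shape): tool results re-enter the prompt verbatim, so a malicious tool result can steer the model.

```python
# Minimal ReAct-style agent loop (illustrative sketch, not a framework API).
def run_agent(llm, tools, question, max_steps=3):
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(transcript)  # model emits an Action or a Final Answer
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            name, _, arg = step.removeprefix("Action:").strip().partition(" ")
            result = tools[name](arg)
            # Untrusted tool output is spliced back into the prompt verbatim:
            # this is where indirect prompt injection lands.
            transcript += f"Observation: {result}\n"
    return None
```

Plan-and-Execute and LangGraph agents differ in how steps are scheduled, but the same trust boundary (tool output flowing back into model context) exists in each.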
Integration & Framework Security
Security analysis of AI integration frameworks including LangChain, LlamaIndex, and Semantic Kernel, covering common vulnerability patterns and exploitation techniques.
LangChain CVE Exploitation Lab
Reproduce and analyze LangChain CVEs including CVE-2023-29374 and CVE-2023-36258 in a safe lab environment.
LangChain Exploit Chain Walkthrough
Walkthrough of chaining LangChain CVEs for remote code execution from prompt injection through to shell access.
LangChain Application Security Testing
End-to-end walkthrough for security testing LangChain applications: chain enumeration, prompt injection through chains, tool and agent exploitation, retrieval augmented generation attacks, and memory manipulation.
Security Testing LangChain Applications
Step-by-step walkthrough for identifying and exploiting security vulnerabilities in LangChain-based applications, covering chain injection, agent manipulation, tool abuse, retrieval poisoning, and memory extraction attacks.
Agent Framework Security Comparison Matrix
Side-by-side security comparison of major AI agent frameworks: LangChain, CrewAI, AutoGen, Semantic Kernel, and OpenAI Assistants, covering default security, common misconfigurations, and framework selection guidance.
Agent Framework Security
Security analysis of major AI agent frameworks including LangChain, CrewAI, AutoGen, Semantic Kernel, and OpenAI Assistants, covering default configurations, common vulnerabilities, and framework-specific attack surfaces.
LangChain Security Deep Dive
Comprehensive security analysis of LangChain including known CVEs and exploitation patterns.
LangChain Security Deep Dive (Agentic Exploitation)
Comprehensive security analysis of LangChain and LangGraph, covering dangerous defaults, chain composition attacks, callback exploitation, community tool risks, and agent executor vulnerabilities.
Case Study: LangChain CVE Analysis
Analysis of LangChain CVEs including CVE-2023-29374, CVE-2023-36258, and their root causes.
Case Study: LangChain Remote Code Execution Vulnerabilities (CVE-2023-29374 and CVE-2023-36258)
Technical analysis of critical remote code execution vulnerabilities in LangChain's LLMMathChain and PALChain components that allowed arbitrary Python execution through crafted LLM outputs.
LangChain & LlamaIndex Security
Security analysis of popular LLM orchestration frameworks: common misconfigurations, known CVEs, insecure defaults, and hardening guides for LangChain, LlamaIndex, and related LLM application frameworks.
Agent Architectures & Tool Use Patterns
How ReAct, Plan-and-Execute, and LangGraph agent patterns work (tool definition, invocation, and result processing), and where injection happens in each architecture.
LangChain CVE Exploitation Lab
Reproduce and analyze LangChain CVEs including CVE-2023-29374 and CVE-2023-36258 in a safe lab environment.
LangChain Exploit Chain Walkthrough
Walkthrough of chaining LangChain CVEs for remote code execution, from prompt injection through to shell access.
LangChain Application Security Testing
End-to-end walkthrough for security testing LangChain applications: chain enumeration, prompt injection through chains, tool and agent exploitation, retrieval augmented generation attacks, and memory manipulation.
Security Testing LangChain Applications
Step-by-step walkthrough for identifying and exploiting security vulnerabilities in LangChain-based applications, covering chain injection, agent manipulation, tool abuse, retrieval poisoning, and memory extraction attacks.