- LangChain and LangGraph contain three high-severity vulnerabilities that can reveal files, secrets, and chat histories
- The flaws include a path traversal bug, an unsafe deserialization bug, and SQL injection in SQLite checkpoints
- Researchers warn of risks to downstream libraries; developers are urged to audit their settings and treat LLM output as untrusted input
LangChain and LangGraph, two popular open source frameworks for building AI applications, contain high-severity and critical vulnerabilities that allow malicious actors to extract sensitive data from affected systems.
LangChain helps developers build applications using large language models (LLMs) by connecting AI models to various data sources and tools. It is a popular choice among developers who want to build chatbots and assistants. LangGraph, on the other hand, is built on top of LangChain and is designed to help create AI agents that follow structured workflows, step by step. It uses graphs to control how tasks flow between steps, and developers use it for complex, multi-step processes.
Citing statistics from the Python Package Index (PyPI), The Hacker News states that the projects are downloaded more than 60 million times per week combined, making them very popular in the software development community.
Vulnerabilities and patches
Overall, the projects' maintainers addressed three flaws:
CVE-2026-34070 (severity score 7.5/10 – high) – Path traversal bug in LangChain that allows arbitrary file access without authentication
CVE-2025-68664 (severity score 9.3/10 – critical) – Unsafe deserialization bug in LangChain that leaks API keys and environment secrets
CVE-2025-67644 (severity score 7.3/10 – high) – SQL injection vulnerability in LangGraph's SQLite checkpoint implementation that allows SQL query manipulation
“Each vulnerability exposes a different class of business information: system files, environment secrets, and conversation history,” said security researcher Vladimir Tokarev of Cyera in a report detailing the flaws.
The Hacker News notes that exploiting any of the three flaws allows malicious actors to read sensitive files such as Docker configurations, extract secrets via prompt injection, and even access chat histories associated with sensitive workflows.
All three bugs have been fixed, so if you use either framework, make sure you upgrade to a patched version to protect your projects:
CVE-2026-34070 is fixed in langchain-core version 1.2.22 or later
CVE-2025-68664 is fixed in langchain-core versions 0.3.81 and 1.2.5
CVE-2025-67644 is fixed in langgraph-checkpoint-sqlite version 3.0.1
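One quick way to confirm whether an environment is patched is to compare installed versions against the floors from the advisories above. The check below is a minimal sketch: the package names and minimum versions come from the list above, but the version parser is deliberately naive (it ignores pre-release suffixes; real projects should use packaging.version):

```python
from importlib.metadata import PackageNotFoundError, version

# Minimum patched versions, per the advisories above.
# (langchain-core has two fixed lines; 0.3.81 covers the 0.x series.)
MIN_FIXED = {
    "langchain-core": "0.3.81",
    "langgraph-checkpoint-sqlite": "3.0.1",
}

def parse(v):
    # Naive dotted-version parser -- fine for plain release strings,
    # not for pre-releases; use packaging.version in production.
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def is_patched(installed, minimum):
    return parse(installed) >= parse(minimum)

for pkg, floor in MIN_FIXED.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    verdict = "patched" if is_patched(installed, floor) else "UPGRADE NOW"
    print(f"{pkg} {installed}: {verdict}")
```

Running this in a virtual environment before and after `pip install --upgrade` gives a quick sanity check that the fix actually landed.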
Basic plumbing
For Cyera, the findings show that the biggest threat to enterprise AI data may not be as complex as people think.
“In essence, the risk lies in the invisible, basic plumbing that connects your AI to your business. This layer is vulnerable to some of the oldest tricks in the hacker’s playbook,” they said.
They also cautioned that LangChain “does not stand alone” but sits “at the center of a large web of interdependencies that cuts across the AI stack.” With hundreds of libraries wrapping, extending, or depending on LangChain, any risk to the project also means risk downstream.
The bugs “ripple into every dependent library, every wrapper, every integration that inherits the vulnerable code.”
To truly protect your environment, simply updating the packages won’t be enough, they say. Any code that passes external or user-controlled configuration to load_prompt_from_config() or load_prompt() needs to be audited, and developers should not enable secrets_from_env=True when deserializing untrusted data. “The new default is False. Keep it that way,” they warned.
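To see why that default matters, here is a purely illustrative sketch, not LangChain's actual implementation: a toy deserializer that resolves environment-secret markers. With the flag on, attacker-supplied configuration can name any environment variable and have its value copied into attacker-visible output:

```python
import os

# Hypothetical illustration of the secrets_from_env hazard: a
# deserializer that resolves {"__secret__": "NAME"} markers from the
# process environment will hand secrets to attacker-shaped input.
def toy_load(obj, secrets_from_env=False):
    if isinstance(obj, dict):
        if secrets_from_env and "__secret__" in obj:
            return os.environ.get(obj["__secret__"], "")
        return {k: toy_load(v, secrets_from_env) for k, v in obj.items()}
    if isinstance(obj, list):
        return [toy_load(v, secrets_from_env) for v in obj]
    return obj

os.environ["OPENAI_API_KEY"] = "sk-demo"  # stand-in secret
attacker_config = {"prompt": {"__secret__": "OPENAI_API_KEY"}}

# With the flag off (the new default), the marker stays inert data.
print(toy_load(attacker_config, secrets_from_env=False))
# With the flag on, the secret is resolved into the returned object.
print(toy_load(attacker_config, secrets_from_env=True))
```

The design lesson is the same as the researchers' advice: deserialization of untrusted input should never be able to reach into the process environment.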
They also urged developers to treat LLM output as “untrusted input”, as its fields can be influenced via prompt injection. Finally, metadata filter keys must be validated before they are passed to checkpoint queries.
“Never allow user-controlled strings to be used as dictionary keys in a query.”
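That advice can be sketched as an allowlist check on filter keys combined with bound parameters for values. The table shape and helper below are hypothetical, not LangGraph's actual schema; the point is that a key becomes SQL text only after strict validation, while values never do:

```python
import re
import sqlite3

# Strict identifier pattern: letters, digits, underscore, not starting
# with a digit. Anything else is rejected before it can reach SQL text.
ALLOWED_KEY = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def build_filter_query(filters):
    clauses, params = [], []
    for key, value in filters.items():
        if not ALLOWED_KEY.fullmatch(key):
            raise ValueError(f"rejected metadata key: {key!r}")
        # The validated key is interpolated; the value always travels
        # as a bound parameter, never via string formatting.
        clauses.append(f"{key} = ?")
        params.append(value)
    where = " AND ".join(clauses) or "1=1"
    return f"SELECT id FROM checkpoints WHERE {where}", params

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (id INTEGER, source TEXT)")
conn.execute("INSERT INTO checkpoints VALUES (1, 'input')")

sql, params = build_filter_query({"source": "input"})
print(conn.execute(sql, params).fetchall())  # -> [(1,)]
```

A key like `source = 1; DROP TABLE checkpoints; --` fails the pattern and raises before any SQL is built, which is exactly the property the researchers are asking for.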
