LangChain Core serialization injection allows arbitrary code execution
Affected: langchain-core
A critical serialization injection vulnerability in LangChain Core allows attackers to execute arbitrary code through crafted LLM response fields.
Attack vector
The vulnerability lies in how LangChain Core deserializes LLM response metadata. The additional_kwargs and response_metadata fields of an LLM response can be controlled via prompt injection and are then deserialized during streaming operations.
Attack chain:
- Attacker crafts a prompt injection that causes the LLM to include malicious serialized data in response metadata
- LangChain Core deserializes this data during streaming
- Arbitrary code executes in the application context
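The chain above can be illustrated with a minimal, self-contained sketch of the unsafe pattern. Everything here is hypothetical: handle_metadata, the "payload" key, and the Marker class stand in for application code that trusts response metadata — this is not LangChain Core's actual code path, and a harmless recorder replaces a real exploit payload.

```python
import base64
import pickle

EXECUTED = []  # records side effects so the demo stays harmless

def record(msg):
    # Stand-in for what an attacker would run (e.g. os.system).
    EXECUTED.append(msg)
    return msg

class Marker:
    # __reduce__ tells pickle to call an arbitrary callable on load;
    # this is the primitive that turns deserialization into code execution.
    def __reduce__(self):
        return (record, ("code ran during deserialization",))

def handle_metadata(additional_kwargs: dict):
    # HYPOTHETICAL downstream handler: it trusts a metadata field that
    # prompt injection can control.
    blob = additional_kwargs.get("payload")
    if blob is not None:
        # UNSAFE: deserializing attacker-influenced bytes runs their code.
        return pickle.loads(base64.b64decode(blob))

# Simulate an LLM response whose metadata was shaped by prompt injection.
payload = base64.b64encode(pickle.dumps(Marker()))
result = handle_metadata({"payload": payload})
```

The key point is that no "jailbreak" is required: the attacker only needs the model to emit a field that downstream code deserializes without validation.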
Impact
- Remote code execution (RCE) via prompt injection → serialization chain
- Affects any application using LangChain Core streaming with untrusted inputs
- CVSS 9.3 — network-exploitable, no authentication required
Remediation
- Patch available: upgrade langchain-core to the latest version
- Mitigation: sanitize LLM response metadata before deserialization
- Detection: monitor for unusual serialized objects in LLM response fields
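The sanitization mitigation could take the form of an allowlist filter that keeps only plain JSON-style values and drops anything that might carry a serialized object. This is an illustrative sketch, not LangChain Core's patch; sanitize_metadata and the allowed-type set are assumptions an application would tune to its own needs.

```python
# Types considered safe to pass through from LLM response metadata.
ALLOWED_TYPES = (str, int, float, bool, type(None))

def sanitize_metadata(metadata: dict) -> dict:
    """Keep only primitive values; drop bytes, custom objects, and
    anything else that could smuggle a serialized payload."""
    clean = {}
    for key, value in metadata.items():
        if isinstance(value, ALLOWED_TYPES):
            clean[key] = value
        elif isinstance(value, dict):
            # Recurse into nested metadata dicts.
            clean[key] = sanitize_metadata(value)
        elif isinstance(value, list):
            clean[key] = [v for v in value if isinstance(v, ALLOWED_TYPES)]
        # bytes, class instances, etc. are silently dropped

    return clean

# Example: a bytes blob and an arbitrary object are stripped out.
raw = {"model": "gpt-x", "blob": b"\x80evil", "nested": {"ok": 1, "bad": object()}}
safe = sanitize_metadata(raw)
```

Filtering to primitives is deliberately blunt: it trades some metadata fidelity for the guarantee that nothing reaching the deserialization layer can reconstruct an object.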
Significance
This vulnerability demonstrates a critical pattern: prompt injection as the entry point for traditional exploitation. The LLM doesn’t need to be “jailbroken” — the attacker just needs to influence response metadata fields that downstream code trusts and deserializes.