Critical severity · 9.8 · NVD Advisory · Published Apr 1, 2026 · Updated Apr 30, 2026
CVE-2026-34159
Description
llama.cpp enables inference of several LLM models in C/C++. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0, allowing an unauthenticated attacker to read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE responses, this yields a full ASLR bypass and remote code execution; the only prerequisite is TCP access to the RPC server port. This issue has been patched in version b8492.
Affected products
- llama.cpp (versions prior to b8492)
Patches
- 39bf0d3c6a95 — https://github.com/ggml-org/llama.cpp (via nvd-ref)
Vulnerability mechanics
Generated automatically on May 9, 2026. Inputs: CWE entries and fix-commit diffs from this CVE's patches. Citations validated against the reference bundle.
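The flaw class described above can be illustrated with a minimal, self-contained sketch. The struct layout, function names, and the "reject null-buffer tensors" mitigation shown here are illustrative assumptions, not the actual llama.cpp code; see commit 39bf0d3c6a95 for the real patch. The key point is that bounds validation runs only on the path where a buffer handle is present, so a tensor with buffer == 0 carries an attacker-supplied pointer through unchecked.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-ins for the RPC wire format and backend buffer.
struct rpc_tensor {
    uint64_t buffer;  // remote buffer handle; 0 means "no backing buffer"
    uint64_t data;    // attacker-controlled pointer/offset from the wire
    uint64_t size;    // attacker-controlled size from the wire
};

struct backend_buffer {
    uint64_t base;      // start of the backend allocation
    uint64_t capacity;  // size of the backend allocation
};

// Overflow-safe containment check:
// base <= data && data + size <= base + capacity, without wrapping.
static bool in_bounds(const rpc_tensor& t, const backend_buffer& buf) {
    return t.data >= buf.base &&
           t.data - buf.base <= buf.capacity &&
           t.size <= buf.capacity - (t.data - buf.base);
}

// Vulnerable pattern: validation happens only when a buffer handle exists.
// With buffer == 0 the attacker's data pointer is accepted verbatim, which
// becomes an arbitrary read/write primitive once the tensor is used.
bool deserialize_tensor_vulnerable(const rpc_tensor& t, const backend_buffer& buf) {
    if (t.buffer != 0) {
        return in_bounds(t, buf);  // only this path is checked
    }
    return true;  // buffer == 0 -> no validation at all
}

// Patched pattern (one possible fix): refuse tensors that would bypass the
// validated path, then bounds-check the rest.
bool deserialize_tensor_patched(const rpc_tensor& t, const backend_buffer& buf) {
    if (t.buffer == 0) {
        return false;  // no backing buffer -> never map attacker pointers
    }
    return in_bounds(t, buf);
}
```

In the vulnerable version, a crafted GRAPH_COMPUTE message only needs to set buffer to 0 to have an arbitrary (data, size) pair accepted; the patched version closes that path before any pointer is trusted.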
References
- github.com/ggml-org/llama.cpp/commit/39bf0d3c6a95803e0f41aaba069ffbee26721042 (NVD: Patch)
- github.com/ggml-org/llama.cpp/pull/20908 (NVD: Issue Tracking, Patch)
- github.com/ggml-org/llama.cpp/security/advisories/GHSA-j8rj-fmpv-wcxw (NVD: Exploit, Vendor Advisory)