VYPR
Critical severity (CVSS 9.8) · NVD Advisory · Published Apr 1, 2026 · Updated Apr 30, 2026

CVE-2026-34159

Description

llama.cpp provides inference of several LLM models in C/C++. Prior to version b8492, the RPC backend's deserialize_tensor() skips all bounds validation when a tensor's buffer field is 0. An unauthenticated attacker can read and write arbitrary process memory via crafted GRAPH_COMPUTE messages. Combined with pointer leaks from ALLOC_BUFFER/BUFFER_GET_BASE, this yields a full ASLR bypass and remote code execution. No authentication is required; only TCP access to the RPC server port. This issue has been patched in version b8492.

Affected products

  • cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:*
    Range: < b8492

Patches

Patched in version b8492.

Vulnerability mechanics


