High severity · 7.8 · NVD Advisory · Published Mar 12, 2026 · Updated Apr 28, 2026
CVE-2026-27940
Description
llama.cpp is a C/C++ inference engine for several LLM models. Prior to b8146, the gguf_init_from_file_impl() function in gguf.cpp is vulnerable to an integer overflow that leads to an undersized heap allocation. The subsequent fread() then writes 528+ bytes of attacker-controlled data past the buffer boundary. This is a bypass of a similar bug in the same file, CVE-2025-53630, whose fix overlooked some code paths. This vulnerability is fixed in b8146.
Affected products: 1
Patches: 0 · No patches discovered yet.
References: 1
- github.com/ggml-org/llama.cpp/security/advisories/GHSA-3p4r-fq3f-q74v (NVD · Exploit · Vendor Advisory)
News mentions: 0 · No linked articles in our index yet.