VYPR
High severity · 7.8 · NVD Advisory · Published Mar 24, 2026 · Updated Apr 30, 2026

CVE-2026-33298

Description

llama.cpp provides inference for several LLM models in C/C++. Prior to release b7824, an integer overflow in the ggml_nbytes function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. The overflow causes ggml_nbytes to return a far smaller size than the tensor actually requires (e.g., ~4 MB instead of exabytes), leading to a heap-based buffer overflow when the application subsequently processes the tensor. The resulting memory corruption can potentially be leveraged for Remote Code Execution (RCE). Release b7824 contains a fix.
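
The following is a minimal standalone sketch of the bug class described above; it is not the actual ggml_nbytes implementation, and the dimension values and helper names are hypothetical. It illustrates how multiplying attacker-controlled 64-bit tensor dimensions can silently wrap around and yield a tiny byte count, and how an overflow-checked variant would reject such a tensor instead of under-sizing the allocation.

/*
 * Illustrative sketch only; not the real ggml code.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Naive size calculation: silently wraps modulo 2^64 on large dimensions. */
static uint64_t nbytes_naive(const int64_t ne[4], uint64_t type_size) {
    uint64_t n = type_size;
    for (int i = 0; i < 4; i++) {
        n *= (uint64_t) ne[i];          /* overflow goes undetected */
    }
    return n;
}

/* Overflow-checked variant: refuses tensors whose true size cannot be
 * represented in 64 bits, instead of returning a wrapped (too small) value. */
static bool nbytes_checked(const int64_t ne[4], uint64_t type_size, uint64_t *out) {
    uint64_t n = type_size;
    for (int i = 0; i < 4; i++) {
        if (ne[i] < 0) return false;
        uint64_t d = (uint64_t) ne[i];
        if (d != 0 && n > UINT64_MAX / d) return false;   /* would overflow */
        n *= d;
    }
    *out = n;
    return true;
}

int main(void) {
    /* Hypothetical attacker-chosen dimensions whose product wraps past 2^64. */
    int64_t ne[4] = { 1ll << 32, 1ll << 32, 1, 1 };
    uint64_t checked;

    printf("naive   : %llu bytes\n", (unsigned long long) nbytes_naive(ne, 4));
    if (!nbytes_checked(ne, 4, &checked)) {
        printf("checked : rejected (size overflows 64-bit range)\n");
    }
    return 0;
}

With these dimensions the naive computation reports only 4 bytes, so a later allocation or bounds check based on that value would be far too small for the data the tensor claims to hold, matching the mismatch described in the advisory.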

Affected products (1)

  • cpe:2.3:a:ggml:llama.cpp:*:*:*:*:*:*:*:*
    Range: <b7824

Patches (0)

No patches discovered yet.

Vulnerability mechanics

AI mechanics synthesis has not run for this CVE yet.

References (2)

News mentions (0)

No linked articles in our index yet.