Medium severity · 5.9 · NVD Advisory · Published Apr 2, 2026 · Updated May 11, 2026
CVE-2026-34760
Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 up to (but not including) version 0.18.0, Librosa defaults to using numpy.mean for mono downmixing (to_mono), while the international standard ITU-R BS.775-4 specifies a weighted downmixing algorithm. This discrepancy causes an inconsistency between the audio humans hear (e.g., through headphones or regular speakers) and the audio processed by AI models through infrastructure that relies on Librosa, such as vLLM and Transformers. This issue has been patched in version 0.18.0.
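To make the discrepancy concrete, the sketch below compares a plain-average downmix (what numpy.mean produces, as in Librosa's to_mono) with a weighted stereo-to-mono downmix. The -3 dB coefficient (1/sqrt(2)) used here is a common pan-law choice chosen for illustration; the exact coefficients mandated by ITU-R BS.775-4 depend on the channel layout and are an assumption, not taken from the advisory.

```python
import numpy as np

def mean_downmix(stereo: np.ndarray) -> np.ndarray:
    """Plain average across channels, as numpy.mean-based to_mono does."""
    return stereo.mean(axis=0)

def weighted_downmix(stereo: np.ndarray, coeff: float = 1.0 / np.sqrt(2)) -> np.ndarray:
    """Illustrative weighted stereo-to-mono downmix (-3 dB per channel).

    The coefficient is a hypothetical example of a weighted scheme, not the
    exact ITU-R BS.775-4 formula, which varies by channel configuration.
    """
    return coeff * stereo[0] + coeff * stereo[1]

# Identical 1 kHz tone on both channels, 1 second at 16 kHz
t = np.linspace(0, 1, 16000, endpoint=False)
tone = np.sin(2 * np.pi * 1000 * t)
stereo = np.stack([tone, tone])

mono_mean = mean_downmix(stereo)          # peak amplitude stays ~1.0
mono_weighted = weighted_downmix(stereo)  # peak amplitude grows to ~1.414

print(np.max(np.abs(mono_mean)), np.max(np.abs(mono_weighted)))
```

For correlated content the two schemes differ by about 3 dB in level, so a model fed mean-downmixed audio perceives a quieter signal than a listener hearing a weighted downmix of the same source.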
Affected products (1)
Patches (0)
No patches discovered yet.
References (4)
- github.com/vllm-project/vllm/commit/c7f98b4d0a63b32ed939e2b6dfaa8a626e9b46c4 (NVD: Patch)
- github.com/vllm-project/vllm/security/advisories/GHSA-6c4r-fmh3-7rh8 (NVD: Vendor Advisory)
- github.com/vllm-project/vllm/pull/37058 (NVD: Issue Tracking)
- github.com/vllm-project/vllm/releases/tag/v0.18.0 (NVD: Release Notes)
News mentions (0)
No linked articles in our index yet.