vLLM phi4mm: Quadratic Time Complexity in Input Token Processing leads to denial of service
Description
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
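To make the complexity concrete, here is a minimal sketch of the pattern the advisory describes, contrasted with a linear single-pass rewrite. All names, token IDs, and expansion lengths below are hypothetical stand-ins, not the actual phi4mm.py code.

```python
# Hypothetical placeholder token IDs mapped to precomputed expansion lengths
# (stand-ins for <|image_|> / <|audio_|>; real IDs and lengths differ).
PLACEHOLDER_LENGTHS = {200010: 744, 200011: 1024}
FILLER_ID = -1  # stand-in for the repeated token inserted per placeholder


def expand_placeholders_quadratic(input_ids: list[int]) -> list[int]:
    """Quadratic pattern: rebuild the whole token list on every placeholder hit."""
    i = 0
    while i < len(input_ids):
        tok = input_ids[i]
        if tok in PLACEHOLDER_LENGTHS:
            length = PLACEHOLDER_LENGTHS[tok]
            # Slice-and-concatenate allocates and copies a brand-new list each
            # time, so k placeholders in an n-token prompt cost O(k * n) work.
            input_ids = input_ids[:i] + [FILLER_ID] * length + input_ids[i + 1:]
            i += length
        else:
            i += 1
    return input_ids


def expand_placeholders_linear(input_ids: list[int]) -> list[int]:
    """Same output in one pass with an output buffer: O(output length)."""
    out: list[int] = []
    for tok in input_ids:
        if tok in PLACEHOLDER_LENGTHS:
            out.extend([FILLER_ID] * PLACEHOLDER_LENGTHS[tok])
        else:
            out.append(tok)
    return out


if __name__ == "__main__":
    import time

    # A crafted prompt stuffed with placeholder tokens forces repeated
    # full-list copies in the quadratic version.
    prompt = [200010] * 1000
    t0 = time.perf_counter()
    expand_placeholders_quadratic(list(prompt))
    t1 = time.perf_counter()
    expand_placeholders_linear(list(prompt))
    t2 = time.perf_counter()
    print(f"quadratic: {t1 - t0:.3f}s  linear: {t2 - t1:.3f}s")
```

Under this sketch, a prompt dense with placeholder tokens makes preprocessing cost grow roughly with the square of the expanded length, which is the resource-exhaustion vector described above; version 0.8.5 removes the quadratic behavior.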
Affected packages
Versions sourced from the GitHub Security Advisory.
| Package | Affected versions | Patched versions |
|---|---|---|
| vllm (PyPI) | >= 0.8.0, < 0.8.5 | 0.8.5 |
Affected products
Patches
No patches discovered yet.
Vulnerability mechanics
AI mechanics synthesis has not run for this CVE yet.
References
- github.com/advisories/GHSA-vc6m-hm49-g9qg (GitHub Advisory Database)
- nvd.nist.gov/vuln/detail/CVE-2025-46560 (NVD)
- github.com/vllm-project/vllm/blob/8cac35ba435906fb7eb07e44fe1a8c26e8744f4e/vllm/model_executor/models/phi4mm.py (affected source file)
- github.com/vllm-project/vllm/security/advisories/GHSA-vc6m-hm49-g9qg (vendor advisory)
News mentions
No linked articles in our index yet.