Medium severity (CVSS 6.5) · GHSA Advisory · Published May 12, 2026 · Updated May 14, 2026
CVE-2026-44222
Description
vLLM is an inference and serving engine for large language models (LLMs). From version 0.6.1 up to (but excluding) 0.20.0, vLLM's multimodal processing contains a token-injection vulnerability. Unauthenticated, text-only prompts that spell out special placeholder tokens are interpreted as control tokens. Image and video placeholder sequences supplied without matching media data cause vLLM to index into empty grids during input-position computation, raising an unhandled IndexError and terminating the worker or degrading availability. Multimodal paths that rely on image_grid_thw/video_grid_thw are affected. This vulnerability is fixed in 0.20.0.
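A minimal sketch of the failure mode described above, using hypothetical function and token names (not vLLM's actual internals): if the position-computation code counts placeholder tokens found in the prompt text without validating that a matching media grid was supplied for each one, a text-only prompt that merely spells the placeholder indexes into an empty grid list and crashes with IndexError.

```python
# Hypothetical illustration of the advisory's mechanics; names are assumptions.
IMAGE_PLACEHOLDER = "<image>"  # assumed special token spelled in plain text

def compute_image_positions(prompt: str, image_grid_thw: list) -> list:
    """Toy position computation: consumes one (t, h, w) grid per placeholder."""
    positions = []
    offset = 0
    count = prompt.count(IMAGE_PLACEHOLDER)
    for i in range(count):
        # BUG (modeled after the advisory): no check that grid i exists, so a
        # text-only prompt containing "<image>" indexes into an empty list and
        # raises an unhandled IndexError, killing the worker.
        t, h, w = image_grid_thw[i]
        positions.append(offset)
        offset += t * h * w
    return positions

def compute_image_positions_fixed(prompt: str, image_grid_thw: list) -> list:
    """Patched variant: reject placeholder/media count mismatches up front."""
    count = prompt.count(IMAGE_PLACEHOLDER)
    if count != len(image_grid_thw):
        raise ValueError(
            f"prompt has {count} image placeholders but "
            f"{len(image_grid_thw)} image grids were supplied"
        )
    return compute_image_positions(prompt, image_grid_thw)
```

With a text-only prompt such as `"<image> describe this"` and an empty `image_grid_thw`, the vulnerable path raises IndexError, while the fixed variant rejects the request with a recoverable ValueError instead of crashing the worker.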
Affected products
Patches
No patches discovered yet.
References
- github.com/vllm-project/vllm/security/advisories/GHSA-hpv8-x276-m59f (Exploit, Vendor Advisory)
- github.com/advisories/GHSA-hpv8-x276-m59f (Advisory)
- github.com/vllm-project/vllm/issues/32656 (Issue Tracking)
- nvd.nist.gov/vuln/detail/CVE-2026-44222
News mentions
No linked articles in our index yet.