Medium severity (CVSS 6.5) · GHSA Advisory · Published May 12, 2026 · Updated May 14, 2026

CVE-2026-44222

Description

vLLM is an inference and serving engine for large language models (LLMs). Versions from 0.6.1 up to, but not including, 0.20.0 contain a Token Injection vulnerability in vLLM's multimodal processing: special placeholder tokens spelled out in unauthenticated, text-only prompts are interpreted as control tokens. Image and video placeholder sequences supplied without matching media data cause vLLM to index into empty grids during input-position computation, raising an unhandled IndexError that terminates the worker or degrades availability. Multimodal paths that rely on image_grid_thw/video_grid_thw are affected. This vulnerability is fixed in 0.20.0.
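The failure mode described above can be sketched in a few lines. This is not vLLM's actual code; the function name, the token id, and the shape of image_grid_thw (a list of per-image (t, h, w) grid entries) are illustrative assumptions used only to show how a placeholder token with no matching image data leads to an out-of-range index.

```python
# Minimal sketch (NOT vLLM's implementation) of the described crash:
# the prompt contains an image placeholder token, but no image was
# supplied, so the grid metadata list is empty.
# `image_grid_thw` stands in for vLLM's per-image (t, h, w) grid data;
# all names and values here are illustrative assumptions.

def compute_placeholder_positions(prompt_token_ids, image_token_id, image_grid_thw):
    """Map each image placeholder token in the prompt to its grid entry,
    as an input-position computation step might."""
    positions = []
    image_index = 0
    for i, tok in enumerate(prompt_token_ids):
        if tok == image_token_id:
            # If the text-only prompt merely "spells" the placeholder
            # token, image_grid_thw is empty and this indexing raises an
            # unhandled IndexError -- the condition that kills the worker.
            t, h, w = image_grid_thw[image_index]
            positions.append((i, t * h * w))
            image_index += 1
    return positions

# A text-only prompt containing the special image token (id 151655 is
# an illustrative value), with no image data attached:
try:
    compute_placeholder_positions([1, 151655, 2], 151655, [])
except IndexError as exc:
    print("worker would crash:", exc)
```

When matching image data is present, the same loop completes normally; the fix in 0.20.0 ensures placeholder tokens in plain text are not treated as control tokens in the first place.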

Affected products

  • vllm/vllm (GHSA), 2 versions
    • (no CPE) range: >= 0.6.1, < 0.20.0
    • cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* range: >= 0.6.1, < 0.20.0

Patches

No patches discovered yet.

Vulnerability mechanics

AI mechanics synthesis has not run for this CVE yet.

News mentions

No linked articles in our index yet.