High severity · 7.5 · NVD Advisory · Published Sep 17, 2024 · Updated Apr 15, 2026

CVE-2024-8768

Description

A flaw was found in the vLLM library. A completions API request with an empty prompt will crash the vLLM API server, resulting in a denial of service.
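
The crash is reachable directly through the OpenAI-compatible completions endpoint, so any client able to send a request can take the server down. A minimal reproduction sketch against a local deployment (the model name, port, and API key placeholder are illustrative assumptions, not taken from the advisory):

    import openai

    # Assumes a vLLM OpenAI-compatible server (< 0.5.5) is already running locally,
    # e.g. via `python -m vllm.entrypoints.openai.api_server --model gpt2`.
    client = openai.OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    # On unpatched versions the empty prompt reaches the engine unvalidated and
    # crashes the API server; on 0.5.5+ the request is rejected with a
    # 400 Bad Request ("Prompt cannot be empty") and the server keeps running.
    client.completions.create(model="gpt2", prompt="", max_tokens=5)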

Affected packages

Versions sourced from the GitHub Security Advisory.

Package         Affected versions   Patched versions
vllm (PyPI)     < 0.5.5             0.5.5
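
To determine exposure, compare the installed package version against 0.5.5. A minimal check, assuming the packaging library is available in the environment:

    from importlib.metadata import version
    from packaging.version import Version

    # CVE-2024-8768 affects vLLM releases before 0.5.5.
    if Version(version("vllm")) < Version("0.5.5"):
        print("Installed vLLM is affected by CVE-2024-8768; upgrade to 0.5.5 or later.")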

Patches

Commit e25fee57c2e6

[BugFix] Fix server crash on empty prompt (#7746)

https://github.com/vllm-project/vllm · Maximilien de Bayser · Aug 23, 2024 · via GHSA
3 files changed · +39 −0
  • tests/entrypoints/llm/test_prompt_validation.py · +9 −0 · added
    @@ -0,0 +1,9 @@
    +import pytest
    +
    +from vllm import LLM
    +
    +
    +def test_empty_prompt():
    +    llm = LLM(model="gpt2")
    +    with pytest.raises(ValueError, match='Prompt cannot be empty'):
    +        llm.generate([""])
    
  • tests/entrypoints/openai/test_prompt_validation.py · +22 −0 · added
    @@ -0,0 +1,22 @@
    +# imports for guided decoding tests
    +import re
    +
    +import openai
    +import pytest
    +
    +from ...utils import RemoteOpenAIServer
    +
    +
    +@pytest.mark.asyncio
    +async def test_empty_prompt():
    +    model_name = "gpt2"
    +    server_args = ["--enforce-eager"]
    +    with RemoteOpenAIServer(model_name, server_args) as remote_server:
    +        client = remote_server.get_async_client()
    +
    +        with pytest.raises(openai.BadRequestError,
    +                           match=re.compile('.+Prompt cannot be empty.+')):
    +            await client.completions.create(model=model_name,
    +                                            prompt="",
    +                                            max_tokens=5,
    +                                            temperature=0.0)
    
  • vllm/engine/llm_engine.py · +8 −0 · modified
    @@ -591,6 +591,7 @@ def _add_processed_request(
             prompt_adapter_request: Optional[PromptAdapterRequest],
             trace_headers: Optional[Mapping[str, str]] = None,
         ) -> None:
    +        self._validate_model_inputs(processed_inputs)
             # Create the sequences.
             block_size = self.cache_config.block_size
             seq_id = next(self.seq_counter)
    @@ -1647,3 +1648,10 @@ def is_encoder_decoder_model(self):
     
         def is_embedding_model(self):
             return self.model_config.is_embedding_model
    +
    +    def _validate_model_inputs(self, inputs: Union[LLMInputs,
    +                                                   EncoderDecoderLLMInputs]):
    +        prompt_key = "encoder_prompt_token_ids" \
    +            if self.is_encoder_decoder_model() else "prompt_token_ids"
    +        if not inputs.get(prompt_key):
    +            raise ValueError("Prompt cannot be empty")
    
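The patch calls _validate_model_inputs before any sequences are created, so an empty prompt now raises ValueError("Prompt cannot be empty"); the OpenAI-compatible frontend surfaces this as a 400 Bad Request rather than crashing the server, as the second test above exercises. Deployments that cannot upgrade to 0.5.5 immediately can apply the same guard on the caller's side; a minimal sketch (the wrapper is an illustrative mitigation, not part of vLLM):

    def safe_completion(client, model: str, prompt: str, **kwargs):
        """Reject empty prompts before they reach a vulnerable vLLM server."""
        if not prompt:
            raise ValueError("Prompt cannot be empty")
        return client.completions.create(model=model, prompt=prompt, **kwargs)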
