High severity · NVD Advisory · Published Dec 1, 2025 · Updated Dec 2, 2025

vLLM vulnerable to remote code execution via transformers_utils/get_config

CVE-2025-66448

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.11.1, vLLM has a critical remote code execution vector in a config class named Nemotron_Nano_VL_Config. When vLLM loads a model config that contains an auto_map entry, this config class resolves the mapping with get_class_from_dynamic_module(...) and immediately instantiates the returned class, fetching and executing Python code from the remote repository referenced in the auto_map string. Crucially, this happens even when the caller explicitly sets trust_remote_code=False in vllm.transformers_utils.config.get_config. In practice, an attacker can publish a benign-looking frontend repo whose config.json points via auto_map to a separate malicious backend repo; loading the frontend repo silently runs the backend's code on the victim host. This vulnerability is fixed in version 0.11.1.
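The attack shape described above can be sketched offline (all repo and class names below are hypothetical, and no network access is involved; the sketch only reproduces the string handling the vulnerable config class applied to an auto_map entry):

```python
# A benign-looking "frontend" repo ships a config.json whose auto_map points
# at a separate "backend" repo. The vulnerable Nemotron_Nano_VL_Config
# resolved that reference and instantiated the class, which executes the
# backend repo's Python. Names here are hypothetical.
frontend_config = {
    "model_type": "Llama_Nemotron_Nano_VL",
    "vision_config": {
        "auto_map": {
            # Hugging Face auto_map cross-repo format: "<repo_id>--<module>.<ClassName>"
            "AutoConfig": "attacker-org/backend-repo--configuration.EvilConfig",
        },
    },
}

# The vulnerable code did:
#   get_class_from_dynamic_module(*entry.split("--")[::-1])
# i.e. it unpacked the entry into (class_reference, repo_id) and fetched
# the referenced module from the Hub, regardless of trust_remote_code:
entry = frontend_config["vision_config"]["auto_map"]["AutoConfig"]
class_reference, repo_id = entry.split("--")[::-1]
```

Note that the repo the victim asked to load and the repo whose code runs are different, which is what makes the frontend repo look benign under casual inspection.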

Affected packages

Versions sourced from the GitHub Security Advisory.

Package        Affected versions    Patched versions
vllm (PyPI)    < 0.11.1             0.11.1
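A quick local check against the affected range above can be written as a naive version comparison (a sketch assuming plain X.Y.Z release strings; it does not handle pre-release or dev suffixes):

```python
def is_affected(version: str) -> bool:
    """Return True if this vllm version predates the 0.11.1 fix.

    Naive numeric comparison on the first three dotted components;
    assumes simple X.Y.Z release strings.
    """
    return tuple(int(p) for p in version.split(".")[:3]) < (0, 11, 1)
```

For example, is_affected("0.11.0") is True, while is_affected("0.11.1") is False.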

Affected products: 1

Patches

1 commit · ffb08379d887

[Chore] Remove Nemotron-Nano-VL config copy (#28126)

https://github.com/vllm-project/vllm · Isotr0py · Nov 5, 2025 · via GHSA
3 files changed · +1 −63
  • vllm/transformers_utils/config.py · +1 −1 · modified
    @@ -81,7 +81,6 @@ def __getitem__(self, key):
         flex_olmo="FlexOlmoConfig",
         kimi_linear="KimiLinearConfig",
         kimi_vl="KimiVLConfig",
    -    Llama_Nemotron_Nano_VL="Nemotron_Nano_VL_Config",
         RefinedWeb="RWConfig",  # For tiiuae/falcon-40b(-instruct)
         RefinedWebModel="RWConfig",  # For tiiuae/falcon-7b(-instruct)
         jais="JAISConfig",
    @@ -106,6 +105,7 @@ def __getitem__(self, key):
     
     _AUTO_CONFIG_KWARGS_OVERRIDES: dict[str, dict[str, Any]] = {
         "internvl_chat": {"has_no_defaults_at_init": True},
    +    "Llama_Nemotron_Nano_VL": {"attn_implementation": "eager"},
         "NVLM_D": {"has_no_defaults_at_init": True},
     }
     
    
  • vllm/transformers_utils/configs/__init__.py · +0 −2 · modified
    @@ -28,7 +28,6 @@
     from vllm.transformers_utils.configs.moonvit import MoonViTConfig
     from vllm.transformers_utils.configs.nemotron import NemotronConfig
     from vllm.transformers_utils.configs.nemotron_h import NemotronHConfig
    -from vllm.transformers_utils.configs.nemotron_vl import Nemotron_Nano_VL_Config
     from vllm.transformers_utils.configs.olmo3 import Olmo3Config
     from vllm.transformers_utils.configs.ovis import OvisConfig
     from vllm.transformers_utils.configs.qwen3_next import Qwen3NextConfig
    @@ -59,7 +58,6 @@
         "KimiVLConfig",
         "NemotronConfig",
         "NemotronHConfig",
    -    "Nemotron_Nano_VL_Config",
         "Olmo3Config",
         "OvisConfig",
         "RadioConfig",
    
  • vllm/transformers_utils/configs/nemotron_vl.py · +0 −60 · removed
    @@ -1,60 +0,0 @@
    -# SPDX-License-Identifier: Apache-2.0
    -# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
    -
    -# ruff: noqa: E501
    -# Adapted from
    -# https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1/blob/main/configuration.py
    -# --------------------------------------------------------
    -# Adapted from https://huggingface.co/OpenGVLab/InternVL2-Llama3-76B under MIT License
    -#     LICENSE is in incl_licenses directory.
    -# --------------------------------------------------------
    -
    -from transformers import LlamaConfig
    -from transformers.configuration_utils import PretrainedConfig
    -from transformers.dynamic_module_utils import get_class_from_dynamic_module
    -
    -
    -class Nemotron_Nano_VL_Config(PretrainedConfig):
    -    model_type = "Llama_Nemotron_Nano_VL"
    -    is_composition = True
    -
    -    def __init__(
    -        self,
    -        vision_config=None,
    -        llm_config=None,
    -        force_image_size=None,
    -        downsample_ratio=0.5,
    -        template=None,
    -        ps_version="v1",
    -        image_tag_type="internvl",
    -        projector_hidden_size=4096,
    -        vit_hidden_size=1280,
    -        **kwargs,
    -    ):
    -        super().__init__(**kwargs)
    -
    -        if vision_config is not None:
    -            assert (
    -                "auto_map" in vision_config
    -                and "AutoConfig" in vision_config["auto_map"]
    -            )
    -            vision_auto_config = get_class_from_dynamic_module(
    -                *vision_config["auto_map"]["AutoConfig"].split("--")[::-1]
    -            )
    -            self.vision_config = vision_auto_config(**vision_config)
    -        else:
    -            self.vision_config = PretrainedConfig()
    -
    -        if llm_config is None:
    -            self.text_config = LlamaConfig()
    -        else:
    -            self.text_config = LlamaConfig(**llm_config)
    -
    -        # Assign configuration values
    -        self.force_image_size = force_image_size
    -        self.downsample_ratio = downsample_ratio
    -        self.template = template  # TODO move out of here and into the tokenizer
    -        self.ps_version = ps_version  # Pixel shuffle version
    -        self.image_tag_type = image_tag_type  # TODO: into the tokenizer too?
    -        self.projector_hidden_size = projector_hidden_size
    -        self.vit_hidden_size = vit_hidden_size
    
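The fix removes the vendored config class entirely, deferring to upstream handling, but the guard the removed code lacked can be illustrated with a hypothetical helper that refuses auto_map resolution unless the caller explicitly opted in (helper name and behavior are illustrative, not vLLM's actual API):

```python
def resolve_vision_config(vision_config: dict, trust_remote_code: bool = False):
    """Hypothetical guard sketch: only follow an auto_map reference when the
    caller opted in via trust_remote_code. The removed Nemotron_Nano_VL_Config
    performed the dynamic resolution unconditionally."""
    auto_map = vision_config.get("auto_map", {})
    if "AutoConfig" in auto_map:
        if not trust_remote_code:
            raise ValueError(
                "vision_config requests dynamic code via auto_map "
                f"({auto_map['AutoConfig']!r}) but trust_remote_code=False"
            )
        # An opted-in path would resolve the reference here, e.g.:
        #   get_class_from_dynamic_module(*auto_map["AutoConfig"].split("--")[::-1])
    return None
```

With such a check in place, loading the malicious frontend repo with trust_remote_code=False would fail loudly instead of silently fetching and executing the backend repo's code.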


References: 5
