Bug 2431865 (CVE-2026-22807) - CVE-2026-22807 vLLM: Arbitrary code execution via untrusted model loading
Summary: CVE-2026-22807 vLLM: Arbitrary code execution via untrusted model loading
Keywords:
Status: NEW
Alias: CVE-2026-22807
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Severity: high
Priority: high
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2026-01-21 22:01 UTC by OSIDB Bzimport
Modified: 2026-01-22 05:55 UTC
CC List: 7 users

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:


Attachments:

Description OSIDB Bzimport 2026-01-21 22:01:22 UTC
vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
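For illustration only, the sketch below shows the kind of gating the fix requires: refuse to touch repo-provided Python referenced via `auto_map` unless `trust_remote_code` is explicitly enabled. The `load_config_safely` helper is a hypothetical example and is not vLLM's actual implementation; it only uses the standard `transformers.AutoConfig.from_pretrained` API.

```python
import json
from pathlib import Path

from transformers import AutoConfig


def load_config_safely(model_path: str, trust_remote_code: bool = False):
    """Illustrative guard (assumption, not vLLM code): block `auto_map` code
    from an untrusted model repo/path unless the caller opted in."""
    config_file = Path(model_path) / "config.json"
    if config_file.is_file():
        raw = json.loads(config_file.read_text())
        # An `auto_map` entry points transformers at Python modules shipped
        # inside the model repo; importing them executes whatever code the
        # repo author put there.
        if "auto_map" in raw and not trust_remote_code:
            raise ValueError(
                "Model defines custom code via `auto_map`; refusing to load "
                "it without trust_remote_code=True."
            )
    # transformers also honors the flag itself and will not silently import
    # repo code when trust_remote_code is False.
    return AutoConfig.from_pretrained(model_path, trust_remote_code=trust_remote_code)
```

The vulnerability described above is the absence of such a gate during model resolution: the dynamic modules were imported at server startup regardless of the operator's `trust_remote_code` setting.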

