Bug 2416282 (CVE-2025-62164) - CVE-2025-62164 vllm: VLLM deserialization vulnerability leading to DoS and potential RCE
Summary: CVE-2025-62164 vllm: VLLM deserialization vulnerability leading to DoS and potential RCE
Keywords:
Status: NEW
Alias: CVE-2025-62164
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-11-21 02:01 UTC by OSIDB Bzimport
Modified: 2025-11-25 07:06 UTC
CC: 7 users

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:



Description OSIDB Bzimport 2025-11-21 02:01:35 UTC
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability exists in the Completions API endpoint that can lead to a crash (denial of service) and potentially remote code execution (RCE). When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, maliciously crafted tensors can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting vLLM. This issue has been patched in version 0.11.1.

