Bug 2342304 (CVE-2025-24357) - CVE-2025-24357 vllm: vLLM allows a malicious model RCE by torch.load in hf_model_weights_iterator
Summary: CVE-2025-24357 vllm: vLLM allows a malicious model RCE by torch.load in hf_model_weights_iterator
Keywords:
Status: NEW
Alias: CVE-2025-24357
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-01-27 18:01 UTC by OSIDB Bzimport
Modified: 2025-02-06 14:57 UTC (History)

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:



Description OSIDB Bzimport 2025-01-27 18:01:32 UTC
vLLM is a library for LLM inference and serving. vllm/model_executor/weight_utils.py implements hf_model_weights_iterator to load model checkpoints downloaded from Hugging Face. It calls torch.load with the weights_only parameter left at its default of False. When torch.load deserializes malicious pickle data, arbitrary code is executed during unpickling, so a crafted model file can achieve remote code execution on the host loading it. This vulnerability is fixed in v0.7.0.
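The mechanism can be sketched with plain `pickle` from the standard library, which is what `torch.load` relies on when `weights_only=False`. The class name and payload below are hypothetical stand-ins for a tampered checkpoint; a real attack would invoke something like `os.system` instead of a harmless expression:

```python
import pickle

class MaliciousCheckpoint:
    """Stand-in for a tampered model file (name is illustrative)."""
    def __reduce__(self):
        # __reduce__ tells the unpickler which callable to invoke and
        # with which arguments -- the attacker controls both. Here the
        # payload is a benign eval; an attacker would use os.system etc.
        return (eval, ("1 + 1",))

# Attacker serializes the object into the "weights" file.
payload = pickle.dumps(MaliciousCheckpoint())

# The victim merely "loads the weights", yet the attacker-chosen
# callable has already executed by the time loads() returns.
result = pickle.loads(payload)
print(result)  # 2 -- the value produced by the attacker's call
```

Passing `weights_only=True` to `torch.load` restricts unpickling to tensors and a small set of primitive types, which blocks this class of payload; that hardening is the approach taken in the fixed release.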

