Bug 2369223 (CVE-2025-46570)

Summary: CVE-2025-46570 vllm: vLLM’s Chunk-Based Prefix Caching Vulnerable to Potential Timing Side-Channel
Product: [Other] Security Response
Component: vulnerability
Reporter: OSIDB Bzimport <bzimport>
Assignee: Product Security DevOps Team <prodsec-dev>
Status: NEW
Severity: low
Priority: low
Version: unspecified
CC: alinfoot, bbrownin, dtrifiro, rbryant, weaton
Keywords: Security
Target Milestone: ---
Target Release: ---
Hardware: All
OS: Linux
Doc Type: ---
Doc Text:
A timing discrepancy flaw was found in vLLM, where a prefix match on a user prompt can reveal other user prompts. An attacker must have user-level access to the vLLM instance to exploit this vulnerability.

Description OSIDB Bzimport 2025-05-29 17:01:15 UTC
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed and the PagedAttention mechanism finds a matching prefix chunk in the cache, the prefill phase is skipped for that chunk and the request completes faster, which is reflected in the TTFT (Time to First Token). These timing differences caused by matching chunks are significant enough to be recognized and exploited. This issue has been patched in version 0.9.0.
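
For illustration only, the timing signal described above can be observed by measuring TTFT against a vLLM OpenAI-compatible endpoint. The sketch below is a minimal, hypothetical example and is not part of the advisory: the base URL, model name, and prompts are assumptions, and it only demonstrates that a prompt sharing a cached prefix tends to show a lower TTFT than a cold prompt of similar length.

    import time
    import requests

    # Assumed local vLLM OpenAI-compatible server and model name; adjust to your deployment.
    BASE_URL = "http://localhost:8000/v1/completions"
    MODEL = "meta-llama/Llama-3.1-8B-Instruct"

    def time_to_first_token(prompt: str) -> float:
        """Send a streaming completion request and return the time to first token in seconds."""
        payload = {
            "model": MODEL,
            "prompt": prompt,
            "max_tokens": 1,
            "stream": True,
        }
        start = time.monotonic()
        with requests.post(BASE_URL, json=payload, stream=True) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines():
                # The first non-empty server-sent-event line approximates the arrival
                # of the first generated token.
                if line:
                    return time.monotonic() - start
        return float("inf")

    # If the first prompt's prefix blocks are still cached, the second request skips
    # part of the prefill and its TTFT is measurably lower (hypothetical prompts).
    cold = time_to_first_token("A long prompt that the server has never seen before ...")
    warm = time_to_first_token("A long prompt that shares a previously cached prefix ...")
    print(f"cold TTFT={cold:.4f}s  warm TTFT={warm:.4f}s")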