Bug 2428443 (CVE-2026-22773) - CVE-2026-22773 vllm: vLLM: Denial of Service via specially crafted image in multimodal model serving
Keywords:
Status: NEW
Alias: CVE-2026-22773
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Severity: medium
Priority: medium
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2026-01-10 07:01 UTC by OSIDB Bzimport
Modified: 2026-01-10 07:52 UTC (History)

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:



Description OSIDB Bzimport 2026-01-10 07:01:46 UTC
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.6.4 up to (but not including) 0.12.0, users can crash a vLLM engine serving multimodal models that use the Idefics3 vision model implementation by sending a specially crafted 1x1 pixel image. The image triggers a tensor dimension mismatch that surfaces as an unhandled runtime error, leading to complete server termination. This issue has been patched in version 0.12.0.
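The failure mode described above is an unhandled exception escaping the request path. A common hardening pattern, independent of the actual fix in vLLM 0.12.0, is to validate image dimensions at the API boundary and convert degenerate inputs into a handled client error rather than letting a tensor-shape error terminate the server. The sketch below is illustrative only: the function names and the minimum-side threshold are assumptions, not vLLM's real code or preprocessing limits.

```python
def validate_image_dims(width: int, height: int, min_side: int = 28) -> None:
    """Reject degenerate images before they reach the vision encoder.

    `min_side` is a hypothetical threshold (e.g. one patch size); the real
    minimum depends on the model's image preprocessor.
    """
    if width < min_side or height < min_side:
        raise ValueError(
            f"image {width}x{height} is below the {min_side}px minimum "
            "supported by this model"
        )

def handle_image_request(width: int, height: int) -> tuple[int, str]:
    """Turn a validation failure into an HTTP 400-style response instead of
    letting an unhandled runtime error crash the whole serving process."""
    try:
        validate_image_dims(width, height)
    except ValueError as exc:
        return 400, str(exc)   # e.g. rejects the crafted 1x1 image
    return 200, "accepted"
```

With a guard like this, a crafted 1x1 image yields a per-request error response while the engine keeps serving other requests, which is the behavior the patched release restores.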

