Bug 2450575 (CVE-2026-33298) - CVE-2026-33298 llama.cpp: Remote Code Execution vulnerability due to integer overflow in GGUF file processing
Summary: CVE-2026-33298 llama.cpp: Remote Code Execution vulnerability due ...
Keywords:
Status: NEW
Alias: CVE-2026-33298
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Severity: high
Priority: high
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On: 2450676 2450677 2450678
Blocks:
 
Reported: 2026-03-24 01:02 UTC by OSIDB Bzimport
Modified: 2026-03-24 10:41 UTC (History)
0 users

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:


Attachments

Description OSIDB Bzimport 2026-03-24 01:02:26 UTC
llama.cpp provides inference for several LLM models in C/C++. Prior to release b7824, an integer overflow vulnerability in the `ggml_nbytes` function allows an attacker to bypass memory validation by crafting a GGUF file with specific tensor dimensions. The overflow causes `ggml_nbytes` to return a far smaller size than the tensor actually requires (e.g., 4 MB instead of multiple exabytes), leading to a heap-based buffer overflow when the application subsequently processes the tensor. This memory corruption potentially allows Remote Code Execution (RCE). Release b7824 contains a fix.
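The mechanism can be illustrated with a minimal sketch. The function below is a hypothetical, simplified analogue of `ggml_nbytes` (not the actual llama.cpp implementation): it multiplies the four tensor dimensions by the element size without overflow checks, so attacker-chosen dimensions can make the unsigned product wrap around to a small value that passes a size check, while a checked variant rejects the same input. Assumes a 64-bit `size_t`.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical, simplified analogue of ggml_nbytes(): multiply the four
 * tensor dimensions (ne) by the element size. Unsigned arithmetic wraps
 * silently, so crafted dimensions can make the "required" byte count come
 * out tiny even though the tensor describes exabytes of data. */
static size_t nbytes_unchecked(const int64_t ne[4], size_t elt_size) {
    size_t n = elt_size;
    for (int i = 0; i < 4; i++) {
        n *= (size_t)ne[i]; /* may wrap around modulo 2^64 */
    }
    return n;
}

/* Overflow-aware variant: fail instead of wrapping. Returns false for
 * non-positive dimensions or any multiplication that would overflow. */
static bool nbytes_checked(const int64_t ne[4], size_t elt_size, size_t *out) {
    size_t total = elt_size;
    for (int i = 0; i < 4; i++) {
        if (ne[i] <= 0) return false;
        if (total > SIZE_MAX / (size_t)ne[i]) return false; /* would overflow */
        total *= (size_t)ne[i];
    }
    *out = total;
    return true;
}
```

For example, with 4-byte elements and `ne = {(1<<62) + (1<<20), 1, 1, 1}`, the product is 2^64 + 2^22, which wraps to 4 MiB in `nbytes_unchecked`, so a buffer far too small would be allocated; `nbytes_checked` rejects the same dimensions.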

