Bug 2373282 (CVE-2025-49847) - CVE-2025-49847 llama-cpp: llama.cpp Buffer Overflow
Summary: CVE-2025-49847 llama-cpp: llama.cpp Buffer Overflow
Keywords:
Status: NEW
Alias: CVE-2025-49847
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On: 2373285 2373286
Blocks:
 
Reported: 2025-06-17 21:01 UTC by OSIDB Bzimport
Modified: 2025-06-17 22:34 UTC (History)

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:



Description (OSIDB Bzimport, 2025-06-17 21:01:17 UTC):
llama.cpp is an inference engine for several LLM models, written in C/C++. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code. Specifically, the helper _try_copy in llama.cpp/src/vocab.cpp: llama_vocab::impl::token_to_piece() casts a very large size_t token length into an int32_t, causing the length check (if (length < (int32_t)size)) to be bypassed. As a result, memcpy is still called with the original oversized size_t value, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potential code execution. This issue has been patched in version b5662.

