Bug 2374503 (CVE-2025-52566) - CVE-2025-52566 llama-cpp: llama.cpp heap overflow
Summary: CVE-2025-52566 llama-cpp: llama.cpp heap overflow
Keywords:
Status: NEW
Alias: CVE-2025-52566
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Severity: high
Priority: high
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On: 2374627 2374628
Blocks:
 
Reported: 2025-06-24 04:01 UTC by OSIDB Bzimport
Modified: 2025-06-26 06:54 UTC (History)

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:



Description OSIDB Bzimport 2025-06-24 04:01:35 UTC
llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5721, a signed vs. unsigned integer comparison in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) causes the size check on the token copy to behave incorrectly. This allows a heap buffer overflow in the llama.cpp inference engine via carefully crafted text input supplied during tokenization. This issue has been patched in version b5721.
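The bug class described above can be sketched as follows. This is an illustrative example only, not the actual llama.cpp code: the names (buggy_check, fixed_check, n_tokens_max) are hypothetical. When a signed capacity is compared against an unsigned size_t, the usual arithmetic conversions promote the signed value to unsigned, so a negative capacity becomes a huge value, the bounds check wrongly passes, and a subsequent copy can overflow the heap buffer.

```cpp
#include <cstddef>

// BUGGY: if n_tokens_max is negative, (size_t) n_tokens_max wraps to
// a very large unsigned value and the bounds check always succeeds,
// letting a later memcpy of `needed` elements overflow the buffer.
bool buggy_check(int n_tokens_max, size_t needed) {
    return (size_t) n_tokens_max >= needed;
}

// FIXED: reject negative capacities before the unsigned comparison,
// so the size check only passes when the buffer is genuinely large
// enough to hold all tokens.
bool fixed_check(int n_tokens_max, size_t needed) {
    return n_tokens_max >= 0 && (size_t) n_tokens_max >= needed;
}
```

A caller passing a negative or attacker-influenced capacity illustrates the difference: the buggy check accepts a capacity of -1 (interpreted as SIZE_MAX), while the fixed check rejects it.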

