Bug 2379370 (CVE-2025-53630) - llama.cpp: Integer Overflow in llama.cpp
Summary: CVE-2025-53630 llama.cpp: Integer Overflow in llama.cpp
Keywords:
Status: NEW
Alias: CVE-2025-53630
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Severity: high
Priority: high
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On: 2379417 2379418
Blocks:
 
Reported: 2025-07-10 20:01 UTC by OSIDB Bzimport
Modified: 2025-07-11 12:21 UTC (History)

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:



Description OSIDB Bzimport 2025-07-10 20:01:35 UTC
llama.cpp is an inference engine for several LLM models, written in C/C++. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.

