CVE-2025-53630

Publication date 10 July 2025

Last updated 16 July 2025


Ubuntu priority

Description

llama.cpp is a C/C++ inference engine for several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
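
The sketch below illustrates the general vulnerability class only, assuming attacker-controlled sizes read from a GGUF-style file header; the struct and function names are hypothetical and are not taken from gguf.cpp. An unchecked multiplication of untrusted values can wrap around, so the allocation ends up smaller than the amount of data later written into it.

// Illustrative sketch of an integer-overflow-driven heap out-of-bounds
// write; names are hypothetical, not the actual gguf.cpp code.
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <limits>

struct blob_header {
    uint64_t n;          // element count, read from the untrusted file
    uint64_t elem_size;  // element size, read from the untrusted file
};

// Vulnerable pattern: n * elem_size can wrap in size_t arithmetic, so a
// small buffer is allocated, but the read loop still iterates over the
// full element count and writes past the end of the allocation.
char *load_blob_unsafe(const blob_header &h, FILE *f) {
    size_t total = (size_t)h.n * (size_t)h.elem_size;  // may wrap on overflow
    char *buf = (char *)malloc(total);
    if (!buf) return nullptr;
    for (uint64_t i = 0; i < h.n; i++) {
        // heap out-of-bounds write once i * elem_size exceeds the
        // (wrapped) allocation size
        if (fread(buf + i * h.elem_size, 1, h.elem_size, f) != h.elem_size) break;
    }
    return buf;
}

// Hardened pattern: reject headers whose size computation would overflow
// before allocating anything.
char *load_blob_checked(const blob_header &h, FILE *f) {
    if (h.elem_size != 0 &&
        h.n > std::numeric_limits<size_t>::max() / h.elem_size) {
        return nullptr;  // multiplication would overflow
    }
    size_t total = (size_t)h.n * (size_t)h.elem_size;
    char *buf = (char *)malloc(total);
    if (!buf) return nullptr;
    for (uint64_t i = 0; i < h.n; i++) {
        if (fread(buf + i * h.elem_size, 1, h.elem_size, f) != h.elem_size) break;
    }
    return buf;
}

The upstream fix takes the same general approach of validating untrusted sizes before using them; see the referenced commit for the exact change.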

Status

Package     Ubuntu Release     Status
llama.cpp   25.04 plucky       Not in release
llama.cpp   24.04 LTS noble    Not in release
llama.cpp   22.04 LTS jammy    Not in release