Bug 2454645 (CVE-2026-34760) - CVE-2026-34760 vLLM: Librosa: numpy: AI model data integrity impact due to audio processing discrepancy
Keywords:
Status: NEW
Alias: CVE-2026-34760
Product: Security Response
Classification: Other
Component: vulnerability
Version: unspecified
Hardware: All
OS: Linux
Severity: medium
Priority: medium
Target Milestone: ---
Assignee: Product Security DevOps Team
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2026-04-02 20:02 UTC by OSIDB Bzimport
Modified: 2026-04-06 17:04 UTC
CC List: 7 users

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Embargoed:


Attachments

Description OSIDB Bzimport 2026-04-02 20:02:04 UTC
vLLM is an inference and serving engine for large language models (LLMs). From version 0.5.5 to before version 0.18.0, librosa defaults to using numpy.mean for mono downmixing (to_mono), while the international standard ITU-R BS.775-4 specifies a weighted downmixing algorithm. This discrepancy results in an inconsistency between the audio heard by humans (e.g., through headphones or regular speakers) and the audio processed by AI models whose infrastructure downmixes via librosa, such as vLLM and Transformers. This issue has been patched in version 0.18.0.
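The discrepancy can be illustrated with a minimal sketch. The first function mirrors the equal-weight averaging the description attributes to librosa's to_mono (numpy.mean over channels); the second applies an ITU-R BS.775-style weighted downmix. The 5.1 channel order and the 1/sqrt(2) center/surround coefficients below are illustrative assumptions, not taken from the advisory:

```python
import numpy as np

def mean_downmix(audio):
    """Naive mono downmix: average all channels with equal weight.
    This mirrors the numpy.mean behavior described for librosa's to_mono."""
    return np.mean(audio, axis=0)

def itu_downmix_51(audio):
    """Mono downmix of a 5.1 signal using ITU-R BS.775-style weights.
    Channel order (L, R, C, LFE, Ls, Rs) and the 1/sqrt(2) coefficients
    are assumptions for illustration; the LFE channel is discarded."""
    L, R, C, LFE, Ls, Rs = audio
    g = 1.0 / np.sqrt(2.0)
    left = L + g * C + g * Ls
    right = R + g * C + g * Rs
    return 0.5 * (left + right)

# A 5.1 frame where only the center channel carries signal:
frame = np.zeros((6, 4))
frame[2] = 1.0  # center channel

print(mean_downmix(frame))   # equal-weight average: every sample is 1/6
print(itu_downmix_51(frame)) # weighted downmix: every sample is ~0.7071
```

For center-channel content (typically dialogue), the equal-weight average attenuates the signal far more than the weighted downmix does, so the audio a model ingests differs audibly from what a listener hears through a standards-compliant playback chain.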

