Bug 2301496 (CVE-2024-42131)

Summary: CVE-2024-42131 kernel: mm: avoid overflows in dirty throttling logic
Product: [Other] Security Response
Reporter: OSIDB Bzimport <bzimport>
Component: vulnerability
Assignee: Product Security DevOps Team <prodsec-dev>
Status: NEW
QA Contact:
Severity: medium
Docs Contact:
Priority: medium
Version: unspecified
CC: dfreiber, drow, jburrell, vkumar
Target Milestone: ---
Keywords: Security
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: kernel 5.10.222, kernel 5.15.163, kernel 6.1.98, kernel 6.6.39, kernel 6.9.9, kernel 6.10
Doc Type: If docs needed, set a value
Doc Text:
A vulnerability was found in the Linux kernel's memory management subsystem: dirty limits were not properly size-checked, so limits that do not fit into 32 bits (in PAGE_SIZE units) could trigger integer overflows and divisions by zero. This can cause memory corruption, system instability, or crashes.
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2301812
Bug Blocks:

Description OSIDB Bzimport 2024-07-30 08:35:42 UTC
In the Linux kernel, the following vulnerability has been resolved:

mm: avoid overflows in dirty throttling logic

The dirty throttling logic is interspersed with assumptions that dirty
limits in PAGE_SIZE units fit into 32 bits (so that various multiplications
fit into 64 bits). If limits end up being larger, we will hit overflows,
possible divisions by zero, etc. Fix these problems by never allowing such
large dirty limits, as they have dubious practical value anyway. For the
dirty_bytes / dirty_background_bytes interfaces we can simply refuse to set
such large limits. For dirty_ratio / dirty_background_ratio it isn't so
simple, as the dirty limit is computed from the amount of available memory,
which can change due to memory hotplug etc. So when converting dirty
limits from ratios to numbers of pages, we just don't allow the result to
exceed UINT_MAX.

This is a root-only triggerable problem which occurs when the operator
sets dirty limits to more than 16 TB.
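For illustration, the following is a minimal userspace sketch of the arithmetic described above. It models the overflow and the clamp; it is not the kernel patch itself, and the PAGE_SIZE constant and the 17 TiB example limit are assumptions chosen to exceed the roughly 16 TB trigger threshold.

/*
 * Userspace model of CVE-2024-42131 (not the kernel code).
 * PAGE_SIZE and the 17 TiB example limit are illustrative assumptions.
 */
#include <stdio.h>
#include <stdint.h>
#include <limits.h>

#define PAGE_SIZE 4096ULL

int main(void)
{
    /* A root-set dirty limit above the ~16 TB trigger threshold. */
    unsigned long long dirty_bytes = 17ULL << 40;          /* 17 TiB */
    unsigned long long thresh = dirty_bytes / PAGE_SIZE;   /* limit in pages */

    /* Pre-fix assumption: thresh fits in 32 bits. Truncation wraps the
     * value, so downstream multiplications can overflow and a wrapped
     * value of 0 can later be used as a divisor. */
    uint32_t truncated = (uint32_t)thresh;
    printf("thresh = %llu pages; truncated to 32-bit: %u\n",
           thresh, truncated);

    /* Post-fix behavior per the commit message: ratio-derived limits are
     * clamped to UINT_MAX pages (the byte-based sysctls instead refuse
     * such values outright). */
    if (thresh > UINT_MAX)
        thresh = UINT_MAX;
    printf("clamped thresh = %llu pages\n", thresh);
    return 0;
}

With a 4 KiB page size, UINT_MAX pages corresponds to roughly 16 TiB, which matches the trigger condition stated above.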

Comment 1 Mauro Matteo Cascella 2024-07-30 19:54:13 UTC
Upstream advisory:
https://lore.kernel.org/linux-cve-announce/2024073027-CVE-2024-42131-2f7f@gregkh/T

Comment 2 Mauro Matteo Cascella 2024-07-30 19:54:35 UTC
Created kernel tracking bugs for this issue:

Affects: fedora-all [bug 2301812]

Comment 10 errata-xmlrpc 2024-09-04 00:11:43 UTC
This issue has been addressed in the following products:

  Red Hat Enterprise Linux 9.2 Extended Update Support

Via RHSA-2024:6268 https://access.redhat.com/errata/RHSA-2024:6268

Comment 11 errata-xmlrpc 2024-09-04 00:25:44 UTC
This issue has been addressed in the following products:

  Red Hat Enterprise Linux 9.2 Extended Update Support

Via RHSA-2024:6267 https://access.redhat.com/errata/RHSA-2024:6267

Comment 12 errata-xmlrpc 2024-09-11 01:01:10 UTC
This issue has been addressed in the following products:

  Red Hat Enterprise Linux 9

Via RHSA-2024:6567 https://access.redhat.com/errata/RHSA-2024:6567

Comment 13 errata-xmlrpc 2024-09-24 00:39:55 UTC
This issue has been addressed in the following products:

  Red Hat Enterprise Linux 8

Via RHSA-2024:7001 https://access.redhat.com/errata/RHSA-2024:7001

Comment 14 errata-xmlrpc 2024-09-24 02:35:25 UTC
This issue has been addressed in the following products:

  Red Hat Enterprise Linux 8

Via RHSA-2024:7000 https://access.redhat.com/errata/RHSA-2024:7000