Bug 1945002
| Summary: | Enable CMA on x86-64 as tech preview | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | David Hildenbrand <dhildenb> |
| Component: | kernel | Assignee: | David Hildenbrand <dhildenb> |
| Kernel sub component: | Memory Management | QA Contact: | Ping Fang <pifang> |
| Status: | CLOSED CURRENTRELEASE | Docs Contact: | |
| Severity: | high | | |
| Priority: | unspecified | CC: | aarcange, cye, ddutile, hartsjc, hkrzesin, mm-maint, perobins, peterx, pifang, stalexan |
| Version: | 9.0 | Keywords: | Reopened, TestOnly, Triaged |
| Target Milestone: | beta | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | kernel-5.13.0-0.rc3.25.el9 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 2043141 (view as bug list) | Environment: | |
| Last Closed: | 2021-12-07 21:55:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1950885 | | |
|
Description
David Hildenbrand
2021-03-31 07:51:46 UTC
Change has been merged into kernel-ark: https://gitlab.com/cki-project/kernel-ark/-/commit/4c454e8be716ad8b7b529afd221f380447ea0f9a

Closing as RAWHIDE, as the change is available in upstream kernel-ark.

@Ping Fang I remember that closing as RAWHIDE is the right procedure for kernel-ark, but I wonder how to handle QE for such things. Do you know what the right target state is once a change is in upstream kernel-ark?

Okay, thanks. Reopening this one as "TestOnly" for simplicity. @Ping Fang, who would be the right QE person to tackle testing this feature (hoping that there is capacity :) )? I can give some guidance regarding what/how to test.

We can also test with gigantic page allocation via CMA. Assuming a system with 64G of memory:

1. Define "hugetlb_cma=16G" on the kernel cmdline.

2. Boot the system and observe how much free memory there is (free -g), e.g., 62G.

3. Run a workload (e.g., memtester 62G) that consumes most of the free memory in the system, to try shuffling the free page lists.

4. Stop the workload.

5. Run a workload that leaves roughly 4G of memory in the system free (e.g., memtester 58G).

6. While the workload is running, try allocating some gigantic pages:

   echo 4 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

7. Observe whether the allocation succeeded:

   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

You might have to retry steps 6/7 a couple of times while such an extreme workload (memtester) is running concurrently. memtester might be an extreme workload in that it mlocks all memory. I'll try it out myself later to see whether that workload is suitable for testing CMA here or whether we need another one (like memhog).

(In reply to David Hildenbrand from comment #18)
> 6. While the workload is running, try allocating some gigantic pages.
>
> echo 4 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
>
> 7. Observe if allocation succeeded
>
> cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

Sorry, both paths should target /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages (the 1G huge page size).
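The retry in steps 6/7 can be scripted. Below is a minimal sketch; it uses the corrected 1G hugepage path from the last comment, and the helper name `alloc_gigantic`, the retry count, and the sleep interval are all illustrative, not part of the original test plan.

```shell
#!/bin/sh
# Sketch of steps 6/7: request gigantic pages from the CMA area and
# re-check, retrying a few times since a concurrent workload (memtester)
# may cause the first attempts to fail.
SYSFS_DEFAULT=/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

# alloc_gigantic WANT [PATH] [TRIES]: write WANT to the nr_hugepages file
# and re-read it until at least WANT pages are allocated or TRIES runs out.
alloc_gigantic() {
    want=$1
    sysfs=${2:-$SYSFS_DEFAULT}
    tries=${3:-5}
    i=0
    while [ "$i" -lt "$tries" ]; do
        echo "$want" > "$sysfs"        # step 6: try the allocation
        got=$(cat "$sysfs")            # step 7: observe the result
        if [ "$got" -ge "$want" ]; then
            echo "allocated $got gigantic pages"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "only $got of $want gigantic pages allocated"
    return 1
}
```

Run as root while the workload from step 5 is active, e.g. `alloc_gigantic 4`; a non-zero exit status after all retries indicates the CMA allocation kept failing.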