Bug 2070380
Summary: | Starting a guest with numatune restrictive mode behaves differently from the virsh numatune command | |
---|---|---|---
Product: | Red Hat Enterprise Linux 9 | Reporter: | liang cong <lcong>
Component: | libvirt | Assignee: | Michal Privoznik <mprivozn>
libvirt sub component: | General | QA Contact: | liang cong <lcong>
Status: | CLOSED ERRATA | Severity: | medium
Priority: | medium | CC: | jdenemar, jsuchane, lhuang, lmen, mprivozn, virt-maint, xuzhang, yalzhang
Version: | 9.0 | Keywords: | AutomationTriaged, Triaged, Upstream
Target Milestone: | rc | Target Release: | ---
Hardware: | Unspecified | OS: | Unspecified
Fixed In Version: | libvirt-8.3.0-1.el9 | Target Upstream Version: | 8.3.0
Last Closed: | 2022-11-15 10:04:06 UTC | Type: | Bug
Description liang cong 2022-03-31 01:53:29 UTC
Yes, this is a true bug. Let me see if I can fix it.

Patches posted on the list:
https://listman.redhat.com/archives/libvir-list/2022-April/229805.html

Merged upstream as:

629282d884 lib: Set up cpuset controller for restrictive numatune
5c6622eff7 ch: Explicitly forbid live changing nodeset for strict numatune
85a6474907 hypervisor: Drop dead code in virDomainCgroupSetupGlobalCpuCgroup()
cc4542e5d3 lib: Don't short circuit around virDomainCgroupSetupVcpuBW()

v8.2.0-108-g629282d884

Preverified with:
libvirt-v8.2.0-119-ga8682ab791
qemu-kvm-6.2.0-8.fc37.x86_64

Test steps:

1. Define a domain XML with a numatune element like the one below:

<numatune>
  <memory mode='restrictive' nodeset='1'/>
</numatune>

2. Start the guest vm1.

3. Check the qemu-kvm NUMA memory state:

# numastat -p `pidof qemu-system-x86_64`

Per-node process memory usage (in MBs) for PID 5817 (qemu-system-x86)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                         0.00           14.25           14.25
Stack                        0.00            0.04            0.04
Private                     19.41          719.81          739.22
----------------  --------------- --------------- ---------------
Total                       19.42          734.10          753.52

4. Check the cgroup setting:

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/emulator/cpuset.mems
1

5. Change the numatune nodeset:

# virsh numatune vm1 3 0

6. Check the cgroup setting again:

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/emulator/cpuset.mems
0

7. Check the qemu-kvm NUMA memory state again:

# numastat -p `pidof qemu-system-x86_64`

Per-node process memory usage (in MBs) for PID 5817 (qemu-system-x86)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                        14.25            0.00           14.25
Stack                        0.04            0.00            0.04
Private                    794.66            0.02          794.68
----------------  --------------- --------------- ---------------
Total                      808.95            0.02          808.98

The guest's memory has moved from node 1 to node 0, i.e. the live nodeset
change now takes effect for a guest started in restrictive mode.
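A note on the virsh invocation in step 5: virsh numatune accepts the memory
mode as a raw integer, where 0 = strict, 1 = preferred, 2 = interleave and
3 = restrictive, so "virsh numatune vm1 3 0" keeps the restrictive mode and
moves the nodeset to node 0. A more readable equivalent is sketched below
(assuming the same running domain vm1; note that only the nodeset can be
changed while the guest is running, not the mode itself):

# virsh numatune vm1 --mode restrictive --nodeset 0 --live

Run without mode or nodeset arguments, the command queries the current
tuning (illustrative output):

# virsh numatune vm1
numa_mode      : restrictive
numa_nodeset   : 0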
Verified with:
libvirt-8.3.0-1.el9.x86_64
qemu-kvm-7.0.0-3.el9.x86_64

Test steps:

1. Define a domain XML with a numatune element like the one below:

<numatune>
  <memory mode='restrictive' nodeset='1'/>
</numatune>

2. Start the guest vm1.

3. Check the qemu-kvm NUMA memory state:

# numastat -p `pidof qemu-kvm`

Per-node process memory usage (in MBs) for PID 114433 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                         0.00           26.12           26.12
Stack                        0.00            0.02            0.02
Private                      2.04          762.79          764.82
----------------  --------------- --------------- ---------------
Total                        2.04          788.93          790.97

4. Check the cgroup setting:

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d8\\x2dvm1.scope/libvirt/emulator/cpuset.mems
1

5. Change the numatune nodeset:

# virsh numatune vm1 3 0

6. Check the cgroup setting again:

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d8\\x2dvm1.scope/libvirt/emulator/cpuset.mems
0

7. Check the qemu-kvm NUMA memory state again:

# numastat -p `pidof qemu-kvm`

Per-node process memory usage (in MBs) for PID 114433 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                        26.12            0.00           26.12
Stack                        0.02            0.00            0.02
Private                    760.79            4.04          764.82
----------------  --------------- --------------- ---------------
Total                      786.93            4.04          790.96

8. Change the numatune nodeset to span both nodes:

# virsh numatune vm1 3 0-1

9. Check the cgroup setting:

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d8\\x2dvm1.scope/libvirt/emulator/cpuset.mems
0-1

10. Log in to the guest and consume RAM with memhog:

# memhog -r1 500M
..................................................

11. Check the qemu-kvm NUMA memory usage state:

# numastat -p `pidof qemu-kvm`

Per-node process memory usage (in MBs) for PID 114433 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                        26.12            0.00           26.12
Stack                        0.02            0.00            0.02
Private                    624.45          624.42         1248.88
----------------  --------------- --------------- ---------------
Total                      650.60          624.42         1275.02

The new allocations are spread across both nodes, matching the widened
nodeset, so the restriction applied at guest start and the one applied
live via virsh numatune now behave consistently.

Since the problem described in this bug report should be resolved in a
recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Low: libvirt security, bug fix, and
enhancement update), and where to find the updated files, follow the link
below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:8003
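For reference, one further check that can be run after any of the numatune
changes above: walking the scope's libvirt/ subtree shows whether the
nodeset restriction is applied to every thread group, not only the
emulator. This is a sketch, assuming cgroup v2 and a domain named vm1
(the glob matches the systemd-escaped scope name):

# for f in /sys/fs/cgroup/machine.slice/machine-qemu*vm1.scope/libvirt/*/cpuset.mems; do echo "$f: $(cat $f)"; done

Each emulator/, vcpuN/ and iothreadN/ directory present should report the
same nodeset that virsh numatune shows for the domain.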