Bug 2070380 - Starting a guest with numatune restrictive mode behaves differently from the virsh numatune command
Summary: Starting a guest with numatune restrictive mode behaves differently from the virsh numatune command
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Michal Privoznik
QA Contact: liang cong
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-03-31 01:53 UTC by liang cong
Modified: 2022-11-15 10:39 UTC
CC List: 8 users

Fixed In Version: libvirt-8.3.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-15 10:04:06 UTC
Type: Bug
Target Upstream Version: 8.3.0
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-117425 0 None None None 2022-03-31 02:09:19 UTC
Red Hat Product Errata RHSA-2022:8003 0 None None None 2022-11-15 10:04:26 UTC

Description liang cong 2022-03-31 01:53:29 UTC
Description of problem:
Starting a guest with numatune restrictive mode behaves differently from the virsh numatune command.

Version-Release number of selected component (if applicable):
qemu-kvm-6.2.0-11.el9_0.2.x86_64
libvirt-8.0.0-7.el9_0.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a host with numa like below:
# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1
node 0 size: 3673 MB
node 0 free: 3289 MB
node 1 cpus: 2 3
node 1 size: 4022 MB
node 1 free: 3601 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

2. Define a guest with numatune and NUMA settings like below:
<vcpu placement="static">2</vcpu>
<numatune>
    <memory mode="restrictive" nodeset="1"/>
</numatune>


3. Start the guest and check the NUMA memory usage; the memory is allocated roughly evenly across both nodes.
# numastat -p `pidof qemu-kvm`

Per-node process memory usage (in MBs) for PID 6134 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                         1.25           17.68           18.93
Stack                        0.00            0.02            0.02
Private                    334.39          305.32          639.71
----------------  --------------- --------------- ---------------
Total                      335.64          323.02          658.66

4. Check that cpuset.mems is not set in the cgroups:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/cpuset.mems

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/cpuset.mems

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/emulator/cpuset.mems

5. Run the numatune command (the numeric arguments are explained in the note after these steps):
# virsh numatune vm1 3 1

6. Check the NUMA memory usage again; the memory is now almost entirely allocated on node 1.
# numastat -p `pidof qemu-kvm`

Per-node process memory usage (in MBs) for PID 6134 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                         0.00           18.93           18.93
Stack                        0.00            0.02            0.02
Private                      2.02          719.63          721.65
----------------  --------------- --------------- ---------------
Total                        2.02          738.58          740.60

7. Check the cgroups again; cpuset.mems is now set to 1 for the emulator:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/cpuset.mems

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/cpuset.mems

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/emulator/cpuset.mems
1
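
Note on step 5: in "virsh numatune <domain> [mode] [nodeset]", the mode may be given numerically according to the virDomainNumatuneMemMode enum (0 = strict, 1 = preferred, 2 = interleave, 3 = restrictive), so "virsh numatune vm1 3 1" requests restrictive mode on nodeset 1 for the running guest. Assuming a libvirt build that also accepts the keyword form, the same live change could be written more readably as:

# virsh numatune vm1 --mode restrictive --nodeset 1 --live

In other words, step 5 applies at runtime the same restrictive/nodeset=1 policy that the domain XML in step 2 requests at startup, which is why the differing cpuset.mems results point at the startup path.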



Expected results:
Starting a guest with numatune restrictive mode should behave the same as the virsh numatune command.

Additional info:
If a guest is defined with numatune and numa settings like below:
<vcpu placement="static">2</vcpu>
<numatune>
    <memory mode="restrictive" nodeset="1"/>
    <memnode cellid="0" mode="restrictive" nodeset="1"/>
  </numatune>

<numa>
      <cell id="0" cpus="0-1" memory="1025024" unit="KiB"/>
</numa>

After startup, check the NUMA memory usage; the memory is almost entirely allocated on node 1.
# numastat -p `pidof qemu-kvm`

Per-node process memory usage (in MBs) for PID 6534 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                         2.02           24.86           26.88
Stack                        0.01            0.02            0.02
Private                     14.65          705.19          719.84
----------------  --------------- --------------- ---------------
Total                       16.68          730.07          746.74

And cpuset.mems is not set in the cgroups:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/cpuset.mems

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/libvirt/cpuset.mems

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d3\\x2dvm1.scope/libvirt/emulator/cpuset.mems

The libvirt documentation says: "The value 'restrictive' specifies using system default policy and only cgroups is used to restrict the memory nodes, and it requires setting mode to 'restrictive' in memnode elements."
Does that mean restrictive mode requires the memnode elements to also be set to 'restrictive'?
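
For reference, the domain-wide tuning that cgroups are expected to enforce can be read back from a running domain with a plain query; a minimal check, assuming vm1 is running (output labels approximate):

# virsh numatune vm1
numa_mode      : restrictive
numa_nodeset   : 1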

Comment 1 Michal Privoznik 2022-04-01 08:26:10 UTC
Yes, this is a true bug. Let me see if I can fix it.

Comment 2 Michal Privoznik 2022-04-01 14:40:03 UTC
Patches posted on the list:

https://listman.redhat.com/archives/libvir-list/2022-April/229805.html

Comment 3 Michal Privoznik 2022-04-07 10:23:30 UTC
Merged upstream as:

629282d884 lib: Set up cpuset controller for restrictive numatune
5c6622eff7 ch: Explicitly forbid live changing nodeset for strict numatune
85a6474907 hypervisor: Drop dead code in virDomainCgroupSetupGlobalCpuCgroup()
cc4542e5d3 lib: Don't short circuit around virDomainCgroupSetupVcpuBW()

v8.2.0-108-g629282d884
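
The first commit targets the startup path: a hedged reading, based on the reproduction steps above rather than on the implementation itself, is that starting a guest whose XML requests mode="restrictive" should now populate cpuset.mems in the machine cgroup immediately, matching what virsh numatune already produced, e.g.:

# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/emulator/cpuset.mems
1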

Comment 4 liang cong 2022-04-11 06:02:24 UTC
Preverified with:
libvirt-v8.2.0-119-ga8682ab791
qemu-kvm-6.2.0-8.fc37.x86_64

Test steps:
1. Define a domain xml with numatune part like below:
<numatune>
    <memory mode='restrictive' nodeset='1'/>
</numatune>

2. Start the guest vm

3. Check the qemu-kvm numa memory state:
# numastat -p `pidof qemu-system-x86_64`

Per-node process memory usage (in MBs) for PID 5817 (qemu-system-x86)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                         0.00           14.25           14.25
Stack                        0.00            0.04            0.04
Private                     19.41          719.81          739.22
----------------  --------------- --------------- ---------------
Total                       19.42          734.10          753.52

4. Check cgroup setting:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/emulator/cpuset.mems
1

5. Change numatune:
# virsh numatune vm1 3 0

6. Check cgroup setting:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d2\\x2dvm1.scope/libvirt/emulator/cpuset.mems
0

7. Check the qemu-kvm numa memory state:
# numastat -p `pidof qemu-system-x86_64`

Per-node process memory usage (in MBs) for PID 5817 (qemu-system-x86)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                        14.25            0.00           14.25
Stack                        0.04            0.00            0.04
Private                    794.66            0.02          794.68
----------------  --------------- --------------- ---------------
Total                      808.95            0.02          808.98

Comment 7 liang cong 2022-05-23 02:29:21 UTC
Verified with:
libvirt-8.3.0-1.el9.x86_64
qemu-kvm-7.0.0-3.el9.x86_64

Test steps:
1. Define a domain xml with numatune part like below:
<numatune>
    <memory mode='restrictive' nodeset='1'/>
</numatune>

2. Start the guest vm1

3. Check the qemu-kvm numa memory state:
#  numastat -p `pidof qemu-kvm`

Per-node process memory usage (in MBs) for PID 114433 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                         0.00           26.12           26.12
Stack                        0.00            0.02            0.02
Private                      2.04          762.79          764.82
----------------  --------------- --------------- ---------------
Total                        2.04          788.93          790.97

4. Check cgroup setting:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d8\\x2dvm1.scope/libvirt/emulator/cpuset.mems
1

5. Change numatune:
# virsh numatune vm1 3 0

6. Check cgroup setting:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d8\\x2dvm1.scope/libvirt/emulator/cpuset.mems
0

7. Check the qemu-kvm numa memory state:
#  numastat -p `pidof qemu-kvm`

Per-node process memory usage (in MBs) for PID 114433 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                        26.12            0.00           26.12
Stack                        0.02            0.00            0.02
Private                    760.79            4.04          764.82
----------------  --------------- --------------- ---------------
Total                      786.93            4.04          790.96

8. Change numatune:
# virsh numatune vm1 3 0-1

9. Check cgroup setting:
# cat /sys/fs/cgroup/machine.slice/machine-qemu\\x2d8\\x2dvm1.scope/libvirt/emulator/cpuset.mems
0-1

10. Log in to the VM and consume RAM with memhog:
# memhog -r1 500M
..................................................

11. Check the qemu-kvm numa memory usage state:
#  numastat -p `pidof qemu-kvm`

Per-node process memory usage (in MBs) for PID 114433 (qemu-kvm)
                           Node 0          Node 1           Total
                  --------------- --------------- ---------------
Huge                         0.00            0.00            0.00
Heap                        26.12            0.00           26.12
Stack                        0.02            0.00            0.02
Private                    624.45          624.42         1248.88
----------------  --------------- --------------- ---------------
Total                      650.60          624.42         1275.02

Comment 9 errata-xmlrpc 2022-11-15 10:04:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: libvirt security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:8003

