Bug 1724408 - [POWER8] The NUMA bind node is not as expected
Summary: [POWER8] The NUMA bind node is not as expected
Keywords:
Status: CLOSED DUPLICATE of bug 1716908
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: libvirt
Version: ---
Hardware: ppc64le
OS: Linux
Priority: low
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Andrea Bolognani
QA Contact: Junxiang Li
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-06-27 03:12 UTC by Junxiang Li
Modified: 2019-06-28 08:35 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-28 02:25:33 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Junxiang Li 2019-06-27 03:12:00 UTC
Description of problem:
The NUMA bind node is not as expected.

Version-Release number of selected component (if applicable):
# rpm -q libvirt qemu-kvm kernel
libvirt-4.5.0-24.module+el8.1.0+3205+41ff0a42.ppc64le
qemu-kvm-2.12.0-77.module+el8.1.0+3382+49219945.ppc64le
kernel-4.18.0-107.el8.ppc64le

How reproducible:
100%

Steps to Reproduce:
1. Prepare the hugepage env
# echo 250 > /sys/devices/system/node/node0/hugepages/hugepages-16384kB/nr_hugepages
# echo 250 > /sys/devices/system/node/node1/hugepages/hugepages-16384kB/nr_hugepages
# mount -t hugetlbfs -o pagesize=16384K none /dev/hugepages16M
2. Prepare a guest NUMA configuration in the domain XML
<numatune>
  <memnode cellid="0" mode="strict" nodeset="1" />
  <memory mode="strict" nodeset="0,1" placement="static" />
</numatune>
<cpu>
  <numa>
    <cell cpus="0-1" id="0" memory="1048576" />
    <cell cpus="2-3" id="1" memory="1048576" />
  </numa>
  <topology cores="2" sockets="2" threads="1" />
</cpu>
<memoryBacking>
  <hugepages>
    <page nodeset="0" size="16384" unit="KiB" />
  </hugepages>
</memoryBacking>
3. Define and start the guest, then check the NUMA mappings in /proc/<QEMU PID>/numa_maps
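
For step 3, a minimal sketch of the check (guest.xml is a hypothetical file holding the XML above; the guest name avocado-vt-vm1 is taken from the backing-file path in the results below; pgrep is only one way of locating the QEMU process):
# cat /sys/devices/system/node/node[01]/hugepages/hugepages-16384kB/nr_hugepages
# virsh define guest.xml && virsh start avocado-vt-vm1
# grep huge /proc/$(pgrep -f avocado-vt-vm1 | head -n1)/numa_maps
The first command simply confirms that the pools from step 1 report 250 pages on each node before the guest is started.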

Actual results:
7effbf000000 bind:0 file=/dev/hugepages/libvirt/qemu/3-avocado-vt-vm1/qemu_back_mem._objects_ram-node0.GnuaBO\040(deleted) huge anon=64 dirty=64 N1=64 kernelpagesize_kB=16384

Expected results:
7effbf000000 bind:*1* file=/dev/hugepages/libvirt/qemu/3-avocado-vt-vm1/qemu_back_mem._objects_ram-node0.GnuaBO\040(deleted) huge anon=64 dirty=64 N1=64 kernelpagesize_kB=16384

Additional info:
1.
After talking with the QE feature owner: both the bind:*1* node and the N*1* page count are set by <memnode cellid="0" mode="strict" nodeset="1" />, i.e. the memory of guest NUMA cell 0 should be bound to host node 1.
2.
# numactl --hardware
available: 4 nodes (0-1,16-17)
node 0 cpus: 0 8 16 24 32 40
node 0 size: 257754 MB
node 0 free: 249815 MB
node 1 cpus: 48 56 64 72 80 88
node 1 size: 261856 MB
node 1 free: 256309 MB
node 16 cpus: 96 104 112 120 128 136
node 16 size: 261856 MB
node 16 free: 254260 MB
node 17 cpus: 144 152 160 168 176 184
node 17 size: 260820 MB
node 17 free: 253979 MB
node distances:
node   0   1  16  17 
  0:  10  20  40  40 
  1:  20  10  40  40 
 16:  40  40  10  20 
 17:  40  40  20  10

Comment 2 Andrea Bolognani 2019-06-27 10:33:51 UTC
I haven't really given this more than a cursory look, but from the
symptoms it looks like it could be related to Bug 1716908 (see also
Bug 1703661 for a more complete explanation of the issue).

That bug has been addressed in libvirt-4.5.0-25 while you're using
libvirt-4.5.0-24... Can you please update libvirt and try again?
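
Something along these lines should do it (a minimal sketch, assuming the updated build is available in your enabled repositories and the standard libvirtd service is in use; the guest has to be restarted so the new code path is actually exercised):
# dnf update -y libvirt
# systemctl restart libvirtd
# virsh destroy avocado-vt-vm1 && virsh start avocado-vt-vm1
# rpm -q libvirt
# grep huge /proc/$(pgrep -f avocado-vt-vm1 | head -n1)/numa_maps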

Comment 3 Junxiang Li 2019-06-28 02:25:33 UTC
(In reply to Andrea Bolognani from comment #2)
> I haven't really given this more than a cursory look, but from the
> symptoms it looks like it could be related to Bug 1716908 (see also
> Bug 1703661 for a more complete explanation of the issue).
> 
> That bug has been addressed in libvirt-4.5.0-25 while you're using
> libvirt-4.5.0-24... Can you please update libvirt and try again?

Yes, after updating libvirt to 4.5.0-25 it works as expected now.

*** This bug has been marked as a duplicate of bug 1716908 ***

Comment 4 Andrea Bolognani 2019-06-28 08:35:33 UTC
(In reply to Junxiang Li from comment #3)
> (In reply to Andrea Bolognani from comment #2)
> > That bug has been addressed in libvirt-4.5.0-25 while you're using
> > libvirt-4.5.0-24... Can you please update libvirt and try again?
> 
> Yes, after updating libvirt to 4.5.0-25 it works as expected now.

Great news, thanks for verifying! :)

