Bug 1917560 - hwloc does not work on Power10
Summary: hwloc does not work on Power10
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: hwloc
Version: 8.4
Hardware: ppc64le
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 8.5
Assignee: Prarit Bhargava
QA Contact: Jeff Bastian
Docs Contact: Jaroslav Klech
URL:
Whiteboard:
Depends On:
Blocks: 1916117
 
Reported: 2021-01-18 18:13 UTC by Jiri Hladky
Modified: 2021-11-10 08:24 UTC
CC: 9 users

Fixed In Version: hwloc-2.2.0-2.el8
Doc Type: Bug Fix
Doc Text:
.The `hwloc` commands now return correct data on single CPU Power9 and Power10 logical partitions

With the `hwloc` utility of version 2.2.0, any single-node Non-Uniform Memory Access (NUMA) system running a Power9 or Power10 CPU was considered to be "disallowed". Consequently, all `hwloc` commands failed, because NODE0 (socket 0, CPU 0) was offline while the `hwloc` source code expected NODE0 to be online. The following error message was displayed:

----
Topology does not contain any NUMA node, aborting!
----

With this update, the `hwloc` source code checks whether NODE0 is online before querying it. If NODE0 is not online, the code proceeds to the next online NODE. As a result, `hwloc` commands no longer return errors in the described scenario.
Clone Of:
Environment:
Last Closed: 2021-11-09 19:42:12 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
debug output of hwloc-1.11.9-3.el8 (24.85 KB, text/plain), 2021-01-18 21:26 UTC, Jeff Bastian
debug output of hwloc-2.2.0-1.el8 (24.43 KB, text/plain), 2021-01-18 21:27 UTC, Jeff Bastian
Patch I got from upstream (2.42 KB, patch), 2021-04-26 20:07 UTC, Jiri Hladky


Links
IBM Linux Technology Center 190863, last updated 2021-01-18 22:47:21 UTC
Red Hat Product Errata RHBA-2021:4416, last updated 2021-11-09 19:42:14 UTC

Description Jiri Hladky 2021-01-18 18:13:17 UTC
Description of problem:

None of the commands from the hwloc package work on Power10 servers.

Error message:
Topology does not contain any NUMA node, aborting!


Version-Release number of selected component (if applicable):


How reproducible:

Always. Tested on RHEL-8.4.0-20210108.n.0 in IBM YZ on

rhyzden1-lp8.yz.ibm
https://beaker.yz.ibm/view/rhyzden1-lp8.yz.ibm#details

rhyzden1-lp9.yz.ibm
https://beaker.yz.ibm/view/rhyzden1-lp9.yz.ibm#details

Steps to Reproduce:
1. hwloc-calc -N core machine:0


Actual results:
Topology does not contain any NUMA node, aborting!


Expected results:
No error message. The above command should return the number of cores.

For example, on Power9 the above command returns "1":

zz1-lp9.yz.ibm
https://beaker.yz.ibm/view/zz1-lp9.yz.ibm#details

$ hwloc-calc -N core machine:0
1


Additional info:

Kernel Performance Tests rely heavily on hwloc-calc commands, so this bug blocks Kernel Performance Testing on Power10.

Comment 1 janani 2021-01-18 19:11:56 UTC
Jiri -- do you know what happens when you run it on an LPAR with multiple cores? For example, rhyzden1-lp11 and rhyzden1-lp12 have multiple NUMA nodes.

Comment 2 Jeff Bastian 2021-01-18 20:16:06 UTC
lp12 looks good.  It seems the problem is only on the single CPU LPARs.


[root@rhyzden1-lp12 ~]# hwloc-calc -N core machine:0
4

[root@rhyzden1-lp12 ~]# lstopo-no-graphics 
Machine (62GB total)
  Package L#0
    NUMANode L#0 (P#0 14GB)
    L3 L#0 (8KB) + Core L#0
      L2 L#0 (256KB) + L1d L#0 (64KB) + L1i L#0 (32KB)
        Die L#0 + PU L#0 (P#0)
        PU L#1 (P#2)
        PU L#2 (P#4)
        PU L#3 (P#6)
      L2 L#1 (256KB) + L1d L#1 (64KB) + L1i L#1 (32KB)
        PU L#4 (P#1)
        PU L#5 (P#3)
        PU L#6 (P#5)
        PU L#7 (P#7)
  Package L#1
    NUMANode L#1 (P#1 16GB)
    L3 L#1 (8KB) + Core L#1
      L2 L#2 (256KB) + L1d L#2 (64KB) + L1i L#2 (32KB)
        Die L#1 + PU L#8 (P#8)
        PU L#9 (P#10)
        PU L#10 (P#12)
        PU L#11 (P#14)
      L2 L#3 (256KB) + L1d L#3 (64KB) + L1i L#3 (32KB)
        PU L#12 (P#9)
        PU L#13 (P#11)
        PU L#14 (P#13)
        PU L#15 (P#15)
  Package L#2
    NUMANode L#2 (P#2 16GB)
    L3 L#2 (8KB) + Core L#2
      L2 L#4 (256KB) + L1d L#4 (64KB) + L1i L#4 (32KB)
        Die L#2 + PU L#16 (P#16)
        PU L#17 (P#18)
        PU L#18 (P#20)
        PU L#19 (P#22)
      L2 L#5 (256KB) + L1d L#5 (64KB) + L1i L#5 (32KB)
        PU L#20 (P#17)
        PU L#21 (P#19)
        PU L#22 (P#21)
        PU L#23 (P#23)
  Package L#3
    NUMANode L#3 (P#3 16GB)
    L3 L#3 (8KB) + Core L#3
      L2 L#6 (256KB) + L1d L#6 (64KB) + L1i L#6 (32KB)
        Die L#3 + PU L#24 (P#24)
        PU L#25 (P#26)
        PU L#26 (P#28)
        PU L#27 (P#30)
      L2 L#7 (256KB) + L1d L#7 (64KB) + L1i L#7 (32KB)
        PU L#28 (P#25)
        PU L#29 (P#27)
        PU L#30 (P#29)
        PU L#31 (P#31)
  Block(Disk) "sda"
  Net "env2"

Comment 3 Jeff Bastian 2021-01-18 20:20:26 UTC
Note: hwloc is getting a rebase in RHEL-8.4.0.  See Red Hat bug 1841354.

RHEL-8.3.0: hwloc-1.11.9-3.el8
RHEL-8.4.0: hwloc-2.2.0-1.el8

Comment 4 Jeff Bastian 2021-01-18 20:26:29 UTC
The older 1.11.9-3.el8 rpm works on a single CPU (8 threads) P10 system:

[root@rhyzden1-lp1 ~]# lscpu | grep -i -e cpu -e thread -e core -e socket
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  8
Core(s) per socket:  1
Socket(s):           1
NUMA node2 CPU(s):   0-7
Physical sockets:    2
Physical cores/chip: 11

[root@rhyzden1-lp1 ~]# rpm -q hwloc
hwloc-1.11.9-3.el8.ppc64le

[root@rhyzden1-lp1 ~]# hwloc-calc -N core machine:0
1

[root@rhyzden1-lp1 ~]# lstopo-no-graphics 
Machine (7614MB) + Package L#0 + L3 L#0 (8KB) + Core L#0
  L2 L#0 (256KB) + L1d L#0 (64KB) + L1i L#0 (32KB)
    PU L#0 (P#0)
    PU L#1 (P#2)
    PU L#2 (P#4)
    PU L#3 (P#6)
  L2 L#1 (256KB) + L1d L#1 (64KB) + L1i L#1 (32KB)
    PU L#4 (P#1)
    PU L#5 (P#3)
    PU L#6 (P#5)
    PU L#7 (P#7)




But the update to version 2.2.0 does not like it:

[root@rhyzden1-lp1 ~]# yum -y update hwloc
...

[root@rhyzden1-lp1 ~]# rpm -q hwloc
hwloc-2.2.0-1.el8.ppc64le

[root@rhyzden1-lp1 ~]# hwloc-calc -N core machine:0
Topology does not contain any NUMA node, aborting!

[root@rhyzden1-lp1 ~]# lstopo-no-graphics
Topology does not contain any NUMA node, aborting!
hwloc_topology_load() failed (No such file or directory).

Comment 5 Jeff Bastian 2021-01-18 20:47:30 UTC
The relevant chunk of code from each version:

hwloc-1.11.9/src/topology.c
============================
  2665    hwloc_debug("%s", "\nRemoving empty objects except numa nodes and PCI devices\n");
  2666    remove_empty(topology, &topology->levels[0][0]);
  2667    if (!topology->levels[0][0]) {
  2668      fprintf(stderr, "Topology became empty, aborting!\n");
  2669      abort();
  2670    }
  2671    hwloc_debug_print_objects(0, topology->levels[0][0]);



hwloc-2.2.0/hwloc/topology.c
============================
  3449    hwloc_debug("%s", "\nRemoving empty objects\n");
  3450    remove_empty(topology, &topology->levels[0][0]);
  3451    if (!topology->levels[0][0]) {
  3452      fprintf(stderr, "Topology became empty, aborting!\n");
  3453      return -1;
  3454    }
  3455    if (hwloc_bitmap_iszero(topology->levels[0][0]->cpuset)) {
  3456      fprintf(stderr, "Topology does not contain any PU, aborting!\n");
  3457      return -1;
  3458    }
  3459    if (hwloc_bitmap_iszero(topology->levels[0][0]->nodeset)) {
  3460      fprintf(stderr, "Topology does not contain any NUMA node, aborting!\n");
  3461      return -1;
  3462    }
  3463    hwloc_debug_print_objects(0, topology->levels[0][0]);

Comment 6 Jeff Bastian 2021-01-18 21:25:04 UTC
I recompiled hwloc-1.11.9 and hwloc-2.2.0 from source and added --enable-debug to the configure flags.  Set the HWLOC_DEBUG_VERBOSE environment variable to 1 to get the debug output.

[root@rhyzden1-lp1 ~]# HWLOC_DEBUG_VERBOSE=1 hwloc-calc -N core machine:0 2>&1 | tee $(rpm -q hwloc).debug

Attaching the output of each version.

Comment 7 Jeff Bastian 2021-01-18 21:26:26 UTC
Created attachment 1748571 [details]
debug output of hwloc-1.11.9-3.el8

Comment 8 Jeff Bastian 2021-01-18 21:27:00 UTC
Created attachment 1748572 [details]
debug output of hwloc-2.2.0-1.el8

Comment 9 Jeff Bastian 2021-01-18 21:44:09 UTC
hwloc-2.2.0 is removing the single NUMA node for some reason:

$ grep Removing.empty /tmp/hwloc-2.2.0-1.el8.dbg.ppc64le.debug 
Removing empty objects
Removing empty object NUMANode#0(local=7797120KB total=0KB) cpuset 0x000000ff complete 0x000000ff nodeset 0x0 completeN 0x00000001


The older version doesn't remove any objects:

$ grep Removing.empty /tmp/hwloc-1.11.9-3.el8.dbg.ppc64le.debug 
Removing empty objects except numa nodes and PCI devices

Comment 10 Jeff Bastian 2021-01-18 21:50:43 UTC
Good news (if you can call it that): this also fails on POWER9 single CPU LPARs.

[root@ibm-p9z-20-lp15 ~]# lscpu | grep -i -e cpu -e socket -e thread -e model
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  8
Core(s) per socket:  1
Socket(s):           1
Model:               2.2 (pvr 004e 0202)
Model name:          POWER9 (architected), altivec supported
NUMA node1 CPU(s):   0-7
Physical sockets:    2

[root@ibm-p9z-20-lp15 ~]# hwloc-calc -N core machine:0
Topology does not contain any NUMA node, aborting!

[root@ibm-p9z-20-lp15 ~]# lstopo-no-graphics 
Topology does not contain any NUMA node, aborting!
hwloc_topology_load() failed (No such file or directory).

Comment 11 Jeff Bastian 2021-01-18 22:11:04 UTC
Truly good news: with hwloc-2.1.0 and newer, single-node NUMA systems are considered "disallowed", so hwloc prints the error and exits early. But you can override this with the --disallowed flag:

[root@rhyzden1-lp1 ~]# rpm -q hwloc
hwloc-2.2.0-1.el8.ppc64le

[root@rhyzden1-lp1 ~]# hwloc-calc -N core machine:0
Topology does not contain any NUMA node, aborting!

[root@rhyzden1-lp1 ~]# hwloc-calc --disallowed -N core machine:0
1

[root@rhyzden1-lp1 ~]# lstopo-no-graphics 
Topology does not contain any NUMA node, aborting!
hwloc_topology_load() failed (No such file or directory).

[root@rhyzden1-lp1 ~]# lstopo-no-graphics --disallowed
Machine (7614MB total)
  Package L#0
    NUMANode L#0 (P#0 7614MB)
    L3 L#0 (8KB) + Core L#0
      L2 L#0 (256KB) + L1d L#0 (64KB) + L1i L#0 (32KB)
        Die L#0 + PU L#0 (P#0)
        PU L#1 (P#2)
        PU L#2 (P#4)
        PU L#3 (P#6)
      L2 L#1 (256KB) + L1d L#1 (64KB) + L1i L#1 (32KB)
        PU L#4 (P#1)
        PU L#5 (P#3)
        PU L#6 (P#5)
        PU L#7 (P#7)
  Block(Disk) "sda"
  Net "env2"


Jirka, can you update your tests to add the --disallowed flag and try again?

Comment 12 Jeff Bastian 2021-01-18 22:17:02 UTC
For the record, I discovered the --disallowed flag by checking the 'git blame' output of the remove_empty() function in topology.c.  In particular, this commit caught my eye because it removed a check for NUMA node objects:

-  if (obj->type != HWLOC_OBJ_NUMANODE

commit 1cfb5af0ad21a809d9e82ea4c65f20350ccac31c
Author: Brice Goglin <Brice.Goglin>
Date:   Thu Nov 2 23:27:26 2017 +0100

    core: remove disallowed nodes entirely (unless WHOLE_SYSTEM)

    Now that NUMA nodes are (memory-child) leaves, there's no need
    to keep them with memory=0, just remove them entirely.
    
    In v1.X, things were different because NUMA nodes may have CPU
    children. Removing NUMA nodes without their children would
    make the topology asymmetric, etc.
    
    Signed-off-by: Brice Goglin <Brice.Goglin>


That clue led to this note in the NEWS file:

  + Improve the API for dealing with disallowed resources
    - HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM is replaced with FLAG_INCLUDE_DISALLOWED
      and --whole-system command-line options with --disallowed.
      . Former names are still accepted for backward compatibility.
    - Add hwloc_topology_allow() for changing allowed sets after load().
    - Add the HWLOC_ALLOW=all environment variable to totally ignore
      administrative restrictions such as Linux Cgroups.
    - Add disallowed_pu and disallowed_numa bits to the discovery support
      structure.
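As a rough sketch of how these overrides from the NEWS excerpt could collapse into a single "include disallowed objects" decision (illustrative Python only, not hwloc internals; the function name is hypothetical):

```python
import os

def include_disallowed(cli_flags, env=None):
    """Resolve the decision from the per-command flag (old spelling kept
    for backward compatibility) and the process-wide environment variable."""
    if env is None:
        env = os.environ
    if "--disallowed" in cli_flags or "--whole-system" in cli_flags:
        return True  # per-command override
    if env.get("HWLOC_ALLOW") == "all":
        return True  # ignores administrative restrictions such as cgroups
    return False

print(include_disallowed(["--disallowed"]))                # True
print(include_disallowed([], env={"HWLOC_ALLOW": "all"}))  # True
print(include_disallowed([], env={}))                      # False
```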

Comment 13 Jiri Hladky 2021-01-19 07:01:18 UTC
Hi Jeff,


thanks a lot for the quick resolution and all the details! The fix is very simple in the end: you just need to set the

HWLOC_ALLOW=all

environment variable and all my scripts work again.

From my side, we can close this BZ. Thanks a lot!
Jirka

Comment 14 Jiri Hladky 2021-01-19 11:30:50 UTC
I think we should document this change in hwloc's behavior. I have proposed a doc text, feel free to modify it.

Comment 15 Jeff Bastian 2021-01-19 21:36:55 UTC
I'm adding the "Documentation" Keyword to this BZ since there's nothing to actually fix in hwloc itself, but we need to keep the BZ open until the RHEL-8.4.0 Release Notes (or Tech Notes) are updated.

Comment 22 Jiri Hladky 2021-04-26 20:03:46 UTC
Hi,

I have talked to upstream and they were very responsive. We got a fix. I will attach it.

For RHEL-8.4, I think documenting this issue is fine. For RHEL-8.5, we should backport the patch. 

@Prarit - should I open a new BZ for RHEL-8.5 or can we use this one? 

Thanks a lot
Jirka

Comment 23 Jiri Hladky 2021-04-26 20:07:37 UTC
Created attachment 1775677 [details]
Patch I got from upstream

The patch is simple. I have applied it manually and tested that it works on rhyzden1-lp8.yz.ibm.

It should appear soon on GitHub as well (it's not there yet):

https://github.com/open-mpi/hwloc/commits/master

Comment 24 Jaroslav Klech 2021-04-27 06:03:13 UTC
Prarit, could you please look at comment 22?

Thanks

Comment 25 Jiri Hladky 2021-04-27 22:05:39 UTC
The patch is now available on github:

https://github.com/open-mpi/hwloc/commit/0114c2b0b3e39265e0829eebfff87ac9f4412fe9
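The idea behind the fix, as summarized in the doc text (check whether NODE0 is online and otherwise proceed to the next online node), can be sketched in Python; the helper names are hypothetical and this is not the actual patch:

```python
def parse_node_list(s):
    """Parse a Linux sysfs list such as '0-3,8' (the format of
    /sys/devices/system/node/online) into a sorted list of node ids."""
    nodes = []
    for part in s.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            nodes.extend(range(int(lo), int(hi) + 1))
        else:
            nodes.append(int(part))
    return sorted(nodes)

def first_online_node(online_str):
    """Return the first online NUMA node instead of assuming node 0."""
    return parse_node_list(online_str)[0]

# On the failing LPAR only node 2 was online (lscpu: "NUMA node2 CPU(s): 0-7"):
print(first_online_node("2"))    # 2, rather than aborting on offline node 0
print(first_online_node("0-3"))  # 0 on a conventional system
```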

Comment 26 Prarit Bhargava 2021-05-10 18:30:43 UTC
I'll see if I can get this into RHEL8.5.

P.

Comment 29 Prarit Bhargava 2021-05-17 19:46:28 UTC
jhladky, could you test with https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=36851005 to confirm the fix?

P.

Comment 30 Jeff Bastian 2021-05-17 21:54:58 UTC
I'll let Jiri run some more detailed tests, but a quick smoke test of the brew build in comment 29 looks good on a POWER10 Rainier LPAR.

::::::::::::
:: Before ::
::::::::::::

[root@rhyzrain2-lp19 ~]# rpm -q hwloc
hwloc-2.2.0-1.el8.ppc64le

[root@rhyzrain2-lp19 ~]# hwloc-calc -N core machine:0
Topology does not contain any NUMA node, aborting!

[root@rhyzrain2-lp19 ~]# lstopo-no-graphics
Topology does not contain any NUMA node, aborting!
hwloc_topology_load() failed (No such file or directory).

:::::::::::
:: After ::
:::::::::::

[root@rhyzrain2-lp19 ~]# rpm -q hwloc
hwloc-2.2.0-2.el8.ppc64le

[root@rhyzrain2-lp19 ~]# hwloc-calc -N core machine:0
4

[root@rhyzrain2-lp19 ~]# lstopo-no-graphics
Machine (15GB total)
  Package L#0
    NUMANode L#0 (P#2 15GB)
    Core L#0
      L1d L#0 (32KB) + L1i L#0 (48KB)
        Die L#0 + PU L#0 (P#0)
        PU L#1 (P#2)
        PU L#2 (P#4)
        PU L#3 (P#6)
      L1d L#1 (32KB) + L1i L#1 (48KB)
        PU L#4 (P#1)
        PU L#5 (P#3)
        PU L#6 (P#5)
        PU L#7 (P#7)
    Core L#1
      L1d L#2 (32KB) + L1i L#2 (48KB)
        Die L#1 + PU L#8 (P#8)
        PU L#9 (P#10)
        PU L#10 (P#12)
        PU L#11 (P#14)
      L1d L#3 (32KB) + L1i L#3 (48KB)
        PU L#12 (P#9)
        PU L#13 (P#11)
        PU L#14 (P#13)
        PU L#15 (P#15)
    Core L#2
      L1d L#4 (32KB) + L1i L#4 (48KB)
        Die L#2 + PU L#16 (P#16)
        PU L#17 (P#18)
        PU L#18 (P#20)
        PU L#19 (P#22)
      L1d L#5 (32KB) + L1i L#5 (48KB)
        PU L#20 (P#17)
        PU L#21 (P#19)
        PU L#22 (P#21)
        PU L#23 (P#23)
    Core L#3
      L1d L#6 (32KB) + L1i L#6 (48KB)
        Die L#3 + PU L#24 (P#24)
        PU L#25 (P#26)
        PU L#26 (P#28)
        PU L#27 (P#30)
      L1d L#7 (32KB) + L1i L#7 (48KB)
        PU L#28 (P#25)
        PU L#29 (P#27)
        PU L#30 (P#29)
        PU L#31 (P#31)
  Block(Disk) "sdd"
  Block(Disk) "sdb"
  Block(Disk) "sdc"
  Block(Disk) "sda"
  Net "env2"

Comment 31 Jiri Hladky 2021-05-18 13:10:41 UTC
Hi Jeff and Prarit,

thanks for the great news! I have tested a new version of hwloc on rhyzden1-lp9.yz.ibm running RHEL-8.5.0-20210506.d.3 

hwloc-2.2.0-1.el8.ppc64le.rpm => confirmed that issue can be reproduced
hwloc-2.2.0-2.el8.ppc64le.rpm => problem is fixed!

hwloc-calc -N core machine:0
1

Setting the state to Verified: Tested. 

Jirka

Comment 35 Jeff Bastian 2021-05-19 20:01:22 UTC
Verified with hwloc-2.2.0-2.el8

:::::::::::::::::::::::::::::::
:: ppc64le - POWER10 PowerVM ::
:::::::::::::::::::::::::::::::

[root@rhyzrain2-lp21 ~]# rpm -q hwloc
hwloc-2.2.0-2.el8.ppc64le

[root@rhyzrain2-lp21 ~]# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  8
Core(s) per socket:  1
Socket(s):           1
NUMA node(s):        1
Model:               1.0 (pvr 0080 0100)
Model name:          POWER10 (architected), altivec supported
Hypervisor vendor:   pHyp
Virtualization type: para
L1d cache:           32K
L1i cache:           48K
L2 cache:            1024K
L3 cache:            4096K
NUMA node0 CPU(s):   0-7
Physical sockets:    1
Physical chips:      4
Physical cores/chip: 14

[root@rhyzrain2-lp21 ~]# hwloc-calc -N core machine:0
1

[root@rhyzrain2-lp21 ~]# lstopo-no-graphics 
Machine (15GB total)
  Package L#0
    NUMANode L#0 (P#0 15GB)
    L3 L#0 (4096KB) + Core L#0
      L2 L#0 (1024KB) + L1d L#0 (32KB) + L1i L#0 (48KB)
        Die L#0 + PU L#0 (P#0)
        PU L#1 (P#2)
        PU L#2 (P#4)
        PU L#3 (P#6)
      L2 L#1 (1024KB) + L1d L#1 (32KB) + L1i L#1 (48KB)
        PU L#4 (P#1)
        PU L#5 (P#3)
        PU L#6 (P#5)
        PU L#7 (P#7)
  Block(Disk) "sdd"
  Block(Disk) "sdb"
  Block(Disk) "sdc"
  Block(Disk) "sda"
  Net "env2"

::::::::::::::::::::::::::::::
:: ppc64le - POWER9 PowerNV ::
::::::::::::::::::::::::::::::

[root@raven ~]# rpm -q hwloc
hwloc-2.2.0-2.el8.ppc64le

[root@raven ~]# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              16
On-line CPU(s) list: 0-15
Thread(s) per core:  4
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
Model:               2.2 (pvr 004e 1202)
Model name:          POWER9, altivec supported
CPU max MHz:         3800.0000
CPU min MHz:         2166.0000
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            10240K
NUMA node0 CPU(s):   0-15

[root@raven ~]# hwloc-calc -N core machine:0
4

[root@raven ~]# lstopo-no-graphics 
Machine (31GB total)
  Package L#0
    NUMANode L#0 (P#0 31GB)
    L3 L#0 (10MB) + L2 L#0 (512KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
      Die L#0 + PU L#0 (P#0)
      PU L#1 (P#1)
      PU L#2 (P#2)
      PU L#3 (P#3)
    L3 L#1 (10MB) + L2 L#1 (512KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
      Die L#1 + PU L#4 (P#4)
      PU L#5 (P#5)
      PU L#6 (P#6)
      PU L#7 (P#7)
    L3 L#2 (10MB) + L2 L#2 (512KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2
      Die L#2 + PU L#8 (P#8)
      PU L#9 (P#9)
      PU L#10 (P#10)
      PU L#11 (P#11)
    L3 L#3 (10MB) + L2 L#3 (512KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3
      Die L#3 + PU L#12 (P#12)
      PU L#13 (P#13)
      PU L#14 (P#14)
      PU L#15 (P#15)
  HostBridge
    PCIBridge
      PCI 0001:01:00.0 (NVMExp)
        Block(Disk) "nvme0n1"
  HostBridge
    PCIBridge
      PCI 0002:01:00.0 (SATA)
  HostBridge
    PCIBridge
      PCI 0004:01:00.0 (Ethernet)
        Net "enP4p1s0f0"
      PCI 0004:01:00.1 (Ethernet)
        Net "enP4p1s0f1"
      PCI 0004:01:00.2 (Ethernet)
        Net "enP4p1s0f2"
  HostBridge
    PCIBridge
      PCIBridge
        PCI 0005:02:00.0 (VGA)

::::::::::::::::::::::::::
:: ppc64le - POWER9 KVM ::
::::::::::::::::::::::::::

[root@localhost ~]# rpm -q hwloc
hwloc-2.2.0-2.el8.ppc64le

[root@localhost ~]# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              1
On-line CPU(s) list: 0
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s):           1
NUMA node(s):        1
Model:               2.2 (pvr 004e 1202)
Model name:          POWER9 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           32K
L1i cache:           32K
NUMA node0 CPU(s):   0

[root@localhost ~]# hwloc-calc -N core machine:0
1

[root@localhost ~]# lstopo-no-graphics
Machine (3528MB total)
  Package L#0
    NUMANode L#0 (P#0 3528MB)
    L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
  HostBridge
    PCI 00:01.0 (Ethernet)
      Net "enp0s1"
    PCI 00:04.0 (SCSI)
      Block "vda"


::::::::::::
:: x86_64 ::
::::::::::::

[root@dell-per7425-07 ~]# rpm -q hwloc
hwloc-2.2.0-2.el8.x86_64

[root@dell-per7425-07 ~]# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              32
On-line CPU(s) list: 0-31
Thread(s) per core:  2
Core(s) per socket:  8
Socket(s):           2
NUMA node(s):        8
Vendor ID:           AuthenticAMD
BIOS Vendor ID:      AMD
CPU family:          23
Model:               1
Model name:          AMD EPYC 7251 8-Core Processor
BIOS Model name:     AMD EPYC 7251 8-Core Processor                 
Stepping:            2
CPU MHz:             2325.228
CPU max MHz:         2100.0000
CPU min MHz:         1200.0000
BogoMIPS:            4192.05
Virtualization:      AMD-V
L1d cache:           32K
L1i cache:           64K
L2 cache:            512K
L3 cache:            4096K
NUMA node0 CPU(s):   0,8,16,24
NUMA node1 CPU(s):   2,10,18,26
NUMA node2 CPU(s):   4,12,20,28
NUMA node3 CPU(s):   6,14,22,30
NUMA node4 CPU(s):   1,9,17,25
NUMA node5 CPU(s):   3,11,19,27
NUMA node6 CPU(s):   5,13,21,29
NUMA node7 CPU(s):   7,15,23,31
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate sme ssbd sev ibpb vmmcall sev_es fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 xsaves clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca

[root@dell-per7425-07 ~]# hwloc-calc -N core machine:0
16

[root@dell-per7425-07 ~]# lstopo-no-graphics
Machine (31GB total)
  Package L#0
    Group0 L#0
      L3 L#0 (4096KB) + L2 L#0 (512KB) + L1d L#0 (32KB) + L1i L#0 (64KB) + Core L#0
        PU L#0 (P#0)
        PU L#1 (P#16)
      L3 L#1 (4096KB) + L2 L#1 (512KB) + L1d L#1 (32KB) + L1i L#1 (64KB) + Core L#1
        PU L#2 (P#8)
        PU L#3 (P#24)
      HostBridge
        PCIBridge
          PCI 02:00.0 (Ethernet)
            Net "eno3"
          PCI 02:00.1 (Ethernet)
            Net "eno4"
        PCIBridge
          PCI 01:00.0 (Ethernet)
            Net "eno1"
          PCI 01:00.1 (Ethernet)
            Net "eno2"
        PCIBridge
          PCIBridge
            PCI 04:00.0 (VGA)
        PCIBridge
          PCI 06:00.2 (SATA)
    Group0 L#1
      NUMANode L#0 (P#1 15GB)
      L3 L#2 (4096KB) + L2 L#2 (512KB) + L1d L#2 (32KB) + L1i L#2 (64KB) + Core L#2
        PU L#4 (P#2)
        PU L#5 (P#18)
      L3 L#3 (4096KB) + L2 L#3 (512KB) + L1d L#3 (32KB) + L1i L#3 (64KB) + Core L#3
        PU L#6 (P#10)
        PU L#7 (P#26)
    Group0 L#2
      L3 L#4 (4096KB) + L2 L#4 (512KB) + L1d L#4 (32KB) + L1i L#4 (64KB) + Core L#4
        PU L#8 (P#4)
        PU L#9 (P#20)
      L3 L#5 (4096KB) + L2 L#5 (512KB) + L1d L#5 (32KB) + L1i L#5 (64KB) + Core L#5
        PU L#10 (P#12)
        PU L#11 (P#28)
    Group0 L#3
      L3 L#6 (4096KB) + L2 L#6 (512KB) + L1d L#6 (32KB) + L1i L#6 (64KB) + Core L#6
        PU L#12 (P#6)
        PU L#13 (P#22)
      L3 L#7 (4096KB) + L2 L#7 (512KB) + L1d L#7 (32KB) + L1i L#7 (64KB) + Core L#7
        PU L#14 (P#14)
        PU L#15 (P#30)
      HostBridge
        PCIBridge
          PCI 61:00.0 (RAID)
            Block(Disk) "sda"
  Package L#1
    Group0 L#4
      L3 L#8 (4096KB) + L2 L#8 (512KB) + L1d L#8 (32KB) + L1i L#8 (64KB) + Core L#8
        PU L#16 (P#1)
        PU L#17 (P#17)
      L3 L#9 (4096KB) + L2 L#9 (512KB) + L1d L#9 (32KB) + L1i L#9 (64KB) + Core L#9
        PU L#18 (P#9)
        PU L#19 (P#25)
    Group0 L#5
      NUMANode L#1 (P#5 16GB)
      L3 L#10 (4096KB) + L2 L#10 (512KB) + L1d L#10 (32KB) + L1i L#10 (64KB) + Core L#10
        PU L#20 (P#3)
        PU L#21 (P#19)
      L3 L#11 (4096KB) + L2 L#11 (512KB) + L1d L#11 (32KB) + L1i L#11 (64KB) + Core L#11
        PU L#22 (P#11)
        PU L#23 (P#27)
    Group0 L#6
      L3 L#12 (4096KB) + L2 L#12 (512KB) + L1d L#12 (32KB) + L1i L#12 (64KB) + Core L#12
        PU L#24 (P#5)
        PU L#25 (P#21)
      L3 L#13 (4096KB) + L2 L#13 (512KB) + L1d L#13 (32KB) + L1i L#13 (64KB) + Core L#13
        PU L#26 (P#13)
        PU L#27 (P#29)
    Group0 L#7
      L3 L#14 (4096KB) + L2 L#14 (512KB) + L1d L#14 (32KB) + L1i L#14 (64KB) + Core L#14
        PU L#28 (P#7)
        PU L#29 (P#23)
      L3 L#15 (4096KB) + L2 L#15 (512KB) + L1d L#15 (32KB) + L1i L#15 (64KB) + Core L#15
        PU L#30 (P#15)
        PU L#31 (P#31)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)


:::::::::::::
:: aarch64 ::
:::::::::::::

[root@ampere-hr330a-04 ~]# rpm -q hwloc
hwloc-2.2.0-2.el8.aarch64

[root@ampere-hr330a-04 ~]# lscpu
Architecture:        aarch64
Byte Order:          Little Endian
CPU(s):              32
On-line CPU(s) list: 0-31
Thread(s) per core:  1
Core(s) per socket:  32
Socket(s):           1
NUMA node(s):        1
Vendor ID:           APM
BIOS Vendor ID:      Ampere(TM)
Model:               2
Model name:          X-Gene
BIOS Model name:     eMAG 
Stepping:            0x3
CPU max MHz:         3300.0000
CPU min MHz:         375.0000
BogoMIPS:            80.00
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
NUMA node0 CPU(s):   0-31
Flags:               fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid

[root@ampere-hr330a-04 ~]# hwloc-calc -N core machine:0
32

[root@ampere-hr330a-04 ~]# lstopo-no-graphics
Machine (127GB total)
  Package L#0
    NUMANode L#0 (P#0 127GB)
    L2 L#0 (256KB)
      Die L#0 + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
      Die L#1 + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
    L2 L#1 (256KB)
      Die L#2 + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#2)
      Die L#3 + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#3)
    L2 L#2 (256KB)
      Die L#4 + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#4)
      Die L#5 + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#5)
    L2 L#3 (256KB)
      Die L#6 + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#6)
      Die L#7 + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#7)
    L2 L#4 (256KB)
      Die L#8 + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#8)
      Die L#9 + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#9)
    L2 L#5 (256KB)
      Die L#10 + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#10)
      Die L#11 + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#11)
    L2 L#6 (256KB)
      Die L#12 + L1d L#12 (32KB) + L1i L#12 (32KB) + Core L#12 + PU L#12 (P#12)
      Die L#13 + L1d L#13 (32KB) + L1i L#13 (32KB) + Core L#13 + PU L#13 (P#13)
    L2 L#7 (256KB)
      Die L#14 + L1d L#14 (32KB) + L1i L#14 (32KB) + Core L#14 + PU L#14 (P#14)
      Die L#15 + L1d L#15 (32KB) + L1i L#15 (32KB) + Core L#15 + PU L#15 (P#15)
    L2 L#8 (256KB)
      Die L#16 + L1d L#16 (32KB) + L1i L#16 (32KB) + Core L#16 + PU L#16 (P#16)
      Die L#17 + L1d L#17 (32KB) + L1i L#17 (32KB) + Core L#17 + PU L#17 (P#17)
    L2 L#9 (256KB)
      Die L#18 + L1d L#18 (32KB) + L1i L#18 (32KB) + Core L#18 + PU L#18 (P#18)
      Die L#19 + L1d L#19 (32KB) + L1i L#19 (32KB) + Core L#19 + PU L#19 (P#19)
    L2 L#10 (256KB)
      Die L#20 + L1d L#20 (32KB) + L1i L#20 (32KB) + Core L#20 + PU L#20 (P#20)
      Die L#21 + L1d L#21 (32KB) + L1i L#21 (32KB) + Core L#21 + PU L#21 (P#21)
    L2 L#11 (256KB)
      Die L#22 + L1d L#22 (32KB) + L1i L#22 (32KB) + Core L#22 + PU L#22 (P#22)
      Die L#23 + L1d L#23 (32KB) + L1i L#23 (32KB) + Core L#23 + PU L#23 (P#23)
    L2 L#12 (256KB)
      Die L#24 + L1d L#24 (32KB) + L1i L#24 (32KB) + Core L#24 + PU L#24 (P#24)
      Die L#25 + L1d L#25 (32KB) + L1i L#25 (32KB) + Core L#25 + PU L#25 (P#25)
    L2 L#13 (256KB)
      Die L#26 + L1d L#26 (32KB) + L1i L#26 (32KB) + Core L#26 + PU L#26 (P#26)
      Die L#27 + L1d L#27 (32KB) + L1i L#27 (32KB) + Core L#27 + PU L#27 (P#27)
    L2 L#14 (256KB)
      Die L#28 + L1d L#28 (32KB) + L1i L#28 (32KB) + Core L#28 + PU L#28 (P#28)
      Die L#29 + L1d L#29 (32KB) + L1i L#29 (32KB) + Core L#29 + PU L#29 (P#29)
    L2 L#15 (256KB)
      Die L#30 + L1d L#30 (32KB) + L1i L#30 (32KB) + Core L#30 + PU L#30 (P#30)
      Die L#31 + L1d L#31 (32KB) + L1i L#31 (32KB) + Core L#31 + PU L#31 (P#31)
  HostBridge
    PCIBridge
      PCI 0000:01:00.0 (Ethernet)
        Net "enp1s0f0"
        OpenFabrics "mlx5_0"
      PCI 0000:01:00.1 (Ethernet)
        Net "enp1s0f1"
        OpenFabrics "mlx5_1"
  HostBridge
    PCIBridge
      PCI 0002:01:00.0 (Ethernet)
        Net "enP2p1s0"
  HostBridge
    PCIBridge
      PCIBridge
        PCI 0007:02:00.0 (VGA)
  Block(Disk) "sda"
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)
  Misc(MemoryModule)


:::::::::::
:: s390x ::
:::::::::::

[root@ibm-z-137 ~]# rpm -q hwloc
hwloc-2.2.0-2.el8.s390x

[root@ibm-z-137 ~]# lscpu
Architecture:        s390x
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Big Endian
CPU(s):              2
On-line CPU(s) list: 0,1
Thread(s) per core:  1
Core(s) per socket:  1
Socket(s) per book:  1
Book(s) per drawer:  1
Drawer(s):           2
NUMA node(s):        1
Vendor ID:           IBM/S390
Machine type:        2964
CPU dynamic MHz:     5000
CPU static MHz:      5000
BogoMIPS:            3033.00
Hypervisor:          z/VM 6.4.0
Hypervisor vendor:   IBM
Virtualization type: full
Dispatching mode:    horizontal
L1d cache:           128K
L1i cache:           96K
L2d cache:           2048K
L2i cache:           2048K
L3 cache:            65536K
L4 cache:            491520K
NUMA node0 CPU(s):   0,1
Flags:               esan3 zarch stfle msa ldisp eimm dfp edat etf3eh highgprs te vx sie

[root@ibm-z-137 ~]# hwloc-calc -N core machine:0
2

[root@ibm-z-137 ~]# lstopo-no-graphics
Machine (3807MB total)
  NUMANode L#0 (P#0 3807MB)
  Package L#0 + L2d L#0 (2048KB) + L2i L#0 (2048KB) + L1d L#0 (128KB) + L1i L#0 (96KB) + Core L#0 + PU L#0 (P#0)
  Package L#1 + L2d L#1 (2048KB) + L2i L#1 (2048KB) + L1d L#1 (128KB) + L1i L#1 (96KB) + Core L#1 + PU L#1 (P#1)
  Block "dasda"
  Block "dasdb"
  Net "enc8000"

Comment 42 errata-xmlrpc 2021-11-09 19:42:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (hwloc bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4416

