Description of problem:
virsh nodeinfo reports 1 socket, when there are most definitely four. I am most concerned that this could be affecting something else in virt* in case this is a library error and not just a display error. Please let me know if you want me to provide other information. (Obviously there is a bug, because I'm not nearly cool enough to have a one-socket CPU with 48 cores.)

Version-Release number of selected component (if applicable):
CentOS 6.2

How reproducible:
100%. This is also reproducible on an identical "iron2" machine.

Steps to Reproduce:
1. Run: virsh nodeinfo

Actual results:
[root@iron1 ~]# virsh nodeinfo
CPU model:           x86_64
CPU(s):              48
CPU frequency:       1866 MHz
CPU socket(s):       1
Core(s) per socket:  6
Thread(s) per core:  2
NUMA cell(s):        4
Memory size:         66092028 kB

Expected results:
CPU socket(s): should show: 4

Additional info:
[root@iron1 ~]# virsh capabilities
<capabilities>

  <host>
    <uuid>00020003-0004-0005-0006-000700080009</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Nehalem</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='6' threads='2'/>
      <feature name='rdtscp'/>
      <feature name='x2apic'/>
      <feature name='dca'/>
      <feature name='xtpr'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
    </cpu>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='4'>
        <cell id='0'>
          <cpus num='12'>
            <cpu id='0'/> <cpu id='1'/> <cpu id='2'/> <cpu id='3'/> <cpu id='4'/> <cpu id='5'/>
            <cpu id='24'/> <cpu id='25'/> <cpu id='26'/> <cpu id='27'/> <cpu id='28'/> <cpu id='29'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='12'>
            <cpu id='6'/> <cpu id='7'/> <cpu id='8'/> <cpu id='9'/> <cpu id='10'/> <cpu id='11'/>
            <cpu id='30'/> <cpu id='31'/> <cpu id='32'/> <cpu id='33'/> <cpu id='34'/> <cpu id='35'/>
          </cpus>
        </cell>
        <cell id='2'>
          <cpus num='12'>
            <cpu id='12'/> <cpu id='13'/> <cpu id='14'/> <cpu id='15'/> <cpu id='16'/> <cpu id='17'/>
            <cpu id='36'/> <cpu id='37'/> <cpu id='38'/> <cpu id='39'/> <cpu id='40'/> <cpu id='41'/>
          </cpus>
        </cell>
        <cell id='3'>
          <cpus num='12'>
            <cpu id='18'/> <cpu id='19'/> <cpu id='20'/> <cpu id='21'/> <cpu id='22'/> <cpu id='23'/>
            <cpu id='42'/> <cpu id='43'/> <cpu id='44'/> <cpu id='45'/> <cpu id='46'/> <cpu id='47'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>selinux</model>
      <doi>0</doi>
    </secmodel>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.2.0</machine>
      <machine canonical='rhel6.2.0'>pc</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.2.0</machine>
      <machine canonical='rhel6.2.0'>pc</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>
[root@iron1 ~]#
> [root@iron1 ~]# virsh nodeinfo
> CPU model:           x86_64
> CPU(s):              48
> CPU frequency:       1866 MHz
> CPU socket(s):       1
> Core(s) per socket:  6
> Thread(s) per core:  2
> NUMA cell(s):        4
> Memory size:         66092028 kB

This is correct: CPU sockets in nodeinfo are counted per NUMA cell, and we can't change this because libvirt's public API contains a macro that computes the total number of CPUs as cells * sockets * cores * threads. See the documentation for the virNodeInfo structure: http://www.libvirt.org/html/libvirt-libvirt.html#virNodeInfo

> [root@iron1 ~]# virsh capabilities
> <capabilities>
>
>   <host>
>     <uuid>00020003-0004-0005-0006-000700080009</uuid>
>     <cpu>
>       <arch>x86_64</arch>
>       <model>Nehalem</model>
>       <vendor>Intel</vendor>
>       <topology sockets='1' cores='6' threads='2'/>

However, I believe we should fix the capabilities XML to show sockets='4' here. See also the related discussion upstream: https://www.redhat.com/archives/libvir-list/2012-March/msg01123.html
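The invariant described above can be sketched as plain arithmetic. This is a minimal illustration, not libvirt code: the macro name VIR_NODEINFO_MAXCPUS comes from libvirt's public headers, and the numbers are simply those from the nodeinfo output quoted above.

```python
# Sketch of how libvirt's VIR_NODEINFO_MAXCPUS macro recovers the total
# CPU count from the virNodeInfo fields shown by virsh nodeinfo. Because
# "CPU socket(s)" is counted per NUMA cell, the product of all four
# fields must equal the total CPU count.

nodes = 4    # NUMA cell(s)
sockets = 1  # CPU socket(s), reported per NUMA cell
cores = 6    # Core(s) per socket
threads = 2  # Thread(s) per core

max_cpus = nodes * sockets * cores * threads
print(max_cpus)  # 48, matching the "CPU(s): 48" line

# Reporting the physically correct sockets=4 here would break the invariant:
print(4 * 4 * 6 * 2)  # 192, not 48
```

So the seemingly wrong sockets=1 is what keeps the documented formula consistent with the actual 48 CPUs.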
Hi Jiri,

Okay, thank you for the response. Sorry my bug was not as meaningful as I thought. FWIW:

[root@iron1 ~]# numastat
                     node0      node1       node2      node3
numa_hit         759859462  811255273  1287634274  922956748
numa_miss          4087790    6793253     2567421    6412145
numa_foreign       4764728    4342527     4667721    6085633
interleave_hit        9957       9989        9965       9951
local_node       758826554  810175297  1286515022  921803909
other_node         5120698    7873229     3686673    7564984

Thanks!
James
AFAICT the capabilities bit was eventually fixed upstream, so closing.
Hi Jiri,

There is a similar issue when I test virsh nodeinfo; can you please check whether it is a bug?

Description of problem:
virsh nodeinfo reports the wrong number of "Core(s) per socket" on an AMD machine. An Intel machine does not show this issue.

Version-Release number of selected component (if applicable):
libvirt-4.5.0-36.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Run virsh nodeinfo on an AMD machine
2. Check the output of "Core(s) per socket"

Actual results:
[root@hp-dl385g7-07 ~]# virsh nodeinfo
CPU model:           x86_64
CPU(s):              16
CPU frequency:       2099 MHz
CPU socket(s):       1
Core(s) per socket:  16
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         65756012 KiB

Expected results:
"Core(s) per socket" should show '8'

Additional info:
[root@hp-dl385g7-07 ~]# lscpu | grep 'Core(s) per socket'
Core(s) per socket:    8

[root@hp-dl385g7-07 ~]# virsh capabilities
<capabilities>

  <host>
    <uuid>31333735-3232-4e43-4732-333253504d32</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Opteron_G4</model>
      <vendor>AMD</vendor>
      <microcode version='100664894'/>
      <counter name='tsc' frequency='2100041000'/>
      <topology sockets='1' cores='16' threads='1'/>
      <feature name='vme'/>
      <feature name='ht'/>
      <feature name='monitor'/>
      <feature name='osxsave'/>
      <feature name='mmxext'/>
      <feature name='fxsr_opt'/>
      <feature name='cmp_legacy'/>
      <feature name='extapic'/>
      <feature name='cr8legacy'/>
      <feature name='osvw'/>
      <feature name='ibs'/>
      <feature name='skinit'/>
      <feature name='wdt'/>
      <feature name='nodeid_msr'/>
      <feature name='topoext'/>
      <feature name='perfctr_core'/>
      <feature name='perfctr_nb'/>
      <feature name='invtsc'/>
      <feature name='ibpb'/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
      <pages unit='KiB' size='1048576'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
      <suspend_hybrid/>
    </power_management>
    <iommu support='no'/>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
        <uri_transport>rdma</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='2'>
        <cell id='0'>
          <memory unit='KiB'>32759804</memory>
          <pages unit='KiB' size='4'>8189951</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <pages unit='KiB' size='1048576'>0</pages>
          <distances>
            <sibling id='0' value='10'/>
            <sibling id='1' value='20'/>
          </distances>
          <cpus num='8'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
            <cpu id='1' socket_id='0' core_id='1' siblings='0-1'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2-3'/>
            <cpu id='3' socket_id='0' core_id='3' siblings='2-3'/>
            <cpu id='4' socket_id='0' core_id='4' siblings='4-5'/>
            <cpu id='5' socket_id='0' core_id='5' siblings='4-5'/>
            <cpu id='6' socket_id='0' core_id='6' siblings='6-7'/>
            <cpu id='7' socket_id='0' core_id='7' siblings='6-7'/>
          </cpus>
        </cell>
        <cell id='1'>
          <memory unit='KiB'>32996208</memory>
          <pages unit='KiB' size='4'>8249052</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <pages unit='KiB' size='1048576'>0</pages>
          <distances>
            <sibling id='0' value='20'/>
            <sibling id='1' value='10'/>
          </distances>
          <cpus num='8'>
            <cpu id='8' socket_id='0' core_id='0' siblings='8-9'/>
            <cpu id='9' socket_id='0' core_id='1' siblings='8-9'/>
            <cpu id='10' socket_id='0' core_id='2' siblings='10-11'/>
            <cpu id='11' socket_id='0' core_id='3' siblings='10-11'/>
            <cpu id='12' socket_id='0' core_id='4' siblings='12-13'/>
            <cpu id='13' socket_id='0' core_id='5' siblings='12-13'/>
            <cpu id='14' socket_id='0' core_id='6' siblings='14-15'/>
            <cpu id='15' socket_id='0' core_id='7' siblings='14-15'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <cache>
      <bank id='4' level='3' type='both' size='6' unit='MiB' cpus='0-7'/>
      <bank id='5' level='3' type='both' size='6' unit='MiB' cpus='8-15'/>
    </cache>
    <secmodel>
      <model>selinux</model>
      <doi>0</doi>
      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
      <baselabel type='kvm'>+107:+107</baselabel>
      <baselabel type='qemu'>+107:+107</baselabel>
    </secmodel>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine>
      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.6.0</machine>
      <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine>
      <machine maxCpus='240'>rhel6.3.0</machine>
      <machine maxCpus='240'>rhel6.4.0</machine>
      <machine maxCpus='240'>rhel6.0.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine>
      <machine maxCpus='255'>pc-q35-rhel7.3.0</machine>
      <machine maxCpus='240'>rhel6.5.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.4.0</machine>
      <machine maxCpus='240'>rhel6.6.0</machine>
      <machine maxCpus='240'>rhel6.1.0</machine>
      <machine maxCpus='240'>rhel6.2.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.5.0</machine>
      <domain type='qemu'/>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <pae/>
      <nonpae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine>
      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.6.0</machine>
      <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine>
      <machine maxCpus='240'>rhel6.3.0</machine>
      <machine maxCpus='240'>rhel6.4.0</machine>
      <machine maxCpus='240'>rhel6.0.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine>
      <machine maxCpus='255'>pc-q35-rhel7.3.0</machine>
      <machine maxCpus='240'>rhel6.5.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.4.0</machine>
      <machine maxCpus='240'>rhel6.6.0</machine>
      <machine maxCpus='240'>rhel6.1.0</machine>
      <machine maxCpus='240'>rhel6.2.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine>
      <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine>
      <machine maxCpus='384'>pc-q35-rhel7.5.0</machine>
      <domain type='qemu'/>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <disksnapshot default='on' toggle='no'/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>
You didn't say what the expected topology is. I guess it's one socket with two NUMA nodes with 8 cores in each node, as shown in the capabilities XML. This cannot be expressed by virsh nodeinfo, as described in comment 1 and https://libvirt.org/html/libvirt-libvirt-host.html#virNodeInfo
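A minimal sketch of why this AMD topology cannot be expressed in nodeinfo's fields. The numbers come from the nodeinfo and capabilities output above; the invariant is the one documented for virNodeInfo, and the rest is just arithmetic.

```python
# Sketch: virNodeInfo requires total CPUs == nodes * sockets * cores * threads,
# with sockets counted per NUMA node. One physical socket spanning two NUMA
# nodes would need a fractional sockets-per-node value, so the physical
# topology cannot fit the structure and a flattened report is shown instead.

total_cpus = 16  # "CPU(s)" from nodeinfo

# Physical topology per the capabilities XML: 1 socket, 2 NUMA nodes,
# 8 CPUs in each node (every <cpu> has socket_id='0').
numa_nodes, physical_sockets = 2, 1
sockets_per_node = physical_sockets / numa_nodes
print(sockets_per_node)  # 0.5 -- not representable as an integer field

# Flattened values actually reported by virsh nodeinfo:
nodes, sockets, cores, threads = 1, 1, 16, 1
assert nodes * sockets * cores * threads == total_cpus
```

The flattened values keep the documented product equal to the real CPU count, which is the property the public API guarantees.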
(In reply to Jiri Denemark from comment #5)
> You didn't say what the expected topology is. I guess it's one socket with
> two NUMA nodes with 8 cores in each node, as shown in the capabilities XML.
> This cannot be expressed by virsh nodeinfo, as described in comment 1 and
> https://libvirt.org/html/libvirt-libvirt-host.html#virNodeInfo

Yes, the expected topology is the same as you described. Thank you for your comment.