Bug 1785207 - RFE: support for configuring CPU 'dies' in guest topology
Summary: RFE: support for configuring CPU 'dies' in guest topology
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.2
Assignee: Daniel Berrangé
QA Contact: jiyan
URL:
Whiteboard:
Depends On: 1813395 1821592
Blocks: 1702444 1749470 1819060
 
Reported: 2019-12-19 11:44 UTC by Daniel Berrangé
Modified: 2020-05-05 09:52 UTC
CC: 9 users

Fixed In Version: libvirt-6.0.0-3.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1819060
Environment:
Last Closed: 2020-05-05 09:52:22 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:



Description Daniel Berrangé 2019-12-19 11:44:59 UTC
Description of problem:
Recent CPU generations introduced a new level in the topology, referred to as a "die", sitting between the socket and the core.

QEMU added support for this in the -smp argument in 4.1.0, and libvirt needs to expose it in the guest XML configuration:

https://libvirt.org/formatdomain.html#elementsCPU

e.g.

  <vcpu placement='static'>12</vcpu>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='2' dies='3' cores='2' threads='1'/>
  </cpu>
 
With such a config we should get

  -smp 12,sockets=2,dies=3,cores=2,threads=1
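
Note that the vCPU count is expected to match the product of the four topology levels; a quick sanity check for the example above (illustrative arithmetic, not from the original report):

  # echo $((2 * 3 * 2 * 1))
  12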

And inside the guest we should see the following topology:


# hwloc-ls
Machine (7724MB total)
  NUMANode L#0 (P#0 7724MB)
  Package L#0
    L3 L#0 (16MB)
      L2 L#0 (4096KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 + PU L#0 (P#0)
      L2 L#1 (4096KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 + PU L#1 (P#1)
    L3 L#1 (16MB)
      L2 L#2 (4096KB) + L1d L#2 (32KB) + L1i L#2 (32KB) + Core L#2 + PU L#2 (P#2)
      L2 L#3 (4096KB) + L1d L#3 (32KB) + L1i L#3 (32KB) + Core L#3 + PU L#3 (P#3)
    L3 L#2 (16MB)
      L2 L#4 (4096KB) + L1d L#4 (32KB) + L1i L#4 (32KB) + Core L#4 + PU L#4 (P#4)
      L2 L#5 (4096KB) + L1d L#5 (32KB) + L1i L#5 (32KB) + Core L#5 + PU L#5 (P#5)
  Package L#1
    L3 L#3 (16MB)
      L2 L#6 (4096KB) + L1d L#6 (32KB) + L1i L#6 (32KB) + Core L#6 + PU L#6 (P#6)
      L2 L#7 (4096KB) + L1d L#7 (32KB) + L1i L#7 (32KB) + Core L#7 + PU L#7 (P#7)
    L3 L#4 (16MB)
      L2 L#8 (4096KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8 + PU L#8 (P#8)
      L2 L#9 (4096KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9 + PU L#9 (P#9)
    L3 L#5 (16MB)
      L2 L#10 (4096KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10 + PU L#10 (P#10)
      L2 L#11 (4096KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11 + PU L#11 (P#11)


Note that 'Package' here maps to 'socket' in libvirt terminology, so the first level below the package is the 'die' and the next level down is the 'core'. We could introduce hyperthreads if we wanted yet another level.

Note that in sysfs on latest upstream / Fedora kernels there are new sysfs files "die_id", "die_cpus" and "die_cpus_list" under

  /sys/devices/system/cpu/cpuXXX/topology/

which can also be used to validate the guest topology.
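
For example, a quick way to dump the die ID of each CPU (a rough sketch; assumes the running kernel exposes the "die_id" file):

  for c in /sys/devices/system/cpu/cpu[0-9]*; do
      echo "$(basename $c): die_id=$(cat $c/topology/die_id)"
  done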

I'm not sure if this is backported to RHEL8 kernels or not yet.

Version-Release number of selected component (if applicable):
libvirt-5.10.0-1

Comment 1 Daniel Berrangé 2019-12-20 15:22:48 UTC
Patches at 

https://www.redhat.com/archives/libvir-list/2019-December/msg01249.html

Comment 2 Daniel Berrangé 2020-02-03 17:27:11 UTC
(In reply to Daniel Berrangé from comment #0)
> Note that in sysfs on latest upstream / Fedora kernels there are new sysfs
> files "die_id", "die_cpus" and "die_cpus_list" under
> 
>   /sys/devices/system/cpu/cpuXXX/topology/
> 
> which can also be used to validate the guest topology.
> 
> I'm not sure if this is backported to RHEL8 kernels or not yet.

This is present in RHEL 8 from kernel-4.18.0-147.4.el8, via bug 1616309

Comment 5 jiyan 2020-03-13 14:32:55 UTC
Hi Daniel,

As shown in my testing environment in bug 1785211
https://bugzilla.redhat.com/show_bug.cgi?id=1785211#c8
https://bugzilla.redhat.com/show_bug.cgi?id=1785211#c9

I can still hit the error from https://bugzilla.redhat.com/show_bug.cgi?id=1785211#c6

Version:
qemu-kvm-4.2.0-13.module+el8.2.0+5898+fb4bceae.x86_64
kernel-4.18.0-187.el8.x86_64
libvirt-6.0.0-10.module+el8.2.0+5984+dce93708.x86_64

Steps:
# virsh domstate test82 
shut off

# virsh dumpxml test82 |grep -E "vcpu|topology"
  <vcpu placement='static' current='52'>128</vcpu>
    <topology sockets='8' dies='2' cores='4' threads='2'/>

# virsh start test82 
error: Failed to start domain test82
error: internal error: qemu didn't report thread id for vcpu '48'


#### Changed the "current" vcpu num to "49"
  <vcpu placement='static' current='49'>128</vcpu>
    <topology sockets='8' dies='2' cores='4' threads='2'/>

# virsh start test82 
error: Failed to start domain test82
error: internal error: qemu didn't report thread id for vcpu '48'


#### Changed the "current" vcpu num to "47"
  <vcpu placement='static' current='47'>128</vcpu>
    <topology sockets='8' dies='2' cores='4' threads='2'/>

# virsh start test82 
error: Failed to start domain test82
error: internal error: qemu didn't report thread id for vcpu '46'

Comment 7 Daniel Berrangé 2020-03-13 14:47:58 UTC
Thanks for the further info. I can confirm that I reproduce this problem on my own development platform too, and will investigate further.

Comment 8 Daniel Berrangé 2020-03-13 16:55:29 UTC
(In reply to jiyan from comment #5)
> # virsh dumpxml test82 |grep -E "vcpu|topology"
>   <vcpu placement='static' current='52'>128</vcpu>
>     <topology sockets='8' dies='2' cores='4' threads='2'/>
> 
> # virsh start test82 
> error: Failed to start domain test82
> error: internal error: qemu didn't report thread id for vcpu '48'

This problem is being tracked in a new bug:

 https://bugzilla.redhat.com/show_bug.cgi?id=1813395


From the POV of this RFE, we can say that "dies" can be configured for guest CPUs, but only if the active CPU count matches the total CPU count.

We'll need to fix the other bug to make hotplug scenarios with a reduced initial CPU count work.
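
For illustration, a config along these lines (all 128 vCPUs active, i.e. no reduced 'current' attribute; a sketch based on the reproducer above, not a tested example) should start fine:

  <vcpu placement='static'>128</vcpu>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='8' dies='2' cores='4' threads='2'/>
  </cpu>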

Comment 11 jiyan 2020-04-21 06:47:46 UTC
Version:
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
libvirt-6.0.0-17.module+el8.2.0+6257+0d066c28.x86_64
kernel-4.18.0-193.el8.x86_64

Steps:
1. Check Host env info
# lscpu 
Model name:          Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz

# virsh capabilities
    <topology>
      <cells num='4'>
        <cell id='1'>
	...
          <cpus num='48'>
            <cpu id='24' socket_id='0' die_id='1' core_id='0' siblings='24,120'/>
            <cpu id='25' socket_id='0' die_id='1' core_id='1' siblings='25,121'/>
            <cpu id='26' socket_id='0' die_id='1' core_id='2' siblings='26,122'/>
            <cpu id='27' socket_id='0' die_id='1' core_id='3' siblings='27,123'/>

2. Configure dies=2 for VM and check VM's related info
# virsh domstate test82 
shut off

# virsh dumpxml test82 --inactive 
  <vcpu placement='auto'>16</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <cpu mode='host-model' check='partial'>
    <topology sockets='2' dies='2' cores='2' threads='2'/>
    <numa>
      <cell id='0' cpus='0-7' memory='512000' unit='KiB' discard='yes'/>
      <cell id='1' cpus='8-15' memory='512000' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>

# virsh start test82
Domain test82 started

# ps -ef | grep test82
-smp 16,sockets=2,dies=2,cores=2,threads=2

# virsh console test82
Connected to domain test82
Escape character is ^]

[root@localhost ~]# cat ss.sh 
printf cpuid"\t"phyid"\t"phycpus"\t"phylist"\t"dieid"\t"diecpus"\t"dielist"\n"
for i in {0..15}
do
printf CPU$i"\t"
for j in physical_package_id package_cpus package_cpus_list die_id die_cpus die_cpus_list  
do 
printf `cat /sys/devices/system/cpu/cpu$i/topology/$j`"\t"
done
printf "\n"
done

[root@localhost ~]# sh ss.sh 
cpuid	phyid	phycpus	phylist	dieid	diecpus	dielist
CPU0	0	00ff	0-7	0	000f	0-3	
CPU1	0	00ff	0-7	0	000f	0-3	
CPU2	0	00ff	0-7	0	000f	0-3	
CPU3	0	00ff	0-7	0	000f	0-3	
CPU4	0	00ff	0-7	1	00f0	4-7	
CPU5	0	00ff	0-7	1	00f0	4-7	
CPU6	0	00ff	0-7	1	00f0	4-7	
CPU7	0	00ff	0-7	1	00f0	4-7	
CPU8	1	ff00	8-15	0	0f00	8-11	
CPU9	1	ff00	8-15	0	0f00	8-11	
CPU10	1	ff00	8-15	0	0f00	8-11	
CPU11	1	ff00	8-15	0	0f00	8-11	
CPU12	1	ff00	8-15	1	f000	12-15	
CPU13	1	ff00	8-15	1	f000	12-15	
CPU14	1	ff00	8-15	1	f000	12-15	
CPU15	1	ff00	8-15	1	f000	12-15	

==> The /sys info checked in the VM OS corresponds to the VM's dumpxml configuration:
physical_package_id (0 and 1) ==> sockets in dumpxml equals 2
package_cpus_list (0-7 and 8-15) ==> 2 sockets, each with 8 CPUs
die_id (0 and 1) ==> dies in dumpxml equals 2
die_cpus_list (0-3, 4-7, 8-11 and 12-15) ==> each socket has 2 dies; there are 4 dies in total, each with 4 CPUs.
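
The same check can be scripted; e.g. counting the distinct (package, die) pairs in the ss.sh output (an illustrative one-liner, not part of the original test run; 2 sockets x 2 dies should give 4):

  # sh ss.sh | awk 'NR>1 && !seen[$2","$5]++ {n++} END {print n}'
  4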

Comment 12 jiyan 2020-04-21 06:48:58 UTC
Based on comment 11, I also tested the following two scenarios.

3. Configure dies=5 for VM and check VM's related info
# virsh destroy test82 
Domain test82 destroyed

# virsh domstate test82 
shut off

# virsh dumpxml test82 --inactive 
  <vcpu placement='auto'>40</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <cpu mode='host-model' check='partial'>
    <topology sockets='2' dies='5' cores='2' threads='2'/>
    <numa>
      <cell id='0' cpus='0-19' memory='512000' unit='KiB' discard='yes'/>
      <cell id='1' cpus='20-39' memory='512000' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>

# virsh start test82
Domain test82 started

# ps -ef | grep test82
-smp 40,sockets=2,dies=5,cores=2,threads=2

# virsh console test82
Connected to domain test82
Escape character is ^]

[root@localhost ~]# cat ss.sh 
printf cpuid"\t"phyid"\t"phycpus"\t"phylist"\t"dieid"\t"diecpus"\t"dielist"\n"
for i in {0..39}
do
	printf CPU$i"\t"
	for j in physical_package_id package_cpus package_cpus_list die_id die_cpus die_cpus_list  
	do 
		printf `cat /sys/devices/system/cpu/cpu$i/topology/$j`"\t"
	done
	printf "\n"
done

[root@localhost ~]# sh ss.sh 
cpuid	phyid	phycpus	phylist	dieid	diecpus	dielist
CPU0	0	00,000fffff	0-19	0	00,0000000f	0-3	
CPU1	0	00,000fffff	0-19	0	00,0000000f	0-3	
CPU2	0	00,000fffff	0-19	0	00,0000000f	0-3	
CPU3	0	00,000fffff	0-19	0	00,0000000f	0-3	
CPU4	0	00,000fffff	0-19	1	00,000000f0	4-7	
CPU5	0	00,000fffff	0-19	1	00,000000f0	4-7	
CPU6	0	00,000fffff	0-19	1	00,000000f0	4-7	
CPU7	0	00,000fffff	0-19	1	00,000000f0	4-7	
CPU8	0	00,000fffff	0-19	2	00,00000f00	8-11	
CPU9	0	00,000fffff	0-19	2	00,00000f00	8-11	
CPU10	0	00,000fffff	0-19	2	00,00000f00	8-11	
CPU11	0	00,000fffff	0-19	2	00,00000f00	8-11	
CPU12	0	00,000fffff	0-19	3	00,0000f000	12-15	
CPU13	0	00,000fffff	0-19	3	00,0000f000	12-15	
CPU14	0	00,000fffff	0-19	3	00,0000f000	12-15	
CPU15	0	00,000fffff	0-19	3	00,0000f000	12-15	
CPU16	0	00,000fffff	0-19	4	00,000f0000	16-19	
CPU17	0	00,000fffff	0-19	4	00,000f0000	16-19	
CPU18	0	00,000fffff	0-19	4	00,000f0000	16-19	
CPU19	0	00,000fffff	0-19	4	00,000f0000	16-19	
CPU20	1	ff,fff00000	20-39	0	00,00f00000	20-23	
CPU21	1	ff,fff00000	20-39	0	00,00f00000	20-23	
CPU22	1	ff,fff00000	20-39	0	00,00f00000	20-23	
CPU23	1	ff,fff00000	20-39	0	00,00f00000	20-23	
CPU24	1	ff,fff00000	20-39	1	00,0f000000	24-27	
CPU25	1	ff,fff00000	20-39	1	00,0f000000	24-27	
CPU26	1	ff,fff00000	20-39	1	00,0f000000	24-27	
CPU27	1	ff,fff00000	20-39	1	00,0f000000	24-27	
CPU28	1	ff,fff00000	20-39	2	00,f0000000	28-31	
CPU29	1	ff,fff00000	20-39	2	00,f0000000	28-31	
CPU30	1	ff,fff00000	20-39	2	00,f0000000	28-31	
CPU31	1	ff,fff00000	20-39	2	00,f0000000	28-31	
CPU32	1	ff,fff00000	20-39	3	0f,00000000	32-35	
CPU33	1	ff,fff00000	20-39	3	0f,00000000	32-35	
CPU34	1	ff,fff00000	20-39	3	0f,00000000	32-35	
CPU35	1	ff,fff00000	20-39	3	0f,00000000	32-35	
CPU36	1	ff,fff00000	20-39	4	f0,00000000	36-39	
CPU37	1	ff,fff00000	20-39	4	f0,00000000	36-39	
CPU38	1	ff,fff00000	20-39	4	f0,00000000	36-39	
CPU39	1	ff,fff00000	20-39	4	f0,00000000	36-39	

==> The /sys info checked in the VM OS corresponds to the VM's dumpxml configuration:
2 sockets, 5 dies per socket (0-4), 40 CPUs in total.




4. Configure dies=11 for VM and check VM's related info
# virsh domstate test82 
shut off

# virsh dumpxml test82 --inactive 
  <vcpu placement='auto'>88</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <cpu mode='host-model' check='partial'>
    <topology sockets='2' dies='11' cores='2' threads='2'/>
    <numa>
      <cell id='0' cpus='0-43' memory='512000' unit='KiB' discard='yes'/>
      <cell id='1' cpus='44-87' memory='512000' unit='KiB' memAccess='shared'/>
    </numa>
  </cpu>

# virsh start test82 
Domain test82 started

# ps -ef | grep test82
-smp 88,sockets=2,dies=11,cores=2,threads=2

# virsh console test82
Connected to domain test82
Escape character is ^]

[root@localhost ~]# cat ss.sh 
printf cpuid"\t"phyid"\t"phycpus"\t"phylist"\t"dieid"\t"diecpus"\t"dielist"\n"
for i in {0..87}
do
	printf CPU$i"\t"
	for j in physical_package_id package_cpus package_cpus_list die_id die_cpus die_cpus_list  
	do 
		printf `cat /sys/devices/system/cpu/cpu$i/topology/$j`"\t"
	done
	printf "\n"
done
[root@localhost ~]# sh ss.sh 
cpuid	phyid	phycpus	phylist	dieid	diecpus	dielist
CPU0	0	000000,00000fff,ffffffff	0-43	0	000000,00000000,0000000f	0-3	
CPU1	0	000000,00000fff,ffffffff	0-43	0	000000,00000000,0000000f	0-3	
CPU2	0	000000,00000fff,ffffffff	0-43	0	000000,00000000,0000000f	0-3	
CPU3	0	000000,00000fff,ffffffff	0-43	0	000000,00000000,0000000f	0-3	
CPU4	0	000000,00000fff,ffffffff	0-43	1	000000,00000000,000000f0	4-7	
CPU5	0	000000,00000fff,ffffffff	0-43	1	000000,00000000,000000f0	4-7	
CPU6	0	000000,00000fff,ffffffff	0-43	1	000000,00000000,000000f0	4-7	
CPU7	0	000000,00000fff,ffffffff	0-43	1	000000,00000000,000000f0	4-7	
CPU8	0	000000,00000fff,ffffffff	0-43	2	000000,00000000,00000f00	8-11	
CPU9	0	000000,00000fff,ffffffff	0-43	2	000000,00000000,00000f00	8-11	
CPU10	0	000000,00000fff,ffffffff	0-43	2	000000,00000000,00000f00	8-11	
CPU11	0	000000,00000fff,ffffffff	0-43	2	000000,00000000,00000f00	8-11	
CPU12	0	000000,00000fff,ffffffff	0-43	3	000000,00000000,0000f000	12-15	
CPU13	0	000000,00000fff,ffffffff	0-43	3	000000,00000000,0000f000	12-15	
CPU14	0	000000,00000fff,ffffffff	0-43	3	000000,00000000,0000f000	12-15	
CPU15	0	000000,00000fff,ffffffff	0-43	3	000000,00000000,0000f000	12-15	
CPU16	0	000000,00000fff,ffffffff	0-43	4	000000,00000000,000f0000	16-19	
CPU17	0	000000,00000fff,ffffffff	0-43	4	000000,00000000,000f0000	16-19	
CPU18	0	000000,00000fff,ffffffff	0-43	4	000000,00000000,000f0000	16-19	
CPU19	0	000000,00000fff,ffffffff	0-43	4	000000,00000000,000f0000	16-19	
CPU20	0	000000,00000fff,ffffffff	0-43	5	000000,00000000,00f00000	20-23	
CPU21	0	000000,00000fff,ffffffff	0-43	5	000000,00000000,00f00000	20-23	
CPU22	0	000000,00000fff,ffffffff	0-43	5	000000,00000000,00f00000	20-23	
CPU23	0	000000,00000fff,ffffffff	0-43	5	000000,00000000,00f00000	20-23	
CPU24	0	000000,00000fff,ffffffff	0-43	6	000000,00000000,0f000000	24-27	
CPU25	0	000000,00000fff,ffffffff	0-43	6	000000,00000000,0f000000	24-27	
CPU26	0	000000,00000fff,ffffffff	0-43	6	000000,00000000,0f000000	24-27	
CPU27	0	000000,00000fff,ffffffff	0-43	6	000000,00000000,0f000000	24-27	
CPU28	0	000000,00000fff,ffffffff	0-43	7	000000,00000000,f0000000	28-31	
CPU29	0	000000,00000fff,ffffffff	0-43	7	000000,00000000,f0000000	28-31	
CPU30	0	000000,00000fff,ffffffff	0-43	7	000000,00000000,f0000000	28-31	
CPU31	0	000000,00000fff,ffffffff	0-43	7	000000,00000000,f0000000	28-31	
CPU32	0	000000,00000fff,ffffffff	0-43	8	000000,0000000f,00000000	32-35	
CPU33	0	000000,00000fff,ffffffff	0-43	8	000000,0000000f,00000000	32-35	
CPU34	0	000000,00000fff,ffffffff	0-43	8	000000,0000000f,00000000	32-35	
CPU35	0	000000,00000fff,ffffffff	0-43	8	000000,0000000f,00000000	32-35	
CPU36	0	000000,00000fff,ffffffff	0-43	9	000000,000000f0,00000000	36-39	
CPU37	0	000000,00000fff,ffffffff	0-43	9	000000,000000f0,00000000	36-39	
CPU38	0	000000,00000fff,ffffffff	0-43	9	000000,000000f0,00000000	36-39	
CPU39	0	000000,00000fff,ffffffff	0-43	9	000000,000000f0,00000000	36-39	
CPU40	0	000000,00000fff,ffffffff	0-43	10	000000,00000f00,00000000	40-43	
CPU41	0	000000,00000fff,ffffffff	0-43	10	000000,00000f00,00000000	40-43	
CPU42	0	000000,00000fff,ffffffff	0-43	10	000000,00000f00,00000000	40-43	
CPU43	0	000000,00000fff,ffffffff	0-43	10	000000,00000f00,00000000	40-43	
CPU44	1	ffffff,fffff000,00000000	44-87	0	000000,0000f000,00000000	44-47	
CPU45	1	ffffff,fffff000,00000000	44-87	0	000000,0000f000,00000000	44-47	
CPU46	1	ffffff,fffff000,00000000	44-87	0	000000,0000f000,00000000	44-47	
CPU47	1	ffffff,fffff000,00000000	44-87	0	000000,0000f000,00000000	44-47	
CPU48	1	ffffff,fffff000,00000000	44-87	1	000000,000f0000,00000000	48-51	
CPU49	1	ffffff,fffff000,00000000	44-87	1	000000,000f0000,00000000	48-51	
CPU50	1	ffffff,fffff000,00000000	44-87	1	000000,000f0000,00000000	48-51	
CPU51	1	ffffff,fffff000,00000000	44-87	1	000000,000f0000,00000000	48-51	
CPU52	1	ffffff,fffff000,00000000	44-87	2	000000,00f00000,00000000	52-55	
CPU53	1	ffffff,fffff000,00000000	44-87	2	000000,00f00000,00000000	52-55	
CPU54	1	ffffff,fffff000,00000000	44-87	2	000000,00f00000,00000000	52-55	
CPU55	1	ffffff,fffff000,00000000	44-87	2	000000,00f00000,00000000	52-55	
CPU56	1	ffffff,fffff000,00000000	44-87	3	000000,0f000000,00000000	56-59	
CPU57	1	ffffff,fffff000,00000000	44-87	3	000000,0f000000,00000000	56-59	
CPU58	1	ffffff,fffff000,00000000	44-87	3	000000,0f000000,00000000	56-59	
CPU59	1	ffffff,fffff000,00000000	44-87	3	000000,0f000000,00000000	56-59	
CPU60	1	ffffff,fffff000,00000000	44-87	4	000000,f0000000,00000000	60-63	
CPU61	1	ffffff,fffff000,00000000	44-87	4	000000,f0000000,00000000	60-63	
CPU62	1	ffffff,fffff000,00000000	44-87	4	000000,f0000000,00000000	60-63	
CPU63	1	ffffff,fffff000,00000000	44-87	4	000000,f0000000,00000000	60-63	
CPU64	1	ffffff,fffff000,00000000	44-87	5	00000f,00000000,00000000	64-67	
CPU65	1	ffffff,fffff000,00000000	44-87	5	00000f,00000000,00000000	64-67	
CPU66	1	ffffff,fffff000,00000000	44-87	5	00000f,00000000,00000000	64-67	
CPU67	1	ffffff,fffff000,00000000	44-87	5	00000f,00000000,00000000	64-67	
CPU68	1	ffffff,fffff000,00000000	44-87	6	0000f0,00000000,00000000	68-71	
CPU69	1	ffffff,fffff000,00000000	44-87	6	0000f0,00000000,00000000	68-71	
CPU70	1	ffffff,fffff000,00000000	44-87	6	0000f0,00000000,00000000	68-71	
CPU71	1	ffffff,fffff000,00000000	44-87	6	0000f0,00000000,00000000	68-71	
CPU72	1	ffffff,fffff000,00000000	44-87	7	000f00,00000000,00000000	72-75	
CPU73	1	ffffff,fffff000,00000000	44-87	7	000f00,00000000,00000000	72-75	
CPU74	1	ffffff,fffff000,00000000	44-87	7	000f00,00000000,00000000	72-75	
CPU75	1	ffffff,fffff000,00000000	44-87	7	000f00,00000000,00000000	72-75	
CPU76	1	ffffff,fffff000,00000000	44-87	8	00f000,00000000,00000000	76-79	
CPU77	1	ffffff,fffff000,00000000	44-87	8	00f000,00000000,00000000	76-79	
CPU78	1	ffffff,fffff000,00000000	44-87	8	00f000,00000000,00000000	76-79	
CPU79	1	ffffff,fffff000,00000000	44-87	8	00f000,00000000,00000000	76-79	
CPU80	1	ffffff,fffff000,00000000	44-87	9	0f0000,00000000,00000000	80-83	
CPU81	1	ffffff,fffff000,00000000	44-87	9	0f0000,00000000,00000000	80-83	
CPU82	1	ffffff,fffff000,00000000	44-87	9	0f0000,00000000,00000000	80-83	
CPU83	1	ffffff,fffff000,00000000	44-87	9	0f0000,00000000,00000000	80-83	
CPU84	1	ffffff,fffff000,00000000	44-87	10	f00000,00000000,00000000	84-87	
CPU85	1	ffffff,fffff000,00000000	44-87	10	f00000,00000000,00000000	84-87	
CPU86	1	ffffff,fffff000,00000000	44-87	10	f00000,00000000,00000000	84-87	
CPU87	1	ffffff,fffff000,00000000	44-87	10	f00000,00000000,00000000	84-87	

==> The /sys info checked in the VM OS corresponds to the VM's dumpxml configuration:
2 sockets, 11 dies per socket (0-10), 88 CPUs in total.

All the test results are as expected; moving this bug to verified.

Comment 14 errata-xmlrpc 2020-05-05 09:52:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017

