Bug 969352 - [RFE] Use suitable CPU topology for Windows guests
Summary: [RFE] Use suitable CPU topology for Windows guests
Keywords:
Status: CLOSED DUPLICATE of bug 1095323
Alias: None
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: x86_64
OS: Windows
Priority: low
Severity: low
Target Milestone: ---
Assignee: Cole Robinson
QA Contact:
URL:
Whiteboard:
Depends On: 879303
Blocks:
 
Reported: 2013-05-31 09:46 UTC by Peter Krempa
Modified: 2016-05-13 17:04 UTC
CC: 15 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of: 879303
Environment:
Last Closed: 2016-05-13 17:04:42 UTC
Embargoed:



Description Peter Krempa 2013-05-31 09:46:23 UTC
+++ This bug was initially created as a clone of Bug #879303 +++

Description of problem:
Boot a win2k8r2 guest with the following scenarios and hotplug 160 vCPUs:
     1.-smp 64,cores=4,thread=1,socket=40,maxcpus=160
     2.-smp 64,cores=1,thread=1,socket=1,maxcpus=160
     3.-smp 64,cores=1,thread=1,socket=64,maxcpus=160
scenario 1:
     119 cpus are online in guest
scenario 2:
     64 cpus are online in guest
scenario 3:
     119 cpus are online in guest

Version-Release number of selected component (if applicable):
# uname -r
2.6.32-342.el6.x86_64
qemu-kvm-0.12.1.2-2.334.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. /usr/libexec/qemu-kvm -enable-kvm -m 2G -smp 64,cores=4,thread=1,socket=40,maxcpus=160 -M rhel6.4.0 -name rhel6 -uuid ddcbfb49-3411-1701-3c36-6bdbc00bedb9 -rtc base=utc,clock=host,driftfix=slew -boot c -drive file=/mnt/win2k8r2-bak.raw,if=none,id=drive-virtio-0-1,format=raw,cache=none,werror=report,rerror=report -device virtio-blk-pci,drive=drive-virtio-0-1,id=virt0-0-1 -netdev tap,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:50:a4:c2:c5 -device virtio-balloon-pci,id=ballooning -monitor stdio -qmp tcp:0:4455,server,nowait -drive file=5g.qcow2,format=qcow2,if=none,id=drive-disk,cache=none,werror=ignore,rerror=ignore -device virtio-blk-pci,scsi=off,drive=drive-disk,id=image -device sga -chardev socket,id=serial0,path=/var/test1,server,nowait -device isa-serial,chardev=serial0 -monitor unix:/tmp/monitor2,server,nowait -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -spice disable-ticketing,port=5911 -vga qxl

2. Hotplug 160 vCPUs to the guest with this small script (vCPU 0 is always online, so it onlines vCPUs 1-159):
i=1
while [ $i -lt 160 ]
do
    sleep 2
    # online vCPU $i via the QEMU human monitor socket
    echo "cpu_set $i online" | nc -U /tmp/monitor2
    i=$(($i+1))
done

3. Check the vCPU count in the guest.
  
Actual results:
Only 64 vCPUs are online in Task Manager -> Performance in the guest.

Expected results:
All 160 vCPUs are online.

Additional info:

--- Additional comment from FuXiangChun on 2012-11-23 02:48:04 CET ---

Booting the guest with -smp 1,cores=4,thread=1,socket=40,maxcpus=160 and then rebooting it shows 160 vCPUs in Task Manager -> Performance in the guest, so I think it isn't a bug.

--- Additional comment from juzhang on 2012-11-23 04:34:45 CET ---

Reopening this issue since KVM QE does not know whether the upper management layer can handle it.

Issue summary
1. Unexpected results
1.1.-smp 64,cores=4,thread=1,socket=40,maxcpus=160
1.2.-smp 64,cores=1,thread=1,socket=1,maxcpus=160
1.3.-smp 64,cores=1,thread=1,socket=64,maxcpus=160
scenario 1:
     119 cpus are online in guest
scenario 2:
     64 cpus are online in guest
scenario 3:
     119 cpus are online in guest

2. Expected results
2.1 -smp 1,cores=4,thread=1,socket=40,maxcpus=160

Results:
160 cpus are online in guest

Additional info:
KVM QE knows that the maximum number of CPU sockets win2k8r2 supports is 64; the problem is that we do not know how the upper management layer handles -smp x,cores=x,thread=x,socket=x when booting a guest.

--- Additional comment from Igor Mammedov on 2013-05-22 17:59:48 CEST ---

(In reply to comment #3)
> Reopen this issue since KVM QE do not know whether upper management can
> handle this issue.

Defining the topology is up to the management layer, which knows what guest OS will be used.

Here is a link to the supported limits for Windows Server:
http://blogs.technet.com/b/matthts/archive/2012/10/14/windows-server-sockets-logical-processors-symmetric-multi-threading.aspx


In addition to the limits specified at the above link, WS will not hotplug more than 8 CPUs if it was started with fewer than 9 CPUs. In that case it will online up to 8 CPUs and only create CPU devices for the rest (and ask for a restart to use them). If it is started with more than 8 CPUs, it will online every hotplugged CPU up to the supported limits.

This looks like a WS limitation, so libvirt probably needs to take it into account (with instrumented ACPI in the BIOS I've traced that WS is notified about all hotplugged CPUs and gets valid status and _MAT values for every hotplugged CPU).

There may be other topology limitations, but I wasn't able to find any docs about them.
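
For illustration, a topology for 160 vCPUs that stays within those limits (a sketch based on the limits linked above; starting with more than 8 CPUs also avoids the hotplug restriction described earlier, and the plural option spellings are the ones current QEMU documents):

-smp 64,sockets=40,cores=4,threads=1,maxcpus=160    # 40 x 4 x 1 = 160; 40 sockets <= the 64-socket cap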

> 
> Issue summary
> 1. unexpected results   
> 1.1.-smp 64,cores=4,thread=1,socket=40,maxcpus=160
> 1.2.-smp 64,cores=1,thread=1,socket=1,maxcpus=160
> 1.3.-smp 64,cores=1,thread=1,socket=64,maxcpus=160
> scenario 1:
>      119 cpus are online in guest
> scenario 2:
>      64 cpus are online in guest
> scenario 3:
>      119 cpus are online in guest
> 
> 2. expected results
> 2.1 -smp 1,cores=4,thread=1,socket=40,maxcpus=160
> 
> Results:
> 160 cpus are online in guest
> 
> Additional infos
> KVM QE know the win2k8r2 maximize support cpu socket are 64, the problem is
> we do not now how the upper management handle -smp x,
> cores=x,thread=x,socket=x when boot a guest.

Scenarios 1.2 and 1.3 are invalid because WS does not support those topologies.

Scenario 1.1 works for me with RHEL and upstream qemu versions.
The guest takes time to online all CPUs, so you have to wait until it finishes, or count the device nodes of processor type instead.

I've used the following commands in the guest to get the number of online CPUs:
---
# get number of online threads
get-wmiobject Win32_ComputerSystem -Property NumberOfLogicalProcessors

# get number of online sockets (i.e. sockets with at least one online CPU)
get-wmiobject Win32_ComputerSystem -Property NumberOfProcessors
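
# alternatively, count processor device nodes instead of waiting for every
# CPU to come online (a sketch: it assumes {50127DC3-0F36-415E-A6CC-4CB3BE910B65}
# is the Processor device setup class GUID, which is not stated in this bug)
get-wmiobject Win32_PnPEntity -Filter "ClassGuid='{50127DC3-0F36-415E-A6CC-4CB3BE910B65}'" | measure-object
---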

--- Additional comment from Igor Mammedov on 2013-05-22 18:08:11 CEST ---

Reassigning to libvirt (as the management layer) so that it won't be possible to start qemu with a wrong/unsupported topology for specific guests (i.e. WS).

--- Additional comment from Peter Krempa on 2013-05-31 11:31:24 CEST ---

Libvirt isn't aware of the guest operating system in use and doesn't store OS-specific configuration details. This has to be done in an even higher management layer such as virt-manager, RHEV, or others.

Comment 2 hyao@redhat.com 2013-07-05 06:46:54 UTC
Tried on ibm-x3850x5-04.qe.lab.eng.nay.redhat.com and could not reproduce the error.

# rpm -qa libvirt virt-manager
libvirt-0.10.2-18.el6_4.9.x86_64
virt-manager-0.9.0-18.el6.x86_64

Scenario 1.1
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              32
CPU frequency:       1995 MHz
CPU socket(s):       1
Core(s) per socket:  8
Thread(s) per core:  2
NUMA cell(s):        2
Memory size:         132136588 KiB

Set the processor topology for a Windows 2008 R2 guest without Hyper-V enabled as follows:
cores=4,thread=1,socket=40,maxcpus=160

Start the guest and check the vcpucount:
# virsh vcpucount 2k08r2
maximum      config       160
maximum      live         160
current      config       160
current      live         160
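
For reference, that topology corresponds to domain XML along these lines (a sketch reconstructed from the settings above, not copied from the test machine):

<vcpu>160</vcpu>
<cpu>
  <topology sockets='40' cores='4' threads='1'/>
</cpu>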

Scenarios 1.2 and 1.3
cores=1,thread=1,socket=1,maxcpus=160
cores=1,thread=1,socket=64,maxcpus=160

An error pops up when saving the guest configuration:
Error changing VM configuration: Maximum CPUs greater than topology limit.
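
That matches the topology product check (my reading of the message: sockets x cores x threads must be at least the maximum CPU count):
  1 socket  x 1 core x 1 thread =  1 < 160 -> rejected
 64 sockets x 1 core x 1 thread = 64 < 160 -> rejected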

I also tried on RHEL 7:
# rpm -qa libvirt virt-manager
virt-manager-0.10.0-1.el7.noarch
libvirt-1.1.0-1.el7.x86_64

# virsh nodeinfo
CPU model:           x86_64
CPU(s):              32
CPU frequency:       1064 MHz
CPU socket(s):       1
Core(s) per socket:  8
Thread(s) per core:  2
NUMA cell(s):        2
Memory size:         131818876 KiB


Set the processor topology for a Windows 2008 R2 guest without Hyper-V enabled as follows:
cores=4,thread=1,socket=40,maxcpus=160

Check the vcpucount:
# virsh vcpucount w2k82r
maximum      config       160
maximum      live         160
current      config       160
current      live         160


Scenarios 1.2 and 1.3
cores=1,thread=1,socket=1,maxcpus=160
cores=1,thread=1,socket=64,maxcpus=160

An error pops up when saving the guest configuration:
Error changing VM configuration: Maximum CPUs greater than topology limit.

Comment 3 hyao@redhat.com 2013-07-05 07:21:28 UTC
Hi Peter Krempa,

As described in comment 2, I can't reproduce this bug. Could you please recheck it and offer hints if it still exists on your machine? Thanks very much.

Comment 4 Peter Krempa 2013-07-09 12:38:01 UTC
If you un-check the "manually set CPU topology" check box, libvirt will start the guest with a flat topology: 1 core, 1 thread, sockets == number of CPUs. This won't be recognized correctly by Windows guests, as Microsoft's licensing policy restricts the number of processor packages usable with Windows.

In this case virt-manager should provide a better topology by default (if this request is feasible).
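
For illustration, the difference in the domain XML looks roughly like this (a sketch; the working explicit topology is the one from comment 2):

<!-- flat default: no <topology> element, Windows sees 160 single-core sockets -->
<vcpu>160</vcpu>

<!-- explicit topology: 40 sockets x 4 cores stays within the Windows socket limit -->
<vcpu>160</vcpu>
<cpu>
  <topology sockets='40' cores='4' threads='1'/>
</cpu>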

Comment 6 Pavel Hrdina 2016-05-13 11:32:36 UTC
This is something that virt-manager can do only while creating a new guest, and it would require cooperation from libosinfo to tell us the maximum number of sockets based on the detected Windows version.

Moving upstream because this has really low priority.

Comment 7 Daniel Berrangé 2016-05-13 11:33:57 UTC
If an OS has restrictions on the number of sockets, threads, or cores it supports (whether for a technical reason or a licensing reason), libosinfo should record that data and make it available to apps.
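
For example, apps can already query the libosinfo database from the shell, and socket/core limits would naturally hang off the same OS records (a sketch; the limits themselves are not in the database yet, which is what the RFE in comment 8 asks for):

# look up the OS record for Windows Server 2008 R2
osinfo-query os short-id=win2k8r2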

Comment 8 Cole Robinson 2016-05-13 17:04:42 UTC
I filed a libosinfo RFE: https://bugzilla.redhat.com/show_bug.cgi?id=1335977

Duping to the other virt-manager bug tracking this.

*** This bug has been marked as a duplicate of bug 1095323 ***

