Bug 879303
| Summary: | hotplug 160 vcpu to win2k8r2 guest, some vcpu are offline in guest | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | FuXiangChun <xfu> |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED CANTFIX | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 6.4 | CC: | acathrow, areis, bsarathy, dyasny, dyuan, honzhang, jiahu, juzhang, mkenneth, virt-maint |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Windows | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 969352 969354 (view as bug list) | Environment: | |
| Last Closed: | 2013-05-31 09:52:37 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 969352, 969354 | | |
Description
FuXiangChun
2012-11-22 14:58:53 UTC
Boot the guest with -smp 1,cores=4,threads=1,sockets=40,maxcpus=160 and reboot it. Task Manager --> Performance in the guest then shows 160 vCPUs, so that part is not a bug. Reopening this issue because KVM QE does not know whether the upper management layer can handle it.

Issue summary

1. Unexpected results
   1.1. -smp 64,cores=4,threads=1,sockets=40,maxcpus=160
   1.2. -smp 64,cores=1,threads=1,sockets=1,maxcpus=160
   1.3. -smp 64,cores=1,threads=1,sockets=64,maxcpus=160

   Scenario 1: 119 CPUs are online in the guest
   Scenario 2: 64 CPUs are online in the guest
   Scenario 3: 119 CPUs are online in the guest

2. Expected results
   2.1. -smp 1,cores=4,threads=1,sockets=40,maxcpus=160

   Result: 160 CPUs are online in the guest

Additional info

KVM QE knows that win2k8r2 supports at most 64 CPU sockets. The problem is that we do not know how the upper management layer handles -smp x,cores=x,threads=x,sockets=x when booting a guest.

(In reply to comment #3)
> Reopen this issue since KVM QE do not know whether upper management can
> handle this issue.

Defining the topology is up to the management layer, which knows what guest OS will be used. Here is a link to the supported limits for Windows Server:
http://blogs.technet.com/b/matthts/archive/2012/10/14/windows-server-sockets-logical-processors-symmetric-multi-threading.aspx

In addition to the limits in the link above, Windows Server will not hotplug more than 8 CPUs if it was started with fewer than 9 CPUs. In that case it onlines up to 8 CPUs and only creates CPU devices for the rest (and asks for a restart to use them). If it is started with more than 8 CPUs, it onlines every hotplugged CPU up to the supported limits. This looks like a Windows Server limitation, so libvirt probably needs to take it into account (with instrumented ACPI in the BIOS I traced that Windows Server is notified about all hotplugged CPUs and gets valid status and _MAT values for every hotplugged CPU). There may be other topology limitations, but I was not able to find any documentation about them.

> 1.1. -smp 64,cores=4,threads=1,sockets=40,maxcpus=160
> 1.2. -smp 64,cores=1,threads=1,sockets=1,maxcpus=160
> 1.3. -smp 64,cores=1,threads=1,sockets=64,maxcpus=160

Scenarios 1.2 and 1.3 are invalid because Windows Server does not support those topologies. Scenario 1.1 works for me with both the RHEL and upstream qemu versions. The guest takes time to online all CPUs, so you have to wait until it finishes, or count the device nodes of type "processor". I used the following commands in the guest to get the number of online CPUs:

```
# get number of online threads
get-wmiobject Win32_ComputerSystem -Property NumberOfLogicalProcessors

# get number of online sockets (i.e. sockets with at least one online CPU)
get-wmiobject Win32_ComputerSystem -Property NumberOfProcessors
```

Reassigning to libvirt (as the management layer) so that it would not be possible to start qemu with a wrong or unsupported topology for specific guests (i.e. Windows Server).

Libvirt is not aware of the guest operating system in use and does not store OS-specific configuration details. This has to be done in an even higher management layer such as virt-manager, RHEV, or others.
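The arithmetic behind the scenarios above can be sketched in shell. The working scenario (2.1) uses a topology whose socket count stays within the 64-socket limit mentioned in the discussion and whose sockets x cores x threads product covers maxcpus; the commented qemu-kvm line and the image path are illustrative assumptions, not the exact command used by QE.

```shell
# Hypothetical invocation matching scenario 2.1 (image path is an assumption):
# qemu-kvm -smp 1,sockets=40,cores=4,threads=1,maxcpus=160 \
#          -drive file=win2k8r2.qcow2 ...

sockets=40; cores=4; threads=1; maxcpus=160

# The topology must be able to address every hot-pluggable vCPU:
total=$((sockets * cores * threads))
echo "addressable vcpus: $total"

# Two sanity checks derived from the discussion above:
#   1. sockets * cores * threads >= maxcpus
#   2. sockets <= 64 (win2k8r2 socket limit)
if [ "$total" -ge "$maxcpus" ] && [ "$sockets" -le 64 ]; then
    echo "topology fits win2k8r2 limits"
fi
```

Scenario 1.2 fails the first check (1 x 1 x 1 = 1 < 160), which is why it cannot come close to 160 online CPUs.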
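The 8-CPU hotplug rule described in the reply can be expressed as a small predicate. The rule itself (boot with fewer than 9 CPUs and Windows Server onlines at most 8 hotplugged CPUs; boot with 9 or more and it onlines all of them up to its limits) comes from the comment above; the function name is illustrative.

```shell
# Per the comment above: WS onlines every hotplugged CPU only when it
# was booted with at least 9 CPUs; otherwise it onlines at most 8 and
# merely creates devices for the rest, pending a restart.
will_online_all() {   # usage: will_online_all <boot_cpu_count>
    [ "$1" -ge 9 ]
}

will_online_all 1  && echo "onlines all" || echo "onlines at most 8"
will_online_all 64 && echo "onlines all" || echo "onlines at most 8"
```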
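The closing point, that guest-OS-specific topology rules belong in a layer above libvirt, can be illustrated with a hedged sketch of how such a layer might clamp a requested socket count before generating the domain XML. The 64-socket limit is taken from the discussion above; the variable names and the example XML in the comment are assumptions, not an existing libvirt or RHEV feature.

```shell
# Sketch: a management layer clamping sockets to the win2k8r2 limit
# before building the libvirt <cpu> element. Illustrative only.
WS_MAX_SOCKETS=64
requested_sockets=80

sockets=$(( requested_sockets > WS_MAX_SOCKETS ? WS_MAX_SOCKETS : requested_sockets ))
echo "clamped sockets: $sockets"

# The layer would then emit something like:
#   <cpu><topology sockets='64' cores='4' threads='1'/></cpu>
# and only then define the domain with libvirt.
```

Libvirt itself, as the final comment notes, stays OS-agnostic and would accept either value unchanged.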