Bug 1095323
| Summary: | RFE: default VM topology to use cores instead of sockets | | |
|---|---|---|---|
| Product: | [Community] Virtualization Tools | Reporter: | Samo Dadela <samo_dadela> |
| Component: | virt-manager | Assignee: | Cole Robinson <crobinso> |
| Status: | CLOSED DEFERRED | QA Contact: | |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | unspecified | CC: | be.0, berrange, cfergeau, crobinso, fidencio, gscrivan, hagbardcelin, ibaldo, pkrempa, virt-maint |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-09-16 21:16:12 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Samo Dadela
2014-05-07 13:03:22 UTC
Yes, I've heard of this before. Since it's OS based, it's probably something that should be trackable with libosinfo; reassigning there.

zeenix/teuf, does boxes have this issue as well?

(In reply to Cole Robinson from comment #1)
> Yes, I've heard of this before. Since it's OS based, probably something that
> should be trackable with libosinfo, reassigning there

The bug is about the user manually specifying the number of CPUs, so it's more about the UI of virt-manager AFAICT. IIRC, I had a chat with Daniel about having this info in libosinfo, but he had some good arguments against it and we decided to just assign all CPU power available to VMs.

> zeenix/teuf, does boxes have this issue as well?

No, Boxes just assigns all available CPU power and there isn't a UI to change that:
https://git.gnome.org/browse/gnome-boxes/tree/src/vm-configurator.vala#n202

Thanks for the info. Moving back to virt-manager.

(In reply to Zeeshan Ali from comment #2)
> The bug is about the user manually specifying the number of CPUs, so it's more
> about the UI of virt-manager AFAICT. IIRC, I had a chat with Daniel about
> having this info in libosinfo but he had some good arguments against it and
> we decided to just assign all CPU power available to VMs.

FYI I was against the idea of storing a preferred vCPU count, since I think "CPU count" is a fairly meaningless metric given the vast range of CPU types / speeds / etc. Recording metadata about the maximum supported number of cores, or upper limits on CPU counts, would be reasonable, since those are clearly defined software or licensing limits we need to deal with. We certainly want to make sure that if an OS won't support > 2 sockets and we want 8 vCPUs, then libosinfo should tell us this, so we can use 2 sockets x 4 cores.

> > zeenix/teuf, does boxes have this issue as well?
>
> No, Boxes just assigns all available CPU power and there isn't a UI to
> change that:
> https://git.gnome.org/browse/gnome-boxes/tree/src/vm-configurator.vala#n202

Assigning all host CPUs is fine - the issue here is getting the topology of that right, i.e. cores vs sockets.
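To make that last point concrete, here is a minimal, purely illustrative Python sketch (not virt-manager code; `max_sockets` is an assumed input rather than a field libosinfo is known to expose) of splitting a requested vCPU count across sockets and cores when an OS caps the socket count:

```python
def split_topology(vcpus, max_sockets):
    """Split a requested vCPU count into (sockets, cores, threads),
    honouring a hypothetical per-OS socket limit.  threads stays at 1
    because exposing SMT to freely floating vCPUs is problematic
    (discussed later in this thread)."""
    sockets = min(vcpus, max_sockets)
    cores = -(-vcpus // sockets)  # ceiling division, never under-allocate
    return sockets, cores, 1

# The example above: 8 vCPUs on an OS limited to 2 sockets -> 2 sockets x 4 cores.
print(split_topology(8, 2))  # (2, 4, 1)
```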
Hi. I had to make an account to chime in on this, because it is causing me much irritation when trying to run Windows in KVM from virt-manager.
The thing making trouble for me is not exactly the missing default. Having to manually set Sockets/Cores/Threads once when creating the VM is manageable.
The really irritating problem is that when you tick "Manually set topology", the "Current allocation" setting below "CPUs" defaults to the value of "Sockets" instead of "Sockets"*("Cores"+"Threads").
And the worst part is the following reproducible behaviour.
1. Start virt-manager.
2. Open/make a qemu-KVM VM that _does not_ have "Manually set topology" enabled and make sure that "Logical host CPUs"="Current allocation"="Maximum allocation".
3. Quit and restart virt-manager, open the same VM and verify that nothing changed: "Logical host CPUs"="Current allocation"="Maximum allocation".
4. Enable "Manually set topology" and raise "Cores" and "Threads" to "2"; observe that the (now greyed-out) "Maximum allocation" rises as you increase Threads and Cores, but "Current allocation" is unaffected.
5. Lower "Cores" to 1. Observe that "Current allocation" is now equal to "Maximum allocation", which is equal to "Sockets"*("Cores"+"Threads").
6. Apply the changes, quit and restart virt-manager, open the same VM, and verify the settings. Result: "Current allocation" is now equal to "Cores", not what we set it to.
To be able to boot a VM with this setup from virt-manager, I have to go to the CPU settings, adjust the "Current allocation" setting, and _very quickly_ press "power on"; if there is too long a delay between the adjustment and the power-up, the effective setting when booting the VM will be "Current allocation"="Cores" again.
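For readers less familiar with the XML behind these dialog fields, below is a small illustrative Python snippet (not taken from virt-manager) showing the libvirt domain elements involved, assuming the usual mapping: "Maximum allocation" is the `<vcpu>` element text, "Current allocation" is its `current` attribute, and the topology spinners set the `<cpu><topology/>` attributes.

```python
import xml.etree.ElementTree as ET

# Hypothetical domain XML fragment with the elements the dialog edits.
snippet = """
<domain type='kvm'>
  <vcpu placement='static' current='2'>4</vcpu>
  <cpu>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
</domain>
"""

dom = ET.fromstring(snippet)
vcpu = dom.find('vcpu')
topo = dom.find('./cpu/topology')

maximum = int(vcpu.text)                       # "Maximum allocation"
current = int(vcpu.get('current', maximum))    # "Current allocation"
sockets, cores, threads = (int(topo.get(k)) for k in ('sockets', 'cores', 'threads'))

# The behaviour reported above amounts to `current` being reset to a single
# topology field (sockets or cores) instead of the value the user chose.
assert maximum == sockets * cores * threads
assert current <= maximum
```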
Thanks for the report, hagbardcelin. Can you file a separate bug for that? Use this link: https://bugzilla.redhat.com/enter_bug.cgi?product=Virtualization%20Tools&component=virt-manager

Never mind on the separate bug, I fixed it upstream now:
commit 0f940a443266af7a199d0e5a959f898da58430c3
Author: Cole Robinson <crobinso>
Date: Thu Sep 24 09:31:04 2015 -0400
details: Fix topology init from overriding curvcpus value
Reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1095323#c5
FYI there's some more info in this RHEL bug: https://bugzilla.redhat.com/show_bug.cgi?id=969352

*** Bug 969352 has been marked as a duplicate of this bug. ***

I think rather than trying to arrange the topology to specifically match a VM, we should just default to mirroring the host topology inside the VM, for new VMs that is. Then out of the box this will more or less 'just work' for the most common cases here, I imagine.

It is not that simple when hyperthreads are involved. When the scheduler sees SMT it does special process placement optimization to take account of the fact that SMT siblings are not as powerful as real cores. If you expose SMT to the guest, and vCPUs are freely floating across host pCPUs, then the guest scheduler is going to be making very bad decisions, because what it sees as 2 SMT sibling vCPUs may in fact be placed on separate sockets or cores, or vice versa.

So: if vCPUs are freely floating you must *NEVER* expose SMT to the guest, only ever use cores/sockets. If vCPUs are strictly pinned you should always try to match host cores/sockets/threads.

Given Windows restrictions on using more than 1 socket, the sensible default behaviour is really to just expose everything as cores. Real physical sockets today have huge core counts (as many as 24/32), so guest OSes will be expecting to see large core counts. They rarely see large socket counts (at most 2 / 4 even on big x86 servers).

NB what I said about SMT is generally applicable, but there are some complications on ppc that I won't go into.

Okay, thanks for the info. I'll start a thread to discuss the implications before I change any default here.

Pinging on this. Users shouldn't have to manually configure the number of CPU sockets for Windows to use multiple CPU cores.

Upstream virt-manager is now using the github issue tracker for upstream bugs: https://github.com/virt-manager/virt-manager

I've filed this issue there: https://github.com/virt-manager/virt-manager/issues/155

Please follow along there, I'm closing this bugzilla bug. I am mopping up issues and planning to start a series of broader virt discussions about improving virt-manager/virt-install defaults; this will be one of the topics.
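As a closing illustration, here is a short, purely illustrative Python sketch (not the actual virt-manager/virt-install implementation) of the default policy argued for above: expose everything as cores when vCPUs float freely, and only mirror the host topology when vCPUs are strictly pinned.

```python
def default_topology(vcpus, pinned=False, host_topology=None):
    """Pick a (sockets, cores, threads) tuple for a new VM.

    Unpinned vCPUs: 1 socket, N cores, 1 thread - never expose SMT to
    floating vCPUs, and avoid Windows per-socket limits.
    Strictly pinned vCPUs: mirror the host topology (only sensible when
    the guest vCPU count actually matches that layout).
    """
    if pinned and host_topology is not None:
        return host_topology
    return (1, vcpus, 1)

print(default_topology(8))                                        # (1, 8, 1)
print(default_topology(8, pinned=True, host_topology=(1, 4, 2)))  # (1, 4, 2)
```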