Bug 2075486
Summary: | VM with Q35 UEFI and 64 CPUs is running but without boot screen, console and network. | | |
|---|---|---|---|
Product: | [oVirt] ovirt-engine | Reporter: | Nisim Simsolo <nsimsolo> |
Component: | BLL.Virt | Assignee: | Milan Zamazal <mzamazal> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Nisim Simsolo <nsimsolo> |
Severity: | high | Docs Contact: | |
Priority: | unspecified | | |
Version: | 4.5.0 | CC: | ahadas, bugs, lsvaty, michal.skrivanek, mzamazal, nsimsolo |
Target Milestone: | ovirt-4.5.0-1 | Flags: | pm-rhel: ovirt-4.5?; lsvaty: blocker+ |
Target Release: | 4.5.0.5 | | |
Hardware: | Unspecified | | |
OS: | Unspecified | | |
Whiteboard: | | | |
Fixed In Version: | ovirt-engine-4.5.0.5 | Doc Type: | No Doc Update |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2022-05-03 06:46:58 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | Virt | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
Nisim Simsolo 2022-04-14 11:07:46 UTC
Platform discussion happens in BZ 2074149. There is also a nice explanatory summary in https://bugzilla.redhat.com/show_bug.cgi?id=1469338#c30. What we need to do in oVirt is:

- Not set the maximum number of vCPUs unnecessarily high. Let's make it at most a configurable multiple of the initial number of vCPUs (somewhat similar to how we limit maximum memory).
- Add

      <features>
        <smm state='on'>
          <tseg unit='MiB'>48</tseg>
        </smm>
      </features>

  to the domain XML when the maximum number of vCPUs exceeds a certain limit (e.g. 255?) and UEFI is used (see also the `smm` documentation at https://libvirt.org/formatdomain.html#hypervisor-features).

We may hit a similar problem with large RAM. This is partly guesswork, but it should hopefully cover most of the typical cases.

*** Bug 2074149 has been marked as a duplicate of this bug. ***

It doesn't sound like too much of an overhead; why not just set it high enough to cover our maximums?

You mean TSEG? Let's not be nasty to small VMs. If I understand the libvirt documentation correctly, this value is taken from the RAM available to the guest, and other things already eat some of the guest memory here and there. There is no need to waste it unnecessarily on small VMs.

Verified:
- ovirt-engine-4.5.0.5-0.7.el8ev
- vdsm-4.50.0.13-1.el8ev.x86_64
- qemu-kvm-6.2.0-11.module+el8.6.0+14707+5aa4b42d.x86_64
- libvirt-daemon-8.0.0-5.module+el8.6.0+14480+c0a3aa0f.x86_64

Verification scenario:
1. Run the following VMs with 64 virtual CPUs (2 virtual sockets, 16 cores per virtual socket and 2 threads per core):
   - RHEL8 VM Q35/SecureBoot
   - RHEL8 VM Q35/UEFI
   - RHEL8 VM Q35/BIOS
   - RHEL9 VM Q35/SecureBoot
   - RHEL9 VM Q35/UEFI
   - RHEL9 VM Q35/BIOS
   - RHEL8 VM I440FX/BIOS
   - Windows VM Q35/SecureBoot
   - Windows VM Q35/UEFI
   - Windows VM Q35/BIOS
   - Windows VM I440FX/BIOS
2. For each running VM, verify that the console shows the boot screen and, after boot, the VM OS; that the mouse and keyboard function; and that inside the VM the correct CPU topology (64 CPUs, 2/16/2) is set.
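The two engine-side changes proposed in the description (cap the vCPU hotplug ceiling, and add the SMM/TSEG feature for large UEFI guests) can be sketched roughly as follows. This is a minimal illustration only, not the actual ovirt-engine code: the multiplier, the 255-vCPU threshold default, the absolute cap and the helper names are all assumptions for the sketch.

```python
import xml.etree.ElementTree as ET

# Assumed tunables for illustration; not actual ovirt-engine config keys.
MAX_VCPU_MULTIPLIER = 4   # cap maxvcpus at a multiple of the initial count
SMM_VCPU_THRESHOLD = 255  # add SMM/TSEG above this when UEFI firmware is used
TSEG_MIB = 48             # TSEG size suggested in the description

def capped_max_vcpus(initial_vcpus, absolute_max=710):
    """Cap the hotplug ceiling instead of always using the platform maximum."""
    return min(initial_vcpus * MAX_VCPU_MULTIPLIER, absolute_max)

def add_smm_tseg(domain_xml, max_vcpus, uefi):
    """Inject <features><smm state='on'><tseg unit='MiB'>...</tseg></smm></features>
    into the domain XML for big UEFI guests that need a larger SMRAM (TSEG) area."""
    root = ET.fromstring(domain_xml)
    if not (uefi and max_vcpus > SMM_VCPU_THRESHOLD):
        return ET.tostring(root, encoding='unicode')
    features = root.find('features')
    if features is None:
        features = ET.SubElement(root, 'features')
    smm = ET.SubElement(features, 'smm', {'state': 'on'})
    tseg = ET.SubElement(smm, 'tseg', {'unit': 'MiB'})
    tseg.text = str(TSEG_MIB)
    return ET.tostring(root, encoding='unicode')

# A VM started with 64 vCPUs would get a ceiling of 256 rather than the
# platform maximum, so it stays below the SMM/TSEG threshold entirely.
print(capped_max_vcpus(64))
print(add_smm_tseg("<domain/>", 288, uefi=True))
```

Note how the two measures interact: with the multiplier in place, the 64-CPU VM from this bug never crosses the threshold, and the extra TSEG (which, per the discussion above, is carved out of guest RAM) is only reserved for genuinely huge UEFI guests.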