Bug 1598162
| Summary: | [RFE] Add 'qemu64' as the CPU model if user doesn't supply a <cpu/> element | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | Kashyap Chamarthy <kchamart> |
| Component: | libvirt | Assignee: | Jiri Denemark <jdenemar> |
| Status: | CLOSED ERRATA | QA Contact: | jiyan <jiyan> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.1 | CC: | agedosier, berrange, chhu, clalancette, dyuan, ehabkost, extras-qa, itamar, jdenemar, jforbes, jinzhao, jsuchane, knoel, laine, lhuang, libvirt-maint, mtessun, veillard, xuzhang, yalzhang, yuhuang |
| Target Milestone: | rc | Keywords: | FutureFeature |
| Target Release: | 8.2 | Flags: | rule-engine: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-5.10.0-1.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1598151 | Environment: | |
| Last Closed: | 2020-05-05 09:43:16 UTC | Type: | Feature Request |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Kashyap Chamarthy
2018-07-04 13:46:25 UTC
A missing <cpu> element means the user doesn't care what CPU model they get,
and I don't believe printing the model in the live XML would make them start
to care about it.
Moreover, specifying a CPU model in the domain XML causes libvirt to check
whether such a CPU model can be provided on the current host. So, for example,
on any Intel CPU a domain with no <cpu> element starts just fine, but a domain
with

    <cpu mode='custom' match='exact'>
      <model>qemu64</model>
    </cpu>

would fail to start because of missing features.
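For illustration, this host-compatibility check can also be exercised up front with virsh cpu-compare. A minimal sketch, assuming an Intel host and a hypothetical file name qemu64.xml (exact output wording can vary between libvirt versions):

# cat qemu64.xml
<cpu mode='custom' match='exact'>
  <model>qemu64</model>
</cpu>
# virsh cpu-compare qemu64.xml
CPU described in qemu64.xml is incompatible with host CPU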
Because libvirt has no way of knowing what CPU model QEMU used when starting
the domain, we'd need to set the CPU explicitly before starting the domain and
disable the host CPU compatibility checks libvirt normally does:

    <cpu mode='custom' check='none'>
      <model>qemu64</model>
    </cpu>
Once the domain starts, we'd normally update the CPU with the features QEMU
was not able to enable (such as 'svm' on Intel CPUs).
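For illustration, the updated live CPU definition might then look roughly like this (a sketch based on the verification output later in this report; the exact feature list depends on the host):

<cpu mode='custom' match='exact' check='full'>
  <model fallback='forbid'>qemu64</model>
  <feature policy='require' name='x2apic'/>
  <feature policy='require' name='hypervisor'/>
  <feature policy='require' name='lahf_lm'/>
  <feature policy='disable' name='svm'/>
</cpu>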
Of course, this would be incompatible with any QEMU (either past or future)
which decides to use a different default CPU model.
That said, I'm not quite convinced this is worth the effort, but it should be
doable.
(In reply to Jiri Denemark from comment #4)

> A missing <cpu> element means the user doesn't care what CPU model they get,
> and I don't believe printing the model in the live XML would make them start
> to care about it.
>
> Moreover, specifying a CPU model in the domain XML causes libvirt to check
> whether such a CPU model can be provided on the current host. So, for example,
> on any Intel CPU a domain with no <cpu> element starts just fine, but a domain
> with
>
>     <cpu mode='custom' match='exact'>
>       <model>qemu64</model>
>     </cpu>
>
> would fail to start because of missing features.

Urgh, I forgot about this complication :-(

> Because libvirt has no way of knowing what CPU model QEMU used when starting
> the domain, we'd need to set the CPU explicitly before starting the domain and
> disable the host CPU compatibility checks libvirt normally does:
>
>     <cpu mode='custom' check='none'>
>       <model>qemu64</model>
>     </cpu>
>
> Once the domain starts, we'd normally update the CPU with the features QEMU
> was not able to enable (such as 'svm' on Intel CPUs).
>
> Of course, this would be incompatible with any QEMU (either past or future)
> which decides to use a different default CPU model.

If QEMU did ever change its default CPU, it would have to tie it to the machine type anyway, to avoid creating breakage for previously used QEMUs. I think it's sufficient if we're backward compatible with existing QEMUs, which AFAIK have always used 'qemu64', except for the RHEL-6 fork, which invented some custom CPUs. RHEL-6 isn't a supported platform, but I guess we'd need to keep compatibility for incoming migration from RHEL-6 to 7.

> That said, I'm not quite convinced this is worth the effort, but it should be
> doable.

My thinking was twofold:

- People not providing <cpu> usually don't realize they are getting a terrible CPU model choice. This used to be just a performance issue, but now it's a security issue thanks to Spectre and friends. If we can do something to make this bad choice more obvious, I think it's useful. Exposing it in the XML and on the QEMU command line feels like an incremental step towards making it more obvious.

- Get away from reliance on QEMU defaults, since we've tried to avoid such dependencies almost everywhere else. It would be nice to have a more explicit, known CPU unconditionally present in the live XML, so we can always do correct validation across migration, for example.

Patches sent upstream for review:
https://www.redhat.com/archives/libvir-list/2019-October/msg00140.html

ACKed version 3 of the series:
https://www.redhat.com/archives/libvir-list/2019-November/msg00070.html

Implemented upstream by
commit 5e939cea896fb3373a6f68f86e325c657429ed3d
Refs: v5.9.0-352-g5e939cea89
Author: Jiri Denemark <jdenemar>
AuthorDate: Thu Sep 26 18:42:02 2019 +0200
Commit: Jiri Denemark <jdenemar>
CommitDate: Wed Nov 20 17:22:07 2019 +0100
qemu: Store default CPU in domain XML
When starting a domain without a CPU model specified in the domain XML,
QEMU will choose a default one. Which is fine unless the domain gets
migrated to another host because libvirt doesn't perform any CPU ABI
checks and the virtual CPU provided by QEMU on the destination host can
differ from the one on the source host.
With QEMU 4.2.0 we can probe for the default CPU model used by QEMU for
a particular machine type and store it in the domain XML. This way the
chosen CPU model is more visible to users and libvirt will make sure
the guest will see the exact same CPU after migration.
Architecture specific notes
- aarch64: We only set the default CPU for TCG domains as KVM requires
explicit "-cpu host" to work.
- ppc64: The default CPU for KVM is "host" thanks to some hacks in QEMU,
we will translate the default model to the model corresponding to the
host CPU ("POWER8" on a Power8 host, "POWER9" on Power9 host, etc.).
This is not a problem as the corresponding CPU model is in fact an
alias for "host". This is probably not ideal, but it's not wrong and
the default virtual CPU configured by libvirt is the same QEMU would
use. TCG uses various CPU models depending on machine type and its
version.
- s390x: The default CPU for KVM is "host" while TCG defaults to "qemu".
- x86_64: The default CPU model (qemu64) is not runnable on any host
with KVM, but QEMU just disables unavailable features and starts
happily.
https://bugzilla.redhat.com/show_bug.cgi?id=1598151
https://bugzilla.redhat.com/show_bug.cgi?id=1598162
Signed-off-by: Jiri Denemark <jdenemar>
Reviewed-by: Ján Tomko <jtomko>
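The probing mentioned in the commit message relies on QEMU 4.2.0 reporting a default CPU type per machine type in its QMP query-machines reply. A minimal sketch of checking this by hand through virsh qemu-monitor-command, using the test820 domain from the verification below (output abbreviated and illustrative; the exact fields depend on the QEMU build):

# virsh qemu-monitor-command test820 --pretty '{"execute": "query-machines"}'
{
  "return": [
    {
      "name": "pc-q35-rhel8.1.0",
      "default-cpu-type": "qemu64-x86_64-cpu",
      ...
    },
    ...
  ]
}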
Verified this bug on libvirt-5.10.0-1.module+el8.2.0+5135+ed3b2489.x86_64.
Version:
libvirt-5.10.0-1.module+el8.2.0+5135+ed3b2489.x86_64
qemu-kvm-4.2.0-2.module+el8.2.0+5135+ed3b2489.x86_64
kernel-4.18.0-160.el8.x86_64
Steps:
1. Prepare a shut-off VM and edit it, deleting the entire <cpu> configuration
# virsh domstate test820
shut off
# virsh edit test820
Domain test820 XML configuration edited.
2. Check the inactive CPU configuration after step 1
# virsh dumpxml test820 --inactive |grep "<cpu" -A2
<cpu mode='custom' match='exact' check='none'>
<model fallback='forbid'>qemu64</model>    <!-- qemu64 was added here -->
</cpu>
3. Start the VM, then check the active dumpxml, the QEMU command line, and the guest OS CPU info
# virsh start test820
Domain test820 started
# virsh dumpxml test820 |grep "<cpu" -A20
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>qemu64</model>
<feature policy='require' name='x2apic'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='lahf_lm'/>
<feature policy='disable' name='svm'/>
</cpu>
# ps -ef |grep test
...-machine pc-q35-rhel8.1.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu qemu64 -m 1024
# virsh console test820
Connected to domain test820
Escape character is ^]
Red Hat Enterprise Linux 8.1 (Ootpa)
Kernel 4.18.0-147.el8.x86_64 on an x86_64
localhost login: root
Password:
Last login: Tue Dec 10 10:24:04 on ttyS0
[root@localhost ~]# lscpu
...
Vendor ID: AuthenticAMD
...
Model name: QEMU Virtual CPU version 2.5+
...
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm nopl cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm 3dnowprefetch vmmcall
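As a side note on the disabled 'svm' feature above: the commit message points out that qemu64 is not runnable as-is on KVM hosts, which can also be seen in the domain capabilities. A minimal sketch (output abbreviated; the exact model list and attributes depend on the host and the libvirt/QEMU build):

# virsh domcapabilities --virttype kvm | grep qemu64
      <model usable='no'>qemu64</model>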
4. Hot-plug/unplug vCPUs
# virsh vcpucount test820
maximum config 8
maximum live 8
current config 4
current live 4
# virsh setvcpus test820 6
# virsh setvcpu test820 4 --disable
# virsh vcpupin test820 2 7,25
# virsh vcpupin test820 4 7,25
# virsh vcpucount test820
maximum config 8
maximum live 8
current config 4
current live 5
# virsh vcpupin test820
VCPU CPU Affinity
----------------------
0 0-31
1 0-31
2 7,25
3 0-31
4 7,25
5 0-31
6 0-31
7 0-31
All the test results are as expected; moving this bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2020:2017