Bug 1598162 - [RFE] Add 'qemu64' as the CPU model if user doesn't supply a <cpu/> element
Summary: [RFE] Add 'qemu64' as the CPU model if user doesn't supply a <cpu/> element
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libvirt
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.2
Assignee: Jiri Denemark
QA Contact: jiyan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-04 13:46 UTC by Kashyap Chamarthy
Modified: 2020-05-05 09:44 UTC
CC List: 21 users

Fixed In Version: libvirt-5.10.0-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1598151
Environment:
Last Closed: 2020-05-05 09:43:16 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:
rule-engine: mirror+


Links
Red Hat Product Errata RHBA-2020:2017 (Last Updated: 2020-05-05 09:44:40 UTC)

Description Kashyap Chamarthy 2018-07-04 13:46:25 UTC
+++ This bug was initially created as a clone of Bug #1598151 +++

Problem 
-------

When a user doesn't provide any <cpu/> element in the guest
configuration, libvirt doesn't automatically add one.  So you have
no way of knowing _what_ CPU your libvirt-launched guest is using,
unless you look at the QEMU source to figure out the default CPU model
(which is 'qemu64' for x86 hosts).

Users should be reminded of the CPU model they are using, especially
when they are using an inefficient and insecure model such as the
default 'qemu64'.
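
Today the most direct way to see which model the guest actually got is
to look inside the guest itself.  With the x86_64 default it shows up
like this, for example (the exact version string depends on the QEMU
build):

    [root@guest ~]# lscpu | grep 'Model name'
    Model name:          QEMU Virtual CPU version 2.5+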


Steps to reproduce
------------------

(1) Edit a libvirt guest configuration and remove any <cpu/> element from it.

(2) Boot the guest.


Actual result
-------------

The guest starts fine, but there is no <cpu/> element in the live libvirt
XML configuration -- which, in turn, means no '-cpu qemu64' on the QEMU
command line either.


Expected result
---------------

The guest should be assigned a <cpu/> element with the QEMU / libvirt
default CPU model 'qemu64'.
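
For example, the inactive XML could then end up carrying something along
these lines (a sketch; the exact attributes are up to libvirt):

    <cpu mode='custom' match='exact' check='none'>
        <model fallback='forbid'>qemu64</model>
    </cpu>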

Comment 4 Jiri Denemark 2018-07-04 15:00:49 UTC
Missing <cpu> element means the user doesn't care what CPU model they get and
I don't believe printing the model in the live XML would make them start to
care about it.

Moreover, specifying a CPU model in the domain XML causes libvirt to check
whether such CPU model can be provided on the current host. So for example on
any Intel CPU a domain with no <cpu> element starts just fine, but a domain
with

    <cpu mode='custom' match='exact'>
        <model>qemu64</model>
    </cpu>

would fail to start because of missing features.
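
(One way to see this on a given host, assuming a libvirt new enough to
have the command, is to compare such a CPU definition against the CPU
the hypervisor can provide:

    # cat qemu64.xml
    <cpu mode='custom' match='exact'>
        <model>qemu64</model>
    </cpu>
    # virsh hypervisor-cpu-compare qemu64.xml

On an Intel host this should report the qemu64 CPU as incompatible
because of missing features such as 'svm'.)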

Because libvirt has no way of knowing what CPU model QEMU used when starting
the domain we'd need to set the CPU explicitly before starting the domain and
disable the host CPU compatibility checks libvirt normally does:

    <cpu mode='custom' check='none'>
        <model>qemu64</model>
    </cpu>

Once the domain starts, we'd normally update the CPU with the features QEMU
was not able to enable (such as 'svm' on Intel CPUs).
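
On an Intel host the live XML would then end up looking roughly like
this (a sketch based on the qemu64 model; the exact feature list depends
on the host CPU):

    <cpu mode='custom' match='exact' check='full'>
        <model fallback='forbid'>qemu64</model>
        <feature policy='require' name='x2apic'/>
        <feature policy='require' name='hypervisor'/>
        <feature policy='require' name='lahf_lm'/>
        <feature policy='disable' name='svm'/>
    </cpu>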

Of course, this would be incompatible with any QEMU (either past or future)
which decides to use a different default CPU model.

That said, I'm not quite convinced this is worth the effort, but it should be
doable.

Comment 5 Daniel Berrangé 2018-07-04 15:15:40 UTC
(In reply to Jiri Denemark from comment #4)
> Missing <cpu> element means the user doesn't care what CPU model they get and
> I don't believe printing the model in the live XML would make them start to
> care about it.
> 
> Moreover, specifying a CPU model in the domain XML causes libvirt to check
> whether such CPU model can be provided on the current host. So for example on
> any Intel CPU a domain with no <cpu> element starts just fine, but a domain
> with
> 
>     <cpu mode='custom' match='exact'>
>         <model>qemu64</model>
>     </cpu>
> 
> would fail to start because of missing features.

Urgh, I forgot about this complication :-(

> Because libvirt has no way of knowing what CPU model QEMU used when starting
> the domain we'd need to set the CPU explicitly before starting the domain and
> disable the host CPU compatibility checks libvirt normally does:
> 
>     <cpu mode='custom' check='none'>
>         <model>qemu64</model>
>     </cpu>
> 
> Once the domain starts, we'd normally update the CPU with the features QEMU
> was not able to enable (such as 'svm' on Intel CPUs).
> 
> Of course, this would be incompatible with any QEMU (either past or future)
> which decides to use a different default CPU model.

If QEMU did ever change its default CPU, it would have to tie the change to
the machine type anyway, to avoid breaking guests created with older QEMU
versions.

I think it's sufficient if we're backwards compatible with existing QEMUs,
which AFAIK have always used 'qemu64', except for the RHEL-6 fork which
invented some custom CPUs. RHEL-6 isn't a supported platform, but I guess
we'd need to keep compatibility for incoming migration from RHEL-6 to 7.

> That said, I'm not quite convinced this is worth the effort, but it should be
> doable.

My thought was twofold:

 - People not providing <cpu> usually don't realize they are getting a
   terrible CPU model choice. This used to be just a performance issue, but
   now it's a security issue thanks to Spectre and friends. If we can do
   something to make this bad choice more obvious, I think it's useful.
   Exposing it in the XML and on the QEMU command line feels like an
   incremental step towards making it more obvious.

 - Get away from reliance on QEMU defaults, since we've tried to avoid such
   dependencies almost everywhere else. It would be nice to have an explicit,
   known CPU unconditionally present in the live XML, so we can always do
   correct validation across migration, for example.

Comment 14 Jiri Denemark 2019-10-03 14:11:45 UTC
Patches sent upstream for review: https://www.redhat.com/archives/libvir-list/2019-October/msg00140.html

Comment 15 Jiri Denemark 2019-11-14 16:00:50 UTC
Acked version 3 of the series: https://www.redhat.com/archives/libvir-list/2019-November/msg00070.html

Comment 16 Jiri Denemark 2019-11-25 14:43:56 UTC
Implemented upstream by

commit 5e939cea896fb3373a6f68f86e325c657429ed3d
Refs: v5.9.0-352-g5e939cea89
Author:     Jiri Denemark <jdenemar>
AuthorDate: Thu Sep 26 18:42:02 2019 +0200
Commit:     Jiri Denemark <jdenemar>
CommitDate: Wed Nov 20 17:22:07 2019 +0100

    qemu: Store default CPU in domain XML

    When starting a domain without a CPU model specified in the domain XML,
    QEMU will choose a default one. Which is fine unless the domain gets
    migrated to another host because libvirt doesn't perform any CPU ABI
    checks and the virtual CPU provided by QEMU on the destination host can
    differ from the one on the source host.

    With QEMU 4.2.0 we can probe for the default CPU model used by QEMU for
    a particular machine type and store it in the domain XML. This way the
    chosen CPU model is more visible to users and libvirt will make sure
    the guest will see the exact same CPU after migration.

    Architecture specific notes
    - aarch64: We only set the default CPU for TCG domains as KVM requires
      explicit "-cpu host" to work.

    - ppc64: The default CPU for KVM is "host" thanks to some hacks in QEMU,
      we will translate the default model to the model corresponding to the
      host CPU ("POWER8" on a Power8 host, "POWER9" on Power9 host, etc.).
      This is not a problem as the corresponding CPU model is in fact an
      alias for "host". This is probably not ideal, but it's not wrong and
      the default virtual CPU configured by libvirt is the same QEMU would
      use. TCG uses various CPU models depending on machine type and its
      version.

    - s390x: The default CPU for KVM is "host" while TCG defaults to "qemu".

    - x86_64: The default CPU model (qemu64) is not runnable on any host
      with KVM, but QEMU just disables unavailable features and starts
      happily.

    https://bugzilla.redhat.com/show_bug.cgi?id=1598151
    https://bugzilla.redhat.com/show_bug.cgi?id=1598162

    Signed-off-by: Jiri Denemark <jdenemar>
    Reviewed-by: Ján Tomko <jtomko>
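
For reference, the default model QEMU reports for each machine type can be
probed over QMP (assuming QEMU >= 4.2, which added the 'default-cpu-type'
field to query-machines), e.g. on a running domain:

    # virsh qemu-monitor-command <domain> --pretty '{"execute": "query-machines"}'

Each machine entry in the reply should then contain a "default-cpu-type"
field, which for the x86 machine types names the qemu64 CPU type.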

Comment 18 jiyan 2019-12-11 03:36:32 UTC
Verified this bug on libvirt-5.10.0-1.module+el8.2.0+5135+ed3b2489.x86_64.

Version:
libvirt-5.10.0-1.module+el8.2.0+5135+ed3b2489.x86_64
qemu-kvm-4.2.0-2.module+el8.2.0+5135+ed3b2489.x86_64
kernel-4.18.0-160.el8.x86_64

Steps:
1. Prepare a shut-off VM and edit it, deleting the entire <cpu> configuration
# virsh domstate test820 
shut off

# virsh edit test820 
Domain test820 XML configuration edited.

2. Check the inactive CPU conf after step-1
# virsh dumpxml test820 --inactive |grep "<cpu" -A2
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='forbid'>qemu64</model>      <-- qemu64 was added automatically
  </cpu>

3. Start the VM, then check the active dumpxml, the QEMU command line and the guest OS CPU info
# virsh start test820 
Domain test820 started

# virsh dumpxml test820 |grep "<cpu" -A20
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>qemu64</model>
    <feature policy='require' name='x2apic'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='lahf_lm'/>
    <feature policy='disable' name='svm'/>
  </cpu>

# ps -ef |grep test
...-machine pc-q35-rhel8.1.0,accel=kvm,usb=off,vmport=off,dump-guest-core=off -cpu qemu64 -m 1024 

# virsh console test820 
Connected to domain test820
Escape character is ^]

Red Hat Enterprise Linux 8.1 (Ootpa)
Kernel 4.18.0-147.el8.x86_64 on an x86_64

localhost login: root
Password: 
Last login: Tue Dec 10 10:24:04 on ttyS0
[root@localhost ~]# lscpu 
...
Vendor ID:           AuthenticAMD
...
Model name:          QEMU Virtual CPU version 2.5+
...
Flags:               fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx lm nopl cpuid tsc_known_freq pni cx16 x2apic hypervisor lahf_lm 3dnowprefetch vmmcall

4. Hot-plug/unplug vcpus
# virsh vcpucount test820 
maximum      config         8
maximum      live           8
current      config         4
current      live           4

# virsh setvcpus test820 6

# virsh setvcpu test820 4 --disable 

# virsh vcpupin test820 2 7,25

# virsh vcpupin test820 4 7,25

# virsh vcpucount test820 
maximum      config         8
maximum      live           8
current      config         4
current      live           5

# virsh vcpupin test820 
 VCPU   CPU Affinity
----------------------
 0      0-31
 1      0-31
 2      7,25
 3      0-31
 4      7,25
 5      0-31
 6      0-31
 7      0-31

All test results are as expected; moving this bug to VERIFIED.

Comment 20 errata-xmlrpc 2020-05-05 09:43:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017

