Bug 1689362 - ovirt does not respect domcapabilities
Summary: ovirt does not respect domcapabilities
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Backend.Core
Version: 4.3.1
Hardware: x86_64
OS: Linux
unspecified
high with 3 votes
Target Milestone: ovirt-4.4.4
Target Release: 4.4.4
Assignee: Milan Zamazal
QA Contact: Beni Pelled
URL:
Whiteboard:
Duplicates: 1689361 (view as bug list)
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2019-03-15 17:40 UTC by Hetz Ben Hamo
Modified: 2020-12-21 12:36 UTC (History)
9 users (show)

Fixed In Version: ovirt-engine-4.4.4
Clone Of:
Environment:
Last Closed: 2020-12-21 12:36:15 UTC
oVirt Team: Virt
Embargoed:
pm-rhel: ovirt-4.4+
pm-rhel: planning_ack+
pm-rhel: devel_ack+
pm-rhel: testing_ack+


Attachments (Terms of Use)
engine log (201.38 KB, application/gzip)
2019-03-16 01:26 UTC, Hetz Ben Hamo
no flags Details
VDSM log (42.38 KB, application/gzip)
2019-03-16 01:26 UTC, Hetz Ben Hamo
no flags Details
Dump XML as requested (7.70 KB, application/zip)
2019-03-16 11:13 UTC, Hetz Ben Hamo
no flags Details
VDSM logs after running hosted-engine deployment (501.25 KB, application/zip)
2019-03-16 13:13 UTC, Hetz Ben Hamo
no flags Details
engine.log (16.74 KB, text/plain)
2020-07-08 17:03 UTC, Beni Pelled
no flags Details


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 98728 0 'None' ABANDONED nestedvt: enable the 'monitor' flag for AMD CPUs 2021-02-08 15:37:32 UTC
oVirt gerrit 111822 0 master MERGED core: Disable monitor CPU flag in nested AMD VMs 2021-02-08 15:37:32 UTC

Description Hetz Ben Hamo 2019-03-15 17:40:54 UTC
(I'm not sure if the component or the team is correct. If not, please redirect it).

I'm trying to run oVirt in nested virtualization with AMD's various Zen/Zen+ based CPUs (Ryzen, Threadripper, EPYC).

In Nested Virtualization mode, when trying to create or launch a VM, it stops and complains that the "monitor" flag is missing.

Checking libvirt domcapabilities shows that indeed the monitor policy is "disabled" which is correct (checking against other virtualization solutions), but oVirt doesn't respect the domcapabilities.

Could someone please disable the monitor flag check? It cannot be enabled, and it's not a bug in the CPU or in KVM.

Comment 1 Ryan Barry 2019-03-16 00:16:15 UTC
*** Bug 1689361 has been marked as a duplicate of this bug. ***

Comment 2 Ryan Barry 2019-03-16 00:25:43 UTC
Please attach logs (engine.log, libvirt logs, qemu logs). We don't directly check for flags outside of setting a CPU model. Is this coming from qemu?

Is vdsm-hook-nestedvt in use?

Comment 3 Hetz Ben Hamo 2019-03-16 01:26:25 UTC
Created attachment 1544682 [details]
engine log

Comment 4 Hetz Ben Hamo 2019-03-16 01:26:49 UTC
Created attachment 1544683 [details]
VDSM log

Comment 5 Hetz Ben Hamo 2019-03-16 01:34:03 UTC
I don't see any libvirt or qemu logs. Where are they? I'm enclosing both the vdsm and engine logs, which show the error.

Exact message is: 6464: error : virCPUx86Compare:1731 : the CPU is incompatible with host CPU: Host CPU does not provide required features: monitor

Output of virsh domcapabilities:

# virsh domcapabilities | grep mon
<feature policy='disable' name='monitor'/>
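The same check can be scripted. A minimal sketch, run here against a sample domcapabilities fragment rather than a live host (on a host you would feed it the output of `virsh domcapabilities`):

```python
# Report the policy libvirt assigns to the 'monitor' CPU feature in the
# host-model mode of a domcapabilities document. SAMPLE is a trimmed-down
# illustration, not real host output.
import xml.etree.ElementTree as ET

SAMPLE = """\
<domainCapabilities>
  <cpu>
    <mode name='host-model' supported='yes'>
      <feature policy='disable' name='monitor'/>
    </mode>
  </cpu>
</domainCapabilities>
"""

def monitor_policy(domcaps_xml):
    """Return the policy of the 'monitor' feature under host-model, or None."""
    root = ET.fromstring(domcaps_xml)
    for feat in root.iterfind(".//mode[@name='host-model']/feature"):
        if feat.get('name') == 'monitor':
            return feat.get('policy')
    return None  # feature not mentioned at all

print(monitor_policy(SAMPLE))  # disable
```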

Comment 6 Hetz Ben Hamo 2019-03-16 01:44:39 UTC
Forgot to mention: yes, I installed vdsm-hook-nestedvt and checked that it appears in the host hooks (it does).

Comment 7 Ryan Barry 2019-03-16 01:45:33 UTC
So, that message comes directly from libvirt.

Libvirt and qemu logs will be on the host the VM was scheduled on before it failed (likely to be the same host as the vdsm logs). Both are under /var/log/

Is vdsm-hook-nestedvt installed?

Comment 8 Hetz Ben Hamo 2019-03-16 01:54:03 UTC
Yes, vdsm-hook-nestedvt is installed and running.

/var/log/libvirt/qemu doesn't help much - it has the VM log which only shows:

cat test-client.log 
2019-03-16 00:59:38.003+0000: shutting down, reason=failed
2019-03-16 01:15:28.349+0000: shutting down, reason=failed
2019-03-16 01:28:57.159+0000: shutting down, reason=failed
2019-03-16 01:29:04.283+0000: shutting down, reason=failed
2019-03-16 01:29:51.729+0000: shutting down, reason=failed
2019-03-16 01:31:44.493+0000: shutting down, reason=failed

/var/log/qemu-ga is an empty directory.

tailing the journald when starting a VM shows:

Mar 16 03:52:44 localhost.localdomain vdsm[10633]: WARN Attempting to add an existing net user: ovirtmgmt/a3b4d8de-f2d3-4272-843c-fba78751f481
Mar 16 03:52:45 localhost.localdomain libvirtd[6416]: 2019-03-16 01:52:45.401+0000: 6466: error : virCPUx86Compare:1731 : the CPU is incompatible with host CPU: Host CPU does not provide required features: monitor
Mar 16 03:52:45 localhost.localdomain vdsm[10633]: WARN File: /var/lib/libvirt/qemu/channels/a3b4d8de-f2d3-4272-843c-fba78751f481.ovirt-guest-agent.0 already removed
Mar 16 03:52:45 localhost.localdomain vdsm[10633]: WARN Attempting to remove a non existing network: ovirtmgmt/a3b4d8de-f2d3-4272-843c-fba78751f481
Mar 16 03:52:45 localhost.localdomain vdsm[10633]: WARN Attempting to remove a non existing net user: ovirtmgmt/a3b4d8de-f2d3-4272-843c-fba78751f481
Mar 16 03:52:45 localhost.localdomain vdsm[10633]: WARN File: /var/lib/libvirt/qemu/channels/a3b4d8de-f2d3-4272-843c-fba78751f481.org.qemu.guest_agent.0 already removed

Comment 9 Michal Skrivanek 2019-03-16 05:27:53 UTC
Please attach /proc/cpuinfo from L0 host, and domcapabilities output. Then the same from your nested L1 host, plus its domain xml from libvirt. If you manage to start a nested guest manually, can you please also get qemu cmdline and cpuinfo from the L2 guest?

Comment 10 Hetz Ben Hamo 2019-03-16 11:12:12 UTC
As requested, I'm including a dump of the L0 and L1 cpuinfo and domcapabilities.
I also include the ovirt-node1 dumpxml as well as the centos7 dumpxml.

I found something very interesting:

On the host (Fedora 29 with a Ryzen 7) I created a CentOS 7 nested guest and installed a CentOS 7 guest inside it (so: Fedora host -> CentOS nested -> CentOS guest without nesting) - this works perfectly.

However, when I launched oVirt Node (latest, 4.3.1) as a guest with nested virtualization and tried to launch a VM using virsh (a CentOS 7 guest, no nesting), it stopped with the CPU error about the monitor flag.

So, it seems that the problem is related to the Node-NG appliance which I installed as ovirt-node-1. On a standard CentOS guest with nesting, everything works, no errors...

So, how can I find what causes it in the Node-NG?

Comment 11 Hetz Ben Hamo 2019-03-16 11:13:02 UTC
Created attachment 1544756 [details]
Dump XML as requested

Comment 12 Hetz Ben Hamo 2019-03-16 11:20:40 UTC
Just to make myself clear - all VMs were created on the host (Fedora 29) using virt-manager.

Comment 13 Hetz Ben Hamo 2019-03-16 13:12:38 UTC
After researching further, I found the following issue:

I installed CentOS as an L1 guest with nested virtualization, added the oVirt repo, and started the hosted-engine deployment.

It creates the HE VM and launches it, and it works well (I can access it on port 6900).

However, when it comes to the storage part, after giving it the NFS share and continuing the deployment, it creates the new HE VM on the NFS share and moves the data; then, when it tries to launch the new VM, the VM goes up and down.

So while it was cycling up and down, I manually mounted my virtual machines and tried to launch a VM using virsh (virsh create)

And .. surprise surprise: 

# virsh create nfs-server.xml
Please enter your authentication name: hetz
Please enter your password: 
error: Failed to create domain from nfs-server.xml
error: the CPU is incompatible with host CPU: Host CPU does not provide required features: monitor

Prior to deploying the HE on this VM, KVM inside the guest OS worked perfectly well with virsh. After the failed deployment, I got the above.

I'm enclosing all the VDSM logs, as the Ansible logs don't show anything relevant.

Comment 14 Hetz Ben Hamo 2019-03-16 13:13:10 UTC
Created attachment 1544820 [details]
VDSM logs after running hosted-engine deployment

Comment 15 Hetz Ben Hamo 2019-03-16 13:19:38 UTC
Update #3: When running the HE as a standalone VM (not deployed using hosted-engine --deploy) and adding a nested VM as a "node", the same issue appears on this new "node".

Hope this helps...

Comment 16 Ryan Barry 2019-03-17 17:54:05 UTC
Thanks, Hetz. I'll look at the logs tomorrow.

vdsm does try to do CPU detection and set a host model appropriately (including HE setups -- you would have been prompted for this as part of the deployment), but we may be missing something here...

Comment 17 Ryan Barry 2019-03-21 00:08:59 UTC
Confirmed, and I know for sure that this doesn't happen with nested Intel CPUs, since I use them regularly.

Comment 18 Juan Orti 2019-12-12 14:08:46 UTC
Hello, any progress on this bug? I'm experiencing the same problem deploying a nested HostedEngine. If you need more logs or tests, just tell me.

Comment 19 Mitch 2020-06-08 14:04:14 UTC
(In reply to Hetz Ben Hamo from comment #0)
> (I'm not sure if the component or the team is correct. If not, please
> redirect it).
> 
> I'm trying to run oVirt in nested virtualization with AMD's various Zen/Zen+
> based CPU's (Ryzen, Threadripper,EPYC).
> 
> In Nested Virtualization mode, when trying to create or launch a VM, it
> stops and complains that the "monitor" flag is missing.
> 
> Checking libvirt domcapabilities shows that indeed the monitor policy is
> "disabled" which is correct (checking against other virtualization
> solutions), but oVirt doesn't respect the domcapabilities.
> 
> Could someone please disable the monitor flag check? it cannot be enabled
> and it's not a bug in the CPU or KVM.

Disabling the monitor flag in the CPU capabilities works just fine on my Ryzen 3900X. I have spun up several nested VMs perfectly.

Just FYI, this is the file I edited: /usr/share/libvirt/cpu_map/x86_EPYC.xml. I removed the "monitor" feature line and restarted libvirtd on the KVM hosts (systemctl restart libvirtd). If the CPU type for the cluster is set to EPYC, then everything works.

Let me know if anyone needs any more info.
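The edit described above can be sketched as follows, shown here on a sample of the CPU-map XML rather than the real file. On a host the file would be /usr/share/libvirt/cpu_map/x86_EPYC.xml; it is owned by the libvirt package, so the change is lost on upgrade, and libvirtd must be restarted afterwards. This is an unsupported workaround, not a fix:

```python
# Drop the 'monitor' feature line from a libvirt CPU-map definition.
# Demonstrated on a sample string; applying it to the real file under
# /usr/share/libvirt/cpu_map/ is the (unsupported) workaround from the
# comment above and is reverted whenever the libvirt package is updated.

SAMPLE_CPU_MAP = """\
<cpus>
  <model name='EPYC'>
    <feature name='monitor'/>
    <feature name='svm'/>
  </model>
</cpus>
"""

def strip_monitor(text):
    # Keep every line except the one declaring the 'monitor' feature.
    return "".join(line for line in text.splitlines(keepends=True)
                   if "name='monitor'" not in line)

print(strip_monitor(SAMPLE_CPU_MAP))
```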

Comment 20 Arik 2020-06-11 16:16:25 UTC
Milan, do you happen to know why the attached patch was abandoned?

Comment 21 Milan Zamazal 2020-06-11 18:42:15 UTC
No idea, I've never seen that patch.

Comment 22 Arik 2020-06-15 11:23:08 UTC
(In reply to Milan Zamazal from comment #21)
> No idea, I've never seen that patch.

Ack, thanks.

Worth checking on the latest version to see if it works now.

Comment 23 Sandro Bonazzola 2020-06-17 12:46:05 UTC
(In reply to Arik from comment #22)
> Ack, thanks.
> 
> Worth checking on the latest version to see if it works now.

Targeting 4.4.1 accordingly; not leaving a bug ON_QA without a target milestone.

Comment 24 Beni Pelled 2020-07-08 17:02:22 UTC
The problem still occurs.

Verification steps: 
1. Install ovirt-engine-4.4.1.7-0.3 (HE) on an AMD EPYC 7451
2. Create two RHEL 8.2 VMs in the environment created in step 1
3. Install ovirt-engine-4.4.1.7-0.3 on the 1st VM (from step 2) - the nested environment
4. Add the 2nd VM as a node to the nested environment
5. Create and start a VM in the nested environment


Result:
- Starting a VM in the nested environment fails with:

    2020-07-08 14:45:05,294+03 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-31) [1d1885cc] EVENT_ID: VM_DOWN_ERROR(119), VM vm1_test is down with error. Exit message: the CPU is incompatible with host CPU: Host CPU does not provide required features: monitor.


Additional information:
- The monitor flag doesn't appear in /proc/cpuinfo on the VMs created in step 2
- engine.log is attached.

Comment 25 Beni Pelled 2020-07-08 17:03:14 UTC
Created attachment 1700334 [details]
engine.log

Comment 26 Michal Skrivanek 2020-10-01 10:58:25 UTC
FYI, still occurs in 4.4.3/AV 8.3.1 as of yesterday.

Comment 27 Milan Zamazal 2020-10-13 19:18:17 UTC
virsh domcapabilities reports:

  <cpu>
    <mode name='host-passthrough' supported='yes'/>
    <mode name='host-model' supported='yes'>
      <model fallback='forbid'>EPYC-IBPB</model>
      <vendor>AMD</vendor>
      ...
      <feature policy='disable' name='monitor'/>
      ...
    </mode>
    <mode name='custom' supported='yes'>
      ...
      <model usable='yes'>Opteron_G3</model>
      ...
    </mode>
  </cpu>

We start the VM with:

  <cpu match="exact">
    <model>Opteron_G3</model>
    <topology cores="1" sockets="16" threads="1" />
    <numa>
      <cell cpus="0-15" id="0" memory="1048576" />
    </numa>
  </cpu>

And it fails with:

  libvirt.libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: monitor

When I explicitly disable the `monitor' feature by adding

  <feature name="monitor" policy="disable"/>

to <cpu> definition above, the VM starts.

I don't know how the features are supposed to work and whether we are responsible for adding the given <feature> manually or whether libvirt should do the right thing itself, i.e. whether it should be fixed in oVirt or in libvirt. If nobody here knows, I'll ask the libvirt devs.

Comment 28 Hetz Ben Hamo 2020-10-13 19:33:14 UTC
I already asked them..

As you can see in this BZ (https://bugzilla.redhat.com/show_bug.cgi?id=1798004#c16) the solution seems to be to simply remove the "monitor" flag from all AMD Ryzen based CPU's (Zen, ZEN+, ZEN-2, ZEN-3, Ryzen, Threadripper, EPYC) and I asked the libvirt guys few years about it and even emailed AMD engineers about it, but since I'm not from RH, it was ignored.

So, if you could please ask them (the libvirt guys) to completely remove this flag, then it will help, not only in oVirt, but also in Nested-VM in Virt-Manager, Boxes etc..

Comment 29 Milan Zamazal 2020-10-14 09:20:50 UTC
Thank you for the BZ pointer, I'll try to move things forward.

Comment 30 Jiri Denemark 2020-10-15 17:54:48 UTC
(In reply to Milan Zamazal from comment #27)
> When I explicitly disable `monitor' feature by adding
> 
>   <feature name="monitor" policy="disable"/>
> 
> to <cpu> definition above, the VM starts.

This is a good workaround until libvirt is fixed. In other words, libvirt is
supposed to do the right thing by itself with a host-model CPU, but
unfortunately the fix on the libvirt side is not as easy as it might appear, so
using this workaround may be a good idea. BTW, this workaround will be safe to
use even if libvirt is eventually fixed.

Comment 31 Hetz Ben Hamo 2020-10-15 18:02:49 UTC
Milan,

Care to share the entire <CPU> part so I can see where to add the line that you suggested please?

Thanks

Comment 32 Jiri Denemark 2020-10-15 18:07:42 UTC
(In reply to Hetz Ben Hamo from comment #28)
> As you can see in this BZ
> (https://bugzilla.redhat.com/show_bug.cgi?id=1798004#c16) the solution seems
> to be to simply remove the "monitor" flag from all AMD Ryzen based CPU's
> (Zen, ZEN+, ZEN-2, ZEN-3, Ryzen, Threadripper, EPYC)

Unfortunately the fix is not that simple. We cannot just remove the "monitor"
feature from all CPU models because it would break migration from a new libvirt
with such a fix to an older version of libvirt. And such migration is explicitly
required by oVirt to support upgrading individual hosts in a cluster.

BTW, the new EPYC-Rome was fixed before it was released, but the existing
EPYC, EPYC-IBPB, and Opteron_G3 cannot be simply fixed this way.

> and I asked the libvirt guys few years about it and even emailed AMD
> engineers about it, but since I'm not from RH, it was ignored.

That is certainly not how things work; I'm pretty sure you were not
ignored because you are not from Red Hat.

Comment 33 Milan Zamazal 2020-10-15 20:28:16 UTC
(In reply to Jiri Denemark from comment #30)
> (In reply to Milan Zamazal from comment #27)
> > When I explicitly disable `monitor' feature by adding
> > 
> >   <feature name="monitor" policy="disable"/>
> > 
> > to <cpu> definition above, the VM starts.
> 
> This is a good workaround until libvirt is fixed. In other words, libvirt is
> supposed to do the right thing by itself with a host-model CPU, but
> unfortunately the fix on the libvirt side is not as easy as it might appear, so
> using this workaround may be a good idea. BTW, this workaround will be safe
> to use even if libvirt is eventually fixed.

OK, thank you for the information, I think it would be good enough until libvirt is fixed.

Arik, do we want to add such a workaround? Maybe providing a hook for the purpose or just documenting how to create it?

Comment 34 Milan Zamazal 2020-10-15 20:31:51 UTC
(In reply to Hetz Ben Hamo from comment #31)
> Care to share the entire <CPU> part so I can see where to add the line that
> you suggested please?

No special trick, just:

  <cpu match="exact">
    <model>Opteron_G3</model>
    <topology cores="1" sockets="16" threads="1" />
    <numa>
      <cell cpus="0-15" id="0" memory="1048576" />
    </numa>
    <feature name="monitor" policy="disable"/>
  </cpu>

I think the <feature> element can be added using a before_vm_start VDSM hook.
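A minimal sketch of such a hook follows. The DOM transformation is shown standalone so it can be read on its own; the commented lines at the bottom indicate how VDSM's hooking module (read_domxml/write_domxml) would drive it on a real host. Treat this as an illustration, not the shipped fix:

```python
# Sketch of a before_vm_start VDSM hook that disables the 'monitor' CPU
# feature, as suggested above. Works on a minidom Document of the domain
# XML, which is what VDSM's hooking API hands to hook scripts.
import xml.dom.minidom


def disable_monitor(dom):
    """Add <feature name='monitor' policy='disable'/> to each <cpu>."""
    for cpu in dom.getElementsByTagName('cpu'):
        # Don't add a second element if 'monitor' is already mentioned.
        if any(f.getAttribute('name') == 'monitor'
               for f in cpu.getElementsByTagName('feature')):
            continue
        feature = dom.createElement('feature')
        feature.setAttribute('name', 'monitor')
        feature.setAttribute('policy', 'disable')
        cpu.appendChild(feature)
    return dom


# As an installed hook (e.g. under /usr/libexec/vdsm/hooks/before_vm_start/):
#   import hooking
#   domxml = hooking.read_domxml()   # minidom Document of the domain XML
#   hooking.write_domxml(disable_monitor(domxml))
```

The guard against an existing 'monitor' element keeps the hook idempotent, so it stays harmless if libvirt is eventually fixed or the Engine starts emitting the element itself.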

Comment 35 Arik 2020-10-19 09:55:24 UTC
(In reply to Milan Zamazal from comment #33)
> OK, thank you for the information, I think it would be good enough until
> libvirt is fixed.

+1

> 
> Arik, do we want to add such a workaround? Maybe providing a hook for the
> purpose or just documenting how to create it?

I think we should make it work out of the box rather than documenting it.
But I wonder if it wouldn't be simpler to change it on the engine side instead of introducing a hook -
do we need anything that is not available on the engine side?

Comment 36 Milan Zamazal 2020-10-19 10:11:30 UTC
(In reply to Arik from comment #35)

> I think we should make it work out of the box rather than documenting it

OK.

> But I wonder if it wouldn't be simpler to change it on the engine side
> instead of introducing a hook -
> do we need anything that is not available on the engine side?

My idea was, since it affects only nested virtualization in certain environments, to give users a chance to add the hook in case they experience the problem; the hook can be easily removed once libvirt is fixed. But it would indeed be simpler for users if it was handled on the Engine side transparently. The only question is whether we can define clearly under which circumstances to add the flag.

Comment 37 Arik 2020-10-19 11:04:07 UTC
Yeah, I don't see any indication in the engine of whether or not a host has nested virtualization enabled..
Would a fix on the VDSM side be just like https://gerrit.ovirt.org/#/c/98728 but to set the 'policy' to 'disable'?

Comment 38 Milan Zamazal 2020-10-19 11:10:28 UTC
(In reply to Arik from comment #37)
> Yeah, I don't see an indication in the engine of whether or not a host is
> set with nested-virtualization enabled..
> Would a fix on the VDSM side be just like https://gerrit.ovirt.org/#/c/98728
> but to set the 'policy' to 'disable'?

Maybe, if it doesn't break other AMD CPUs.

There is also the `cpuflags' hook that can be used to set that flag, but unlike the patch above or an Engine-side solution, it requires manual configuration.

Comment 39 Milan Zamazal 2020-10-19 21:16:04 UTC
Correction: the nestedvt hook is used on the bare-metal host, not the VM host. So disabling the feature there doesn't help.

Comment 40 Beni Pelled 2020-11-19 10:23:43 UTC
Verified with:
- ovirt-engine-4.4.4-0.1.el8ev.noarch
- vdsm-4.40.36-1.el8ev.x86_64
- libvirt-daemon-6.6.0-7.module+el8.3.0+8424+5ea525c5.x86_64

Verification steps: 
1. Install ovirt-engine-4.4.4-0.1 (HE) on an AMD EPYC 7451
2. Create two RHEL 8.3 VMs in the environment created in step 1
3. Install ovirt-engine-4.4.4-0.1 on the 1st VM (from step 2) - the nested environment
4. Add the 2nd VM as a host to the nested environment
5. Create and start a VM in the nested environment

Result:
- VM started successfully
- dumpxml contains the monitor fix:

      <cpu mode='custom' match='exact' check='full'>
        <model fallback='forbid'>EPYC</model>
        <topology sockets='16' dies='1' cores='1' threads='1'/>
        <feature policy='disable' name='monitor'/>
        <feature policy='require' name='x2apic'/>
        <feature policy='require' name='hypervisor'/>
        <feature policy='disable' name='svm'/>
        <feature policy='require' name='topoext'/>
        <numa>
          <cell id='0' cpus='0-15' memory='1048576' unit='KiB'/>
        </numa>
      </cpu>

Comment 41 Sandro Bonazzola 2020-12-21 12:36:15 UTC
This bugzilla is included in the oVirt 4.4.4 release, published on December 21st 2020.

Since the problem described in this bug report should be resolved in the oVirt 4.4.4 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

