Bug 1717616 - virt-install: enable HyperV Enlightenments by default
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: virt-manager
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Virtualization Maintenance
QA Contact: Virtualization Bugs
Depends On: 1717611
Blocks: 1624786
 
Reported: 2019-06-05 19:54 UTC by Eduardo Habkost
Modified: 2021-06-04 07:56 UTC
CC List: 7 users

Last Closed: 2021-02-01 07:41:18 UTC
Type: Bug



Description Eduardo Habkost 2019-06-05 19:54:41 UTC
Some Hyper-V enlightenments are useful for some guest OSes and are safe to enable even if not used by the guest OS.  virt-manager and virt-install can enable them by default when supported by the current host.

Comment 1 Eduardo Habkost 2019-06-05 20:01:58 UTC
Preliminary list of features we probably want to enable by default when supported by the host:

    <clock offset='localtime'>
       <timer name='hypervclock' present='yes'/>
    </clock>
    <hyperv>
       <relaxed state='on'/>
       <vapic state='on'/>
       <spinlocks state='on' retries='8191'/>
       <vpindex state='on'/>
       <runtime state='on' />
       <synic state='on'/>
       <stimer state='on'/>
       <tlbflush state='on'/>
       <frequencies state='on'/>

(I believe the above is already available on RHEL-8.0)

       <ipi state='on'/>
       <reenlightenment state='on'/>
       <evmcs state='on'/>

(I believe the above is available on RHEL-AV-8.0 or RHEL-AV-8.1.  I'm not sure)

     </hyperv>


We need a mechanism to ensure the features are supported by the current host.  The interface for querying supported enlightenments is being tracked at bug 1717611.
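
For illustration, here is a minimal sketch of what such a host-side check could look like with the libvirt Python bindings. It assumes a libvirt new enough to report Hyper-V enlightenments under <features><hyperv> in its domain capabilities XML; that reporting did not exist when this bug was filed, and the element layout here is an assumption.

    # Sketch only: intersect the Hyper-V features the host reports with
    # the set we'd like to enable by default. Assumes domcapabilities
    # exposes <features><hyperv supported='yes'> with <enum name='features'>.
    import xml.etree.ElementTree as ET
    import libvirt

    WANTED = {"relaxed", "vapic", "spinlocks", "vpindex", "runtime",
              "synic", "stimer", "tlbflush", "frequencies"}

    conn = libvirt.open("qemu:///system")
    caps = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    hv = ET.fromstring(caps).find("./features/hyperv")

    supported = set()
    if hv is not None and hv.get("supported") == "yes":
        for enum in hv.findall("enum"):
            if enum.get("name") == "features":
                supported.update(v.text for v in enum.findall("value"))

    print("would enable by default:", sorted(WANTED & supported))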

Comment 4 Cole Robinson 2020-09-22 18:54:06 UTC
There's some progress on the qemu API: https://bugzilla.redhat.com/show_bug.cgi?id=1851247

After that I guess this will be exposed to users via libvirt domaincapabilities. 

Once virt-install can use that API, the  thing to know after that is when to enable these features for guests.

* Is it safe to enable for all windows versions? If there are exceptions then we will want libosinfo+osinfo-db support too

* More generally are there any known tradeoffs/downsides to enabling any of these? Or are they considered unanimously good?

Comment 5 Eduardo Habkost 2020-09-22 19:01:17 UTC
(In reply to Cole Robinson from comment #4)
> There's some progress on the qemu API:
> https://bugzilla.redhat.com/show_bug.cgi?id=1851247
> 
> After that I guess this will be exposed to users via libvirt
> domaincapabilities. 
> 
> Once virt-install can use that API, the  thing to know after that is when to
> enable these features for guests.
> 
> * Is it safe to enable for all windows versions? If there are exceptions
> then we will want libosinfo+osinfo-db support too
> 
> * More generally are there any known tradeoffs/downsides to enabling any of
> these? Or are they considered unanimously good?

I don't know the answers, redirecting the questions to Vitaly.

Comment 6 Vitaly Kuznetsov 2020-09-24 13:50:30 UTC
(In reply to Cole Robinson from comment #4)
> There's some progress on the qemu API:
> https://bugzilla.redhat.com/show_bug.cgi?id=1851247
> 
> After that I guess this will be exposed to users via libvirt
> domaincapabilities. 
> 
> Once virt-install can use that API, the  thing to know after that is when to
> enable these features for guests.
> 
> * Is it safe to enable for all windows versions? If there are exceptions
> then we will want libosinfo+osinfo-db support too

We are not aware of any exceptions; enabling everything for all Windows
versions should be fine.

> 
> * More generally are there any known tradeoffs/downsides to enabling any of
> these? Or are they considered unanimously good?

There are trade-offs. In particular, enabling 'hv_synic' will disable vAPIC
support, and enabling 'hv_evmcs' will disable posted interrupts and shadow
VMCS. 'hv_synic' is a no-brainer for Windows because 'hv_stimer' requires it,
and without it Windows will cause very high CPU load on the host even
when idle. 'hv_evmcs' is a good thing for nesting (when the guest is actually
Hyper-V), but for 'regular' Windows it gives no benefits (but remember the
trade-offs!), so I'd recommend enabling it only when VMX/SVM are also exposed
(do we have a setting for it?).
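
To make the last suggestion concrete: libvirt's setting for exposing VMX to a guest is a <cpu> feature element (<feature policy='require' name='vmx'/>), so a tool could, hypothetically, gate hv-evmcs on its presence. A minimal sketch; the helper name and logic are illustrative, not virt-install code:

    # Illustrative gate: request hv-evmcs only when the guest CPU
    # definition also requires 'vmx'.
    import xml.etree.ElementTree as ET

    def guest_exposes_vmx(domain_xml):
        root = ET.fromstring(domain_xml)
        for feat in root.findall("./cpu/feature"):
            if feat.get("name") == "vmx" and feat.get("policy") == "require":
                return True
        return False

    # A caller would add <evmcs state='on'/> to <hyperv> only when this
    # returns True.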

Comment 7 Cole Robinson 2020-09-24 15:57:09 UTC
Thanks for the info. I don't understand the implications of the mentioned trade-offs, but if you think 'hv_synic' is a no-brainer regardless then that's good enough for me.

If hv_evmcs primarily benefits nested Windows virt, but in other use cases has downsides, then I think for virt-* we can skip enabling it by default. That seems an advanced enough use case that I'm fine requiring users to deliberately opt into it. virt-* tools are aiming for safe 'easy win' type defaults.

Comment 10 RHEL Program Management 2021-02-01 07:41:18 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

Comment 14 Cole Robinson 2021-05-26 13:59:57 UTC
(In reply to Eduardo Habkost from comment #1)
> Preliminary list of features we probably want to enable by default when
> supported by the host:
> 
>     <clock offset='localtime'>
>        <timer name='hypervclock' present='yes'/>
>     </clock>
>     <hyperv>
>        <relaxed state='on'/>
>        <vapic state='on'/>
>        <spinlocks state='on' retries='8191'/>
>        <vpindex state='on'/>
>        <runtime state='on' />
>        <synic state='on'/>
>        <stimer state='on'/>
>        <tlbflush state='on'/>
>        <frequencies state='on'/>
> 
> (I believe the above is already available on RHEL-8.0)
> 
>        <ipi state='on'/>
>        <reenlightenment state='on'/>
>        <evmcs state='on'/>
> 
> (I believe the above is available on RHEL-AV-8.0 or RHEL-AV-8.1.  I'm not
> sure)
> 
>      </hyperv>

I see the plan is for the `hyperv=on` type option, which sounds like the nice way for virt-install, when it lands. But before that I'm wondering if we can just kinda hack it in virt-install for a number of the features above.

Are all those features limited only by the mix of kernel+qemu+libvirt versions? In virt-install we can check libvirt + qemu versions but not kernel. But if the kernel support is 5+ years older than libvirt+qemu maybe we just assume the feature is safe to enable.

Is there some easy place to check or gitblame to determine what kernel version added support for what feature?

Comment 15 Vitaly Kuznetsov 2021-05-26 14:42:40 UTC
(In reply to Cole Robinson from comment #14)
> 
> I see the plan is for the `hyperv=on` type option, which sounds like the
> nice way for virt-install, when it lands. But before that I'm wondering if
> we can just kinda hack it in virt-install for a number of the features above.
> 
> Are all those features limited only by the mix of kernel+qemu+libvirt
> versions? 

'hv-evmcs' is kind of special as it can only be used on Intel (in conjunction
with 'vmx').

> In virt-install we can check libvirt + qemu versions but not
> kernel. But if the kernel support is 5+ years older than libvirt+qemu maybe
> we just assume the feature is safe to enable.
> 
> Is there some easy place to check or gitblame to determine what kernel
> version added support for what feature?

Required QEMU/libvirt versions are documented in libvirt:
https://libvirt.org/formatdomain.html (search for 'guests running Microsoft Windows')
but I don't think we have kernel versions documented anywhere...

I'd suggest include/uapi/linux/kvm.h in the Linux kernel as a suitable
git-blame victim. Prior to KVM_CAP_HYPERV_CPUID (v5.0) we were adding
a new capability for each new Hyper-V feature (though not all). In particular,

KVM_CAP_HYPERV_SYNIC2 (hv-synic) v4.13
KVM_CAP_HYPERV_VP_INDEX (hv-vpindex) v4.13
KVM_CAP_HYPERV_TLBFLUSH (hv-tlbflush) v4.18
KVM_CAP_HYPERV_SEND_IPI (hv-ipi) v4.20
KVM_CAP_HYPERV_ENLIGHTENED_VMCS (hv-evmcs) v4.20

'hv-stimer' appeared in v4.10
'hv-frequencies' in v4.14
'hv-reenlightenment' in v4.17
'hv-stimer-direct' in v5.0

The rest is older and can probably always be enabled. We may not want to have
'hv-crash' in the default set, as the crash information is not reported
anywhere and users don't see BSODs when they happen.
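
For illustration, here is the kind of version heuristic a tool could apply if it could inspect the running kernel, using the mapping above. virt-install does not actually do this, and a pure version check can be wrong on distro kernels that backport features; the sketch is a toy:

    # Hypothetical heuristic: compare the running kernel against the
    # release that introduced each enlightenment's KVM capability,
    # per the list above. Distro kernels backport features, so this
    # check can report false negatives (e.g. on RHEL kernels).
    import platform

    MIN_KERNEL = {
        "hv-stimer": (4, 10), "hv-synic": (4, 13), "hv-vpindex": (4, 13),
        "hv-frequencies": (4, 14), "hv-reenlightenment": (4, 17),
        "hv-tlbflush": (4, 18), "hv-ipi": (4, 20), "hv-evmcs": (4, 20),
        "hv-stimer-direct": (5, 0),
    }

    def host_kernel():
        # e.g. "4.18.0-80.el8.x86_64" -> (4, 18)
        major, minor = platform.release().split(".")[:2]
        return int(major), int(minor.split("-")[0])

    host = host_kernel()
    for feature, needed in sorted(MIN_KERNEL.items()):
        print(feature, "ok" if host >= needed else "kernel too old")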

Comment 16 Pavel Hrdina 2021-05-26 15:39:10 UTC
(In reply to Vitaly Kuznetsov from comment #15)
> (In reply to Cole Robinson from comment #14)
> > 
> > I see the plan is for the `hyperv=on` type option, which sounds like the
> > nice way for virt-install, when it lands. But before that I'm wondering if
> > we can just kinda hack it in virt-install for a number of the features above.
> > 
> > Are all those features limited only by the mix of kernel+qemu+libvirt
> > versions? 
> 
> 'hv-evmcs' is kind of special as it can only be used on Intel (in conjunction
> with 'vmx').
> 
> > In virt-install we can check libvirt + qemu versions but not
> > kernel. But if the kernel support is 5+ years older than libvirt+qemu maybe
> > we just assume the feature is safe to enable.
> > 
> > Is there some easy place to check or gitblame to determine what kernel
> > version added support for what feature?
> 
> Required QEMU/libvirt versions are documented in libvirt:
> https://libvirt.org/formatdomain.html (search for 'guests running Microsoft
> Windows')
> but I don't think we have kernel versions documented anywhere...
> 
> I'd suggest include/uapi/linux/kvm.h in the Linux kernel as a suitable
> git-blame victim. Prior to KVM_CAP_HYPERV_CPUID (v5.0) we were adding
> a new capability for each new Hyper-V feature (though not all). In particular,
> 
> KVM_CAP_HYPERV_SYNIC2 (hv-synic) v4.13
> KVM_CAP_HYPERV_VP_INDEX (hv-vpindex) v4.13
> KVM_CAP_HYPERV_TLBFLUSH (hv-tlbflush) v4.18
> KVM_CAP_HYPERV_SEND_IPI (hv-ipi) v4.20
> KVM_CAP_HYPERV_ENLIGHTENED_VMCS (hv-evmcs) v4.20
> 
> 'hv-stimer' appeared in v4.10
> 'hv-frequencies' in v4.14
> 'hv-reenlightenment' in v4.17
> 'hv-stimer-direct' in v5.0
> 
> The rest is older and can probably always be enabled. We may not want to have

Because the kernel versions are fairly recent there is no way in virt-install to properly check if all hyperv features are available.

Until we have `hyperv=on`, the next best solution would be for QEMU to report the list of supported hyperv features and for libvirt to propagate that info into capabilities. virt-install would then be able to check the capabilities and enable only the supported features.

The question is whether QEMU also checks that the hyperv features are supported by the kernel.

> 'hv-crash' in the default set as it is not reported anywhere and users don't
> see BSODs when they happen.

hv-crash is already handled differently in libvirt so that is not an issue:

<domain>
  ...
  <devices>
    ...
    <panic model='hyperv'/>
  </devices>
</domain>

Comment 17 Eduardo Habkost 2021-05-26 15:46:45 UTC
(In reply to Pavel Hrdina from comment #16)
> Because the kernel versions are fairly recent there is no way in
> virt-install to properly check if all hyperv features are available.
> 
> Until we have `hyperv=on`, the next best solution would be for QEMU to
> report the list of supported hyperv features and for libvirt to propagate
> that info into capabilities. virt-install would then be able to check the
> capabilities and enable only the supported features.

That would be bug 1851247.


> 
> The question is whether QEMU also checks that the hyperv features are
> supported by the kernel.

If we implement bug 1851247, then yes, QEMU would check that.

Note that implementing `hyperv=on` may be trickier than expected, because of the kernel dependencies each new feature introduces.  There has been some discussion on how to model kernel dependencies in a way that's more usable by libvirt and management software, but no proposals upstream yet.

Comment 18 Cole Robinson 2021-06-01 16:59:09 UTC
(In reply to Pavel Hrdina from comment #16)
> 
> Because the kernel versions are fairly recent there is no way in
> virt-install to properly check if all hyperv features are available.
> 

Yeah, that was the original plan, but it seems a bit stuck.

The main one I'm interested in is hv-stimer+hv-synic which has generated a complaint in the past: https://lore.kernel.org/kvm/20200625201046.GA179502@kevinolos/

4.13 is from Sep 2017, so approaching 4 years ago. If we check for 2021 versions of libvirt and qemu, there's a very strong chance the kernel is new enough, I think. But I'm not sure I want to risk it.

Does qemu fail if these features are requested but the kernel isn't new enough?
Does nesting impact things, like if L0 is too old but L1 is new enough?

Comment 19 Eduardo Habkost 2021-06-03 18:49:27 UTC
(In reply to Cole Robinson from comment #18)
> (In reply to Pavel Hrdina from comment #16)
> > 
> > Because the kernel versions are fairly recent there is no way in
> > virt-install to properly check if all hyperv features are available.
> > 
> 
> Yeah, that was the original plan, but it seems a bit stuck.
> 
> The main one I'm interested in is hv-stimer+hv-synic which has generated a
> complaint in the past:
> https://lore.kernel.org/kvm/20200625201046.GA179502@kevinolos/
> 
> 4.13 is from Sep 2017, so approaching 4 years ago. If we check for 2021
> versions of libvirt and qemu, there's a very strong chance the kernel is new
> enough, I think. But I'm not sure I want to risk it.

Especially considering that container workloads might be running on nodes that have older kernels.

> 
> Does qemu fail if these features are requested but the kernel isn't new
> enough?

It should.  See https://gitlab.com/qemu-project/qemu/-/blob/a97978bcc2d1f650c7d411428806e5b03082b8c7/target/i386/kvm/kvm.c#L1170
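
As a minimal illustration of how that failure surfaces through the libvirt Python bindings (the XML path below is a placeholder for a guest definition requesting <hyperv> features):

    # Sketch: if QEMU rejects a requested enlightenment, the domain
    # fails to start and libvirt raises libvirtError.
    import libvirt

    conn = libvirt.open("qemu:///system")
    with open("/path/to/guest.xml") as f:
        dom_xml = f.read()

    try:
        conn.createXML(dom_xml, 0)  # transient start; raises on failure
    except libvirt.libvirtError as err:
        print("start refused (possibly missing kernel support):", err)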


> Does nesting impact things, like if L0 is too old but L1 is new enough?

It shouldn't, AFAIK.  I don't believe any Hyper-V enlightenment has hardware dependencies, and that would also mean L1 shouldn't depend on anything provided by L0.  Vitaly, do you confirm?

Comment 20 Vitaly Kuznetsov 2021-06-04 07:56:26 UTC
(In reply to Eduardo Habkost from comment #19)
> 
> > Does nesting impact things, like if L0 is too old but L1 is new enough?
> 
> It shouldn't, AFAIK.  I don't believe any Hyper-V enlightenment has hardware
> dependencies, and that would also mean L1 shouldn't depend on anything
> provided by L0.  Vitaly, do you confirm?

Yes. The only hardware-dependent Hyper-V enlightenment is 'hv-evmcs', as it
requires Intel VMX, but L1 exposing it means we're talking about 3-level nesting
configurations, which should be extremely rare (and untested/unsupported).

We will, however, gain more hardware-dependent features soon. In particular,
I'll be adding an enlightenment to enable APICv/AVIC along with SynIC.

