Bug 2003862

Summary: Missing default hv_stimer_direct, hv_ipi, hv_evmcs flags
Product: [oVirt] ovirt-engine Reporter: menli <menli>
Component: General Assignee: Milan Zamazal <mzamazal>
Status: CLOSED CURRENTRELEASE QA Contact: Lukas Svaty <lsvaty>
Severity: medium Docs Contact:
Priority: unspecified    
Version: 4.4.8.3 CC: ahadas, bugs, vkuznets
Target Milestone: ovirt-4.5.0 Keywords: ZStream
Target Release: 4.5.0 Flags: sbonazzo: ovirt-4.5+
Hardware: x86_64   
OS: Windows   
Whiteboard:
Fixed In Version: ovirt-engine-4.5.0 Doc Type: Enhancement
Doc Text:
The hv_stimer_direct and hv_ipi Hyper-V flags are now added to VMs when the cluster level is higher than 4.6.
Story Points: ---
Clone Of: Environment:
Last Closed: 2022-04-28 09:26:34 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: Virt RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On: 2021545    
Bug Blocks:    

Description menli@redhat.com 2021-09-14 01:46:10 UTC
Description of problem:

The default hv_stimer_direct, hv_ipi and hv_evmcs flags are missing in the RHV environment.

Version-Release number of selected component (if applicable):
vdsm-4.40.80.4-1.el8ev.x86_64
ovirt-engine-4.4.8.3-0.10.el8ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Start a Windows guest on a RHEL host
2. Check the Hyper-V flags of the running guest (e.g., as sketched below)
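
One quick way to list the enlightenments of a running guest is to read them off the QEMU command line (a sketch only; it assumes a single qemu-kvm process on the host):

  # list all hv-* flags passed to the running QEMU process
  ps -ef | grep qemu-kvm | grep -o 'hv-[a-z0-9-]*' | sort -u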


Actual results:
Compared with the Hyper-V flag set recommended by the KVM team, the default flags of an RHV Windows guest differ as follows:
Missing: hv_stimer_direct, hv_ipi, hv_evmcs
Extra: hv-reset

CPU flags in the RHV environment:

-cpu Skylake-Client-noTSX-IBRS,ssbd=on,md-clear=on,mpx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-runtime,hv-synic,hv-stimer,hv-reset,hv-frequencies,hv-reenlightenment,hv-tlbflush

Expected results:
Suggest adding the missing flags.
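
With the missing enlightenments added, the line would be expected to look roughly as follows (a sketch only; in QEMU command-line syntax the flags are spelled hv-stimer-direct, hv-ipi and hv-evmcs, and the ordering shown is illustrative):

  -cpu Skylake-Client-noTSX-IBRS,ssbd=on,md-clear=on,mpx=off,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-runtime,hv-synic,hv-stimer,hv-stimer-direct,hv-reset,hv-frequencies,hv-reenlightenment,hv-tlbflush,hv-ipi,hv-evmcs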

Additional info:

Comment 1 Milan Zamazal 2021-10-11 09:11:44 UTC
We can certainly add them (and remove hv-reset). If I'm reading it correctly, the new flags should be available since AV 8.3, cluster level 4.5.

Where can we find the authoritative source of the recommended flags? And I've seen somewhere that hv-evmcs may not work in all environments; is it safe and necessary to enable it?

Comment 2 Vitaly Kuznetsov 2021-10-12 11:03:04 UTC
(In reply to Milan Zamazal from comment #1)
> We can certainly add them (and remove hv-reset). If I'm reading it correctly,
> the new flags should be available since AV 8.3, cluster level 4.5.
> 
> Where can we find the authoritative source of the recommended flags? And
> I've seen somewhere that hv-evmcs may not work in all environments; is it
> safe and necessary to enable it?

Basically, our suggestion is to enable everything. 'hv-reset' is not in the
default set just because genuine Hyper-V doesn't have it there, but it's not
a problem to keep it on; it's just a different reset method. 'hv-evmcs' is
Intel-specific; it cannot be enabled on AMD hosts. Also, 'hv-evmcs' and
'hv-stimer-direct' only benefit nested environments (Hyper-V or WSL2, for
example) -- but it's safe to have them for all Windows guests. 'hv-ipi'
is not related to nesting.

Comment 3 Milan Zamazal 2021-10-12 13:25:41 UTC
Thank you for the clarification. So I'll add the requested flags (I think nothing else is missing) and will keep hv-reset.

Comment 4 Milan Zamazal 2021-10-26 17:59:26 UTC
hv-evmcs prevents VMs from starting on hosts where nested virtualization is not enabled:

  Hyper-V enlightened VMCS (hv-evmcs) is not supported by kernel
  qemu-kvm: kvm_init_vcpu: kvm_arch_init_vcpu failed (0): Function not implemented

Moreover, according to QEMU documentation, the effects of this flag are uncertain: "Some virtualization features (e.g. Posted Interrupts) are disabled when hv-evmcs is enabled. It may make sense to measure your nested workload with and without the feature to find out if enabling it is beneficial."

So I think it's better not to enable this flag.
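
For reference, whether the host kernel currently allows nested virtualization can be checked roughly like this (a sketch; the paths assume the in-tree kvm_intel/kvm_amd modules):

  cat /sys/module/kvm_intel/parameters/nested   # Intel hosts
  cat /sys/module/kvm_amd/parameters/nested     # AMD hosts

A value of Y or 1 means nested virtualization is enabled.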

Comment 6 Sandro Bonazzola 2022-04-28 09:26:34 UTC
This bugzilla is included in the oVirt 4.5.0 release, published on April 20th 2022.

Since the problem described in this bug report should be resolved in the oVirt 4.5.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.