Bug 1873555

Summary: One critical alert is constantly firing per running VM
Product: Container Native Virtualization (CNV)
Component: Virtualization
Status: CLOSED ERRATA
Severity: high
Priority: urgent
Version: 2.4.0
Target Milestone: ---
Target Release: 4.8.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: virt-operator-container-v4.8.0-60 hco-bundle-registry-container-v4.8.0-375
Reporter: Dan Kenigsberg <danken>
Assignee: Antonio Cardace <acardace>
QA Contact: zhe peng <zpeng>
CC: acardace, alitke, chale, cnv-qe-bugs, deven.phillips, fdeutsch, ipinto, martinsson.patrik, nsatsia, sgott, usurse
Type: Bug
Last Closed: 2021-07-27 14:20:49 UTC
Attachments:
Use maxUnavailable instead of minAvailable on pdb

Description Dan Kenigsberg 2020-08-28 15:31:59 UTC
Description of problem:
One critical PodDisruptionBudget alert is constantly firing per running VM.

Version-Release number of selected component (if applicable):
2.4.0

How reproducible:
Always

Steps to Reproduce:
Start a VM; wait for a while.

Actual results:
A critical PodDisruptionBudget alert shows up and stays firing.

Expected results:
Silence. Zero alerts.

Additional info:

Almost all VMs have an alert:
$ oc get --no-headers vmis -A |grep Running|wc -l
19
$ oc get --no-headers pdb -A|grep kubevirt|wc -l
18
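
(Not from the original report, but to correlate the two counts: a custom-columns query like the one below, using standard oc/kubectl JSONPath, shows each kubevirt PDB pinned below its threshold.)

$ oc get pdb -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,MIN-AVAILABLE:.spec.minAvailable,HEALTHY:.status.currentHealthy,ALLOWED:.status.disruptionsAllowed'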

Comment 1 Daniel Belenky 2020-09-08 09:14:46 UTC
It happens because, for every VMI that is set to live-migrate on eviction, we create a PDB that requires a minimum of 2 available pods (minAvailable: 2) for that VMI.
The value 2 is meant to cover both the source pod and the target pod during a migration. The problem is that until a migration actually runs, each VMI has only 1 pod, so the PDB is permanently short of its threshold and the alert fires.
The solution is probably to set maxUnavailable to 0 instead. I'll prepare a patch today.
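
For illustration only (a sketch, not the actual KubeVirt object; the name and selector label are placeholders, and on clusters older than Kubernetes 1.21 the apiVersion would be policy/v1beta1), the problematic shape can be reproduced with a standalone PDB:

$ cat <<'EOF' | oc apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-minavailable-example
spec:
  # With minAvailable: 2 and only one matching pod running,
  # status.currentHealthy (1) stays below the required 2, so the
  # PDB is permanently over its limit and the critical alert fires.
  minAvailable: 2
  selector:
    matchLabels:
      kubevirt.io/created-by: example-vmi-uid   # placeholder label
EOF

The inverse shape, spec.maxUnavailable: 0, still blocks voluntary eviction of the single pod but never reports fewer healthy pods than required.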

Comment 2 Daniel Belenky 2020-09-08 09:47:40 UTC
Created attachment 1714064 [details]
Use maxUnavailable instead of minAvailable on pdb

Comment 6 Kedar Bidarkar 2020-11-25 13:17:35 UTC
*** Bug 1899174 has been marked as a duplicate of this bug. ***

Comment 7 Antonio Cardace 2021-03-12 13:04:18 UTC
We are discussing the best solution at https://github.com/kubevirt/kubevirt/pull/4136. I'm inclined to follow Roman's idea of merging the PDB controller into the migration controller, as that's the place where a migration is initiated.

Comment 9 Antonio Cardace 2021-04-15 12:18:12 UTC
Opened https://github.com/kubevirt/kubevirt/pull/5424 and https://github.com/kubevirt/kubevirt/pull/5460 as proposals to fix this; I think the latter is the better solution.

Comment 10 Patrik Martinsson 2021-04-26 08:03:00 UTC
*** Bug 1952509 has been marked as a duplicate of this bug. ***

Comment 16 sgott 2021-06-07 12:02:08 UTC
To verify, follow the steps to reproduce in the description.

In short, the critical alert previously seen should no longer be observed.
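
One way to check for firing alerts (a sketch, not part of the original verification notes; the pod name and RBAC details vary by cluster) is to query the in-cluster Prometheus directly:

$ oc -n openshift-monitoring port-forward pod/prometheus-k8s-0 9090 &
$ curl -s http://localhost:9090/api/v1/query \
    --data-urlencode 'query=ALERTS{alertstate="firing",severity="critical"}'

An empty result set for the PDB-related alert indicates the fix holds.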

Comment 17 zhe peng 2021-06-09 10:22:19 UTC
Verified with builds:
virt-operator-container-v4.8.0-60
hco-bundle-registry-container-v4.8.0-380

Steps:
Create 5 VMIs.
Wait > 10 minutes.

No critical PodDisruptionBudget alert shows up.

Migrate all VMIs (see the virtctl sketch below).
Wait > 10 minutes.

No alert shows up.

Moving to VERIFIED.
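
(For the migration step above, a sketch; virtctl ships alongside KubeVirt, and <vmi-name>/<namespace> are placeholders:)

$ virtctl migrate <vmi-name> -n <namespace>
$ oc get vmim -n <namespace>   # watch the resulting VirtualMachineInstanceMigration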

Comment 18 Fabian Deutsch 2021-06-09 10:33:35 UTC
Great news.

Comment 21 errata-xmlrpc 2021-07-27 14:20:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2920

Comment 22 Deven Phillips 2024-11-05 13:38:37 UTC
I am still seeing this in recent (OpenShift 4.16.17) versions of KubeVirt.