Bug 1850937

Summary: kubemacpool fails in a specific order of components startup
Product: Container Native Virtualization (CNV)
Component: Networking
Reporter: Ram Lavi <ralavi>
Assignee: Ram Lavi <ralavi>
QA Contact: Meni Yakove <myakove>
Status: CLOSED ERRATA
Severity: high
Priority: high
Version: 2.4.0
Target Release: 2.4.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: kubemacpool-container-v2.4.0-35
CC: cnv-qe-bugs, ncredi, oramraz, phoracek
Last Closed: 2020-07-28 19:10:39 UTC
Type: Bug

Description Ram Lavi 2020-06-25 08:34:58 UTC
Description of problem:
When the kubemacpool component is deployed before the kubevirt component, the pod that wins the leadership election can lose its kubemacpool-leader label if it was already the leader in the previous election.

When no kubemacpool pod carries the kubemacpool-leader label, the kubemacpool service is down.
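
One way to observe the outage (a sketch only; the Service name below is an assumption, not taken from this report, and it assumes the kubemacpool webhook Service selects its endpoint by the kubemacpool-leader label):

# assumption: a Service named kubemacpool-service that selects the leader pod by label
oc get endpoints -n openshift-cnv kubemacpool-service
# while the bug is hit, the ENDPOINTS column would be empty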

Version-Release number of selected component (if applicable):
kubemacpool v0.14.4
cnao 0.39.1

How reproducible:
50%

Steps to Reproduce:
1. Cause kubevirt to be deployed after kubemacpool (i.e. kubemacpool comes up first).
2. Look for the leadership label:
oc get pods -n openshift-cnv -l app=kubemacpool -oyaml | grep kubemacpool-leader
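
A stricter check (a sketch, using only the labels already shown above) is to count the pods that carry the label; a healthy deployment should report exactly 1:

oc get pods -n openshift-cnv -l app=kubemacpool,kubemacpool-leader --no-headers | wc -l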

Actual results:
kubemacpool-leader label doesn't exist on any of the kubemacpool pods.

Expected results:
The kubemacpool-leader label should exist on exactly one kubemacpool pod.


Additional info:

Comment 1 Ram Lavi 2020-06-25 15:56:51 UTC
Should be fixed by https://github.com/k8snetworkplumbingwg/kubemacpool/pull/209

Comment 2 yzaindbe 2020-06-30 11:27:43 UTC
Checked around 10 clusters with the updated KMP version; all of them had exactly one kubemacpool-leader label, as expected.

Comment 5 errata-xmlrpc 2020-07-28 19:10:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3194