Description of problem:
When the kubemacpool component is deployed before the kubevirt component, the pod that wins the leadership election can lose its kubemacpool-leader label if it was already the leader in the previous election. With the kubemacpool-leader label absent from all kubemacpool pods, the kubemacpool service is down.

Version-Release number of selected component (if applicable):
kubemacpool v0.14.4
cnao 0.39.1

How reproducible:
50%

Steps to Reproduce:
1. Cause kubevirt to be deployed after kubemacpool.
2. Look for the leadership label (a one-liner to count leader pods is included under Additional info below):
oc get pods -n openshift-cnv -l app=kubemacpool -oyaml | grep kubemacpool-leader

Actual results:
The kubemacpool-leader label doesn't exist on any of the kubemacpool pods.

Expected results:
The kubemacpool-leader label should exist on exactly one pod.

Additional info:
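A quick way to verify the election outcome, as a sketch; this assumes the leader label is set to kubemacpool-leader=true, matching the label grepped for in the reproduce steps:

# Count pods carrying the leader label (value assumed to be "true").
# 0 means the kubemacpool service has no backing pod; more than 1 means the election is split.
oc get pods -n openshift-cnv -l app=kubemacpool,kubemacpool-leader=true --no-headers | wc -l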
Should be fixed by https://github.com/k8snetworkplumbingwg/kubemacpool/pull/209
After checking around 10 clusters with the updated KMP version, each had exactly one pod carrying the kubemacpool-leader label, as expected.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:3194