According to https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment, Kubernetes requires that each Deployment have a unique Pod selector:

> Do not overlap labels or selectors with other controllers (including other
> Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping,
> and if multiple controllers have overlapping selectors those controllers might
> conflict and behave unexpectedly.

However, the metal3 Deployment managed by the cluster-baremetal-operator does *not* have a unique Pod selector. It selects only on the label "k8s-app=controller", which is also present on the machine-api-controllers Deployment managed by the machine-api-operator in the same namespace (openshift-machine-api).

Unfortunately, the Pod selector of a Deployment is immutable, so there is no way to update it without deleting and recreating the Deployment. We intend to update the selector in OpenShift 4.8 (it cannot be done in 4.7; see bug 1903700).

To handle downgrades from 4.8 to 4.7, cluster-baremetal-operator in 4.7 must be able to deal with finding a selector in the Deployment that differs from the current "k8s-app=controller" without trying to reset it (which would fail). Ideally this would also enable us to create the Deployment with a unique selector when 4.7 is installed from scratch rather than upgraded from 4.6.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633