Bug 1903700

Summary: metal3 Deployment doesn't have unique Pod selector
Product: OpenShift Container Platform
Reporter: Zane Bitter <zbitter>
Component: Bare Metal Hardware Provisioning
Sub component: cluster-baremetal-operator
Assignee: sdasu
QA Contact: Ori Michaeli <omichael>
Status: CLOSED ERRATA
Severity: medium
Priority: medium
CC: aos-bugs, bfournie, dhellmann, rbartal, rcernin
Version: 4.8
Keywords: Triaged
Target Release: 4.8.0
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Type: Bug
Last Closed: 2021-07-27 22:34:25 UTC
Bug Depends On: 1903717

Description Zane Bitter 2020-12-02 16:26:14 UTC
According to https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment, the Kubernetes documentation recommends that each Deployment have a unique Pod selector:

> Do not overlap labels or selectors with other controllers (including other
> Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping,
> and if multiple controllers have overlapping selectors those controllers might
> conflict and behave unexpectedly.

However, the metal3 Deployment managed by the cluster-baremetal-operator does *not* have a unique Pod selector. It only selects for the label "k8s-app=controller", which is also present on the machine-api-controllers Deployment managed by the machine-api-operator in the same namespace (openshift-machine-api).
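
To make the overlap concrete, here is a minimal sketch in Go (the extra label name and value are assumptions for illustration only, not taken from the actual manifests) contrasting the shared selector with a unique, Deployment-specific one:

// Minimal sketch: contrasts the overlapping selector with a unique one.
// The extra label below is hypothetical and only illustrates the idea.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both the metal3 and machine-api-controllers Pods carry
	// k8s-app=controller, so a selector on that label alone matches
	// Pods belonging to both Deployments.
	overlapping := metav1.LabelSelector{
		MatchLabels: map[string]string{"k8s-app": "controller"},
	}

	// Adding a label that only the metal3 Pod template carries makes the
	// selector unique to this Deployment. (Label name/value are assumptions.)
	unique := metav1.LabelSelector{
		MatchLabels: map[string]string{
			"k8s-app":                        "controller",
			"baremetal.openshift.io/cluster": "metal3", // hypothetical label
		},
	}

	fmt.Println(overlapping.MatchLabels, unique.MatchLabels)
}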

Unfortunately, the Pod selector of a Deployment is immutable, so there is no way to update it without deleting and recreating the Deployment. That by itself is not a problem, since the pod gets bounced from one node to another at least once during an upgrade anyway. However, we cannot delete the Deployment during the upgrade from 4.6 to 4.7, because in 4.6 the Deployment is managed by the machine-api-operator, and if it sees the Deployment missing it assumes it is responsible for recreating it (https://github.com/openshift/machine-api-operator/blob/release-4.6/pkg/operator/baremetal_pod.go#L244-L246). Deleting it during an upgrade to 4.7 could therefore cause the two operators to fight over the Deployment.
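
For context, a rough sketch of that 4.6 behavior (a paraphrase of the pattern at the linked baremetal_pod.go lines, not the actual code; newMetal3Deployment is a hypothetical stand-in):

package sketch

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// newMetal3Deployment is a hypothetical stand-in for however the 4.6
// machine-api-operator builds the metal3 Deployment manifest.
func newMetal3Deployment() *appsv1.Deployment { return &appsv1.Deployment{} }

// ensureMetal3 sketches the 4.6 behavior described above: if the metal3
// Deployment is not found, the operator assumes it owns it and recreates it.
// This is why deleting the Deployment mid-upgrade would put the old operator
// and the new cluster-baremetal-operator in conflict.
func ensureMetal3(ctx context.Context, client kubernetes.Interface) error {
	_, err := client.AppsV1().Deployments("openshift-machine-api").
		Get(ctx, "metal3", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		_, err = client.AppsV1().Deployments("openshift-machine-api").
			Create(ctx, newMetal3Deployment(), metav1.CreateOptions{})
	}
	return err
}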

However, this could be fixed in 4.8 by deleting the Deployment if it has the wrong selector. Only one copy of the cluster-baremetal-operator can run at a time, so once we have upgraded it's safe to go ahead and do this.
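
A minimal sketch of that clean-up, assuming the old selector is exactly k8s-app=controller and using hypothetical function names (not the code the operator actually ships):

package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// recreateIfWrongSelector sketches the 4.8 approach described above: if the
// existing metal3 Deployment still carries only the old, overlapping
// selector, delete it so it can be recreated with a unique selector (the
// selector field itself is immutable, so delete-and-recreate is the only
// option). This is safe in 4.8 because only the cluster-baremetal-operator
// manages the Deployment by then.
func recreateIfWrongSelector(ctx context.Context, client kubernetes.Interface) error {
	existing, err := client.AppsV1().Deployments("openshift-machine-api").
		Get(ctx, "metal3", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return nil // nothing to clean up; it will be created fresh
	}
	if err != nil {
		return err
	}

	sel := existing.Spec.Selector
	oldSelectorOnly := sel != nil &&
		len(sel.MatchExpressions) == 0 &&
		len(sel.MatchLabels) == 1 &&
		sel.MatchLabels["k8s-app"] == "controller"

	if oldSelectorOnly {
		fg := metav1.DeletePropagationForeground
		return client.AppsV1().Deployments("openshift-machine-api").
			Delete(ctx, "metal3", metav1.DeleteOptions{PropagationPolicy: &fg})
	}
	return nil
}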

Comment 1 Robin Cernin 2021-01-21 03:42:23 UTC
Seems like you already provided a fix in https://github.com/openshift/cluster-baremetal-operator/commit/1b6f46d32f6e239f8dc522111a7445c48e8f1bed ?

Comment 2 Zane Bitter 2021-01-21 21:01:04 UTC
(In reply to Robin Cernin from comment #1)
> Seems like you already provided a fix in
> https://github.com/openshift/cluster-baremetal-operator/commit/1b6f46d32f6e239f8dc522111a7445c48e8f1bed ?

It's fixed for new deployments of 4.7 (bug 1903717), but not for upgrades from 4.6. Those cannot be fixed until 4.8, and this bug is here to remind us to do that.

Comment 4 Ori Michaeli 2021-05-03 09:16:22 UTC
Verified with 4.6.25 -> 4.7.8 -> 4.8.0-fc.1

Comment 7 errata-xmlrpc 2021-07-27 22:34:25 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438