Bug 1903700 - metal3 Deployment doesn't have unique Pod selector
Summary: metal3 Deployment doesn't have unique Pod selector
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Bare Metal Hardware Provisioning
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: sdasu
QA Contact: Ori Michaeli
URL:
Whiteboard:
Depends On: 1903717
Blocks:
 
Reported: 2020-12-02 16:26 UTC by Zane Bitter
Modified: 2021-07-27 22:34 UTC
CC: 5 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-27 22:34:25 UTC
Target Upstream Version:




Links
System | ID | Status | Summary | Last Updated
Github | openshift/cluster-baremetal-operator pull 126 | open | Bug 1903700: Fix Pod Selectors in metal3 pods created by CBO | 2021-04-13 15:12:26 UTC
Red Hat Product Errata | RHSA-2021:2438 | None | None | 2021-07-27 22:34:49 UTC

Description Zane Bitter 2020-12-02 16:26:14 UTC
According to https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment, Kubernetes expects each Deployment to have a unique Pod selector:

> Do not overlap labels or selectors with other controllers (including other
> Deployments and StatefulSets). Kubernetes doesn't stop you from overlapping,
> and if multiple controllers have overlapping selectors those controllers might
> conflict and behave unexpectedly.

However, the metal3 Deployment managed by the cluster-baremetal-operator does *not* have a unique Pod selector. It only selects for the label "k8s-app=controller", which is also present on the machine-api-controllers Deployment managed by the machine-api-operator in the same namespace (openshift-machine-api).
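To make the overlap concrete, here is a minimal, self-contained Go sketch. It is not taken from either operator; the "api: clusterapi" and "app: metal3" labels are purely illustrative. It shows that a selector of only "k8s-app=controller" matches Pods belonging to both Deployments, while adding a Deployment-specific label makes the selector unique:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/labels"
    )

    func main() {
        // Hypothetical Pod labels for the two Deployments in openshift-machine-api;
        // both carry k8s-app=controller, as described in this bug.
        machineAPIPod := labels.Set{"k8s-app": "controller", "api": "clusterapi"}
        metal3Pod := labels.Set{"k8s-app": "controller", "app": "metal3"} // "app: metal3" is illustrative only

        // A selector on k8s-app=controller alone matches Pods from both Deployments.
        shared, _ := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
            MatchLabels: map[string]string{"k8s-app": "controller"},
        })
        fmt.Println(shared.Matches(machineAPIPod), shared.Matches(metal3Pod)) // true true

        // Adding a Deployment-specific label makes the selector unique to the metal3 Pods.
        unique, _ := metav1.LabelSelectorAsSelector(&metav1.LabelSelector{
            MatchLabels: map[string]string{"k8s-app": "controller", "app": "metal3"},
        })
        fmt.Println(unique.Matches(machineAPIPod), unique.Matches(metal3Pod)) // false true
    }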

Unfortunately, the Pod selector of a Deployment is immutable, so there is no way to update it without deleting and recreating the Deployment. That in itself is acceptable: the pod gets bounced from one node to another at least once during an upgrade anyway. However, we cannot delete the Deployment during the upgrade from 4.6 to 4.7, because in 4.6 the Deployment is managed by the machine-api-operator, and if it sees the Deployment missing it will assume it is responsible for recreating it (https://github.com/openshift/machine-api-operator/blob/release-4.6/pkg/operator/baremetal_pod.go#L244-L246). Deleting it during an upgrade to 4.7 could therefore cause a controller fight.

However, this can be fixed in 4.8 by deleting the Deployment if it has the wrong selector. Only one copy of the cluster-baremetal-operator can run at a time, so once the cluster has upgraded to 4.8 it is safe to go ahead and do this.
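A rough sketch of that approach, using client-go and assuming the Deployment is named "metal3" in the openshift-machine-api namespace as described above. The function name and the way it would be wired into the operator's reconcile loop are assumptions for illustration, not the actual cluster-baremetal-operator code:

    package main

    import (
        "context"

        "k8s.io/apimachinery/pkg/api/equality"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureCorrectSelector (hypothetical helper) deletes the metal3 Deployment when its
    // immutable Pod selector does not match what the operator expects, so that a
    // subsequent reconcile can recreate it with a unique selector.
    func ensureCorrectSelector(ctx context.Context, client kubernetes.Interface, expectedSelector *metav1.LabelSelector) error {
        existing, err := client.AppsV1().Deployments("openshift-machine-api").Get(ctx, "metal3", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return nil // nothing to clean up; the operator will create it fresh
        }
        if err != nil {
            return err
        }
        if equality.Semantic.DeepEqual(existing.Spec.Selector, expectedSelector) {
            return nil // selector already correct; leave the Deployment alone
        }
        // The selector field is immutable, so the only way forward is to delete
        // the Deployment and let the operator recreate it with the right selector.
        return client.AppsV1().Deployments("openshift-machine-api").Delete(ctx, "metal3", metav1.DeleteOptions{})
    }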

Comment 1 Robin Cernin 2021-01-21 03:42:23 UTC
Seems like you already provided a fix in https://github.com/openshift/cluster-baremetal-operator/commit/1b6f46d32f6e239f8dc522111a7445c48e8f1bed ?

Comment 2 Zane Bitter 2021-01-21 21:01:04 UTC
(In reply to Robin Cernin from comment #1)
> Seems like you already provided a fix in
> https://github.com/openshift/cluster-baremetal-operator/commit/1b6f46d32f6e239f8dc522111a7445c48e8f1bed ?

It's fixed for new deployments of 4.7 (bug 1903717), but not for upgrades from 4.6. Those cannot be fixed until 4.8, and this bug is here to remind us to do that.

Comment 4 Ori Michaeli 2021-05-03 09:16:22 UTC
Verified with 4.6.25 -> 4.7.8 -> 4.8.0-fc-1

Comment 7 errata-xmlrpc 2021-07-27 22:34:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438

