Bug 1869523 - The `nodeSelector` field of the CSV was not propagated to the corresponding Deployment object
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: OLM
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Alexander Greene
QA Contact: Jian Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-18 08:21 UTC by Jian Zhang
Modified: 2020-10-27 16:28 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:28:38 UTC
Target Upstream Version:
Embargoed:




Links:
* GitHub: operator-framework/operator-lifecycle-manager pull 1728 (closed) - "Bug 1869523: Fix nodeSelector subscription config override" - last updated 2021-01-14 07:38:25 UTC
* Red Hat Product Errata: RHBA-2020:4196 - last updated 2020-10-27 16:28:53 UTC

Description Jian Zhang 2020-08-18 08:21:58 UTC
Description of problem:
When the `nodeSelector` field is set in the CSV, it is not propagated to the corresponding Deployment object.
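
For reference, the selector in question lives in the CSV's install strategy. A minimal illustrative excerpt (structure per the ClusterServiceVersion schema; the deployment name and selector values are taken from this report, everything else is trimmed):

  spec:
    install:
      strategy: deployment
      spec:
        deployments:
        - name: packageserver
          spec:
            template:
              spec:
                # OLM is expected to copy this selector into the generated
                # Deployment; with this bug it is silently dropped.
                nodeSelector:
                  kubernetes.io/os: linux
                  node-role.kubernetes.io/master: ""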


Version-Release number of selected component (if applicable):
[root@preserve-olm-env data]# oc exec catalog-operator-8f8cbc6ff-5jgp5 -- olm --version
OLM version: 0.16.0
git commit: 1fdd347ab723bf6aec30c79dfb217bcbf21a13e9

How reproducible:
always

Steps to Reproduce:
1. Install OCP 4.6.
2. Check the PackageServer pods.
[root@preserve-olm-env data]# oc project openshift-operator-lifecycle-manager
Now using project "openshift-operator-lifecycle-manager" on server "https://api.jiazha-0817.qe.devcluster.openshift.com:6443".

[root@preserve-olm-env data]# oc get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE                                         NOMINATED NODE   READINESS GATES
catalog-operator-8f8cbc6ff-5jgp5   1/1     Running   0          29h   10.129.0.34    ip-10-0-172-206.us-east-2.compute.internal   <none>           <none>
olm-operator-5bf7479f8-rc8zc       1/1     Running   0          29h   10.129.0.33    ip-10-0-172-206.us-east-2.compute.internal   <none>           <none>
packageserver-668f64c765-flblg     1/1     Running   1          21m   10.131.1.143   ip-10-0-134-255.us-east-2.compute.internal   <none>           <none>
packageserver-668f64c765-hgnm7     1/1     Running   0          21m   10.128.0.42    ip-10-0-212-131.us-east-2.compute.internal   <none>           <none>

[root@preserve-olm-env data]# oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-133-14.us-east-2.compute.internal    Ready    master   30h   v1.19.0-rc.2+3d1d343-dirty
ip-10-0-134-255.us-east-2.compute.internal   Ready    worker   30h   v1.19.0-rc.2+3d1d343-dirty
ip-10-0-172-206.us-east-2.compute.internal   Ready    master   30h   v1.19.0-rc.2+3d1d343-dirty
ip-10-0-173-225.us-east-2.compute.internal   Ready    worker   30h   v1.19.0-rc.2+3d1d343-dirty
ip-10-0-212-131.us-east-2.compute.internal   Ready    master   30h   v1.19.0-rc.2+3d1d343-dirty
ip-10-0-223-160.us-east-2.compute.internal   Ready    worker   30h   v1.19.0-rc.2+3d1d343-dirty

[root@preserve-olm-env data]# oc get nodes ip-10-0-134-255.us-east-2.compute.internal  --show-labels
NAME                                         STATUS   ROLES    AGE   VERSION                      LABELS
ip-10-0-134-255.us-east-2.compute.internal   Ready    worker   30h   v1.19.0-rc.2+3d1d343-dirty   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=us-east-2,failure-domain.beta.kubernetes.io/zone=us-east-2a,kubernetes.io/arch=amd64,kubernetes.io/hostname=ip-10-0-134-255,kubernetes.io/os=linux,node-role.kubernetes.io/worker=,node.kubernetes.io/instance-type=m5.large,node.openshift.io/os_id=rhcos,topology.ebs.csi.aws.com/zone=us-east-2a,topology.kubernetes.io/region=us-east-2,topology.kubernetes.io/zone=us-east-2a

Actual results:
One of the PackageServer pods is running on a worker node. The grep against the Deployment below returns nothing because the generated Deployment object has no `nodeSelector` field.

[root@preserve-olm-env data]# oc get csv packageserver -o yaml|grep nodeSelector -A2
              nodeSelector:
                kubernetes.io/os: linux
                node-role.kubernetes.io/master: ""

[root@preserve-olm-env data]# oc get deployment packageserver -o yaml|grep nodeSelector -A2


Expected results:
All of the PackageServer pods run on master nodes, matching the `nodeSelector` declared in the CSV.
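
A quick way to confirm the expected placement (assuming the packageserver pods carry the app=packageserver label, which is not shown in this report):

[root@preserve-olm-env data]# oc get pods -l app=packageserver -o wide
[root@preserve-olm-env data]# oc get nodes -l node-role.kubernetes.io/master=

Every NODE listed for the pods should appear in the master node list.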


Additional info:
The same issue occurs when subscribing to optional operators. For example, the SR-IOV Network Operator's CSV declares a `nodeSelector`, but the grep against its Deployment again returns nothing:
[root@preserve-olm-env data]# oc get csv
NAME                                           DISPLAY                   VERSION                 REPLACES   PHASE
sriov-network-operator.4.6.0-202008121454.p0   SR-IOV Network Operator   4.6.0-202008121454.p0              Succeeded

[root@preserve-olm-env data]# oc get csv -o yaml|grep nodeSelector -A3

--
                nodeSelector:
                  node-role.kubernetes.io/master: ""

[root@preserve-olm-env data]# oc get deployment sriov-network-operator -o yaml|grep nodeSelector -A3
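
The linked fix (PR 1728, "Fix nodeSelector subscription config override") touches the path where a Subscription's spec.config is merged into the operator's Deployment. For context, a Subscription can set its own nodeSelector there; a minimal sketch (channel, source, and namespace values are hypothetical, not taken from this report):

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: sriov-network-operator
    namespace: openshift-sriov-network-operator
  spec:
    channel: "4.6"
    name: sriov-network-operator
    source: redhat-operators
    sourceNamespace: openshift-marketplace
    config:
      # SubscriptionConfig override; OLM merges this into the Deployment's
      # pod spec on top of whatever the CSV declares.
      nodeSelector:
        node-role.kubernetes.io/master: ""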

Comment 3 Jian Zhang 2020-08-21 02:16:42 UTC
[root@preserve-olm-env data]# oc exec catalog-operator-7574bc8948-gx62l -- olm --version
OLM version: 0.16.0
git commit: c3852d57c86707deb80c042c2155ad82c2d9628f

LGTM, the `nodeSelector` field is now added to the Deployment object. Marking it verified.
[root@preserve-olm-env data]#  oc get csv packageserver -o yaml|grep nodeSelector -A2
              nodeSelector:
                kubernetes.io/os: linux
                node-role.kubernetes.io/master: ""
[root@preserve-olm-env data]# 
[root@preserve-olm-env data]# oc get deployment packageserver -o yaml|grep nodeSelector -A2
            f:nodeSelector:
              .: {}
              f:kubernetes.io/os: {}
--
      nodeSelector:
        kubernetes.io/os: linux
        node-role.kubernetes.io/master: ""

[root@preserve-olm-env data]# oc get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
catalog-operator-7574bc8948-gx62l   1/1     Running   0          25m   10.130.0.30   ip-10-0-139-53.us-east-2.compute.internal   <none>           <none>
olm-operator-84c9b6765c-bkg2d       1/1     Running   0          23m   10.130.0.33   ip-10-0-139-53.us-east-2.compute.internal   <none>           <none>
packageserver-5fd856c947-cd6f4      1/1     Running   0          23m   10.130.0.34   ip-10-0-139-53.us-east-2.compute.internal   <none>           <none>
packageserver-5fd856c947-rkxt2      1/1     Running   0          23m   10.128.0.27   ip-10-0-166-12.us-east-2.compute.internal   <none>           <none>
[root@preserve-olm-env data]# 
[root@preserve-olm-env data]# 
[root@preserve-olm-env data]# oc get node 
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-136-128.us-east-2.compute.internal   Ready    worker   42m   v1.19.0-rc.2+99cb93a-dirty
ip-10-0-139-53.us-east-2.compute.internal    Ready    master   53m   v1.19.0-rc.2+99cb93a-dirty
ip-10-0-166-12.us-east-2.compute.internal    Ready    master   52m   v1.19.0-rc.2+99cb93a-dirty
ip-10-0-184-164.us-east-2.compute.internal   Ready    worker   42m   v1.19.0-rc.2+99cb93a-dirty
ip-10-0-201-79.us-east-2.compute.internal    Ready    worker   42m   v1.19.0-rc.2+99cb93a-dirty
ip-10-0-221-91.us-east-2.compute.internal    Ready    master   53m   v1.19.0-rc.2+99cb93a-dirty

Comment 5 errata-xmlrpc 2020-10-27 16:28:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

