Bug 1923099 - [OCP 4.6] The daemonSet does not get updated when the nodeSelector and Tolerations get changed in fileIntegrity object
Summary: [OCP 4.6] The daemonSet does not get updated when the nodeSelector and Tolerations get changed in fileIntegrity object
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: File Integrity Operator
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.6.z
Assignee: Juan Antonio Osorio
QA Contact: xiyuan
URL:
Whiteboard:
Depends On: 1923096
Blocks:
 
Reported: 2021-02-01 12:16 UTC by Prashant Dhamdhere
Modified: 2021-02-16 09:19 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1923096
Environment:
Last Closed: 2021-02-16 09:18:42 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHSA-2021:0568 (Last Updated: 2021-02-16 09:19:04 UTC)

Description Prashant Dhamdhere 2021-02-01 12:16:35 UTC
+++ This bug was initially created as a clone of Bug #1923096 +++

Description of problem:

The daemonSet is not updated with the new values when the nodeSelector and tolerations sections of the FileIntegrity object are changed.

Version-Release number of selected component (if applicable):

4.7.0-0.nightly-2021-01-31-031653

How reproducible:
 
Always

Steps to Reproduce:

1. Deploy File Integrity Operator
$ oc get pods -nopenshift-file-integrity -w
NAME                                       READY   STATUS    RESTARTS   AGE
file-integrity-operator-65db875847-zxlkk   1/1     Running   0          18s

$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-d7157ff308ddade43a9419e0ced98794   True      False      False      3              3                   3                     0                      5h31m
worker   rendered-worker-12392dc145480f768d4cfc43caa8958b   True      False      False      2              2                   2                     0                      5h31m
wscan    rendered-wscan-aea02f26081e1c2900a8fb21eb67c39c    True      False      False      1              1                   1                     0                      3h26m

2. Create FileIntegrity object

$ oc create -f - <<< '{"apiVersion":"fileintegrity.openshift.io/v1alpha1","kind":"FileIntegrity","metadata":{"name":"example-fileintegrity","namespace":"openshift-file-integrity"},"spec":{"nodeSelector":{"node-role.kubernetes.io/worker":""},"config":{}}}'
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created

$ oc describe ds aide-ds-example-fileintegrity | grep Node-Selector
Node-Selector:  node-role.kubernetes.io/worker=

3. Check the daemonSet and its nodeSelector value

$ oc get all -nopenshift-file-integrity
NAME                                           READY   STATUS    RESTARTS   AGE
pod/aide-ds-example-fileintegrity-8chdq        1/1     Running   0          42s
pod/aide-ds-example-fileintegrity-c7bmg        1/1     Running   0          42s
pod/aide-ds-example-fileintegrity-whs5x        1/1     Running   0          42s
pod/file-integrity-operator-65db875847-zxlkk   1/1     Running   0          4m51s

NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/file-integrity-operator-metrics   ClusterIP   172.30.212.144   <none>        8383/TCP,8686/TCP   4m19s

NAME                                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
daemonset.apps/aide-ds-example-fileintegrity   3         3         3       3            3           node-role.kubernetes.io/worker=   43s   <<-----

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/file-integrity-operator   1/1     1            1           5m14s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/file-integrity-operator-65db875847   1         1         1       5m16s


4. Patch the FileIntegrity object with another nodeSelector value, i.e. wscan (the "~1" in the patch paths is explained under Additional info below)

$ oc patch fileintegrities.fileintegrity.openshift.io example-fileintegrity --type json --patch='[{"op":"remove","path":"/spec/nodeSelector/node-role.kubernetes.io~1worker"},{"op":"add","path":"/spec/nodeSelector/node-role.kubernetes.io~1wscan","value":""}]'
fileintegrity.fileintegrity.openshift.io/example-fileintegrity patched

5. Check whether the daemonSet's nodeSelector value changed to wscan

$ oc describe ds aide-ds-example-fileintegrity | grep Node-Selector
Node-Selector:  node-role.kubernetes.io/worker=


Actual results:
 
The daemonSet is not updated with the new values when the nodeSelector and tolerations sections of the FileIntegrity object are changed.

Expected results:

The daemonSet should be updated with the new values when the nodeSelector and tolerations sections of the FileIntegrity object are changed.

 
Additional info:
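
For reference, the FileIntegrity object from step 2 can also be written as YAML; this is just a readable rendering of the same JSON one-liner used above:

$ oc create -f - <<EOF
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: example-fileintegrity
  namespace: openshift-file-integrity
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  config: {}
EOF

About the patch in step 4: "~1" is the JSON-pointer escape for "/" (and "~0" for "~"), so the path "/spec/nodeSelector/node-role.kubernetes.io~1worker" refers to the "node-role.kubernetes.io/worker" key of the nodeSelector map.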

Comment 3 Prashant Dhamdhere 2021-02-04 09:09:36 UTC
[ Bug Verification ]


This looks good to me. Now, the daemonSet gets updated with the new values when the nodeSelector and tolerations
sections of the FileIntegrity object are changed.


Verified On:
4.6.0-0.nightly-2021-02-04-001315
file-integrity-operator.v0.1.10


$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2021-02-04-001315   True        False         43m     Cluster version is 4.6.0-0.nightly-2021-02-04-001315

$ oc get csv
NAME                              DISPLAY                   VERSION   REPLACES   PHASE
file-integrity-operator.v0.1.10   File Integrity Operator   0.1.10               Succeeded

$ oc get pods
NAME                                       READY   STATUS    RESTARTS   AGE
file-integrity-operator-54fbb9f57d-rtqfl   1/1     Running   0          73s

$ oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-148-15.us-east-2.compute.internal    Ready    master   83m   v1.19.0+e49167a
ip-10-0-159-91.us-east-2.compute.internal    Ready    worker   78m   v1.19.0+e49167a
ip-10-0-181-179.us-east-2.compute.internal   Ready    master   79m   v1.19.0+e49167a
ip-10-0-190-177.us-east-2.compute.internal   Ready    worker   78m   v1.19.0+e49167a
ip-10-0-197-78.us-east-2.compute.internal    Ready    master   79m   v1.19.0+e49167a
ip-10-0-216-202.us-east-2.compute.internal   Ready    worker   74m   v1.19.0+e49167a

$ oc label node ip-10-0-216-202.us-east-2.compute.internal node-role.kubernetes.io/wscan=
node/ip-10-0-216-202.us-east-2.compute.internal labeled
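
As a side note, a label added this way can be removed again by appending "-" to the key, e.g.:

$ oc label node ip-10-0-216-202.us-east-2.compute.internal node-role.kubernetes.io/wscan-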

$ oc create -f - <<EOF
> apiVersion: machineconfiguration.openshift.io/v1
> kind: MachineConfigPool
> metadata:
>   name: wscan
> spec:
>   machineConfigSelector:
>     matchExpressions:
>       - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,wscan]}
>   nodeSelector:
>     matchLabels:
>       node-role.kubernetes.io/wscan: ""
> EOF
machineconfigpool.machineconfiguration.openshift.io/wscan created

$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-f8682511c5881cce62d95028f21cca5a   True      False      False      3              3                   3                     0                      85m
worker   rendered-worker-b8e61456e95ac919df983c13631a762c   True      False      False      2              2                   2                     0                      85m
wscan    rendered-wscan-b8e61456e95ac919df983c13631a762c    True      False      False      1              1                   1                     0                      2m30s

$ oc create -f - <<< '{"apiVersion":"fileintegrity.openshift.io/v1alpha1","kind":"FileIntegrity","metadata":{"name":"example-fileintegrity","namespace":"openshift-file-integrity"},"spec":{"nodeSelector":{"node-role.kubernetes.io/worker":""},"config":{}}}'
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created

$ oc get pods
NAME                                       READY   STATUS    RESTARTS   AGE
aide-ds-example-fileintegrity-gd2tv        1/1     Running   0          36s
aide-ds-example-fileintegrity-ljkvx        1/1     Running   0          36s
aide-ds-example-fileintegrity-rjhmz        1/1     Running   0          36s
file-integrity-operator-54fbb9f57d-rtqfl   1/1     Running   0          18m

$  oc describe ds aide-ds-example-fileintegrity | grep Node-Selector
Node-Selector:  node-role.kubernetes.io/worker=

$ oc get ds
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
aide-ds-example-fileintegrity   3         3         3       3            3           node-role.kubernetes.io/worker=   70s


$ oc patch fileintegrities.fileintegrity.openshift.io example-fileintegrity --type json --patch='[{"op":"remove","path":"/spec/nodeSelector/node-role.kubernetes.io~1worker"},{"op":"add","path":"/spec/nodeSelector/node-role.kubernetes.io~1wscan","value":""}]'
fileintegrity.fileintegrity.openshift.io/example-fileintegrity patched
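
The same selector change can presumably also be expressed as a merge patch, assuming the usual JSON merge-patch semantics of "oc patch --type merge" (a null value removes the key), e.g.:

$ oc patch fileintegrities.fileintegrity.openshift.io example-fileintegrity --type merge -p '{"spec":{"nodeSelector":{"node-role.kubernetes.io/worker":null,"node-role.kubernetes.io/wscan":""}}}'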

$ oc get ds
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                    AGE
aide-ds-example-fileintegrity   1         1         0       0            0           node-role.kubernetes.io/wscan=   91s

$  oc describe ds aide-ds-example-fileintegrity | grep Node-Selector
Node-Selector:  node-role.kubernetes.io/wscan=


$ oc get fileintegrities.fileintegrity.openshift.io example-fileintegrity -o json |grep -A 2 "nodeSelector"|tail -4
--
        "nodeSelector": {
            "node-role.kubernetes.io/wscan": ""
        },

$ oc logs file-integrity-operator-54fbb9f57d-rtqfl |grep nodeSelector
{"level":"info","ts":1612429024.8492694,"logger":"controller_fileintegrity","msg":"FileIntegrity needed nodeSelector update","Request.Namespace":"openshift-file-integrity","Request.Name":"example-fileintegrity"}


$ oc get fileintegrities.fileintegrity.openshift.io example-fileintegrity -o json |grep -A 5 "tolerations" |tail -7
--
        "tolerations": [
            {
                "effect": "NoSchedule",
                "key": "node-role.kubernetes.io/master",
                "operator": "Exists"
            }

$ oc get nodes
NAME                                         STATUS   ROLES          AGE   VERSION
ip-10-0-148-15.us-east-2.compute.internal    Ready    master         90m   v1.19.0+e49167a
ip-10-0-159-91.us-east-2.compute.internal    Ready    worker         85m   v1.19.0+e49167a
ip-10-0-181-179.us-east-2.compute.internal   Ready    master         85m   v1.19.0+e49167a
ip-10-0-190-177.us-east-2.compute.internal   Ready    worker         85m   v1.19.0+e49167a
ip-10-0-197-78.us-east-2.compute.internal    Ready    master         86m   v1.19.0+e49167a
ip-10-0-216-202.us-east-2.compute.internal   Ready    worker,wscan   80m   v1.19.0+e49167a

$ oc adm taint node ip-10-0-216-202.us-east-2.compute.internal key1=value1:NoSchedule
node/ip-10-0-216-202.us-east-2.compute.internal tainted
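
As a side note, the taint can be removed again later by appending "-" to it, e.g.:

$ oc adm taint node ip-10-0-216-202.us-east-2.compute.internal key1=value1:NoSchedule-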

$ oc label node ip-10-0-216-202.us-east-2.compute.internal taint=true
node/ip-10-0-216-202.us-east-2.compute.internal labeled

$ oc patch fileintegrities.fileintegrity.openshift.io example-fileintegrity -p '{"spec": {"tolerations": [{"effect": "NoSchedule","key": "key1","value": "value1","operator": "Equal"}]}}' --type merge
fileintegrity.fileintegrity.openshift.io/example-fileintegrity patched
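
To inspect the tolerations that ended up in the FileIntegrity spec without grepping, a jsonpath query should work as well, e.g.:

$ oc get fileintegrities.fileintegrity.openshift.io example-fileintegrity -o jsonpath='{.spec.tolerations}'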


$ oc get ds
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                    AGE
aide-ds-example-fileintegrity   1         1         1       1            1           node-role.kubernetes.io/wscan=   4m12s


$ oc get pods
NAME                                       READY   STATUS    RESTARTS   AGE
aide-ds-example-fileintegrity-dqlbl        1/1     Running   0          13s
file-integrity-operator-54fbb9f57d-rtqfl   1/1     Running   0          22m


$ oc get ds aide-ds-example-fileintegrity -o yaml |grep -A 5 tolerations|tail -7
--
      tolerations:
      - effect: NoSchedule
        key: key1
        operator: Equal
        value: value1
      volumes:

$ oc logs file-integrity-operator-54fbb9f57d-rtqfl|grep tolerations
{"level":"info","ts":1612429180.9145136,"logger":"controller_fileintegrity","msg":"FileIntegrity needed tolerations update","Request.Namespace":"openshift-file-integrity","Request.Name":"example-fileintegrity"}

Comment 5 errata-xmlrpc 2021-02-16 09:18:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.6 file-integrity-operator image security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0568

