Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1923096

Summary: The daemonSet does not get updated when the nodeSelector and Tolerations get changed in the FileIntegrity object
Product: OpenShift Container Platform Reporter: Prashant Dhamdhere <pdhamdhe>
Component: File Integrity Operator    Assignee: Juan Antonio Osorio <josorior>
Status: CLOSED ERRATA QA Contact: xiyuan
Severity: medium Docs Contact:
Priority: unspecified    
Version: 4.7    CC: jhrozek, josorior, pdhamdhe
Target Milestone: ---   
Target Release: 4.7.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:    Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1923099 (view as bug list)    Environment:
Last Closed: 2021-02-24 21:18:51 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1923099    

Description Prashant Dhamdhere 2021-02-01 12:11:13 UTC
Description of problem:

The daemonSet does not get updated with the new value when the nodeSelector and Tolerations sections of the FileIntegrity object get changed.

Version-Release number of selected component (if applicable):

4.7.0-0.nightly-2021-01-31-031653

How reproducible:
 
Always

Steps to Reproduce:

1. Deploy File Integrity Operator
$ oc get pods -nopenshift-file-integrity -w
NAME                                       READY   STATUS    RESTARTS   AGE
file-integrity-operator-65db875847-zxlkk   1/1     Running   0          18s

$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-d7157ff308ddade43a9419e0ced98794   True      False      False      3              3                   3                     0                      5h31m
worker   rendered-worker-12392dc145480f768d4cfc43caa8958b   True      False      False      2              2                   2                     0                      5h31m
wscan    rendered-wscan-aea02f26081e1c2900a8fb21eb67c39c    True      False      False      1              1                   1                     0                      3h26m

2. Create FileIntegrity object

$ oc create -f - <<< '{"apiVersion":"fileintegrity.openshift.io/v1alpha1","kind":"FileIntegrity","metadata":{"name":"example-fileintegrity","namespace":"openshift-file-integrity"},"spec":{"nodeSelector":{"node-role.kubernetes.io/worker":""},"config":{}}}'
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created

$ oc describe ds aide-ds-example-fileintegrity | grep Node-Selector
Node-Selector:  node-role.kubernetes.io/worker=

3. Check for daemonset and nodeSelector value

$ oc get all -nopenshift-file-integrity
NAME                                           READY   STATUS    RESTARTS   AGE
pod/aide-ds-example-fileintegrity-8chdq        1/1     Running   0          42s
pod/aide-ds-example-fileintegrity-c7bmg        1/1     Running   0          42s
pod/aide-ds-example-fileintegrity-whs5x        1/1     Running   0          42s
pod/file-integrity-operator-65db875847-zxlkk   1/1     Running   0          4m51s

NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/file-integrity-operator-metrics   ClusterIP   172.30.212.144   <none>        8383/TCP,8686/TCP   4m19s

NAME                                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
daemonset.apps/aide-ds-example-fileintegrity   3         3         3       3            3           node-role.kubernetes.io/worker=   43s   <<-----

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/file-integrity-operator   1/1     1            1           5m14s

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/file-integrity-operator-65db875847   1         1         1       5m16s


4. Patch the FileIntegrity object with another nodeSelector value, i.e., wscan

$ oc patch fileintegrities.fileintegrity.openshift.io example-fileintegrity --type json --patch='[{"op":"remove","path":"/spec/nodeSelector/node-role.kubernetes.io~1worker"},{"op":"add","path":"/spec/nodeSelector/node-role.kubernetes.io~1wscan","value":""}]'
fileintegrity.fileintegrity.openshift.io/example-fileintegrity patched
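Note on the patch paths: `~1` is JSON Pointer escaping (RFC 6901), where `~1` decodes to `/` and `~0` to `~`, so `/spec/nodeSelector/node-role.kubernetes.io~1worker` addresses the key `node-role.kubernetes.io/worker`. A minimal stdlib sketch of how the two operations transform the spec (the dict literal below is illustrative, not read from the cluster):

```python
def unescape(token: str) -> str:
    """Decode a JSON Pointer token per RFC 6901 (~1 -> '/', then ~0 -> '~')."""
    return token.replace("~1", "/").replace("~0", "~")

def apply_patch(doc: dict, ops: list) -> dict:
    """Tiny subset of RFC 6902: only 'add'/'remove' on object keys."""
    for op in ops:
        # Split the pointer first, then unescape each token, so a key
        # containing '/' (encoded as ~1) is not split apart.
        tokens = [unescape(t) for t in op["path"].lstrip("/").split("/")]
        parent = doc
        for t in tokens[:-1]:
            parent = parent[t]
        if op["op"] == "remove":
            del parent[tokens[-1]]
        elif op["op"] == "add":
            parent[tokens[-1]] = op["value"]
    return doc

spec = {"spec": {"nodeSelector": {"node-role.kubernetes.io/worker": ""}}}
patched = apply_patch(spec, [
    {"op": "remove", "path": "/spec/nodeSelector/node-role.kubernetes.io~1worker"},
    {"op": "add", "path": "/spec/nodeSelector/node-role.kubernetes.io~1wscan", "value": ""},
])
print(patched["spec"]["nodeSelector"])  # {'node-role.kubernetes.io/wscan': ''}
```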

5. Check whether the daemonSet's nodeSelector value got changed to wscan

$ oc describe ds aide-ds-example-fileintegrity | grep Node-Selector
Node-Selector:  node-role.kubernetes.io/worker=


Actual results:
 
The daemonSet does not get updated with the new value when the nodeSelector and Tolerations sections of the FileIntegrity object get changed.

Expected results:

The daemonSet should get updated with the new value when the nodeSelector and Tolerations sections of the FileIntegrity object get changed.
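A plausible shape of the fix (a hypothetical sketch, not the operator's actual Go code from the linked PR): on each reconcile, compare the DaemonSet's pod-template scheduling fields against the FileIntegrity spec and push an update when they diverge, rather than only creating the DaemonSet when it is absent:

```python
def needs_scheduling_update(fi_spec: dict, ds_pod_spec: dict) -> bool:
    """True when the DaemonSet's scheduling fields drifted from the
    FileIntegrity spec (hypothetical reconcile check)."""
    return (
        fi_spec.get("nodeSelector", {}) != ds_pod_spec.get("nodeSelector", {})
        or fi_spec.get("tolerations", []) != ds_pod_spec.get("tolerations", [])
    )

def reconcile(fi_spec: dict, ds_pod_spec: dict) -> dict:
    """Copy the desired scheduling fields onto the DaemonSet pod spec."""
    if needs_scheduling_update(fi_spec, ds_pod_spec):
        ds_pod_spec["nodeSelector"] = dict(fi_spec.get("nodeSelector", {}))
        ds_pod_spec["tolerations"] = list(fi_spec.get("tolerations", []))
    return ds_pod_spec

# Before the fix the operator effectively skipped this step, so the stale
# worker selector survived the patch to wscan.
ds = {"nodeSelector": {"node-role.kubernetes.io/worker": ""}, "tolerations": []}
fi = {"nodeSelector": {"node-role.kubernetes.io/wscan": ""}}
print(reconcile(fi, ds)["nodeSelector"])  # {'node-role.kubernetes.io/wscan': ''}
```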

 
Additional info:

Comment 1 Prashant Dhamdhere 2021-02-01 15:51:12 UTC
[ Bug Verification ]

This looks good to me. Now, the daemonSet gets updated with the new value when the nodeSelector and Tolerations 
sections of the FileIntegrity object get changed.


Verified on:
4.7.0-0.nightly-2021-01-31-031653

$ gh pr checkout 137
From github.com:openshift/file-integrity-operator
 * [new ref]         refs/pull/137/head -> fio-updates
Switched to branch 'fio-updates'

$ git branch 
* fio-updates
  master
  release-4.8
  
  
$ make deploy-local 
Creating 'openshift-file-integrity' namespace/project
Error from server (AlreadyExists): error when creating "deploy/ns.yaml": namespaces "openshift-file-integrity" already exists
/home/pdhamdhe/go/bin/operator-sdk build quay.io/file-integrity-operator/file-integrity-operator:latest --image-builder podman
INFO[0023] Building OCI image quay.io/file-integrity-operator/file-integrity-operator:latest 
STEP 1: FROM registry.access.redhat.com/ubi8/go-toolset AS builder
Getting image source signatures
Copying blob d9e72d058dc5 done  
Copying blob cca21acb641a done  
Copying blob 4dc5724b57ac done  
Copying blob a108724c930f done  
Copying blob 620696f92fec done  
Copying config de940396bc done  
Writing manifest to image destination
Storing signatures
STEP 2: USER root
--> 1d4b005ce6a
STEP 3: WORKDIR /go/src/github.com/openshift/file-integrity-operator
--> cf25b7cc5b1
STEP 4: ENV GOFLAGS="-mod=vendor"
--> 4c69264fd3c
STEP 5: COPY . .
--> 446f1b532ed
STEP 6: RUN make operator-bin
GOFLAGS=-mod=vendor GO111MODULE=auto go build -o /go/src/github.com/openshift/file-integrity-operator/build/_output/bin/file-integrity-operator github.com/openshift/file-integrity-operator/cmd/manager
--> 8a8d9f338ae
STEP 7: FROM registry.centos.org/centos:8
Getting image source signatures
Copying blob 926a85fb4806 done  
Copying config e64b6e20a6 done  
Writing manifest to image destination
Storing signatures
STEP 8: RUN yum -y install aide && yum clean all
CentOS Linux 8 - AppStream                      3.5 MB/s | 6.3 MB     00:01    
CentOS Linux 8 - BaseOS                         784 kB/s | 2.3 MB     00:02    
CentOS Linux 8 - Extras                         8.2 kB/s | 8.6 kB     00:01    
Dependencies resolved.
================================================================================
 Package               Architecture  Version             Repository        Size
================================================================================
Installing:
 aide                  x86_64        0.16-14.el8         appstream        156 k
Installing dependencies:
 e2fsprogs-libs        x86_64        1.45.6-1.el8        baseos           233 k

Transaction Summary
================================================================================
Install  2 Packages

Total download size: 389 k
Installed size: 874 k
Downloading Packages:
(1/2): e2fsprogs-libs-1.45.6-1.el8.x86_64.rpm   1.6 MB/s | 233 kB     00:00    
(2/2): aide-0.16-14.el8.x86_64.rpm              638 kB/s | 156 kB     00:00    
--------------------------------------------------------------------------------
Total                                           326 kB/s | 389 kB     00:01     
warning: /var/cache/dnf/appstream-02e86d1c976ab532/packages/aide-0.16-14.el8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 8483c65d: NOKEY
CentOS Linux 8 - AppStream                      1.6 MB/s | 1.6 kB     00:00    
Importing GPG key 0x8483C65D:
 Userid     : "CentOS (CentOS Official Signing Key) <security>"
 Fingerprint: 99DB 70FA E1D7 CE22 7FB6 4882 05B5 55B3 8483 C65D
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1 
  Installing       : e2fsprogs-libs-1.45.6-1.el8.x86_64                     1/2 
  Running scriptlet: e2fsprogs-libs-1.45.6-1.el8.x86_64                     1/2 
  Installing       : aide-0.16-14.el8.x86_64                                2/2 
  Running scriptlet: aide-0.16-14.el8.x86_64                                2/2 
  Verifying        : aide-0.16-14.el8.x86_64                                1/2 
  Verifying        : e2fsprogs-libs-1.45.6-1.el8.x86_64                     2/2 

Installed:
  aide-0.16-14.el8.x86_64           e2fsprogs-libs-1.45.6-1.el8.x86_64          

Complete!
21 files removed
--> 57996ae5ce0
STEP 9: ENV OPERATOR=/usr/local/bin/file-integrity-operator     USER_UID=1001     USER_NAME=file-integrity-operator
--> 57c6689d627
STEP 10: COPY --from=builder /go/src/github.com/openshift/file-integrity-operator/build/_output/bin/file-integrity-operator ${OPERATOR}
--> e94b65738d6
STEP 11: COPY build/bin /usr/local/bin
--> 8d46549fe6c
STEP 12: RUN  /usr/local/bin/user_setup
+ mkdir -p /root
+ chown 1001:0 /root
+ chmod ug+rwx /root
+ chmod g+rw /etc/passwd
+ rm /usr/local/bin/user_setup
--> dc191f69452
STEP 13: ENTRYPOINT ["/usr/local/bin/entrypoint"]
--> 03430ef1815
STEP 14: USER ${USER_UID}
STEP 15: COMMIT quay.io/file-integrity-operator/file-integrity-operator:latest
--> 6ddb41fbb93
6ddb41fbb937d73401b92c8d2f8e279084f6d3fa23d8c941b41e0725831c1b14
INFO[0309] Operator build complete.                     
podman build -f ./images/aide/Dockerfile -t quay.io/file-integrity-operator/aide:latest .
STEP 1: FROM registry.centos.org/centos:8
STEP 2: RUN yum -y install aide && yum clean all
--> Using cache 57996ae5ce03e12d9ffc2a67b70201815360ea385cdc1e3647c9035c8fab00f4
STEP 3: COMMIT quay.io/file-integrity-operator/aide:latest
--> 57996ae5ce0
57996ae5ce03e12d9ffc2a67b70201815360ea385cdc1e3647c9035c8fab00f4
podman build -t quay.io/file-integrity-operator/file-integrity-operator-bundle:latest -f bundle.Dockerfile .
STEP 1: FROM scratch
STEP 2: LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
--> e393dad66ee
STEP 3: LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
--> be35e49cad6
STEP 4: LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
--> 838050be09f
STEP 5: LABEL operators.operatorframework.io.bundle.package.v1=file-integrity-operator
--> a1d1f7f5370
STEP 6: LABEL operators.operatorframework.io.bundle.channels.v1=alpha
--> 90a4a193431
STEP 7: LABEL operators.operatorframework.io.bundle.channel.default.v1=alpha
--> 74676d46c47
STEP 8: COPY deploy/olm-catalog/file-integrity-operator/manifests /manifests/
--> 19a00516dfe
STEP 9: COPY deploy/olm-catalog/file-integrity-operator/metadata /metadata/
STEP 10: COMMIT quay.io/file-integrity-operator/file-integrity-operator-bundle:latest
--> 3bd0c4c81cd
3bd0c4c81cd9ecd2d26be05ba541dd42fca584c1d1c5e36f5f2790ba43994ac9
IMAGE_FORMAT variable missing. We're in local enviornment.
Temporarily exposing the default route to the image registry
config.imageregistry.operator.openshift.io/cluster patched
Pushing image quay.io/file-integrity-operator/file-integrity-operator:latest to the image registry
IMAGE_REGISTRY_HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'); \
	podman login --tls-verify=false -u kubeadmin -p sha256~Z8jhekFPOWOBNbgCYjIX51y5TCMl_67qn4u8fOPQ8xw ${IMAGE_REGISTRY_HOST}; \
	podman push --tls-verify=false quay.io/file-integrity-operator/file-integrity-operator:latest ${IMAGE_REGISTRY_HOST}/openshift-file-integrity/file-integrity-operator:latest; \
	podman push --tls-verify=false quay.io/file-integrity-operator/aide:latest ${IMAGE_REGISTRY_HOST}/openshift-file-integrity/aide:latest
Login Succeeded!
Getting image source signatures
Copying blob 194abdb17221 done  
Copying blob e1b93f17f3bd done  
Copying blob a8184aa4bac8 done  
Copying blob 6e561f1529fa done  
Copying blob 618ce6bf40a6 done  
Copying config 6ddb41fbb9 done  
Writing manifest to image destination
Storing signatures
Getting image source signatures
Copying blob 618ce6bf40a6 skipped: already exists  
Copying blob e1b93f17f3bd [--------------------------------------] 0.0b / 0.0b
Copying config 57996ae5ce done  
Writing manifest to image destination
Storing signatures
Removing the route from the image registry
config.imageregistry.operator.openshift.io/cluster patched
customresourcedefinition.apiextensions.k8s.io/fileintegrities.fileintegrity.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/fileintegritynodestatuses.fileintegrity.openshift.io unchanged
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
namespace/openshift-file-integrity configured
deployment.apps/file-integrity-operator created
role.rbac.authorization.k8s.io/file-integrity-operator created
role.rbac.authorization.k8s.io/file-integrity-daemon created
clusterrole.rbac.authorization.k8s.io/file-integrity-operator unchanged
rolebinding.rbac.authorization.k8s.io/file-integrity-operator created
rolebinding.rbac.authorization.k8s.io/file-integrity-daemon created
clusterrolebinding.rbac.authorization.k8s.io/file-integrity-operator unchanged
serviceaccount/file-integrity-operator created
serviceaccount/file-integrity-daemon created


$ oc get pods
NAME                                       READY   STATUS    RESTARTS   AGE
file-integrity-operator-67bbcdcb7c-ltvx8   1/1     Running   0          67m

$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-d7157ff308ddade43a9419e0ced98794   True      False      False      3              3                   3                     0                      9h
worker   rendered-worker-12392dc145480f768d4cfc43caa8958b   True      False      False      2              2                   2                     0                      9h
wscan    rendered-wscan-aea02f26081e1c2900a8fb21eb67c39c    True      False      False      1              1                   1                     0                      7h43m


$ oc create -f - <<< '{"apiVersion":"fileintegrity.openshift.io/v1alpha1","kind":"FileIntegrity","metadata":{"name":"example-fileintegrity","namespace":"openshift-file-integrity"},"spec":{"nodeSelector":{"node-role.kubernetes.io/worker":""},"config":{}}}'
fileintegrity.fileintegrity.openshift.io/example-fileintegrity created


$ oc get pods -w
NAME                                       READY   STATUS    RESTARTS   AGE
aide-ds-example-fileintegrity-bzw7v        1/1     Running   0          21s
aide-ds-example-fileintegrity-gzqth        1/1     Running   0          21s
aide-ds-example-fileintegrity-rx94g        1/1     Running   0          20s
file-integrity-operator-67bbcdcb7c-ltvx8   1/1     Running   0          68m

$ oc describe ds aide-ds-example-fileintegrity | grep Node-Selector
Node-Selector:  node-role.kubernetes.io/worker=

$ oc get ds
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
aide-ds-example-fileintegrity   3         3         3       3            3           node-role.kubernetes.io/worker=   80s


$ oc patch fileintegrities.fileintegrity.openshift.io example-fileintegrity --type json --patch='[{"op":"remove","path":"/spec/nodeSelector/node-role.kubernetes.io~1worker"},{"op":"add","path":"/spec/nodeSelector/node-role.kubernetes.io~1wscan","value":""}]'
fileintegrity.fileintegrity.openshift.io/example-fileintegrity patched

$ oc get ds
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                    AGE
aide-ds-example-fileintegrity   1         1         1       1            1           node-role.kubernetes.io/wscan=   2m10s

$ oc describe ds aide-ds-example-fileintegrity | grep Node-Selector
Node-Selector:  node-role.kubernetes.io/wscan=


$ oc get fileintegrities.fileintegrity.openshift.io example-fileintegrity -o json |grep -A 2 "nodeSelector"|tail -4
--
        "nodeSelector": {
            "node-role.kubernetes.io/wscan": ""
        },


$ oc logs file-integrity-operator-67bbcdcb7c-ltvx8|grep nodeSelector
{"level":"info","ts":1612190390.7104225,"logger":"controller_fileintegrity","msg":"FileIntegrity needed nodeSelector update","Request.Namespace":"openshift-file-integrity","Request.Name":"example-fileintegrity"}


$ oc get fileintegrities.fileintegrity.openshift.io example-fileintegrity -o json |grep -A 5 "tolerations" |tail -7
--
        "tolerations": [
            {
                "effect": "NoSchedule",
                "key": "node-role.kubernetes.io/master",
                "operator": "Exists"
            }


$ oc get nodes
NAME                                STATUS   ROLES          AGE   VERSION
pdhamdhe-osp-jd6dj-master-0         Ready    master         10h   v1.20.0+3b90e69
pdhamdhe-osp-jd6dj-master-1         Ready    master         10h   v1.20.0+3b90e69
pdhamdhe-osp-jd6dj-master-2         Ready    master         10h   v1.20.0+3b90e69
pdhamdhe-osp-jd6dj-worker-0-jj42m   Ready    worker         10h   v1.20.0+3b90e69
pdhamdhe-osp-jd6dj-worker-0-z7fq4   Ready    worker,wscan   9h    v1.20.0+3b90e69
pdhamdhe-osp-jd6dj-worker-0-zb6hh   Ready    worker         10h   v1.20.0+3b90e69


$ oc adm taint node pdhamdhe-osp-jd6dj-worker-0-z7fq4 key1=value1:NoSchedule
node/pdhamdhe-osp-jd6dj-worker-0-z7fq4 tainted


$ oc label node pdhamdhe-osp-jd6dj-worker-0-z7fq4 taint=true
node/pdhamdhe-osp-jd6dj-worker-0-z7fq4 labeled

$ oc patch fileintegrities.fileintegrity.openshift.io example-fileintegrity -p '{"spec": {"tolerations": [{"effect": "NoSchedule","key": "key1","value": "value1","operator": "Equal"}]}}' --type merge
fileintegrity.fileintegrity.openshift.io/example-fileintegrity patched
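Unlike the JSON patch used for the nodeSelector earlier, `--type merge` is a JSON Merge Patch (RFC 7386): objects merge key by key, but any array, such as `tolerations`, is replaced wholesale rather than appended to. A minimal sketch of that rule (illustrative, not oc/kubectl internals):

```python
def merge_patch(target, patch):
    """RFC 7386 JSON Merge Patch: dicts merge recursively, None deletes a key,
    everything else (including lists) replaces the target value outright."""
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

spec = {"spec": {"nodeSelector": {"node-role.kubernetes.io/wscan": ""},
                 "tolerations": [{"key": "node-role.kubernetes.io/master",
                                  "operator": "Exists", "effect": "NoSchedule"}]}}
patch = {"spec": {"tolerations": [{"key": "key1", "value": "value1",
                                   "operator": "Equal", "effect": "NoSchedule"}]}}
merged = merge_patch(spec, patch)
# nodeSelector is preserved; the tolerations list is replaced, not extended.
print(merged["spec"]["tolerations"][0]["key"])  # key1
```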


$ oc get ds
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                    AGE
aide-ds-example-fileintegrity   1         1         1       1            1           node-role.kubernetes.io/wscan=   35m


$ oc get pods
NAME                                       READY   STATUS    RESTARTS   AGE
aide-ds-example-fileintegrity-782c5        1/1     Running   0          2m43s
file-integrity-operator-67bbcdcb7c-ltvx8   1/1     Running   0          104m

$ oc get ds aide-ds-example-fileintegrity -o yaml |grep -A 5 tolerations|tail -7
--
      tolerations:
      - effect: NoSchedule
        key: key1
        operator: Equal
        value: value1
      volumes:


$ oc logs file-integrity-operator-67bbcdcb7c-ltvx8|grep tolera
{"level":"info","ts":1612191552.8866239,"logger":"controller_fileintegrity","msg":"FileIntegrity needed tolerations update","Request.Namespace":"openshift-file-integrity","Request.Name":"example-fileintegrity"}
{"level":"info","ts":1612192277.3090322,"logger":"controller_fileintegrity","msg":"FileIntegrity needed tolerations update","Request.Namespace":"openshift-file-integrity","Request.Name":"example-fileintegrity"}

Comment 8 errata-xmlrpc 2021-02-24 21:18:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7 file-integrity-operator image security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0100