+++ This bug was initially created as a clone of Bug #1910050 +++

Description of problem:

The file integrity aide-ds pods go into CrashLoopBackOff state during the scan, and each pod reports a fatal error while creating its configMap:

$ oc get pods
NAME                                                   READY   STATUS             RESTARTS   AGE
aide-ds-example-fileintegrity5-c6bvq                   0/1     CrashLoopBackOff   3          3m23s
aide-ds-example-fileintegrity5-d6xqs                   0/1     CrashLoopBackOff   3          3m23s
aide-ds-example-fileintegrity5-j94ms                   0/1     CrashLoopBackOff   3          3m23s
file-integrity-operator-65db875847-f6d8m               1/1     Running            0          4m1s
ip-10-0-135-238.us-east-2.compute.internal-rmholdoff   0/1     Completed          0          3m40s
ip-10-0-158-39.us-east-2.compute.internal-rmholdoff    0/1     Completed          0          3m40s
ip-10-0-185-241.us-east-2.compute.internal-rmholdoff   0/1     Completed          0          3m40s
ip-10-0-186-71.us-east-2.compute.internal-rmholdoff    0/1     Completed          0          3m40s
ip-10-0-204-29.us-east-2.compute.internal-rmholdoff    0/1     Completed          0          3m40s
ip-10-0-218-248.us-east-2.compute.internal-rmholdoff   0/1     Completed          0          3m40s

$ oc logs aide-ds-example-fileintegrity5-c6bvq
Starting the AIDE runner daemon
debug: aide files locked by aideLoop
running aide check
debug: No scan result available
aide check returned status 0
debug: aide files unlocked by aideLoop
debug: Getting FileIntegrity openshift-file-integrity/example-fileintegrity5
debug: Getting FileIntegrity openshift-file-integrity/example-fileintegrity5
debug: Getting FileIntegrity openshift-file-integrity/example-fileintegrity5
debug: Getting FileIntegrity openshift-file-integrity/example-fileintegrity5
debug: Getting FileIntegrity openshift-file-integrity/example-fileintegrity5
debug: Getting FileIntegrity openshift-file-integrity/example-fileintegrity5
FATAL:Can't create configMap to report OK: 'configmaps "aide-ds-example-fileintegrity5-ip-10-0-218-248.us-east-2.compute.internal" already exists', aborting

$ oc get fileintegritynodestatuses
No resources found in openshift-file-integrity namespace.
Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2020-12-21-163117

How reproducible:
Always

Steps to Reproduce:

1. Clone the FIO repo:
$ git clone https://github.com/openshift/file-integrity-operator.git
$ git branch
* master
$ git log | grep commit | head -n1
commit 3bc93810285be756bb8f66d54ea2c7e063ceb35c

2. Create and switch to the namespace:
$ oc create -f file-integrity-operator/deploy/ns.yaml
$ oc project openshift-file-integrity

3. Create the CustomResourceDefinitions for FIO and deploy the operator:
$ for l in `ls -1 file-integrity-operator/deploy/crds/*crd.yaml`; do oc create -f $l; done
$ oc create -f file-integrity-operator/deploy/

4. Create the custom resource:
$ oc create -f - << EOF
apiVersion: fileintegrity.openshift.io/v1alpha1
kind: FileIntegrity
metadata:
  name: example-fileintegrity5
  namespace: openshift-file-integrity
spec:
  config:
    gracePeriod: 20
  debug: true
  nodeSelector:
    node-role.kubernetes.io/worker: ""
EOF

5. Monitor the aide-ds scan pod status:
$ oc get pods -w

Actual results:

The file integrity aide-ds pods go into CrashLoopBackOff state during the scan, and each pod reports a fatal error while creating its configMap.
$ oc get pods
NAME                                                   READY   STATUS             RESTARTS   AGE
aide-ds-example-fileintegrity5-c6bvq                   0/1     CrashLoopBackOff   3          3m23s
aide-ds-example-fileintegrity5-d6xqs                   0/1     CrashLoopBackOff   3          3m23s
aide-ds-example-fileintegrity5-j94ms                   0/1     CrashLoopBackOff   3          3m23s
file-integrity-operator-65db875847-f6d8m               1/1     Running            0          4m1s
ip-10-0-135-238.us-east-2.compute.internal-rmholdoff   0/1     Completed          0          3m40s
ip-10-0-158-39.us-east-2.compute.internal-rmholdoff    0/1     Completed          0          3m40s
ip-10-0-185-241.us-east-2.compute.internal-rmholdoff   0/1     Completed          0          3m40s
ip-10-0-186-71.us-east-2.compute.internal-rmholdoff    0/1     Completed          0          3m40s
ip-10-0-204-29.us-east-2.compute.internal-rmholdoff   0/1     Completed          0          3m40s
ip-10-0-218-248.us-east-2.compute.internal-rmholdoff   0/1     Completed          0          3m40s

Expected results:

The file integrity aide-ds pods should not go into CrashLoopBackOff state during the scan, and they should not report any fatal error while creating the configMap. Also, the rmholdoff pods should get removed.

Additional info:

Comparing with the released 4.6 version of FIO, several bug fixes are missing (the list should probably be longer, but more cannot be verified until the above issue is resolved):
https://bugzilla.redhat.com/show_bug.cgi?id=1869293
https://bugzilla.redhat.com/show_bug.cgi?id=1861303
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.6 file-integrity-operator image security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0568