+++ This bug was initially created as a clone of Bug #1919098 +++

Description of problem:

autoApplyRemediation pauses the MachineConfigPool if an outdated ComplianceRemediation object is already present on the cluster.

# oc get pods
NAME                                                   READY   STATUS      RESTARTS   AGE
aggregator-pod-worker-scan                             0/1     Completed   0          41m
aggregator-pod-worker1-scan                            0/1     Completed   0          3m47s
compliance-operator-f764c7fbd-g75bh                    1/1     Running     0          20h
ocp4-openshift-compliance-pp-5f447d678d-b7k98          1/1     Running     0          54m
rhcos4-openshift-compliance-pp-d459d8984-6q92m         1/1     Running     0          54m
worker-scan-pdhamdhe-osp21-fjmlr-worker-0-2kp92-pod    0/2     Completed   0          42m
worker-scan-pdhamdhe-osp21-fjmlr-worker-0-f8qkq-pod    0/2     Completed   0          42m
worker1-scan-pdhamdhe-osp21-fjmlr-worker-0-2kp92-pod   0/2     Completed   0          4m37s
worker1-scan-pdhamdhe-osp21-fjmlr-worker-0-f8qkq-pod   0/2     Completed   0          4m37s

# oc get compliancesuite
NAME                       PHASE   RESULT
example-compliancesuite    DONE    COMPLIANT
example1-compliancesuite   DONE    NON-COMPLIANT

# oc get complianceremediation
NAME                                              STATE
worker-scan-no-empty-passwords                    Outdated   <<---
worker1-scan-audit-rules-dac-modification-chmod   Applied

# oc get mc | grep "75-\|NAME"
NAME                                                 GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
75-worker-scan-no-empty-passwords                                            2.2.0             58m
75-worker1-scan-audit-rules-dac-modification-chmod                           3.1.0             4m59s

# oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-dbfc2178bb60fefef18a87cf7492f041   True      False      False      3              3                   3                     0                      22h
worker   rendered-worker-9a8ec6d3604a3a9db54fcd5442f6f316   True      False      False      1              1                   1                     0                      22h
wscan    rendered-wscan-a757137f63a73ae96251a4e61324f633    False     False      False      2              0                   0                     0                      15h   <<---

Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2021-01-19-095812

How reproducible:
Always

Steps to Reproduce:

1. Deploy the Compliance Operator.

2. Create a ComplianceSuite object which will later produce an outdated ComplianceRemediation object:

# oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_base
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/wscan: ""
> EOF
compliancesuite.compliance.openshift.io/example-compliancesuite created

3. Monitor the scan pods and check the compliance scan result:

$ oc get pods -w -n openshift-compliance
$ oc get compliancesuite -w

4. Once the MachineConfigPool has been updated, change the content image so that an outdated ComplianceRemediation object will be created, and apply it:

$ oc get mcp -w

# oc apply -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_change
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/wscan: ""
> EOF

5. Annotate the scan to re-run it:

$ oc annotate compliancescans/worker-scan compliance.openshift.io/rescan=

6. After the rescan, check the ComplianceSuite result; the ComplianceRemediation object now reports the Outdated state:

$ oc get compliancesuite
$ oc get complianceremediation
NAME                             STATE
worker-scan-no-empty-passwords   Outdated

7. Now create another ComplianceSuite object with autoApplyRemediations enabled:

oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example1-compliancesuite
spec:
  autoApplyRemediations: true
  schedule: "0 1 * * *"
  scans:
    - name: worker1-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-rhcos4-ds.xml
      contentImage: quay.io/complianceascode/ocp4:latest
      rule: "xccdf_org.ssgproject.content_rule_audit_rules_dac_modification_chmod"
      debug: true
      nodeSelector:
        node-role.kubernetes.io/wscan: ""
EOF

8. Monitor the scan pods and check the ComplianceSuite result:

$ oc get pods -w -n openshift-compliance
$ oc get compliancesuite -w
$ oc get complianceremediation

9. Check the MachineConfigPool (wscan); it gets paused:

$ oc get mcp -w

Actual results:

autoApplyRemediation pauses the MachineConfigPool when an outdated ComplianceRemediation object is already present on the cluster.

$ oc describe compliancesuite example1-compliancesuite | tail
  Phase:   RUNNING
  Result:  NOT-AVAILABLE
  Results Storage:
    Name:       worker1-scan
    Namespace:  openshift-compliance
Events:
  Type    Reason                    Age                  From       Message
  ----    ------                    ----                 ----       -------
  Normal  HaveOutdatedRemediations  6m49s (x5 over 77m)  suitectrl  One of suite's scans produced outdated remediations, please check for complianceremediation objects labeled with complianceoperator.openshift.io/outdated-remediation

Expected results:

autoApplyRemediation should not pause the MachineConfigPool even when an outdated ComplianceRemediation object is already present on the cluster.

Additional info:

The MachineConfigPool is updated as soon as the outdated ComplianceRemediation object is removed.

# oc get complianceremediation
NAME                                              STATE
worker-scan-no-empty-passwords                    Outdated
worker1-scan-audit-rules-dac-modification-chmod   Applied

# oc patch complianceremediation worker-scan-no-empty-passwords -p '{"spec":{"outdated": null}}' --type=merge
complianceremediation.compliance.openshift.io/worker-scan-no-empty-passwords patched

# oc get complianceremediation
NAME                                              STATE
worker-scan-no-empty-passwords                    Applied
worker1-scan-audit-rules-dac-modification-chmod   Applied

# oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-dbfc2178bb60fefef18a87cf7492f041   True      False      False      3              3                   3                     0                      23h
worker   rendered-worker-9a8ec6d3604a3a9db54fcd5442f6f316   True      False      False      1              1                   1                     0                      23h
wscan    rendered-wscan-a757137f63a73ae96251a4e61324f633    False     True       False      2              1                   1                     0                      16h
wscan    rendered-wscan-e40afd9e8df4f463953b79093eb917c7    True      False      False      2              2                   2                     0                      16h
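The manual workaround above (patching each Outdated remediation back to Applied) can be wrapped in a small helper. This is only a sketch: it assumes `oc` is logged in to the affected cluster and that the operator labels outdated remediations with complianceoperator.openshift.io/outdated-remediation, as the suite event message indicates.

```shell
# Sketch of the manual workaround: clear the Outdated state on every
# ComplianceRemediation carrying the outdated-remediation label.
# Assumes a logged-in `oc` session and the label
# complianceoperator.openshift.io/outdated-remediation (per the suite event).
clear_outdated_remediations() {
  for rem in $(oc get complianceremediation \
      -l complianceoperator.openshift.io/outdated-remediation \
      -o name); do
    # Same merge patch as shown above: drop the outdated payload so the
    # remediation returns to Applied and the MachineConfigPool unpauses.
    oc patch "$rem" --type=merge -p '{"spec":{"outdated": null}}'
  done
}

# Usage (against a live cluster):
# clear_outdated_remediations
```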
[ Bug Verification ]

Looks good to me. Now autoApplyRemediation does not pause the MachineConfigPool even when an outdated ComplianceRemediation object is already present on the cluster. The newly added annotations also help to apply and remove outdated ComplianceRemediation objects.

Verified on: 4.6.0-0.nightly-2021-01-30-211400, compliance-operator.v0.1.25

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2021-01-30-211400   True        False         145m    Cluster version is 4.6.0-0.nightly-2021-01-30-211400

$ oc get csv
NAME                                           DISPLAY                            VERSION                 REPLACES   PHASE
compliance-operator.v0.1.25                    Compliance Operator                0.1.25                             Succeeded
elasticsearch-operator.4.6.0-202101300140.p0   OpenShift Elasticsearch Operator   4.6.0-202101300140.p0              Succeeded

$ oc get pods
NAME                                              READY   STATUS    RESTARTS   AGE
compliance-operator-6995fbbf5b-km9f4              1/1     Running   0          67m
ocp4-openshift-compliance-pp-c4898f8b-nvwwj       1/1     Running   0          66m
rhcos4-openshift-compliance-pp-86d8d69446-ncztq   1/1     Running   0          66m

$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_base
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
compliancesuite.compliance.openshift.io/example-compliancesuite created

$ oc get compliancesuite -w
NAME                      PHASE         RESULT
example-compliancesuite   RUNNING       NOT-AVAILABLE
example-compliancesuite   AGGREGATING   NOT-AVAILABLE
example-compliancesuite   DONE          NON-COMPLIANT

$ oc get mc | grep "75-\|NAME"
NAME                                GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
75-worker-scan-no-empty-passwords                           2.2.0             57s

$ oc get complianceremediation --show-labels
NAME                             STATE     LABELS
worker-scan-no-empty-passwords   Applied   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite

$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-31168e44adda32d56e549e8aa20ee1b8   True      False      False      3              3                   3                     0                      111m
worker   rendered-worker-310680ddccf8fe820efc58f903433092   False     True       False      3              1                   1                     0                      111m
worker   rendered-worker-310680ddccf8fe820efc58f903433092   False     True       False      3              2                   2                     0                      113m
worker   rendered-worker-310680ddccf8fe820efc58f903433092   False     True       False      3              2                   2                     0                      113m
worker   rendered-worker-a04f50c719855057a8f808a2f070db4f   True      False      False      3              3                   3                     0                      116m

$ oc apply -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_change
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
compliancesuite.compliance.openshift.io/example-compliancesuite configured

$ oc annotate compliancescans/worker-scan compliance.openshift.io/rescan=
compliancescan.compliance.openshift.io/worker-scan annotated

$ oc get compliancesuite -w
NAME                      PHASE         RESULT
example-compliancesuite   RUNNING       NOT-AVAILABLE
example-compliancesuite   AGGREGATING   NOT-AVAILABLE
example-compliancesuite   DONE          COMPLIANT

$ oc get pods
NAME                                                         READY   STATUS      RESTARTS   AGE
aggregator-pod-worker-scan                                   0/1     Completed   0          28s
compliance-operator-6995fbbf5b-km9f4                         1/1     Running     0          79m
ocp4-openshift-compliance-pp-c4898f8b-vh5zs                  1/1     Running     0          7m44s
rhcos4-openshift-compliance-pp-86d8d69446-6msss              1/1     Running     0          10m
worker-scan-ip-10-0-150-230.us-east-2.compute.internal-pod   0/2     Completed   0          48s
worker-scan-ip-10-0-180-200.us-east-2.compute.internal-pod   0/2     Completed   0          48s
worker-scan-ip-10-0-194-66.us-east-2.compute.internal-pod    0/2     Completed   0          48s

$ oc get compliancecheckresult
NAME                             STATUS   SEVERITY
worker-scan-no-empty-passwords   PASS     high

$ oc get complianceremediations --show-labels
NAME                             STATE      LABELS
worker-scan-no-empty-passwords   Outdated   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite,complianceoperator.openshift.io/outdated-remediation=

$ oc describe compliancesuite example-compliancesuite | tail
  Result:  COMPLIANT
  Results Storage:
    Name:       worker-scan
    Namespace:  openshift-compliance
Events:
  Type    Reason                    Age                From       Message
  ----    ------                    ----               ----       -------
  Normal  ResultAvailable           98s (x6 over 12m)  suitectrl  The result is: NON-COMPLIANT
  Normal  HaveOutdatedRemediations  58s (x2 over 62s)  suitectrl  One of suite's scans produced outdated remediations, please check for complianceremediation objects labeled with complianceoperator.openshift.io/outdated-remediation
  Normal  ResultAvailable           58s (x2 over 62s)  suitectrl  The result is: COMPLIANT

$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example1-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker1-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/complianceascode/ocp4:latest
>       rule: "xccdf_org.ssgproject.content_rule_audit_rules_dac_modification_chmod"
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
compliancesuite.compliance.openshift.io/example1-compliancesuite created

$ oc get compliancesuite
NAME                       PHASE   RESULT
example-compliancesuite    DONE    COMPLIANT
example1-compliancesuite   DONE    NON-COMPLIANT

$ oc get mc | grep "75-\|NAME"
NAME                                                 GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
75-worker-scan-no-empty-passwords                                            2.2.0             18m
75-worker1-scan-audit-rules-dac-modification-chmod                           3.1.0             78s

$ oc get complianceremediation
NAME                                              STATE
worker-scan-no-empty-passwords                    Outdated
worker1-scan-audit-rules-dac-modification-chmod   Applied

$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-31168e44adda32d56e549e8aa20ee1b8   True      False      False      3              3                   3                     0                      131m
worker   rendered-worker-a04f50c719855057a8f808a2f070db4f   False     True       False      3              2                   2                     0                      131m
worker   rendered-worker-789973dc47913471b6bf6c1a08fa6721   True      False      False      3              3                   3                     0                      133m

$ oc get compliancecheckresult
NAME                                              STATUS   SEVERITY
worker-scan-no-empty-passwords                    PASS     high
worker1-scan-audit-rules-dac-modification-chmod   FAIL     medium

$ oc annotate compliancescans/worker1-scan compliance.openshift.io/rescan=
compliancescan.compliance.openshift.io/worker1-scan annotated

$ oc get compliancesuite
NAME                       PHASE   RESULT
example-compliancesuite    DONE    COMPLIANT
example1-compliancesuite   DONE    COMPLIANT

$ oc get compliancecheckresult
NAME                                              STATUS   SEVERITY
worker-scan-no-empty-passwords                    PASS     high
worker1-scan-audit-rules-dac-modification-chmod   PASS     medium

$ oc annotate compliancesuites/example-compliancesuite compliance.openshift.io/remove-outdated=
compliancesuite.compliance.openshift.io/example-compliancesuite annotated

$ oc get complianceremediations --show-labels
NAME                                              STATE     LABELS
worker-scan-no-empty-passwords                    Applied   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite
worker1-scan-audit-rules-dac-modification-chmod   Applied   compliance.openshift.io/scan-name=worker1-scan,compliance.openshift.io/suite=example1-compliancesuite

$ oc annotate compliancesuites/example-compliancesuite compliance.openshift.io/apply-remediations=
compliancesuite.compliance.openshift.io/example-compliancesuite annotated

$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-31168e44adda32d56e549e8aa20ee1b8   True      False      False      3              3                   3                     0                      155m
worker   rendered-worker-789973dc47913471b6bf6c1a08fa6721   False     True       False      3              1                   1                     0                      155m
worker   rendered-worker-789973dc47913471b6bf6c1a08fa6721   False     True       False      3              2                   2                     0                      158m
worker   rendered-worker-789973dc47913471b6bf6c1a08fa6721   False     True       False      3              2                   2                     0                      159m
worker   rendered-worker-3d8536c23324c8f4d1b41bc37d8332bf   True      False      False      3              3                   3                     0                      161m
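The annotation-driven workflow exercised in the verification above can be condensed into a small helper. The annotation keys (compliance.openshift.io/rescan= on a ComplianceScan; compliance.openshift.io/remove-outdated= and compliance.openshift.io/apply-remediations= on a ComplianceSuite) are taken directly from the verification run; the helper name and its scan/suite parameters are illustrative placeholders.

```shell
# Condensed view of the annotation workflow verified above, parameterized
# over the scan and suite names. Assumes a logged-in `oc` session.
refresh_remediations() {
  scan="$1"   # e.g. worker-scan
  suite="$2"  # e.g. example-compliancesuite

  # Re-run a single scan.
  oc annotate "compliancescans/$scan" compliance.openshift.io/rescan=

  # Remove the suite's remediations that are marked Outdated.
  oc annotate "compliancesuites/$suite" compliance.openshift.io/remove-outdated=

  # (Re)apply the suite's remediations.
  oc annotate "compliancesuites/$suite" compliance.openshift.io/apply-remediations=
}

# Usage (against a live cluster):
# refresh_remediations worker-scan example-compliancesuite
```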
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.6 compliance-operator security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0436