Description of problem:

autoApplyRemediations pauses the MachineConfigPool if an outdated ComplianceRemediation object is already present on the cluster.

# oc get pods
NAME                                                   READY   STATUS      RESTARTS   AGE
aggregator-pod-worker-scan                             0/1     Completed   0          41m
aggregator-pod-worker1-scan                            0/1     Completed   0          3m47s
compliance-operator-f764c7fbd-g75bh                    1/1     Running     0          20h
ocp4-openshift-compliance-pp-5f447d678d-b7k98          1/1     Running     0          54m
rhcos4-openshift-compliance-pp-d459d8984-6q92m         1/1     Running     0          54m
worker-scan-pdhamdhe-osp21-fjmlr-worker-0-2kp92-pod    0/2     Completed   0          42m
worker-scan-pdhamdhe-osp21-fjmlr-worker-0-f8qkq-pod    0/2     Completed   0          42m
worker1-scan-pdhamdhe-osp21-fjmlr-worker-0-2kp92-pod   0/2     Completed   0          4m37s
worker1-scan-pdhamdhe-osp21-fjmlr-worker-0-f8qkq-pod   0/2     Completed   0          4m37s

# oc get compliancesuite
NAME                       PHASE   RESULT
example-compliancesuite    DONE    COMPLIANT
example1-compliancesuite   DONE    NON-COMPLIANT

# oc get complianceremediation
NAME                                              STATE
worker-scan-no-empty-passwords                    Outdated   <<---
worker1-scan-audit-rules-dac-modification-chmod   Applied

# oc get mc | grep "75-\|NAME"
NAME                                                 GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
75-worker-scan-no-empty-passwords                                            2.2.0             58m
75-worker1-scan-audit-rules-dac-modification-chmod                           3.1.0             4m59s

# oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-dbfc2178bb60fefef18a87cf7492f041   True      False      False      3              3                   3                     0                      22h
worker   rendered-worker-9a8ec6d3604a3a9db54fcd5442f6f316   True      False      False      1              1                   1                     0                      22h
wscan    rendered-wscan-a757137f63a73ae96251a4e61324f633    False     False      False      2              0                   0                     0                      15h   <<---

Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2021-01-19-095812

How reproducible:
Always

Steps to Reproduce:

1. Deploy the Compliance Operator.

2. Create a ComplianceSuite object that will later produce an outdated ComplianceRemediation object:

# oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_base
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/wscan: ""
> EOF
compliancesuite.compliance.openshift.io/example-compliancesuite created

3. Monitor the scan pods and check the compliance scan result:

$ oc get pods -w -n openshift-compliance
$ oc get compliancesuite -w

4. Once the MachineConfigPool has been updated, change the content image (so that the existing ComplianceRemediation becomes outdated) and apply it:

$ oc get mcp -w
# oc apply -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_change
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/wscan: ""
> EOF

5. Annotate the scan to re-run it:

$ oc annotate compliancescans/worker-scan compliance.openshift.io/rescan=

6. Check the compliancesuite result after re-running the scan; the ComplianceRemediation object now reports the Outdated state:

$ oc get compliancesuite
$ oc get ComplianceRemediation
NAME                             STATE
worker-scan-no-empty-passwords   Outdated

7. Now create another ComplianceSuite object with autoApplyRemediations enabled:

oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example1-compliancesuite
spec:
  autoApplyRemediations: true
  schedule: "0 1 * * *"
  scans:
    - name: worker1-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-rhcos4-ds.xml
      contentImage: quay.io/complianceascode/ocp4:latest
      rule: "xccdf_org.ssgproject.content_rule_audit_rules_dac_modification_chmod"
      debug: true
      nodeSelector:
        node-role.kubernetes.io/wscan: ""
EOF

8. Monitor the scan pods and check the ComplianceSuite result:

$ oc get pods -w -n openshift-compliance
$ oc get compliancesuite -w
$ oc get ComplianceRemediation

9. Check the wscan MachineConfigPool; it gets paused:

$ oc get mcp -w

Actual results:

autoApplyRemediations pauses the MachineConfigPool if an outdated ComplianceRemediation object is already present on the cluster.

$ oc describe compliancesuite example1-compliancesuite | tail
  Phase:   RUNNING
  Result:  NOT-AVAILABLE
  Results Storage:
    Name:       worker1-scan
    Namespace:  openshift-compliance
Events:
  Type    Reason                    Age                  From       Message
  ----    ------                    ----                 ----       -------
  Normal  HaveOutdatedRemediations  6m49s (x5 over 77m)  suitectrl  One of suite's scans produced outdated remediations, please check for complianceremediation objects labeled with complianceoperator.openshift.io/outdated-remediation

Expected results:

autoApplyRemediations should not pause the MachineConfigPool even though an outdated ComplianceRemediation object is already present on the cluster.

Additional info:

The MachineConfigPool gets updated as soon as the outdated ComplianceRemediation object is removed.

# oc get complianceremediation
NAME                                              STATE
worker-scan-no-empty-passwords                    Outdated
worker1-scan-audit-rules-dac-modification-chmod   Applied

# oc patch complianceremediation worker-scan-no-empty-passwords -p '{"spec":{"outdated": null}}' --type=merge
complianceremediation.compliance.openshift.io/worker-scan-no-empty-passwords patched

# oc get complianceremediation
NAME                                              STATE
worker-scan-no-empty-passwords                    Applied
worker1-scan-audit-rules-dac-modification-chmod   Applied

# oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-dbfc2178bb60fefef18a87cf7492f041   True      False      False      3              3                   3                     0                      23h
worker   rendered-worker-9a8ec6d3604a3a9db54fcd5442f6f316   True      False      False      1              1                   1                     0                      23h
wscan    rendered-wscan-a757137f63a73ae96251a4e61324f633    False     True       False      2              1                   1                     0                      16h
wscan    rendered-wscan-e40afd9e8df4f463953b79093eb917c7    True      False      False      2              2                   2                     0                      16h
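For triage, the outdated remediation and the paused pool can also be inspected directly. The commands below are a minimal sketch, assuming the wscan pool name from this reproducer; the label selector is the one named in the HaveOutdatedRemediations event, and spec.paused is the standard MachineConfigPool pause flag. The supported cleanup is still removing the outdated remediation (or clearing spec.outdated, as above) rather than force-unpausing the pool.

List remediations the operator has flagged as outdated:
$ oc get complianceremediation -l complianceoperator.openshift.io/outdated-remediation

Check whether the pool is currently held paused:
$ oc get mcp wscan -o jsonpath='{.spec.paused}{"\n"}'

Last-resort manual unpause (a hypothetical workaround; the operator may pause the pool again while the outdated remediation exists):
$ oc patch mcp wscan --type=merge -p '{"spec":{"paused":false}}'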
[PR Pre-Merge Testing]

Looks good to me. Now the autoApplyRemediations setting does not pause the MachineConfigPool when remediations get applied through another ComplianceSuite object. The newly added annotations also help to apply and remove outdated ComplianceRemediation objects.

Verified on: 4.7.0-0.nightly-2021-01-27-002938

$ gh pr checkout 546
remote: Enumerating objects: 22, done.
remote: Counting objects: 100% (22/22), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 22 (delta 15), reused 22 (delta 15), pack-reused 0
Unpacking objects: 100% (22/22), 7.12 KiB | 2.37 MiB/s, done.
From https://github.com/openshift/compliance-operator
 * [new ref]    refs/pull/546/head -> annotations
Switched to branch 'annotations'

A new release of gh is available: 1.3.0 → v1.5.0
https://github.com/cli/cli/releases/tag/v1.5.0

$ git branch
* annotations
  fresh-rems
  handle-products
  master
  platform-tailor
  release-4.6
  release-4.7

$ make deploy-local
Creating 'openshift-compliance' namespace/project
namespace/openshift-compliance created
podman build -t quay.io/compliance-operator/compliance-operator:latest -f build/Dockerfile .
STEP 1: FROM golang:1.15 AS builder
STEP 2: WORKDIR /go/src/github.com/openshift/compliance-operator
--> Using cache 6108d7207bf73d3088c41058489867512a6c496324a355045ef48d486b924fa4
--> 6108d7207bf
STEP 3: ENV GOFLAGS=-mod=vendor
--> Using cache 8ad547c085058b172380029a7687661e6f2f86dfa7bb12b0d029d8284a2a363b
--> 8ad547c0850
STEP 4: COPY . .
--> fe9595d5ead
STEP 5: RUN make manager
GOFLAGS=-mod=vendor GO111MODULE=auto go build -race -o /go/src/github.com/openshift/compliance-operator/build/_output/bin/compliance-operator github.com/openshift/compliance-operator/cmd/manager
--> 4bb2707fedd
STEP 6: FROM registry.access.redhat.com/ubi8/ubi-minimal:latest
STEP 7: ENV OPERATOR=/usr/local/bin/compliance-operator USER_UID=1001 USER_NAME=compliance-operator
--> Using cache cad1dadf97338aae70599047dd47947ae3b08798b686224383ccf1c941ba9099
--> cad1dadf973
STEP 8: COPY --from=builder /go/src/github.com/openshift/compliance-operator/build/_output/bin/compliance-operator ${OPERATOR}
--> 281e8f4c6c4
STEP 9: COPY build/bin /usr/local/bin
--> 5b7bbf67645
STEP 10: RUN /usr/local/bin/user_setup
+ mkdir -p /root
+ chown 1001:0 /root
+ chmod ug+rwx /root
+ chmod g+rw /etc/passwd
+ rm /usr/local/bin/user_setup
--> 1ec902c6667
STEP 11: ENTRYPOINT ["/usr/local/bin/entrypoint"]
--> b481575a512
STEP 12: USER ${USER_UID}
STEP 13: COMMIT quay.io/compliance-operator/compliance-operator:latest
--> 8a347aca394
8a347aca3946c6b2294a83568559a1a4355502aad8d996180843f07e25029be1
podman build -t quay.io/compliance-operator/compliance-operator-bundle:latest -f bundle.Dockerfile .
STEP 1: FROM scratch
STEP 2: LABEL operators.operatorframework.io.bundle.mediatype.v1=registry+v1
--> Using cache 19c0108d23041f78bd69b187edc43c2d37942056cef1ba1244589a1109aaf843
--> 19c0108d230
STEP 3: LABEL operators.operatorframework.io.bundle.manifests.v1=manifests/
--> Using cache 43cc33cfe59fca6121f3eb97f0b1e6960afb1d326d47db4a2f5b0d2a065a2baa
--> 43cc33cfe59
STEP 4: LABEL operators.operatorframework.io.bundle.metadata.v1=metadata/
--> Using cache c6a1f3681bc55bb1a5bf64593bece6376fedbffe2f915cb02fba57f78985902f
--> c6a1f3681bc
STEP 5: LABEL operators.operatorframework.io.bundle.package.v1=compliance-operator
--> Using cache 96f8773deabdd5ccb35bda484adda75fdbf7edc3bf6386e3fd9617364a1fae6d
--> 96f8773deab
STEP 6: LABEL operators.operatorframework.io.bundle.channels.v1=alpha
--> Using cache 9ecf452b4b6165399b9645a8d26b0ff859859dbae006fd46f0392991b834b21b
--> 9ecf452b4b6
STEP 7: LABEL operators.operatorframework.io.bundle.channel.default.v1=alpha
--> Using cache 8bab849fcbff2df5620eac82b541e22bd61c9fd71f5e077e330576f7a3feeb16
--> 8bab849fcbf
STEP 8: COPY deploy/olm-catalog/compliance-operator/manifests /manifests/
--> Using cache a110f8e42f84683fb9dcd46dd2f266622263890a56820cb3584862210af6bc50
--> a110f8e42f8
STEP 9: COPY deploy/olm-catalog/compliance-operator/metadata /metadata/
--> Using cache 580c68e23a585ecede07dfe1ecc265a33820d8476ac588d8cc03e17e4140c9e8
STEP 10: COMMIT quay.io/compliance-operator/compliance-operator-bundle:latest
--> 580c68e23a5
580c68e23a585ecede07dfe1ecc265a33820d8476ac588d8cc03e17e4140c9e8
Temporarily exposing the default route to the image registry
config.imageregistry.operator.openshift.io/cluster patched
Pushing image quay.io/compliance-operator/compliance-operator:latest to the image registry
IMAGE_REGISTRY_HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'); \
podman login "--tls-verify=false" -u kubeadmin -p sha256~jttQrmZrYCV-uaULb88HCXBUn8v_bO-nFXjWQd2PZcA ${IMAGE_REGISTRY_HOST}; \
podman push "--tls-verify=false" quay.io/compliance-operator/compliance-operator:latest ${IMAGE_REGISTRY_HOST}/openshift/compliance-operator:latest
Login Succeeded!
Getting image source signatures
Copying blob 48cff9696c14 done
Copying blob 67550323ae75 done
Copying blob 24e880aa7564 done
Copying blob eddba477a8ae skipped: already exists
Copying blob f80c95f61fff skipped: already exists
Copying config 8a347aca39 done
Writing manifest to image destination
Storing signatures
Removing the route from the image registry
config.imageregistry.operator.openshift.io/cluster patched
IMAGE_FORMAT variable missing. We're in local enviornment.
customresourcedefinition.apiextensions.k8s.io/compliancecheckresults.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/complianceremediations.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/compliancescans.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/compliancesuites.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/profilebundles.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/profiles.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/rules.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/scansettingbindings.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/scansettings.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/tailoredprofiles.compliance.openshift.io unchanged
customresourcedefinition.apiextensions.k8s.io/variables.compliance.openshift.io unchanged
sed -i 's%quay.io/compliance-operator/compliance-operator:latest%image-registry.openshift-image-registry.svc:5000/openshift/compliance-operator:latest%' deploy/operator.yaml
namespace/openshift-compliance unchanged
deployment.apps/compliance-operator created
role.rbac.authorization.k8s.io/compliance-operator created
clusterrole.rbac.authorization.k8s.io/compliance-operator unchanged
role.rbac.authorization.k8s.io/resultscollector created
role.rbac.authorization.k8s.io/api-resource-collector created
role.rbac.authorization.k8s.io/remediation-aggregator created
role.rbac.authorization.k8s.io/rerunner created
role.rbac.authorization.k8s.io/profileparser created
clusterrole.rbac.authorization.k8s.io/api-resource-collector unchanged
rolebinding.rbac.authorization.k8s.io/compliance-operator created
clusterrolebinding.rbac.authorization.k8s.io/compliance-operator unchanged
rolebinding.rbac.authorization.k8s.io/resultscollector created
rolebinding.rbac.authorization.k8s.io/remediation-aggregator created
clusterrolebinding.rbac.authorization.k8s.io/api-resource-collector unchanged
rolebinding.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/rerunner created
rolebinding.rbac.authorization.k8s.io/profileparser created
serviceaccount/compliance-operator created
serviceaccount/resultscollector created
serviceaccount/remediation-aggregator created
serviceaccount/rerunner created
serviceaccount/api-resource-collector created
serviceaccount/profileparser created
deployment.apps/compliance-operator triggers updated

$ oc project openshift-compliance
Already on project "openshift-compliance" on server "https://api.pdhamdhe-ocp47.qe.devcluster.openshift.com:6443".
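Before running the scans, it can be worth double-checking that the operator is actually running the locally built image rather than the quay.io one. A small sketch, assuming the deployment name compliance-operator from the output above:

$ oc get deployment compliance-operator -n openshift-compliance -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
image-registry.openshift-image-registry.svc:5000/openshift/compliance-operator:latest   (expected after the sed/deploy step; output shown here is illustrative)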
$ oc get pods
NAME                                             READY   STATUS    RESTARTS   AGE
compliance-operator-854f667d5c-cczmk             1/1     Running   0          2m3s
ocp4-openshift-compliance-pp-7cd9f6b64f-44w9n    1/1     Running   0          76s
rhcos4-openshift-compliance-pp-999fd896f-qd8qf   1/1     Running   0          76s

$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_base
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
compliancesuite.compliance.openshift.io/example-compliancesuite created

$ oc get compliancesuite -w
NAME                      PHASE         RESULT
example-compliancesuite   RUNNING       NOT-AVAILABLE
example-compliancesuite   AGGREGATING   NOT-AVAILABLE
example-compliancesuite   DONE          NON-COMPLIANT

$ oc get pods
NAME                                                         READY   STATUS      RESTARTS   AGE
aggregator-pod-worker-scan                                   0/1     Completed   0          25s
compliance-operator-854f667d5c-cczmk                         1/1     Running     0          12m
ocp4-openshift-compliance-pp-7cd9f6b64f-d6f78                1/1     Running     0          3m33s
rhcos4-openshift-compliance-pp-999fd896f-q8n59               1/1     Running     0          3m33s
worker-scan-ip-10-0-143-223.us-east-2.compute.internal-pod   0/2     Completed   0          65s
worker-scan-ip-10-0-177-195.us-east-2.compute.internal-pod   0/2     Completed   0          65s
worker-scan-ip-10-0-215-211.us-east-2.compute.internal-pod   0/2     Completed   0          65s

$ oc get complianceremediation --show-labels
NAME                             STATE     LABELS
worker-scan-no-empty-passwords   Applied   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite

$ oc get mc | grep "75-\|NAME"
NAME                                GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
75-worker-scan-no-empty-passwords                           2.2.0             37s

$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-4d6442c76b008d045258d7f5dbc16c44   True      False      False      3              3                   3                     0                      4h10m
worker   rendered-worker-1cd5155cf493e700c47c3592bdda7c34   False     True       False      3              0                   0                     0                      4h10m
worker   rendered-worker-1cd5155cf493e700c47c3592bdda7c34   False     True       False      3              1                   1                     0                      4h12m
worker   rendered-worker-1cd5155cf493e700c47c3592bdda7c34   False     True       False      3              2                   2                     0                      4h14m
worker   rendered-worker-b3145404aa57270154a1ef82fdd9e043   True      False      False      3              3                   3                     0                      4h15m

$ oc apply -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_change
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
compliancesuite.compliance.openshift.io/example-compliancesuite configured

$ oc annotate compliancescans/worker-scan compliance.openshift.io/rescan=
compliancescan.compliance.openshift.io/worker-scan annotated

$ oc get compliancesuite -w
NAME                      PHASE         RESULT
example-compliancesuite   RUNNING       NOT-AVAILABLE
example-compliancesuite   AGGREGATING   NOT-AVAILABLE
example-compliancesuite   DONE          COMPLIANT

$ oc get compliancecheckresult
NAME                             STATUS   SEVERITY
worker-scan-no-empty-passwords   PASS     high

$ oc get complianceremediations --show-labels
NAME                             STATE      LABELS
worker-scan-no-empty-passwords   Outdated   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite,complianceoperator.openshift.io/outdated-remediation=

$ oc annotate compliancesuites/example-compliancesuite compliance.openshift.io/apply-remediations=
compliancesuite.compliance.openshift.io/example-compliancesuite annotated

$ oc get complianceremediations --show-labels
NAME                             STATE      LABELS
worker-scan-no-empty-passwords   Outdated   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite,complianceoperator.openshift.io/outdated-remediation=

$ oc get pods -w
NAME                                                         READY   STATUS      RESTARTS   AGE
aggregator-pod-worker-scan                                   0/1     Completed   0          115s
compliance-operator-854f667d5c-cczmk                         1/1     Running     0          23m
ocp4-openshift-compliance-pp-7cd9f6b64f-cnvdh                1/1     Running     0          9m6s
rhcos4-openshift-compliance-pp-999fd896f-fcg56               1/1     Running     0          9m6s
worker-scan-ip-10-0-143-223.us-east-2.compute.internal-pod   0/2     Completed   0          2m35s
worker-scan-ip-10-0-177-195.us-east-2.compute.internal-pod   0/2     Completed   0          2m35s
worker-scan-ip-10-0-215-211.us-east-2.compute.internal-pod   0/2     Completed   0          2m35s

$ oc describe compliancesuite example-compliancesuite
Name:         example-compliancesuite
Namespace:    openshift-compliance
Labels:       <none>
Annotations:  <none>
API Version:  compliance.openshift.io/v1alpha1
Kind:         ComplianceSuite
Metadata:
  Creation Timestamp:  2021-01-27T08:30:33Z
  Finalizers:
    suite.finalizers.compliance.openshift.io
  Generation:  4
  Managed Fields:
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:autoApplyRemediations:
    Manager:      kubectl-create
    Operation:    Update
    Time:         2021-01-27T08:30:33Z
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        f:schedule:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-01-27T08:39:53Z
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"suite.finalizers.compliance.openshift.io":
      f:spec:
        f:scans:
      f:status:
        .:
        f:phase:
        f:result:
        f:scanStatuses:
    Manager:         compliance-operator
    Operation:       Update
    Time:            2021-01-27T08:40:27Z
  Resource Version:  134178
  Self Link:         /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/compliancesuites/example-compliancesuite
  UID:               ed6883bf-4691-4fd9-a891-e31dbae71e7f
Spec:
  Auto Apply Remediations:  true
  Scans:
    Content:        ssg-rhcos4-ds.xml
    Content Image:  quay.io/jhrozek/ocp4-openscap-content:rem_mod_change
    Debug:          true
    Name:           worker-scan
    Node Selector:
      node-role.kubernetes.io/worker:
    Profile:  xccdf_org.ssgproject.content_profile_moderate
    Raw Result Storage:
      Pv Access Modes:
        ReadWriteOnce
      Rotation:  3
      Size:      1Gi
    Rule:  xccdf_org.ssgproject.content_rule_no_empty_passwords
    Scan Tolerations:
      Effect:    NoSchedule
      Key:       node-role.kubernetes.io/master
      Operator:  Exists
    Scan Type:   Node
  Schedule:      0 1 * * *
Status:
  Phase:   DONE
  Result:  COMPLIANT
  Scan Statuses:
    Current Index:  1
    Name:           worker-scan
    Phase:          DONE
    Result:         COMPLIANT
    Results Storage:
      Name:       worker-scan
      Namespace:  openshift-compliance
Events:
  Type    Reason                    Age                    From       Message
  ----    ------                    ----                   ----       -------
  Normal  ResultAvailable           6m34s (x6 over 15m)    suitectrl  The result is: NON-COMPLIANT
  Normal  HaveOutdatedRemediations  4m14s (x4 over 5m30s)  suitectrl  One of suite's scans produced outdated remediations, please check for complianceremediation objects labeled with complianceoperator.openshift.io/outdated-remediation
  Normal  ResultAvailable           4m14s (x4 over 5m30s)  suitectrl  The result is: COMPLIANT

$ oc get complianceremediations --show-labels
NAME                             STATE      LABELS
worker-scan-no-empty-passwords   Outdated   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite,complianceoperator.openshift.io/outdated-remediation=

$ oc annotate compliancesuites/example-compliancesuite compliance.openshift.io/remove-outdated=
compliancesuite.compliance.openshift.io/example-compliancesuite annotated

$ oc describe compliancesuite example-compliancesuite
Name:         example-compliancesuite
Namespace:    openshift-compliance
Labels:       <none>
Annotations:  <none>
API Version:  compliance.openshift.io/v1alpha1
Kind:         ComplianceSuite
Metadata:
  Creation Timestamp:  2021-01-27T08:30:33Z
  Finalizers:
    suite.finalizers.compliance.openshift.io
  Generation:  4
  Managed Fields:
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:autoApplyRemediations:
    Manager:      kubectl-create
    Operation:    Update
    Time:         2021-01-27T08:30:33Z
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        f:schedule:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2021-01-27T08:39:53Z
    API Version:  compliance.openshift.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"suite.finalizers.compliance.openshift.io":
      f:spec:
        f:scans:
      f:status:
        .:
        f:phase:
        f:result:
        f:scanStatuses:
    Manager:         compliance-operator
    Operation:       Update
    Time:            2021-01-27T08:40:27Z
  Resource Version:  135805
  Self Link:         /apis/compliance.openshift.io/v1alpha1/namespaces/openshift-compliance/compliancesuites/example-compliancesuite
  UID:               ed6883bf-4691-4fd9-a891-e31dbae71e7f
Spec:
  Auto Apply Remediations:  true
  Scans:
    Content:        ssg-rhcos4-ds.xml
    Content Image:  quay.io/jhrozek/ocp4-openscap-content:rem_mod_change
    Debug:          true
    Name:           worker-scan
    Node Selector:
      node-role.kubernetes.io/worker:
    Profile:  xccdf_org.ssgproject.content_profile_moderate
    Raw Result Storage:
      Pv Access Modes:
        ReadWriteOnce
      Rotation:  3
      Size:      1Gi
    Rule:  xccdf_org.ssgproject.content_rule_no_empty_passwords
    Scan Tolerations:
      Effect:    NoSchedule
      Key:       node-role.kubernetes.io/master
      Operator:  Exists
    Scan Type:   Node
  Schedule:      0 1 * * *
Status:
  Phase:   DONE
  Result:  COMPLIANT
  Scan Statuses:
    Current Index:  1
    Name:           worker-scan
    Phase:          DONE
    Result:         COMPLIANT
    Results Storage:
      Name:       worker-scan
      Namespace:  openshift-compliance
Events:
  Type    Reason                    Age                 From       Message
  ----    ------                    ----                ----       -------
  Normal  ResultAvailable           8m8s (x6 over 17m)  suitectrl  The result is: NON-COMPLIANT
  Normal  HaveOutdatedRemediations  8s (x5 over 7m4s)   suitectrl  One of suite's scans produced outdated remediations, please check for complianceremediation objects labeled with complianceoperator.openshift.io/outdated-remediation
  Normal  ResultAvailable           8s (x6 over 7m4s)   suitectrl  The result is: COMPLIANT

$ oc get complianceremediations --show-labels
NAME                             STATE     LABELS
worker-scan-no-empty-passwords   Applied   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite

$ oc get compliancecheckresult -l complianceoperator.openshift.io/outdated-remediation
No resources found in openshift-compliance namespace.
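For reference, the annotation-driven workflow exercised above reduces to a handful of commands. Object names are the ones from this test; the rescan annotation already existed (it is used in the original reproducer), while apply-remediations and remove-outdated are the ones introduced by this PR:

$ oc annotate compliancescans/worker-scan compliance.openshift.io/rescan=                                  (re-run a scan)
$ oc annotate compliancesuites/example-compliancesuite compliance.openshift.io/apply-remediations=         (apply the suite's remediations)
$ oc annotate compliancesuites/example-compliancesuite compliance.openshift.io/remove-outdated=            (remove outdated remediations; in this test the remediation then went back to Applied)
$ oc get complianceremediations -l complianceoperator.openshift.io/outdated-remediation=                   (confirm nothing is still flagged as outdated)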
$ oc create -f - <<EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ComplianceSuite
metadata:
  name: example1-compliancesuite
spec:
  autoApplyRemediations: true
  schedule: "0 1 * * *"
  scans:
    - name: worker1-scan
      profile: xccdf_org.ssgproject.content_profile_moderate
      content: ssg-rhcos4-ds.xml
      contentImage: quay.io/complianceascode/ocp4:latest
      rule: "xccdf_org.ssgproject.content_rule_audit_rules_dac_modification_chmod"
      debug: true
      nodeSelector:
        node-role.kubernetes.io/worker: ""
EOF
compliancesuite.compliance.openshift.io/example1-compliancesuite created

$ oc get compliancesuite -w
NAME                       PHASE         RESULT
example-compliancesuite    DONE          COMPLIANT
example1-compliancesuite   RUNNING       NOT-AVAILABLE
example1-compliancesuite   AGGREGATING   NOT-AVAILABLE
example1-compliancesuite   DONE          NON-COMPLIANT

$ oc get pods
NAME                                                          READY   STATUS      RESTARTS   AGE
aggregator-pod-worker1-scan                                   0/1     Completed   0          24s
compliance-operator-854f667d5c-cczmk                          1/1     Running     0          50m
ocp4-openshift-compliance-pp-7cd9f6b64f-vcchm                 1/1     Running     0          5m43s
rhcos4-openshift-compliance-pp-999fd896f-z722x                1/1     Running     0          7m38s
worker1-scan-ip-10-0-143-223.us-east-2.compute.internal-pod   0/2     Completed   0          64s
worker1-scan-ip-10-0-177-195.us-east-2.compute.internal-pod   0/2     Completed   0          64s
worker1-scan-ip-10-0-215-211.us-east-2.compute.internal-pod   0/2     Completed   0          64s

$ oc get complianceremediations --show-labels
NAME                                              STATE     LABELS
worker-scan-no-empty-passwords                    Applied   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite
worker1-scan-audit-rules-dac-modification-chmod   Applied   compliance.openshift.io/scan-name=worker1-scan,compliance.openshift.io/suite=example1-compliancesuite

$ oc get compliancecheckresult
NAME                                              STATUS   SEVERITY
worker-scan-no-empty-passwords                    PASS     high
worker1-scan-audit-rules-dac-modification-chmod   FAIL     medium

$ oc get mc | grep "75-\|NAME"
NAME                                                 GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
75-worker-scan-no-empty-passwords                                            2.2.0             39m
75-worker1-scan-audit-rules-dac-modification-chmod                           3.1.0             64s

$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-4d6442c76b008d045258d7f5dbc16c44   True      False      False      3              3                   3                     0                      4h49m
worker   rendered-worker-6dfbb72a4d5e7d1c482431381ed8e16d   False     True       False      3              0                   0                     0                      4h49m
worker   rendered-worker-6dfbb72a4d5e7d1c482431381ed8e16d   False     True       False      3              1                   1                     0                      4h49m
worker   rendered-worker-6dfbb72a4d5e7d1c482431381ed8e16d   False     True       False      3              1                   1                     0                      4h49m
worker   rendered-worker-6dfbb72a4d5e7d1c482431381ed8e16d   False     True       False      3              2                   2                     0                      4h51m
worker   rendered-worker-6dfbb72a4d5e7d1c482431381ed8e16d   False     True       False      3              2                   2                     0                      4h51m
worker   rendered-worker-c5a39ad38c3663b456ae2a957ee5761c   True      False      False      3              3                   3                     0                      4h53m

$ oc get compliancecheckresult
NAME                                              STATUS   SEVERITY
worker-scan-no-empty-passwords                    PASS     high
worker1-scan-audit-rules-dac-modification-chmod   FAIL     medium

$ oc get complianceremediations --show-labels
NAME                                              STATE     LABELS
worker-scan-no-empty-passwords                    Applied   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite
worker1-scan-audit-rules-dac-modification-chmod   Applied   compliance.openshift.io/scan-name=worker1-scan,compliance.openshift.io/suite=example1-compliancesuite

$ oc describe compliancesuite example1-compliancesuite | tail
    Name:    worker1-scan
    Phase:   DONE
    Result:  NON-COMPLIANT
    Results Storage:
      Name:       worker1-scan
      Namespace:  openshift-compliance
Events:
  Type    Reason           Age                   From       Message
  ----    ------           ----                  ----       -------
  Normal  ResultAvailable  7m8s (x2 over 7m18s)  suitectrl  The result is: NON-COMPLIANT
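As a quick sanity check that the fix leaves the pools unpaused, the pause flag can be read for all pools at once. A sketch assuming the standard MachineConfigPool spec.paused field:

$ oc get mcp -o custom-columns=NAME:.metadata.name,PAUSED:.spec.paused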
[PR Pre-Merge Testing continued...]

The second test shows that autoApplyRemediations does not pause the MachineConfigPool even though an outdated ComplianceRemediation object is already present on the cluster.

$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_base
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
compliancesuite.compliance.openshift.io/example-compliancesuite created

$ oc get compliancesuite -w
NAME                      PHASE         RESULT
example-compliancesuite   RUNNING       NOT-AVAILABLE
example-compliancesuite   AGGREGATING   NOT-AVAILABLE
example-compliancesuite   DONE          NON-COMPLIANT

$ oc get mc | grep "75-\|NAME"
NAME                                GENERATEDBYCONTROLLER   IGNITIONVERSION   AGE
75-worker-scan-no-empty-passwords                           2.2.0             4m9s

$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-4d6442c76b008d045258d7f5dbc16c44   True      False      False      3              3                   3                     0                      6h17m
worker   rendered-worker-1cd5155cf493e700c47c3592bdda7c34   False     True       False      3              1                   1                     0                      6h17m
worker   rendered-worker-1cd5155cf493e700c47c3592bdda7c34   False     True       False      3              2                   2                     0                      6h17m
worker   rendered-worker-1cd5155cf493e700c47c3592bdda7c34   False     True       False      3              2                   2                     0                      6h17m
worker   rendered-worker-b3145404aa57270154a1ef82fdd9e043   True      False      False      3              3                   3                     0                      6h19m

$ oc get complianceremediations --show-labels
NAME                             STATE     LABELS
worker-scan-no-empty-passwords   Applied   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite

$ oc get compliancecheckresult
NAME                             STATUS   SEVERITY
worker-scan-no-empty-passwords   FAIL     high

$ oc apply -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/jhrozek/ocp4-openscap-content:rem_mod_change
>       rule: xccdf_org.ssgproject.content_rule_no_empty_passwords
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
compliancesuite.compliance.openshift.io/example-compliancesuite configured

$ oc annotate compliancescans/worker-scan compliance.openshift.io/rescan=
compliancescan.compliance.openshift.io/worker-scan annotated

$ oc get compliancesuite -w
NAME                      PHASE         RESULT
example-compliancesuite   RUNNING       NOT-AVAILABLE
example-compliancesuite   AGGREGATING   NOT-AVAILABLE
example-compliancesuite   DONE          COMPLIANT

$ oc get pods
NAME                                                         READY   STATUS      RESTARTS   AGE
aggregator-pod-worker-scan                                   0/1     Completed   0          33s
compliance-operator-854f667d5c-cczmk                         1/1     Running     0          144m
ocp4-openshift-compliance-pp-7cd9f6b64f-qdg7d                1/1     Running     0          4m33s
rhcos4-openshift-compliance-pp-999fd896f-24svw               1/1     Running     0          4m33s
worker-scan-ip-10-0-143-223.us-east-2.compute.internal-pod   0/2     Completed   0          73s
worker-scan-ip-10-0-177-195.us-east-2.compute.internal-pod   0/2     Completed   0          73s
worker-scan-ip-10-0-215-211.us-east-2.compute.internal-pod   0/2     Completed   0          73s

$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: example1-compliancesuite
> spec:
>   autoApplyRemediations: true
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker1-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/complianceascode/ocp4:latest
>       rule: "xccdf_org.ssgproject.content_rule_audit_rules_dac_modification_chmod"
>       debug: true
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
compliancesuite.compliance.openshift.io/example1-compliancesuite created

$ oc get compliancesuite -w
NAME                       PHASE         RESULT
example-compliancesuite    DONE          COMPLIANT
example1-compliancesuite   RUNNING       NOT-AVAILABLE
example1-compliancesuite   AGGREGATING   NOT-AVAILABLE
example1-compliancesuite   DONE          NON-COMPLIANT

$ oc get compliancecheckresult
NAME                                              STATUS   SEVERITY
worker-scan-no-empty-passwords                    PASS     high
worker1-scan-audit-rules-dac-modification-chmod   FAIL     medium

$ oc get complianceremediations
NAME                                              STATE
worker-scan-no-empty-passwords                    Outdated
worker1-scan-audit-rules-dac-modification-chmod   Applied

$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-4d6442c76b008d045258d7f5dbc16c44   True      False      False      3              3                   3                     0                      6h24m
worker   rendered-worker-b3145404aa57270154a1ef82fdd9e043   False     True       False      3              0                   0                     0                      6h24m
worker   rendered-worker-b3145404aa57270154a1ef82fdd9e043   False     True       False      3              1                   1                     0                      6h24m
worker   rendered-worker-b3145404aa57270154a1ef82fdd9e043   False     True       False      3              1                   1                     0                      6h24m

$ oc get complianceremediations --show-labels
NAME                                              STATE      LABELS
worker-scan-no-empty-passwords                    Outdated   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite,complianceoperator.openshift.io/outdated-remediation=
worker1-scan-audit-rules-dac-modification-chmod   Applied    compliance.openshift.io/scan-name=worker1-scan,compliance.openshift.io/suite=example1-compliancesuite
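When two suites both emit remediations, the suite label on each ComplianceRemediation (visible in the --show-labels output above) makes it easy to query them per suite, e.g.:

$ oc get complianceremediations -l compliance.openshift.io/suite=example1-compliancesuite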
What's more, the newly added annotations also work as expected, as shown below:

# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-01-27-022930   True        False         5h40m   Cluster version is 4.7.0-0.nightly-2021-01-27-022930

# oc get complianceremediations --show-labels
NAME                                              STATE      LABELS
worker-scan-no-empty-passwords                    Outdated   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite,complianceoperator.openshift.io/outdated-remediation=
worker1-scan-audit-rules-dac-modification-chmod   Applied    compliance.openshift.io/scan-name=worker1-scan,compliance.openshift.io/suite=example1-compliancesuite

# oc annotate compliancesuites/example-compliancesuite compliance.openshift.io/apply-remediations=
compliancesuite.compliance.openshift.io/example-compliancesuite annotated

# oc annotate compliancesuites/example-compliancesuite compliance.openshift.io/remove-outdated=
compliancesuite.compliance.openshift.io/example-compliancesuite annotated

# oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-2e102580f497e7cd8abe66aa220cb869   True      False      False      3              3                   3                     0                      5h40m
worker   rendered-worker-f7d1988acf33f7612b53b2643c78574f   False     True       False      2              0                   0                     0                      5h40m
worker   rendered-worker-f7d1988acf33f7612b53b2643c78574f   False     True       False      2              1                   1                     0                      5h42m
worker   rendered-worker-f7d1988acf33f7612b53b2643c78574f   False     True       False      2              1                   1                     0                      5h42m
worker   rendered-worker-4f6704fafea914059d6c287791f0ee8c   True      False      False      2              2                   2                     0                      5h44m

# oc get complianceremediations --show-labels
NAME                                              STATE     LABELS
worker-scan-no-empty-passwords                    Applied   compliance.openshift.io/scan-name=worker-scan,compliance.openshift.io/suite=example-compliancesuite
worker1-scan-audit-rules-dac-modification-chmod   Applied   compliance.openshift.io/scan-name=worker1-scan,compliance.openshift.io/suite=example1-compliancesuite
One correction to https://bugzilla.redhat.com/show_bug.cgi?id=1919098#c3: since the example-compliancesuite ComplianceSuite already sets "autoApplyRemediations: true", the step "oc annotate compliancesuites/example-compliancesuite compliance.openshift.io/apply-remediations=" is unnecessary.
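A quick way to tell whether the apply-remediations annotation is needed at all is to read the suite's autoApplyRemediations field directly, for example:

$ oc get compliancesuite example-compliancesuite -o jsonpath='{.spec.autoApplyRemediations}{"\n"}'
true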
PR pre-merge verification/testing is already done, so as per comments 1, 2, 3, and 4 I am changing the status of this bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.7 compliance-operator image update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0435