Description of problem:
Due to a bug in remediation template processing, some remediations, notably those related to SSHD, are not rendered by the Compliance Operator.

Version-Release number of selected component (if applicable):
0.1.46

How reproducible:
Always

Steps to Reproduce:
1. Install CO v0.1.46
2. Run the moderate suite (one way to do this is sketched below)
3. oc get remediations | grep sshd

Actual results:
Nothing is returned.

Expected results:
oc get remediations | grep sshd
workers-scan-sshd-set-idle-timeout   NotApplied
workers-scan-sshd-set-keepalive-0    NotApplied

Additional info:
This was already fixed upstream; filing this bug for tracking and documentation purposes, and so that QE can verify the issue and possibly add a regression test for the future.
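For reference, a minimal sketch of one way to run the moderate suite from step 2, using a ScanSettingBinding against the rhcos4-moderate profile. The resource name is illustrative, and it assumes the default ScanSetting shipped with the operator is present in the openshift-compliance namespace:

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate-test            # illustrative name
  namespace: openshift-compliance
profiles:
  - name: rhcos4-moderate        # node-level moderate profile parsed from the rhcos4 ProfileBundle
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default                  # assumes the default ScanSetting exists
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1

A ComplianceSuite created directly (as in the verification below) works equally well for reproducing the issue.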
[Bug_Verification]

The sshd-related remediations are getting rendered successfully in compliance-operator.v0.1.47.

Verified on: 4.10.0-0.nightly-2021-12-21-130047 + compliance-operator.v0.1.47

$ oc project openshift-compliance
Now using project "openshift-compliance" on server "https://api.pdhamdhe2212.qe.devcluster.openshift.com:6443".

$ oc get csv -nopenshift-compliance
NAME                             DISPLAY                            VERSION   REPLACES   PHASE
compliance-operator.v0.1.47     Compliance Operator                0.1.47               Succeeded
elasticsearch-operator.5.3.2-5   OpenShift Elasticsearch Operator   5.3.2-5              Succeeded

$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: worker-compliancesuite
>   namespace: openshift-compliance
> spec:
>   autoApplyRemediations: false
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/complianceascode/ocp4:latest
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
compliancesuite.compliance.openshift.io/worker-compliancesuite created

$ oc get suite -w
NAME                     PHASE         RESULT
worker-compliancesuite   LAUNCHING     NOT-AVAILABLE
worker-compliancesuite   RUNNING       NOT-AVAILABLE
worker-compliancesuite   AGGREGATING   NOT-AVAILABLE
worker-compliancesuite   DONE          NON-COMPLIANT
worker-compliancesuite   DONE          NON-COMPLIANT

$ oc get pods
NAME                                                         READY   STATUS      RESTARTS        AGE
aggregator-pod-worker-scan                                   0/1     Completed   0               119s
compliance-operator-55fd995f9-7z9pf                          1/1     Running     1 (7m59s ago)   8m40s
ocp4-openshift-compliance-pp-54f5ffdd5b-5z6x6                1/1     Running     0               7m22s
rhcos4-openshift-compliance-pp-868bf9bd9b-q6xgx              1/1     Running     0               7m23s
worker-scan-ip-10-0-133-98.us-east-2.compute.internal-pod    0/2     Completed   0               3m3s
worker-scan-ip-10-0-164-181.us-east-2.compute.internal-pod   0/2     Completed   0               3m3s
worker-scan-ip-10-0-201-172.us-east-2.compute.internal-pod   0/2     Completed   0               3m3s

$ oc get scan
NAME          PHASE   RESULT
worker-scan   DONE    NON-COMPLIANT

$ oc get suite
NAME                     PHASE   RESULT
worker-compliancesuite   DONE    NON-COMPLIANT

$ oc get complianceremediations | grep sshd
worker-scan-sshd-set-idle-timeout   NotApplied
worker-scan-sshd-set-keepalive-0    NotApplied
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Compliance Operator bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0014