Bug 2033009 - The sshd-related remediations are not rendered
Summary: The sshd-related remediations are not rendered
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: ---
Assignee: Jakub Hrozek
QA Contact: Prashant Dhamdhere
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-15 16:57 UTC by Jakub Hrozek
Modified: 2022-01-04 12:05 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: When rendering remediations, the Compliance Operator checked that each remediation was well-formed using a regular expression that was both too strict and not actually necessary.
Consequence: Some remediations, notably those that render sshd_config, were not created at all because they did not pass the regular expression check.
Fix: The regular expression check was removed, as it was found to be unnecessary.
Result: All remediations now render correctly.
Clone Of:
Environment:
Last Closed: 2022-01-04 12:05:52 UTC
Target Upstream Version:
Embargoed:


Links
- Github openshift/compliance-operator pull 758 (Merged): "Remove regex checks for url-encoded content" - last updated 2021-12-15 16:58:14 UTC
- Red Hat Product Errata RHBA-2022:0014 - last updated 2022-01-04 12:05:56 UTC

Description Jakub Hrozek 2021-12-15 16:57:15 UTC
Description of problem:
Due to a bug in remediation template processing, some remediations, notably those related to sshd, are not rendered by the Compliance Operator.

Version-Release number of selected component (if applicable):
0.1.46

How reproducible:
always

Steps to Reproduce:
1. install CO v0.1.46
2. run the moderate suite (one way to do this is sketched below)
3. oc get remediations | grep sshd
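
For step 2, one way to run the moderate suite is to bind the rhcos4-moderate profile with a ScanSettingBinding. This is a minimal sketch, assuming the default ScanSetting and the rhcos4-moderate Profile shipped with the operator are present in the openshift-compliance namespace (the binding name "moderate" is arbitrary); apply it with "oc create -f -":

apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: moderate
  namespace: openshift-compliance
profiles:
  # Node-level profile that contains the sshd rules
  - name: rhcos4-moderate
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  # ScanSetting created by the operator at install time
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1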

Actual results:
Nothing is returned; no sshd-related remediations exist.

Expected results:
oc get remediations | grep sshd
workers-scan-sshd-set-idle-timeout                                                        NotApplied
workers-scan-sshd-set-keepalive-0                                                         NotApplied

Additional info:
This was already fixed upstream; filing this bug for tracking and documentation purposes, and so that QE can verify the issue and possibly add a test for the future.
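
For context, an illustrative sketch only (not output from the operator): the sshd remediations for RHCOS nodes render as MachineConfig objects whose file contents are carried as a URL-encoded "data:" URL, and it was this URL-encoded payload that the removed regular-expression check rejected. A remediation payload looks roughly like the following; the path, mode, and encoded contents are hypothetical, and metadata/role labels are omitted:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/ssh/sshd_config
          mode: 0600
          overwrite: true
          contents:
            # "ClientAliveInterval 600\n", percent-encoded
            source: data:,ClientAliveInterval%20600%0A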

Comment 3 Prashant Dhamdhere 2021-12-22 07:28:48 UTC
[Bug_Verification]

The sshd-related remediations are rendered successfully in compliance-operator.v0.1.47.


Verified on:
4.10.0-0.nightly-2021-12-21-130047 +  compliance-operator.v0.1.47


$ oc project openshift-compliance
Now using project "openshift-compliance" on server "https://api.pdhamdhe2212.qe.devcluster.openshift.com:6443".


$ oc get csv -nopenshift-compliance
NAME                             DISPLAY                            VERSION   REPLACES   PHASE
compliance-operator.v0.1.47      Compliance Operator                0.1.47               Succeeded
elasticsearch-operator.5.3.2-5   OpenShift Elasticsearch Operator   5.3.2-5              Succeeded


$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ComplianceSuite
> metadata:
>   name: worker-compliancesuite
>   namespace: openshift-compliance
> spec:
>   autoApplyRemediations: false
>   schedule: "0 1 * * *"
>   scans:
>     - name: worker-scan
>       profile: xccdf_org.ssgproject.content_profile_moderate
>       content: ssg-rhcos4-ds.xml
>       contentImage: quay.io/complianceascode/ocp4:latest
>       nodeSelector:
>         node-role.kubernetes.io/worker: ""
> EOF
compliancesuite.compliance.openshift.io/worker-compliancesuite created


$ oc get suite -w
NAME                     PHASE       RESULT
worker-compliancesuite   LAUNCHING   NOT-AVAILABLE
worker-compliancesuite   RUNNING     NOT-AVAILABLE
worker-compliancesuite   AGGREGATING   NOT-AVAILABLE
worker-compliancesuite   DONE          NON-COMPLIANT
worker-compliancesuite   DONE          NON-COMPLIANT


$ oc get pods
NAME                                                         READY   STATUS      RESTARTS        AGE
aggregator-pod-worker-scan                                   0/1     Completed   0               119s
compliance-operator-55fd995f9-7z9pf                          1/1     Running     1 (7m59s ago)   8m40s
ocp4-openshift-compliance-pp-54f5ffdd5b-5z6x6                1/1     Running     0               7m22s
rhcos4-openshift-compliance-pp-868bf9bd9b-q6xgx              1/1     Running     0               7m23s
worker-scan-ip-10-0-133-98.us-east-2.compute.internal-pod    0/2     Completed   0               3m3s
worker-scan-ip-10-0-164-181.us-east-2.compute.internal-pod   0/2     Completed   0               3m3s
worker-scan-ip-10-0-201-172.us-east-2.compute.internal-pod   0/2     Completed   0               3m3s

$ oc get scan
NAME          PHASE   RESULT
worker-scan   DONE    NON-COMPLIANT


$ oc get suite
NAME                     PHASE   RESULT
worker-compliancesuite   DONE    NON-COMPLIANT


$ oc get complianceremediations |grep sshd
worker-scan-sshd-set-idle-timeout                                                        NotApplied
worker-scan-sshd-set-keepalive-0                                                         NotApplied
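
As an optional extra check (a sketch, using one of the remediation names from the listing above), the remediation object can be dumped to confirm the sshd content actually rendered; the rendered MachineConfig should be visible under spec.current.object:

$ oc get complianceremediation worker-scan-sshd-set-idle-timeout -o yaml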

Comment 7 errata-xmlrpc 2022-01-04 12:05:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Compliance Operator bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0014

