Bug 2049141 - The ClientAliveInterval value 600 is hard-coded but the rule's default value is 300. CCR is failing
Summary: The ClientAliveInterval value 600 is hard-coded but the rule's default value is 300. CCR is failing
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Vincent Shen
QA Contact: Prashant Dhamdhere
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-02-01 15:43 UTC by Mithilesh Kaur Bagga
Modified: 2022-04-19 08:41 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: Remediations using the sshd jinja macros were hard-coded to a specific sshd configuration.
Consequence: The hard-coded configuration was inconsistent with what the rules were checking for, so after customers applied the remediation, the check still failed.
Fix: Use the new content, which parameterizes the sshd configuration and is in line with the values in the rule.
Result: The rules should pass after applying the remediation.
Clone Of:
Environment:
Last Closed: 2022-04-18 07:54:00 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github ComplianceAsCode content pull 8177 0 None open OCP enable variable support for sshd remediation 2022-02-11 12:11:22 UTC
Red Hat Product Errata RHBA-2022:1148 0 None None None 2022-04-18 07:54:10 UTC

Comment 1 Vincent Shen 2022-02-07 23:38:42 UTC
Fix PR: https://github.com/ComplianceAsCode/content/pull/8177
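
For reference, this PR drives the remediation from the same variable the rule checks instead of a hard-coded ClientAliveInterval. A minimal way to inspect the rule's default (an illustrative sketch; it assumes the Variable resource exposes its default in a top-level value field, as current compliance-operator releases do):

$ oc get variable rhcos4-sshd-idle-timeout-value -n openshift-compliance -o jsonpath='{.value}'

This should print 300, the rule default, unless it is overridden through a TailoredProfile (as exercised in Comment 5 below).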

Comment 5 Prashant Dhamdhere 2022-04-01 14:10:55 UTC
[Bug Verification]

Looks good. The auto-remediation applied successfully and the rule passed after re-running the scan.
Also verified that variable support is enabled for sshd remediation.


Verified on:
4.10.0-0.nightly-2022-03-29-163038 + compliance-operator.v0.1.49


1] Auto-remediations applied successfully and the rule passed after re-running the scan

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-03-29-163038   True        False         9h      Cluster version is 4.10.0-0.nightly-2022-03-29-163038

$ oc project openshift-compliance
Now using project "openshift-compliance" on server "https://api.pdhamdhe01.qe.devcluster.openshift.com:6443".

$ oc get csv
NAME                               DISPLAY                            VERSION     REPLACES   PHASE
compliance-operator.v0.1.49        Compliance Operator                0.1.49                 Succeeded
elasticsearch-operator.5.4.0-128   OpenShift Elasticsearch Operator   5.4.0-128              Succeeded

$ oc get pods
NAME                                              READY   STATUS    RESTARTS        AGE
compliance-operator-9bf58698f-fp8kc               1/1     Running   1 (5m20s ago)   5m59s
ocp4-openshift-compliance-pp-59cd7665d6-rfg5j     1/1     Running   0               4m42s
rhcos4-openshift-compliance-pp-5c85d4d5c8-vhnzl   1/1     Running   0               4m42s


$ oc get rules rhcos4-sshd-set-idle-timeout -ojsonpath={.instructions}
Run the following command to see what the timeout interval is:
$ sudo grep ClientAliveInterval /etc/ssh/sshd_config
If properly configured, the output should be:
ClientAliveInterval
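
The same check can be run without SSH access to the node by wrapping the grep in oc debug (a hedged one-liner; <node-name> is a placeholder, and this mirrors the per-node loops used later in this comment):

$ oc debug -q node/<node-name> -- chroot /host grep ClientAliveInterval /etc/ssh/sshd_config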

$ oc create -f - << EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-ssb-moderate
profiles:
  - name: rhcos4-moderate
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default-auto-apply
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF
scansettingbinding.compliance.openshift.io/my-ssb-moderate created

$ oc get suite -w
NAME              PHASE       RESULT
my-ssb-moderate   LAUNCHING   NOT-AVAILABLE
my-ssb-moderate   LAUNCHING   NOT-AVAILABLE
my-ssb-moderate   RUNNING     NOT-AVAILABLE
my-ssb-moderate   RUNNING     NOT-AVAILABLE
my-ssb-moderate   AGGREGATING   NOT-AVAILABLE
my-ssb-moderate   AGGREGATING   NOT-AVAILABLE
my-ssb-moderate   DONE          NON-COMPLIANT
my-ssb-moderate   DONE          NON-COMPLIANT

$ oc get suite 
NAME              PHASE   RESULT
my-ssb-moderate   DONE    NON-COMPLIANT

$ oc get scan
NAME                     PHASE   RESULT
rhcos4-moderate-master   DONE    NON-COMPLIANT
rhcos4-moderate-worker   DONE    NON-COMPLIANT

$ oc get mc |grep 75-
75-rhcos4-moderate-master-audit-rules-dac-modification-chmod                                                                                      3.1.0             3m13s
75-rhcos4-moderate-master-audit-rules-dac-modification-chown                                                                                      3.1.0             2m52s
75-rhcos4-moderate-master-audit-rules-dac-modification-fchmod                                                                                     3.1.0             2m2s
75-rhcos4-moderate-master-audit-rules-dac-modification-fchmodat                                                                                   3.1.0             102s
75-rhcos4-moderate-master-audit-rules-dac-modification-fchown                                                                                     3.1.0             3m7s

$ oc get mc |grep sshd-set-idle-timeout
75-rhcos4-moderate-master-sshd-set-idle-timeout                                                                                                   3.1.0             37m
75-rhcos4-moderate-worker-sshd-set-idle-timeout                                                                                                   3.1.0             37m
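
To confirm what the generated MachineConfig actually writes, the Ignition payload can be decoded (an illustrative sketch; it assumes the remediation stores sshd_config as the first Ignition storage file, URL-encoded in a data: URI):

$ oc get mc 75-rhcos4-moderate-worker-sshd-set-idle-timeout -o jsonpath='{.spec.config.storage.files[0].contents.source}' | python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.stdin.read()))' | grep ClientAliveInterval

With the parameterized content this should report ClientAliveInterval 300, matching the rule default.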


$ oc get compliancecheckresults | grep sshd-set-idle-timeout
rhcos4-moderate-master-sshd-set-idle-timeout                                                        FAIL     medium
rhcos4-moderate-worker-sshd-set-idle-timeout                                                        FAIL     medium


$ oc-compliance rerun-now compliancesuite/my-ssb-moderate
Rerunning scans from 'my-ssb-moderate': rhcos4-moderate-master, rhcos4-moderate-worker
Re-running scan 'openshift-compliance/rhcos4-moderate-master'
Re-running scan 'openshift-compliance/rhcos4-moderate-worker'

$ oc get suite -w
NAME              PHASE     RESULT
my-ssb-moderate   RUNNING   NOT-AVAILABLE
my-ssb-moderate   RUNNING   NOT-AVAILABLE
my-ssb-moderate   AGGREGATING   NOT-AVAILABLE
my-ssb-moderate   AGGREGATING   NOT-AVAILABLE
my-ssb-moderate   DONE          NON-COMPLIANT
my-ssb-moderate   DONE          NON-COMPLIANT


$ oc get scan
NAME                     PHASE   RESULT
rhcos4-moderate-master   DONE    NON-COMPLIANT
rhcos4-moderate-worker   DONE    NON-COMPLIANT


$ oc get rems |grep sshd-set-idle
rhcos4-moderate-master-sshd-set-idle-timeout                                                        Applied
rhcos4-moderate-worker-sshd-set-idle-timeout                                                        Applied
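
Each ComplianceRemediation embeds the object it applies. A quick way to confirm that this remediation's payload is the 75- MachineConfig listed earlier (assuming the payload is exposed under spec.current.object, as in current operator versions):

$ oc get complianceremediation rhcos4-moderate-worker-sshd-set-idle-timeout -o jsonpath='{.spec.current.object.kind}'

This should print MachineConfig.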

$ oc get compliancecheckresults | grep sshd-set-idle-timeout
rhcos4-moderate-master-sshd-set-idle-timeout                                                        PASS     medium
rhcos4-moderate-worker-sshd-set-idle-timeout                                                        PASS     medium


$ oc get nodes
NAME                                         STATUS   ROLES    AGE     VERSION
ip-10-0-140-108.us-east-2.compute.internal   Ready    master   3h44m   v1.23.5+1f952b3
ip-10-0-145-217.us-east-2.compute.internal   Ready    worker   3h37m   v1.23.5+1f952b3
ip-10-0-169-29.us-east-2.compute.internal    Ready    master   3h43m   v1.23.5+1f952b3
ip-10-0-171-31.us-east-2.compute.internal    Ready    worker   3h37m   v1.23.5+1f952b3
ip-10-0-194-236.us-east-2.compute.internal   Ready    master   3h43m   v1.23.5+1f952b3
ip-10-0-212-172.us-east-2.compute.internal   Ready    worker   3h38m   v1.23.5+1f952b3


$ for NODE in $(oc get node -lnode-role.kubernetes.io/worker= --no-headers |awk '{print $1}'); do echo -n "$NODE "; oc debug -q node/$NODE -- chroot /host sudo grep ClientAliveInterval /etc/ssh/sshd_config; done
ip-10-0-145-217.us-east-2.compute.internal ClientAliveInterval 300
ip-10-0-171-31.us-east-2.compute.internal ClientAliveInterval 300
ip-10-0-212-172.us-east-2.compute.internal ClientAliveInterval 300


$ for NODE in $(oc get node -lnode-role.kubernetes.io/master= --no-headers |awk '{print $1}'); do echo -n "$NODE "; oc debug -q node/$NODE -- chroot /host sudo grep ClientAliveInterval /etc/ssh/sshd_config; done
ip-10-0-140-108.us-east-2.compute.internal ClientAliveInterval 300
ip-10-0-169-29.us-east-2.compute.internal ClientAliveInterval 300
ip-10-0-194-236.us-east-2.compute.internal ClientAliveInterval 300



2] Verified that variable support is enabled for sshd remediation


$ oc get csv
NAME                               DISPLAY                            VERSION     REPLACES   PHASE
compliance-operator.v0.1.49        Compliance Operator                0.1.49                 Succeeded
elasticsearch-operator.5.4.0-128   OpenShift Elasticsearch Operator   5.4.0-128              Succeeded

$ oc get pods
NAME                                              READY   STATUS    RESTARTS   AGE
compliance-operator-9bf58698f-dg8dg               1/1     Running   0          40m
ocp4-openshift-compliance-pp-59cd7665d6-v2mwf     1/1     Running   0          40m
rhcos4-openshift-compliance-pp-5c85d4d5c8-zzvsl   1/1     Running   0          40m

$ oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-140-108.us-east-2.compute.internal   Ready    master   9h    v1.23.5+1f952b3
ip-10-0-145-217.us-east-2.compute.internal   Ready    worker   9h    v1.23.5+1f952b3
ip-10-0-169-29.us-east-2.compute.internal    Ready    master   9h    v1.23.5+1f952b3
ip-10-0-171-31.us-east-2.compute.internal    Ready    worker   9h    v1.23.5+1f952b3
ip-10-0-194-236.us-east-2.compute.internal   Ready    master   9h    v1.23.5+1f952b3
ip-10-0-212-172.us-east-2.compute.internal   Ready    worker   9h    v1.23.5+1f952b3


$ for NODE in $(oc get node -lnode-role.kubernetes.io/master= --no-headers |awk '{print $1}'); do echo -n "$NODE "; oc debug -q node/$NODE -- chroot /host sudo grep ClientAliveInterval /etc/ssh/sshd_config; done
ip-10-0-140-108.us-east-2.compute.internal ClientAliveInterval 180
ip-10-0-169-29.us-east-2.compute.internal ClientAliveInterval 180
ip-10-0-194-236.us-east-2.compute.internal ClientAliveInterval 180


$ for NODE in $(oc get node -lnode-role.kubernetes.io/worker= --no-headers |awk '{print $1}'); do echo -n "$NODE "; oc debug -q node/$NODE -- chroot /host sudo grep ClientAliveInterval /etc/ssh/sshd_config; done
ip-10-0-145-217.us-east-2.compute.internal ClientAliveInterval 180
ip-10-0-171-31.us-east-2.compute.internal ClientAliveInterval 180
ip-10-0-212-172.us-east-2.compute.internal ClientAliveInterval 180


$ oc create -f - << EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: rhcos4-sshd-set-idle-timeout-tailored
  namespace: openshift-compliance
spec:
  extends: rhcos4-moderate
  description: set idle timeout to sshd
  title: set idle timeout to sshd
  enableRules:
    - name: rhcos4-sshd-set-idle-timeout
      rationale: Node
  setValues:
    - name: rhcos4-sshd-idle-timeout-value
      rationale: Node
      value: "600"
EOF
tailoredprofile.compliance.openshift.io/rhcos4-sshd-set-idle-timeout-tailored created


$ oc get TailoredProfile
NAME                                    STATE
rhcos4-sshd-set-idle-timeout-tailored   READY
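
As a sanity check before binding, the override can be read back from the TailoredProfile spec (this simply echoes the setValues block created above):

$ oc get tailoredprofile rhcos4-sshd-set-idle-timeout-tailored -o jsonpath='{.spec.setValues}'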


$ oc create -f - << EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: rhcos4-sshd
profiles:
  - apiGroup: compliance.openshift.io/v1alpha1
    kind: TailoredProfile
    name: rhcos4-sshd-set-idle-timeout-tailored
settingsRef:
  apiGroup: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  name: default-auto-apply
EOF
scansettingbinding.compliance.openshift.io/rhcos4-sshd created


$ oc get suite -w
NAME          PHASE       RESULT
rhcos4-sshd   LAUNCHING   NOT-AVAILABLE
rhcos4-sshd   LAUNCHING   NOT-AVAILABLE
rhcos4-sshd   RUNNING     NOT-AVAILABLE
rhcos4-sshd   RUNNING     NOT-AVAILABLE
rhcos4-sshd   AGGREGATING   NOT-AVAILABLE
rhcos4-sshd   AGGREGATING   NOT-AVAILABLE
rhcos4-sshd   DONE          NON-COMPLIANT
rhcos4-sshd   DONE          NON-COMPLIANT


$ oc get ccr rhcos4-sshd-set-idle-timeout-tailored-worker-sshd-set-idle-timeout
NAME                                                                 STATUS   SEVERITY
rhcos4-sshd-set-idle-timeout-tailored-worker-sshd-set-idle-timeout   FAIL     medium


$ oc get rems rhcos4-sshd-set-idle-timeout-tailored-worker-sshd-set-idle-timeout
NAME                                                                 STATE
rhcos4-sshd-set-idle-timeout-tailored-worker-sshd-set-idle-timeout   Applied


$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-6ac5e4204e65501b3787f66f192b15de   False     False      False      3              2                   2                     0                      9h
worker   rendered-worker-c65ce7c1e5125c781d079753f0f5cd7a   True      False      False      3              3                   3                     0                      9h
master   rendered-master-6ac5e4204e65501b3787f66f192b15de   False     False      False      3              2                   2                     0                      9h
worker   rendered-worker-c65ce7c1e5125c781d079753f0f5cd7a   True      False      False      3              3                   3                     0                      9h
master   rendered-master-6ac5e4204e65501b3787f66f192b15de   False     True       False      3              2                   2                     0                      9h
worker   rendered-worker-c65ce7c1e5125c781d079753f0f5cd7a   True      False      False      3              3                   3                     0                      9h


$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-6ac5e4204e65501b3787f66f192b15de   False     True       False      3              2                   2                     0                      9h
worker   rendered-worker-c65ce7c1e5125c781d079753f0f5cd7a   True      False      False      3              3                   3                     0                      9h


$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-2463453285fad5b971300ddd363fb4fb   True      False      False      3              3                   3                     0                      9h
worker   rendered-worker-c65ce7c1e5125c781d079753f0f5cd7a   True      False      False      3              3                   3                     0                      9h


$ oc-compliance rerun-now compliancesuite/rhcos4-sshd
Rerunning scans from 'rhcos4-sshd': rhcos4-sshd-set-idle-timeout-tailored-master, rhcos4-sshd-set-idle-timeout-tailored-worker
Re-running scan 'openshift-compliance/rhcos4-sshd-set-idle-timeout-tailored-master'
Re-running scan 'openshift-compliance/rhcos4-sshd-set-idle-timeout-tailored-worker'

$ oc get suite -w
NAME          PHASE   RESULT
rhcos4-sshd   DONE    NON-COMPLIANT
rhcos4-sshd   LAUNCHING   NOT-AVAILABLE
rhcos4-sshd   RUNNING     NOT-AVAILABLE
rhcos4-sshd   LAUNCHING   NOT-AVAILABLE
rhcos4-sshd   RUNNING     NOT-AVAILABLE
rhcos4-sshd   RUNNING     NOT-AVAILABLE
rhcos4-sshd   AGGREGATING   NOT-AVAILABLE
rhcos4-sshd   AGGREGATING   NOT-AVAILABLE
rhcos4-sshd   DONE          NON-COMPLIANT


$ oc get suite
NAME          PHASE   RESULT
rhcos4-sshd   DONE    NON-COMPLIANT


$ oc get rems rhcos4-sshd-set-idle-timeout-tailored-worker-sshd-set-idle-timeout
NAME                                                                 STATE
rhcos4-sshd-set-idle-timeout-tailored-worker-sshd-set-idle-timeout   Applied


$ oc get ccr rhcos4-sshd-set-idle-timeout-tailored-worker-sshd-set-idle-timeout
NAME                                                                 STATUS   SEVERITY
rhcos4-sshd-set-idle-timeout-tailored-worker-sshd-set-idle-timeout   PASS     medium


$ for NODE in $(oc get node -lnode-role.kubernetes.io/worker= --no-headers |awk '{print $1}'); do echo -n "$NODE "; oc debug -q node/$NODE -- chroot /host sudo grep ClientAliveInterval /etc/ssh/sshd_config; done
ip-10-0-145-217.us-east-2.compute.internal ClientAliveInterval 600
ip-10-0-171-31.us-east-2.compute.internal ClientAliveInterval 600
ip-10-0-212-172.us-east-2.compute.internal ClientAliveInterval 600


$ for NODE in $(oc get node -lnode-role.kubernetes.io/master= --no-headers |awk '{print $1}'); do echo -n "$NODE "; oc debug -q node/$NODE -- chroot /host sudo grep ClientAliveInterval /etc/ssh/sshd_config; done
ip-10-0-140-108.us-east-2.compute.internal ClientAliveInterval 600
ip-10-0-169-29.us-east-2.compute.internal ClientAliveInterval 600
ip-10-0-194-236.us-east-2.compute.internal ClientAliveInterval 600

Comment 7 errata-xmlrpc 2022-04-18 07:54:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Compliance Operator bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1148

