Description of problem:
No clear instructions for rules ocp4-cis-node-master-kubelet-configure-tls-cipher-suites and ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites:

$ oc get compliancecheckresults ocp4-cis-node-master-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
rule test.
$ oc get compliancecheckresults ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
rule test.

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-04-13-171608 + compliance-operator.v0.1.30

How reproducible:
Always

Steps to Reproduce:
1. Install the Compliance Operator.
2. Trigger a ComplianceSuite with the CIS profiles:

$ oc create -f - << EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-ssb-r
profiles:
- name: ocp4-cis
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
- name: ocp4-cis-node
  kind: Profile
  apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF

Actual results:
After the ComplianceSuite finishes, the instructions for the two rules give no way to judge whether the rule passes or fails:

$ oc get compliancecheckresults ocp4-cis-node-master-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
rule test.
$ oc get compliancecheckresults ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
rule test.

Expected results:
The instructions should clearly state how to judge whether the rule is PASS or FAIL.

Additional information:
Thanks for the bug report; those rules are missing an OCIL attribute.
PR: https://github.com/ComplianceAsCode/content/pull/6835
The PR has merged, so moving to MODIFIED.
$ oc get ip
NAME   CSV   APPROVAL   APPROVED
$ oc get csv
NAME                          DISPLAY               VERSION   REPLACES   PHASE
compliance-operator.v0.1.32   Compliance Operator   0.1.32               Succeeded
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-05-10-225140   True        False         113m    Cluster version is 4.8.0-0.nightly-2021-05-10-225140

STEP 1. Create the ComplianceSuite and check the checkresults for the two rules:

$ oc create -f - << EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: my-ssb-r
> profiles:
> - name: ocp4-cis
>   kind: Profile
>   apiGroup: compliance.openshift.io/v1alpha1
> - name: ocp4-cis-node
>   kind: Profile
>   apiGroup: compliance.openshift.io/v1alpha1
> settingsRef:
>   name: default
>   kind: ScanSetting
>   apiGroup: compliance.openshift.io/v1alpha1
> EOF
scansettingbinding.compliance.openshift.io/my-ssb-r created

$ oc get compliancecheckresults | grep kubelet-configure-tls-cipher-suites
ocp4-cis-node-master-kubelet-configure-tls-cipher-suites   FAIL   medium
ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites   FAIL   medium

STEP 2. Update the corresponding file per the instructions:

$ oc get compliancecheckresults ocp4-cis-node-master-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
Run the following command on the kubelet node(s):
$ sudo grep tlsCipherSuites /etc/kubernetes/kubelet.conf
Verify that the set of ciphers contains only the following:
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

$ oc get node
NAME                                               STATUS   ROLES    AGE    VERSION
xiyuan113-hp7mg-master-0.c.openshift-qe.internal   Ready    master   120m   v1.21.0-rc.0+86f0080
xiyuan113-hp7mg-master-1.c.openshift-qe.internal   Ready    master   120m   v1.21.0-rc.0+86f0080
xiyuan113-hp7mg-master-2.c.openshift-qe.internal   Ready    master   120m   v1.21.0-rc.0+86f0080
xiyuan113-hp7mg-w-a-0.c.openshift-qe.internal      Ready    worker   109m   v1.21.0-rc.0+86f0080
xiyuan113-hp7mg-w-b-1.c.openshift-qe.internal      Ready    worker   108m   v1.21.0-rc.0+86f0080

[xiyuan@MiWiFi-RA69-srv func]$ oc debug no/xiyuan113-hp7mg-master-0.c.openshift-qe.internal -- chroot /host cat /etc/kubernetes/kubelet.conf
Starting pod/xiyuan113-hp7mg-master-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  x509:
    clientCAFile: /etc/kubernetes/kubelet-ca.crt
  anonymous:
    enabled: false
cgroupDriver: systemd
cgroupRoot: /
clusterDNS:
- 172.30.0.10
clusterDomain: cluster.local
containerLogMaxSize: 50Mi
maxPods: 250
kubeAPIQPS: 50
kubeAPIBurst: 100
rotateCertificates: true
serializeImagePulls: false
staticPodPath: /etc/kubernetes/manifests
systemCgroups: /system.slice
systemReserved:
  ephemeral-storage: 1Gi
featureGates:
  APIPriorityAndFairness: true
  LegacyNodeRoleBehavior: false
  NodeDisruptionExclusion: true
  RotateKubeletServerCertificate: true
  ServiceNodeExclusion: true
  SupportPodPidsLimit: true
  DownwardAPIHugePages: true
serverTLSBootstrap: true
tlsMinVersion: VersionTLS12
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
Removing debug pod ...
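The manual check in the rule's instructions can also be scripted. Below is a minimal, self-contained sketch of that logic; the helper name `check_ciphers` and the sample file path are illustrative and not part of the operator. It extracts the tlsCipherSuites entries from a kubelet.conf and reports any cipher outside the set the instructions allow:

```shell
# CIS-allowed set, as listed in the rule instructions.
allowed="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"

check_ciphers() {
  # Print every cipher listed under tlsCipherSuites that is not allowed.
  sed -n '/^tlsCipherSuites:/,/^[^ -]/p' "$1" \
    | sed -n 's/^- //p' \
    | while read -r c; do
        echo "$allowed" | grep -qxF "$c" || echo "unexpected cipher: $c"
      done
}

# Illustrative sample fragment of a kubelet.conf with one extra cipher.
cat > /tmp/kubelet-sample.conf << 'EOF'
tlsMinVersion: VersionTLS12
tlsCipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
EOF
check_ciphers /tmp/kubelet-sample.conf
# prints: unexpected cipher: TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
```

Run against a real node (e.g. via `oc debug` as above), this would flag the two CHACHA20 suites that make the rule FAIL here.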
$ oc get node -o custom-columns=NAME:.metadata.name --no-headers
xiyuan113-hp7mg-master-0.c.openshift-qe.internal
xiyuan113-hp7mg-master-1.c.openshift-qe.internal
xiyuan113-hp7mg-master-2.c.openshift-qe.internal
xiyuan113-hp7mg-w-a-0.c.openshift-qe.internal
xiyuan113-hp7mg-w-b-1.c.openshift-qe.internal

$ for node in `oc get node -o custom-columns=NAME:.metadata.name --no-headers`; do echo -e "************for node $node***********"; oc debug node/$node -- chroot /host sed -i '/CHACHA20_POLY1305_SHA256/d' /etc/kubernetes/kubelet.conf; oc debug node/$node -- chroot /host systemctl restart kubelet; done
************for node xiyuan113-hp7mg-master-0.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-master-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Starting pod/xiyuan113-hp7mg-master-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
************for node xiyuan113-hp7mg-master-1.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-master-1copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Starting pod/xiyuan113-hp7mg-master-1copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
************for node xiyuan113-hp7mg-master-2.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-master-2copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Starting pod/xiyuan113-hp7mg-master-2copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
************for node xiyuan113-hp7mg-w-a-0.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-w-a-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Starting pod/xiyuan113-hp7mg-w-a-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
************for node xiyuan113-hp7mg-w-b-1.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-w-b-1copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...
Starting pod/xiyuan113-hp7mg-w-b-1copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
Removing debug pod ...

STEP 3. Rerun the ComplianceSuite:

$ ./oc-compliance rerun-now compliancesuite my-ssb-r
Rerunning scans from 'my-ssb-r': ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master
Re-running scan 'openshift-compliance/ocp4-cis'
Re-running scan 'openshift-compliance/ocp4-cis-node-worker'
Re-running scan 'openshift-compliance/ocp4-cis-node-master'

$ oc get suite
NAME       PHASE   RESULT
my-ssb-r   DONE    NON-COMPLIANT

$ oc get compliancecheckresults | grep kubelet-configure-tls-cipher-suites
ocp4-cis-node-master-kubelet-configure-tls-cipher-suites   PASS   medium
ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites   PASS   medium
For step 2 in https://bugzilla.redhat.com/show_bug.cgi?id=1949377#c11, below is the right procedure to update tlsCipherSuites (rather than editing kubelet.conf on each node directly):

$ oc create -f - << EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: kubelet-config-m
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
  kubeletConfig:
    tlsCipherSuites:
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
EOF
kubeletconfig.machineconfiguration.openshift.io/kubelet-config-m created
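As a quick local sanity check (illustrative only), the cipher list in this KubeletConfig can be compared against the four suites the rule instructions list. The ordering here differs from the instructions; this assumes the check compares the set of ciphers rather than their order:

```shell
# The four suites from the rule instructions, pre-sorted.
expected="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"

# Extract and sort the list as written in the KubeletConfig above.
configured=$(cat << 'EOF' | sed -n 's/^ *- //p' | sort
    - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
    - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
EOF
)

[ "$configured" = "$expected" ] && echo "cipher sets match"
```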
Per comment https://bugzilla.redhat.com/show_bug.cgi?id=1949377#c11 and https://bugzilla.redhat.com/show_bug.cgi?id=1949377#c12, move status to Verified.
A more official procedure to update tlsCipherSuites for the kubelet (for step 2 in https://bugzilla.redhat.com/show_bug.cgi?id=1949377#c11) is to set a custom TLS security profile:

$ oc create -f - << EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-kubelet-tls-security-profile
spec:
  tlsSecurityProfile:
    type: Custom
    custom:
      ciphers:
      - ECDHE-ECDSA-AES128-GCM-SHA256
      - ECDHE-RSA-AES128-GCM-SHA256
      - ECDHE-ECDSA-AES256-GCM-SHA384
      - ECDHE-RSA-AES256-GCM-SHA384
      minTLSVersion: VersionTLS12
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
EOF
kubeletconfig.machineconfiguration.openshift.io/set-kubelet-tls-security-profile created
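Note that the ciphers in a tlsSecurityProfile use OpenSSL-style names, while the rendered kubelet.conf (and the rule instructions) use the IANA names. A rough translation for the ECDHE AES-GCM suites used here; the `openssl_to_iana` helper is just a sketch and only handles these name patterns, not the full cipher registry:

```shell
# Map an OpenSSL-style cipher name to its IANA name. Covers only the
# ECDHE AES-GCM patterns above; a simplification, not a full table.
openssl_to_iana() {
  echo "$1" | sed \
    -e 's/^/TLS_/' \
    -e 's/-/_/g' \
    -e 's/ECDSA_AES/ECDSA_WITH_AES/' \
    -e 's/RSA_AES/RSA_WITH_AES/' \
    -e 's/AES\([0-9][0-9]*\)/AES_\1/'
}

openssl_to_iana ECDHE-ECDSA-AES128-GCM-SHA256
# prints: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
openssl_to_iana ECDHE-RSA-AES256-GCM-SHA384
# prints: TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
```

So the four ciphers in this KubeletConfig correspond exactly to the four TLS_* suites the rule instructions expect.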
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Compliance Operator version 0.1.35 for OpenShift Container Platform 4.6-4.8), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:2652