Bug 1949377 - No clear instructions for rule ocp4-cis-node-master-kubelet-configure-tls-cipher-suites and ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Compliance Operator
Version: 4.8
Priority: low
Severity: low
Assignee: Jakub Hrozek
QA Contact: xiyuan
 
Reported: 2021-04-14 07:21 UTC by xiyuan
Modified: 2022-07-27 14:20 UTC

Last Closed: 2021-07-07 11:29:56 UTC


Links:
Red Hat Product Errata RHBA-2021:2652 (last updated 2021-07-07 11:31:09 UTC)

Description xiyuan 2021-04-14 07:21:00 UTC
Description of problem:
No clear instructions for rules ocp4-cis-node-master-kubelet-configure-tls-cipher-suites and ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites:
$ oc get compliancecheckresults ocp4-cis-node-master-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
rule test.
$ oc get compliancecheckresults ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
rule test.
Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-04-13-171608 + compliance-operator.v0.1.30

How reproducible:
Always

Steps to Reproduce:
1. Install the Compliance Operator.
2. Trigger a ComplianceSuite with the CIS profiles:
$ oc create -f - << EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-ssb-r
profiles:
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  - name: ocp4-cis-node
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF
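One way to know when the results are ready is to watch the suite until its PHASE reaches DONE (the resource name matches the ScanSettingBinding above):
$ oc get compliancesuite my-ssb-r -w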
Actual results:
Wait until the ComplianceSuite is done, then check the instructions for the two rules. There are no clear instructions on how to judge whether the rule should PASS or FAIL:
$ oc get compliancecheckresults ocp4-cis-node-master-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
rule test.
$ oc get compliancecheckresults ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
rule test.

Expected results:
There should be clear instructions on how to judge whether the rule should PASS or FAIL.
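A quick way to spot any other checks that still carry the placeholder instructions is to dump the name and instructions of every result and filter for the placeholder text seen above, for example:
$ oc get compliancecheckresults -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.instructions}{"\n"}{end}' | grep 'rule test'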

Additional information:

Comment 1 Jakub Hrozek 2021-04-14 09:24:20 UTC
Thanks for the bug report; those rules are missing their OCIL attribute.
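For context, the check instructions come from the ocil/ocil_clause fields of each rule's rule.yml in the ComplianceAsCode content; roughly, the fix adds something along these lines (illustrative wording only, the exact text is in the PR below):
ocil_clause: 'TLS cipher suites are not restricted to strong ciphers'
ocil: |-
    Run the following command on the kubelet node(s):
    $ sudo grep tlsCipherSuites /etc/kubernetes/kubelet.conf
    Verify that the set of ciphers contains only strong ciphers such as
    TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256.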

Comment 2 Jakub Hrozek 2021-04-14 10:13:31 UTC
PR: https://github.com/ComplianceAsCode/content/pull/6835

Comment 3 Jakub Hrozek 2021-04-19 07:43:20 UTC
The PR merged, therefore moving to MODIFIED

Comment 11 xiyuan 2021-05-11 12:44:46 UTC
$ oc get ip
NAME            CSV                           APPROVAL    APPROVED
$ oc get csv
NAME                          DISPLAY               VERSION   REPLACES   PHASE
compliance-operator.v0.1.32   Compliance Operator   0.1.32               Succeeded
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-05-10-225140   True        False         113m    Cluster version is 4.8.0-0.nightly-2021-05-10-225140

STEP 1: create the ComplianceSuite and check the check results for the two rules:
$ oc create -f -<< EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: my-ssb-r
> profiles:
>   - name: ocp4-cis
>     kind: Profile
>     apiGroup: compliance.openshift.io/v1alpha1
>   - name: ocp4-cis-node
>     kind: Profile
>     apiGroup: compliance.openshift.io/v1alpha1
> settingsRef:
>   name: default
>   kind: ScanSetting
>   apiGroup: compliance.openshift.io/v1alpha1
> EOF
scansettingbinding.compliance.openshift.io/my-ssb-r created
$ oc get compliancecheckresults | grep kubelet-configure-tls-cipher-suites
ocp4-cis-node-master-kubelet-configure-tls-cipher-suites                       FAIL             medium
ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites                       FAIL             medium

STEP 2: update the corresponding file per the instructions:
$ oc get compliancecheckresults ocp4-cis-node-master-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
Run the following command on the kubelet node(s):
$ sudo grep tlsCipherSuites /etc/kubernetes/kubelet.conf
Verify that the set of ciphers contains only the following:

TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

$ oc get node
NAME                                               STATUS   ROLES    AGE    VERSION
xiyuan113-hp7mg-master-0.c.openshift-qe.internal   Ready    master   120m   v1.21.0-rc.0+86f0080
xiyuan113-hp7mg-master-1.c.openshift-qe.internal   Ready    master   120m   v1.21.0-rc.0+86f0080
xiyuan113-hp7mg-master-2.c.openshift-qe.internal   Ready    master   120m   v1.21.0-rc.0+86f0080
xiyuan113-hp7mg-w-a-0.c.openshift-qe.internal      Ready    worker   109m   v1.21.0-rc.0+86f0080
xiyuan113-hp7mg-w-b-1.c.openshift-qe.internal      Ready    worker   108m   v1.21.0-rc.0+86f0080
$ oc debug no/xiyuan113-hp7mg-master-0.c.openshift-qe.internal -- chroot /host cat /etc/kubernetes/kubelet.conf
Starting pod/xiyuan113-hp7mg-master-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  x509:
    clientCAFile: /etc/kubernetes/kubelet-ca.crt
  anonymous:
    enabled: false
cgroupDriver: systemd
cgroupRoot: /
clusterDNS:
  - 172.30.0.10
clusterDomain: cluster.local
containerLogMaxSize: 50Mi
maxPods: 250
kubeAPIQPS: 50
kubeAPIBurst: 100
rotateCertificates: true
serializeImagePulls: false
staticPodPath: /etc/kubernetes/manifests
systemCgroups: /system.slice
systemReserved:
  ephemeral-storage: 1Gi
featureGates:
  APIPriorityAndFairness: true
  LegacyNodeRoleBehavior: false
  NodeDisruptionExclusion: true
  RotateKubeletServerCertificate: true
  ServiceNodeExclusion: true
  SupportPodPidsLimit: true
  DownwardAPIHugePages: true
serverTLSBootstrap: true
tlsMinVersion: VersionTLS12
tlsCipherSuites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
  - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256

Removing debug pod ...
$ oc get node -o custom-columns=NAME:.metadata.name --no-headers
xiyuan113-hp7mg-master-0.c.openshift-qe.internal
xiyuan113-hp7mg-master-1.c.openshift-qe.internal
xiyuan113-hp7mg-master-2.c.openshift-qe.internal
xiyuan113-hp7mg-w-a-0.c.openshift-qe.internal
xiyuan113-hp7mg-w-b-1.c.openshift-qe.internal

$ for node in `oc get node -o custom-columns=NAME:.metadata.name --no-headers`; do echo -e "************for node $node***********"; oc debug node/$node -- chroot /host sed -i '/CHACHA20_POLY1305_SHA256/d'  /etc/kubernetes/kubelet.conf; oc debug node/$node -- chroot /host  systemctl restart kubelet; done
************for node xiyuan113-hp7mg-master-0.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-master-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Starting pod/xiyuan113-hp7mg-master-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
************for node xiyuan113-hp7mg-master-1.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-master-1copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Starting pod/xiyuan113-hp7mg-master-1copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
************for node xiyuan113-hp7mg-master-2.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-master-2copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Starting pod/xiyuan113-hp7mg-master-2copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
************for node xiyuan113-hp7mg-w-a-0.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-w-a-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Starting pod/xiyuan113-hp7mg-w-a-0copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
************for node xiyuan113-hp7mg-w-b-1.c.openshift-qe.internal***********
Starting pod/xiyuan113-hp7mg-w-b-1copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...
Starting pod/xiyuan113-hp7mg-w-b-1copenshift-qeinternal-debug ...
To use host binaries, run `chroot /host`

Removing debug pod ...


STEP 3: rerun the ComplianceSuite:
$ ./oc-compliance rerun-now compliancesuite my-ssb-r
Rerunning scans from 'my-ssb-r': ocp4-cis, ocp4-cis-node-worker, ocp4-cis-node-master
Re-running scan 'openshift-compliance/ocp4-cis'
Re-running scan 'openshift-compliance/ocp4-cis-node-worker'
Re-running scan 'openshift-compliance/ocp4-cis-node-master'
$ oc get suite
NAME       PHASE       RESULT
my-ssb-r   DONE          NON-COMPLIANT
$ oc get compliancecheckresults | grep kubelet-configure-tls-cipher-suites
ocp4-cis-node-master-kubelet-configure-tls-cipher-suites                       PASS             medium
ocp4-cis-node-worker-kubelet-configure-tls-cipher-suites                       PASS             medium
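The suite as a whole still reports NON-COMPLIANT because other checks in the CIS profiles fail; the two rules above now PASS. The remaining failures can be listed with, for example:
$ oc get compliancecheckresults | grep FAIL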

Comment 12 xiyuan 2021-05-12 04:12:15 UTC
For step 2 in https://bugzilla.redhat.com/show_bug.cgi?id=1949377#c11, below is the right procedure to update the tlsCipherSuites:
$ oc create -f - << EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: kubelet-config-m
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
  kubeletConfig:
    tlsCipherSuites:
      - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
      - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
      - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
      - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
EOF
kubeletconfig.machineconfiguration.openshift.io/kubelet-config-m created
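The KubeletConfig is rolled out by the Machine Config Operator, so it only takes effect once the master MachineConfigPool has finished updating; one way to follow that is:
$ oc get mcp master -w
Wait until UPDATED is True and UPDATING is False before rerunning the scan.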

Comment 14 xiyuan 2021-05-12 10:25:10 UTC
A more official procedure to update the tlsCipherSuites for the kubelet, for step 2 in https://bugzilla.redhat.com/show_bug.cgi?id=1949377#c11:
$ oc create -f - << EOF
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-kubelet-tls-security-profile
spec:
  tlsSecurityProfile:
    type: Custom
    custom:
      ciphers:
        - ECDHE-ECDSA-AES128-GCM-SHA256
        - ECDHE-RSA-AES128-GCM-SHA256
        - ECDHE-ECDSA-AES256-GCM-SHA384
        - ECDHE-RSA-AES256-GCM-SHA384
      minTLSVersion: VersionTLS12
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
EOF
kubeletconfig.machineconfiguration.openshift.io/set-kubelet-tls-security-profile created
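Note that tlsSecurityProfile takes the OpenSSL-style cipher names and the operator translates them into the IANA names (TLS_ECDHE_...) that end up in /etc/kubernetes/kubelet.conf, matching the list shown in comment 11. Once the master pool finishes updating, the result can be spot-checked per the rule instructions, e.g.:
$ oc debug node/<node> -- chroot /host grep -A 10 tlsCipherSuites /etc/kubernetes/kubelet.conf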

Comment 18 errata-xmlrpc 2021-07-07 11:29:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Compliance Operator version 0.1.35 for OpenShift Container Platform 4.6-4.8), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2652

