Bug 1954572
| Summary: | The proxy-kubeconfig related cis rules show incorrect description, rationale and instructions | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | xiyuan |
| Component: | Compliance Operator | Assignee: | Jakub Hrozek <jhrozek> |
| Status: | CLOSED ERRATA | QA Contact: | xiyuan |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.8 | CC: | josorior, mrogers, xiyuan |
| Target Milestone: | --- | | |
| Target Release: | 4.8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-07-07 11:29:56 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
This should be easy enough to fix

The PR was merged -> MODIFIED

Per comment https://bugzilla.redhat.com/show_bug.cgi?id=1954572#c6 and the results below, verification passed with 4.8.0-0.nightly-2021-05-21-233425 and compliance-operator.v0.1.32.

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-05-21-233425   True        False         4h26m   Cluster version is 4.8.0-0.nightly-2021-05-21-233425

$ ./oc-compliance view-result ocp4-cis-file-groupowner-proxy-kubeconfig
  Title:               Verify Group Who Owns The Worker Proxy Kubeconfig File
  Status:              MANUAL
  Severity:            medium
  Description:         To ensure the Kubernetes ConfigMap is mounted into the
                       sdn daemonset pods with the correct ownership, make sure
                       that the sdn-config ConfigMap is mounted using a
                       ConfigMap at the /config mount point and that the sdn
                       container points to that configuration using the
                       --proxy-config command line option. Run:
                         oc get -nopenshift-sdn ds sdn -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "sdn")'
                       and ensure the --proxy-config parameter points to
                       /config/kube-proxy-config.yaml and that the config mount
                       point is mounted from the sdn-config ConfigMap.
  Rationale:           The kubeconfig file for kube-proxy provides permissions
                       to the kube-proxy service. The proxy kubeconfig file
                       contains information about the administrative
                       configuration of the OpenShift cluster that is
                       configured on the system. Protection of this file is
                       critical for OpenShift security. The file is provided
                       via a ConfigMap mount, so the kubelet itself makes sure
                       that the file permissions are appropriate for the
                       container taking it into use.
  Instructions:        Run the following command:
                         $ for i in $(oc get pods -n openshift-sdn -l app=sdn -oname)
                           do
                             oc exec -n openshift-sdn $i -- stat -Lc %U:%G /config/kube-proxy-config.yaml
                           done
                       The output should be root:root
  CIS-OCP Controls:    4.1.4
  Available Fix:       No
  Result Object Name:  ocp4-cis-file-groupowner-proxy-kubeconfig
  Rule Object Name:    ocp4-file-groupowner-proxy-kubeconfig
  Remediation Created: No

$ for i in $(oc get pods -n openshift-sdn -l app=sdn -oname); do oc exec -n openshift-sdn $i -- stat -Lc %U:%G /config/kube-proxy-config.yaml; done
Defaulting container name to sdn.
Use 'oc describe pod/sdn-2hb6h -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-gtwwp -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-mbsdw -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-nln66 -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-pzht9 -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-wx64v -n openshift-sdn' to see all of the containers in this pod.
root:root

$ ./oc-compliance view-result ocp4-cis-file-owner-proxy-kubeconfig
  Title:               Verify User Who Owns The Worker Proxy Kubeconfig File
  Status:              MANUAL
  Severity:            medium
  Description:         To ensure the Kubernetes ConfigMap is mounted into the
                       sdn daemonset pods with the correct ownership, make sure
                       that the sdn-config ConfigMap is mounted using a
                       ConfigMap at the /config mount point and that the sdn
                       container points to that configuration using the
                       --proxy-config command line option. Run:
                         oc get -nopenshift-sdn ds sdn -ojson | jq -r '.spec.template.spec.containers[] | select(.name == "sdn")'
                       and ensure the --proxy-config parameter points to
                       /config/kube-proxy-config.yaml and that the config mount
                       point is mounted from the sdn-config ConfigMap.
  Rationale:           The kubeconfig file for kube-proxy provides permissions
                       to the kube-proxy service. The proxy kubeconfig file
                       contains information about the administrative
                       configuration of the OpenShift cluster that is
                       configured on the system. Protection of this file is
                       critical for OpenShift security. The file is provided
                       via a ConfigMap mount, so the kubelet itself makes sure
                       that the file permissions are appropriate for the
                       container taking it into use.
  Instructions:        Run the following command:
                         $ for i in $(oc get pods -n openshift-sdn -l app=sdn -oname)
                           do
                             oc exec -n openshift-sdn $i -- stat -Lc %U:%G /config/kube-proxy-config.yaml
                           done
                       The output should be root:root
  CIS-OCP Controls:    4.1.4
  Available Fix:       No
  Result Object Name:  ocp4-cis-file-owner-proxy-kubeconfig
  Rule Object Name:    ocp4-file-owner-proxy-kubeconfig
  Remediation Created: No

$ for i in $(oc get pods -n openshift-sdn -l app=sdn -oname); do oc exec -n openshift-sdn $i -- stat -Lc %U:%G /config/kube-proxy-config.yaml; done
Defaulting container name to sdn.
Use 'oc describe pod/sdn-2hb6h -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-gtwwp -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-mbsdw -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-nln66 -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-pzht9 -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-wx64v -n openshift-sdn' to see all of the containers in this pod.
root:root

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Compliance Operator version 0.1.35 for OpenShift Container Platform 4.6-4.8), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2652
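The manual verification above loops over the sdn pods with `stat -Lc %U:%G` and eyeballs the interleaved output. As an illustration only (not part of the operator or of `oc-compliance`), a small helper that parses that loop's output and flags any non-root ownership might look like this; the sample text is taken from the run above:

```python
def check_ownership(output: str, expected: str = "root:root") -> list:
    """Return the list of ownership values in the stat-loop output that
    do not match `expected` (an empty list means every pod passed).

    `oc exec` interleaves "Defaulting container name ..." and
    "Use 'oc describe ...'" notices with the real stat output, so those
    informational lines are skipped.
    """
    bad = []
    for line in output.splitlines():
        line = line.strip()
        # Skip oc's informational chatter and blank lines.
        if not line or line.startswith(("Defaulting", "Use '")):
            continue
        if line != expected:
            bad.append(line)
    return bad


sample = """\
Defaulting container name to sdn.
Use 'oc describe pod/sdn-2hb6h -n openshift-sdn' to see all of the containers in this pod.
root:root
Defaulting container name to sdn.
Use 'oc describe pod/sdn-gtwwp -n openshift-sdn' to see all of the containers in this pod.
root:root
"""

print(check_ownership(sample))  # -> [] : all sampled pods report root:root
```

This is only a convenience for reading the MANUAL check's output; the check itself remains the `oc exec ... stat` loop quoted in the rule instructions.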
Description of problem:
The proxy-kubeconfig related cis rules show incorrect description, rationale and instructions

Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-04-26-151924

Steps to Reproduce:
1. Install compliance operator
2. Create scansettingbinding:
# oc create -f - << EOF
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: my-ssb-r
profiles:
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  - name: ocp4-cis-node
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
EOF

Actual results:
The proxy-kubeconfig related cis rules show incorrect description, rationale
and instructions. Viewing the results for the proxy-kubeconfig related cis
rules, the Description, Rationale and Instructions check how a file looks on
the filesystem, which is not helpful here.

# oc get compliancecheckresults.compliance.openshift.io | grep proxy-kubeconfig
ocp4-cis-file-groupowner-proxy-kubeconfig    MANUAL   medium
ocp4-cis-file-owner-proxy-kubeconfig         MANUAL   medium
ocp4-cis-file-permissions-proxy-kubeconfig   FAIL     medium

# oc compliance view-result ocp4-cis-file-permissions-proxy-kubeconfig
  Title:               Verify Permissions on the Worker Proxy Kubeconfig File
  Status:              FAIL
  Severity:            medium
  Description:         To properly set the permissions of
                       /config/kube-proxy-config.yaml, run the command:
                         $ sudo chmod 0644 /config/kube-proxy-config.yaml
  Rationale:           The kube-proxy kubeconfig file controls various
                       parameters of the kube-proxy service in the worker node.
                       If used, you should restrict its file permissions to
                       maintain the integrity of the file. The file should be
                       writable by only the administrators on the system.
                       The kube-proxy runs with the kubeconfig parameters
                       configured as a Kubernetes ConfigMap instead of a file.
                       In this case, there is no proxy kubeconfig file. But
                       appropriate permissions still need to be set in the
                       ConfigMap mount.
  Instructions:        To check the permissions of
                       /config/kube-proxy-config.yaml, you'll need to log into
                       a node in the cluster. As a user with administrator
                       privileges, log into a node in the relevant pool:
                         $ oc debug node/$NODE_NAME
                       At the sh-4.4# prompt, run:
                         # chroot /host
                       Then, run the command:
                         $ ls -l /config/kube-proxy-config.yaml
                       If properly configured, the output should indicate the
                       following permissions: -rw-r--r--
  CIS-OCP Controls:    4.1.3
  Available Fix:       No
  Result Object Name:  ocp4-cis-file-permissions-proxy-kubeconfig
  Rule Object Name:    ocp4-file-permissions-proxy-kubeconfig
  Remediation Created: No

# oc compliance view-result ocp4-cis-file-groupowner-proxy-kubeconfig
  Title:               Verify Group Who Owns The Worker Proxy Kubeconfig File
  Status:              MANUAL
  Severity:            medium
  Description:         To properly set the group owner of
                       /config/kube-proxy-config.yaml, run the command:
                         $ sudo chgrp root /config/kube-proxy-config.yaml
  Rationale:           The kubeconfig file for kube-proxy provides permissions
                       to the kube-proxy service. The proxy kubeconfig file
                       contains information about the administrative
                       configuration of the OpenShift cluster that is
                       configured on the system. Protection of this file is
                       critical for OpenShift security.
                       The file is provided via a ConfigMap mount, so the
                       kubelet itself makes sure that the file permissions are
                       appropriate for the container taking it into use.
  Instructions:        To check the group ownership of
                       /config/kube-proxy-config.yaml, you'll need to log into
                       a node in the cluster. As a user with administrator
                       privileges, log into a node in the relevant pool:
                         $ oc debug node/$NODE_NAME
                       At the sh-4.4# prompt, run:
                         # chroot /host
                       Then, run the command:
                         $ ls -lL /config/kube-proxy-config.yaml
                       If properly configured, the output should indicate the
                       following group-owner: root
  CIS-OCP Controls:    4.1.4
  Available Fix:       No
  Result Object Name:  ocp4-cis-file-groupowner-proxy-kubeconfig
  Rule Object Name:    ocp4-file-groupowner-proxy-kubeconfig
  Remediation Created: No

# oc compliance view-result ocp4-cis-file-owner-proxy-kubeconfig
  Title:               Verify User Who Owns The Worker Proxy Kubeconfig File
  Status:              MANUAL
  Severity:            medium
  Description:         To properly set the owner of
                       /config/kube-proxy-config.yaml, run the command:
                         $ sudo chown root /config/kube-proxy-config.yaml
  Rationale:           The kubeconfig file for kube-proxy provides permissions
                       to the kube-proxy service. The proxy kubeconfig file
                       contains information about the administrative
                       configuration of the OpenShift cluster that is
                       configured on the system. Protection of this file is
                       critical for OpenShift security.
                       The file is provided via a ConfigMap mount, so the
                       kubelet itself makes sure that the file permissions are
                       appropriate for the container taking it into use.
  Instructions:        To check the ownership of
                       /config/kube-proxy-config.yaml, you'll need to log into
                       a node in the cluster. As a user with administrator
                       privileges, log into a node in the relevant pool:
                         $ oc debug node/$NODE_NAME
                       At the sh-4.4# prompt, run:
                         # chroot /host
                       Then, run the command:
                         $ ls -lL /config/kube-proxy-config.yaml
                       If properly configured, the output should indicate the
                       following owner: root
  CIS-OCP Controls:    4.1.4
  Available Fix:       No
  Result Object Name:  ocp4-cis-file-owner-proxy-kubeconfig
  Rule Object Name:    ocp4-file-owner-proxy-kubeconfig
  Remediation Created: No

Expected results:
The proxy-kubeconfig related cis rules are somewhat special: they should not
check how a file looks on the filesystem, but should instead check how a
ConfigMap is mounted into a pod.

Additional info:
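The expected check, inspecting how the ConfigMap is mounted into the pod rather than how a file looks on disk, can be sketched roughly as follows. This is only an illustration of the logic behind the corrected rule text, not the rule's actual implementation; the container name (sdn), ConfigMap name (sdn-config), mount point (/config) and --proxy-config flag come from the `oc get -nopenshift-sdn ds sdn -ojson | jq ...` command quoted in the fixed description, and the sample daemonset JSON is a made-up minimal fixture:

```python
def proxy_config_mounted_ok(ds: dict) -> bool:
    """Given a daemonset object (as parsed JSON), check that the sdn
    container points kube-proxy at /config/kube-proxy-config.yaml and
    that the /config mount point is backed by the sdn-config ConfigMap."""
    spec = ds["spec"]["template"]["spec"]
    sdn = next(c for c in spec["containers"] if c["name"] == "sdn")

    # 1. The container must be started with --proxy-config pointing at
    #    /config/kube-proxy-config.yaml.
    cmdline = " ".join(sdn.get("command", []) + sdn.get("args", []))
    if "--proxy-config" not in cmdline:
        return False
    if "/config/kube-proxy-config.yaml" not in cmdline:
        return False

    # 2. The /config mount point must come from the sdn-config ConfigMap.
    cm_volumes = {v["name"] for v in spec.get("volumes", [])
                  if v.get("configMap", {}).get("name") == "sdn-config"}
    return any(m["mountPath"] == "/config" and m["name"] in cm_volumes
               for m in sdn.get("volumeMounts", []))


# Minimal made-up fixture with the shape the check expects.
sample_ds = {"spec": {"template": {"spec": {
    "containers": [{
        "name": "sdn",
        "command": ["/bin/sh", "-c",
                    "exec openshift-sdn --proxy-config=/config/kube-proxy-config.yaml"],
        "volumeMounts": [{"name": "config", "mountPath": "/config"}],
    }],
    "volumes": [{"name": "config", "configMap": {"name": "sdn-config"}}],
}}}}

print(proxy_config_mounted_ok(sample_ds))  # -> True
```

In a real cluster the input would come from `oc get -nopenshift-sdn ds sdn -ojson`; the point is that the rule should examine the daemonset spec, not a file path on a node.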