Bug 2082431
Summary:          The remediation doesn’t work for rule ocp4-kubelet-configure-tls-cipher-suites
Product:          OpenShift Container Platform
Component:        Compliance Operator
Version:          4.11
Status:           CLOSED ERRATA
Severity:         high
Priority:         high
Reporter:         xiyuan
Assignee:         Vincent Shen <wenshen>
CC:               jmittapa, lbragsta, mrogers, suprs, wenshen, xiyuan
Target Milestone: ---
Target Release:   ---
Hardware:         Unspecified
OS:               Unspecified
Type:             Bug
Last Closed:      2022-06-06 14:39:50 UTC
Description (xiyuan, 2022-05-06 06:49:17 UTC)
It seems like the issue is caused by https://github.com/openshift/compliance-operator/pull/814, as initially we were not expecting the rendered MachineConfig to use base64 encoding.

Hi Vincent,

The base64 decoder issue is for OVN only. However, the MCP pause issue for remediation (the pool always needs a manual unpause, or it stays stuck in the paused state) is a common issue that can be reproduced easily on all platforms. Could you help raise the priority of this bug? Thanks.

Hi Xiyuan,

I have suggested a patch PR here: https://github.com/ComplianceAsCode/compliance-operator/pull/36. I will discuss this with the team to see if we can have this in the upcoming release.

Best,
Vincent

Verification passed with 4.11.0-0.nightly-2022-05-18-171831 and the latest CO code.

# git log | head
commit fb0f0469cb50a89158b193e928d856f59c0e14b7
Merge: 3a94f273 aa900230
Author: Juan Osorio Robles <jaosorior>
Date:   Tue May 17 12:17:35 2022 +0300

    Merge pull request #36 from Vincent056/bugfix_kc

    Bug 2082431: Fix MachineConfig base64 encoding issue on OVN cluster

commit aa9002305b2f857151cdd269190a92315bf1018b

$ make deploy-local
...
deployment.apps/compliance-operator created
role.rbac.authorization.k8s.io/compliance-operator created
clusterrole.rbac.authorization.k8s.io/compliance-operator created
role.rbac.authorization.k8s.io/resultscollector created
role.rbac.authorization.k8s.io/api-resource-collector created
role.rbac.authorization.k8s.io/resultserver created
role.rbac.authorization.k8s.io/remediation-aggregator created
clusterrole.rbac.authorization.k8s.io/remediation-aggregator created
role.rbac.authorization.k8s.io/rerunner created
role.rbac.authorization.k8s.io/profileparser created
clusterrole.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/compliance-operator created
clusterrolebinding.rbac.authorization.k8s.io/compliance-operator created
rolebinding.rbac.authorization.k8s.io/resultscollector created
rolebinding.rbac.authorization.k8s.io/remediation-aggregator created
clusterrolebinding.rbac.authorization.k8s.io/remediation-aggregator created
clusterrolebinding.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/api-resource-collector created
rolebinding.rbac.authorization.k8s.io/rerunner created
rolebinding.rbac.authorization.k8s.io/profileparser created
rolebinding.rbac.authorization.k8s.io/resultserver created
serviceaccount/compliance-operator created
serviceaccount/resultscollector created
serviceaccount/remediation-aggregator created
serviceaccount/rerunner created
serviceaccount/api-resource-collector created
serviceaccount/profileparser created
serviceaccount/resultserver created
clusterrolebinding.rbac.authorization.k8s.io/compliance-operator-metrics created
clusterrole.rbac.authorization.k8s.io/compliance-operator-metrics created
W0519 15:50:50.428000   25441 warnings.go:70] would violate PodSecurity "restricted:latest": unrestricted capabilities (container "compliance-operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "compliance-operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "compliance-operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/compliance-operator triggers updated

$ oc get pod
NAME                                              READY   STATUS    RESTARTS      AGE
compliance-operator-8595cc98df-qh2ml              1/1     Running   1 (60m ago)   61m
ocp4-openshift-compliance-pp-7599c78b-642x8       1/1     Running   0             59m
rhcos4-openshift-compliance-pp-758b8f6d54-m4z76   1/1     Running   0             59m
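(Side note: to double-check which image a locally deployed operator is running, a standard jsonpath query such as the following can be used; this is a sketch, not part of the original verification.)

$ oc get deployment compliance-operator -n openshift-compliance \
    -o jsonpath='{.spec.template.spec.containers[0].image}'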
$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: my-ssb-r
> profiles:
>   - name: ocp4-moderate-node
>     kind: Profile
>     apiGroup: compliance.openshift.io/v1alpha1
> settingsRef:
>   name: default-auto-apply
>   kind: ScanSetting
>   apiGroup: compliance.openshift.io/v1alpha1
> EOF
scansettingbinding.compliance.openshift.io/my-ssb-r created

$ oc get suite -w
NAME       PHASE         RESULT
my-ssb-r   LAUNCHING     NOT-AVAILABLE
my-ssb-r   LAUNCHING     NOT-AVAILABLE
my-ssb-r   RUNNING       NOT-AVAILABLE
my-ssb-r   RUNNING       NOT-AVAILABLE
my-ssb-r   AGGREGATING   NOT-AVAILABLE
my-ssb-r   AGGREGATING   NOT-AVAILABLE
my-ssb-r   DONE          NON-COMPLIANT
my-ssb-r   DONE          NON-COMPLIANT

$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-16cb661b1f9966208976f789a1ac8941   False     True       False      3              2                   2                     0                      5h33m
worker   rendered-worker-29d920e599951fbb9ab779b6f26468b7   True      False      False      3              3                   3                     0                      5h33m
worker   rendered-worker-29d920e599951fbb9ab779b6f26468b7   True      False      False      3              3                   3                     0                      5h34m
master   rendered-master-16cb661b1f9966208976f789a1ac8941   False     True       False      3              2                   2                     0                      5h34m
worker   rendered-worker-29d920e599951fbb9ab779b6f26468b7   True      False      False      3              3                   3                     0                      5h34m
master   rendered-master-16cb661b1f9966208976f789a1ac8941   False     True       False      3              2                   2                     0                      5h34m
master   rendered-master-16cb661b1f9966208976f789a1ac8941   False     True       False      3              2                   2                     0                      5h35m
worker   rendered-worker-29d920e599951fbb9ab779b6f26468b7   True      False      False      3              3                   3                     0                      5h35m

$ oc compliance rerun-now scansettingbinding my-ssb-r
Rerunning scans from 'my-ssb-r': ocp4-moderate-node-master, ocp4-moderate-node-worker
Re-running scan 'openshift-compliance/ocp4-moderate-node-master'
Re-running scan 'openshift-compliance/ocp4-moderate-node-worker'

$ oc get suite -w
NAME       PHASE         RESULT
my-ssb-r   LAUNCHING     NOT-AVAILABLE
my-ssb-r   LAUNCHING     NOT-AVAILABLE
my-ssb-r   RUNNING       NOT-AVAILABLE
my-ssb-r   RUNNING       NOT-AVAILABLE
my-ssb-r   AGGREGATING   NOT-AVAILABLE
my-ssb-r   AGGREGATING   NOT-AVAILABLE
my-ssb-r   DONE          NON-COMPLIANT
my-ssb-r   DONE          NON-COMPLIANT
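(With the default-auto-apply ScanSetting, remediations should end up in the Applied state on their own. A quick filter for anything still unapplied, a sketch assuming jq is available and using the ComplianceRemediation status.applicationState field:)

$ oc get complianceremediations -n openshift-compliance -o json \
    | jq -r '.items[] | select(.status.applicationState != "Applied") | .metadata.name'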
$ oc get cr
NAME                                                                                   STATE
ocp4-moderate-node-master-directory-access-var-log-kube-audit                          Applied
ocp4-moderate-node-master-directory-access-var-log-oauth-audit                         Applied
ocp4-moderate-node-master-directory-access-var-log-ocp-audit                           Applied
ocp4-moderate-node-master-kubelet-configure-event-creation                             Applied
ocp4-moderate-node-master-kubelet-configure-tls-cipher-suites                          Applied
ocp4-moderate-node-master-kubelet-enable-iptables-util-chains                          Applied
ocp4-moderate-node-master-kubelet-enable-protect-kernel-defaults                       Applied
ocp4-moderate-node-master-kubelet-enable-protect-kernel-sysctl                         Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available       Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-imagefs-available-1     Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree      Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree-1    Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-memory-available        Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-memory-available-1      Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-nodefs-available        Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-nodefs-available-1      Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-nodefs-inodesfree       Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-hard-nodefs-inodesfree-1     Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-imagefs-available       Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-imagefs-available-1     Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-imagefs-available-2     Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree      Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree-1    Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree-2    Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-memory-available        Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-memory-available-1      Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-memory-available-2      Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-nodefs-available        Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-nodefs-available-1      Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-nodefs-available-2      Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree       Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree-1     Applied
ocp4-moderate-node-master-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree-2     Applied
ocp4-moderate-node-worker-kubelet-configure-event-creation                             Applied
ocp4-moderate-node-worker-kubelet-configure-tls-cipher-suites                          Applied
ocp4-moderate-node-worker-kubelet-enable-iptables-util-chains                          Applied
ocp4-moderate-node-worker-kubelet-enable-protect-kernel-defaults                       Applied
ocp4-moderate-node-worker-kubelet-enable-protect-kernel-sysctl                         Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-imagefs-available       Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-imagefs-available-1     Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree      Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-imagefs-inodesfree-1    Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-memory-available        Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-memory-available-1      Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-nodefs-available        Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-nodefs-available-1      Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-nodefs-inodesfree       Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-hard-nodefs-inodesfree-1     Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-imagefs-available       Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-imagefs-available-1     Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-imagefs-available-2     Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree      Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree-1    Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-imagefs-inodesfree-2    Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-memory-available        Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-memory-available-1      Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-memory-available-2      Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-nodefs-available        Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-nodefs-available-1      Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-nodefs-available-2      Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree       Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree-1     Applied
ocp4-moderate-node-worker-kubelet-eviction-thresholds-set-soft-nodefs-inodesfree-2     Applied

$ oc logs pod/compliance-operator-8595cc98df-h8xpr --all-containers | grep -i error
{"level":"info","ts":1652956208.4256825,"logger":"metrics","msg":"Registering metric: compliance_scan_error_total"}
{"level":"error","ts":1652956584.335884,"logger":"suitectrl","msg":"Could not pause pool","Request.Namespace":"openshift-compliance","Request.Name":"my-ssb-r","MachineConfigPool.Name":"worker","error":"Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io \"worker\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/openshift/compliance-operator/pkg/controller/compliancesuite.(*ReconcileComplianceSuite).applyRemediation\n\t/go/src/github.com/openshift/compliance-operator/pkg/controller/compliancesuite/compliancesuite_controller.go:578\ngithub.com/openshift/compliance-operator/pkg/controller/compliancesuite.(*ReconcileComplianceSuite).reconcileRemediations\n\t/go/src/github.com/openshift/compliance-operator/pkg/controller/compliancesuite/compliancesuite_controller.go:486\ngithub.com/openshift/compliance-operator/pkg/controller/compliancesuite.(*ReconcileComplianceSuite).Reconcile\n\t/go/src/github.com/openshift/compliance-operator/pkg/controller/compliancesuite/compliancesuite_controller.go:180\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/openshift/compliance-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/compliance-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/openshift/compliance-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1652956584.340368,"logger":"suitectrl","msg":"Retriable error","Request.Namespace":"openshift-compliance","Request.Name":"my-ssb-r","error":"Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io \"worker\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/openshift/compliance-operator/pkg/controller/compliancesuite.(*ReconcileComplianceSuite).Reconcile\n\t/go/src/github.com/openshift/compliance-operator/pkg/controller/compliancesuite/compliancesuite_controller.go:181\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/src/github.com/openshift/compliance-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/compliance-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/openshift/compliance-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}
{"level":"error","ts":1652956584.340567,"logger":"controller","msg":"Reconciler error","controller":"compliancesuite-controller","name":"my-ssb-r","namespace":"openshift-compliance","error":"Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io \"worker\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/compliance-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/src/github.com/openshift/compliance-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/compliance-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}
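(The operator logs these pause conflicts as "Retriable error", and the reconcile eventually succeeded here. If a pool were ever left stuck in the paused state, which is the symptom originally reported, a manual unpause along these lines would clear it; a sketch, with "worker" standing in for the affected pool:)

$ oc patch mcp worker --type merge -p '{"spec":{"paused":false}}'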
Verification passed with 4.11.0-0.nightly-2022-05-25-193227 and compliance-operator.v0.1.52.

$ oc get ip
NAME            CSV                           APPROVAL    APPROVED
install-prbqr   compliance-operator.v0.1.52   Automatic   true

$ oc get csv
NAME                           DISPLAY                            VERSION   REPLACES   PHASE
compliance-operator.v0.1.52    Compliance Operator                0.1.52               Succeeded
elasticsearch-operator.5.4.2   OpenShift Elasticsearch Operator   5.4.2                Succeeded
1. Create an SSB with a TailoredProfile:

$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: TailoredProfile
> metadata:
>   name: test-node
>   namespace: openshift-compliance
> spec:
>   description: set value for ocp4-kubelet-configure-tls-cipher-suites
>   title: set value for ocp4-kubelet-configure-tls-cipher-suites
>   enableRules:
>     - name: ocp4-kubelet-configure-tls-cipher-suites
>       rationale: Node
> EOF
tailoredprofile.compliance.openshift.io/test-node created

$ oc create -f - <<EOF
> apiVersion: compliance.openshift.io/v1alpha1
> kind: ScanSettingBinding
> metadata:
>   name: ocp4-kubelet-configure-tls-cipher-test
> profiles:
>   - apiGroup: compliance.openshift.io/v1alpha1
>     kind: TailoredProfile
>     name: test-node
> settingsRef:
>   apiGroup: compliance.openshift.io/v1alpha1
>   kind: ScanSetting
>   name: default-auto-apply
> EOF
scansettingbinding.compliance.openshift.io/ocp4-kubelet-configure-tls-cipher-test created

$ oc get suite -w
NAME                                     PHASE         RESULT
ocp4-kubelet-configure-tls-cipher-test   RUNNING       NOT-AVAILABLE
ocp4-kubelet-configure-tls-cipher-test   RUNNING       NOT-AVAILABLE
ocp4-kubelet-configure-tls-cipher-test   AGGREGATING   NOT-AVAILABLE
ocp4-kubelet-configure-tls-cipher-test   AGGREGATING   NOT-AVAILABLE
ocp4-kubelet-configure-tls-cipher-test   DONE          NON-COMPLIANT
ocp4-kubelet-configure-tls-cipher-test   DONE          NON-COMPLIANT

$ oc get mc 99-worker-generated-kubelet -o=jsonpath={.spec.config.storage.files[0].contents.source}
data:text/plain;charset=utf-8;base64,ewogICJraW5kIjogIkt1YmVsZXRDb25maWd1cmF0aW9uIiwKICAiYXBpVmVyc2lvbiI6ICJrdWJlbGV0LmNvbmZpZy5rOHMuaW8vdjFiZXRhMSIsCiAgInN0YXRpY1BvZFBhdGgiOiAiL2V0Yy9rdWJlcm5ldGVzL21hbmlmZXN0cyIsCiAgInN5bmNGcmVxdWVuY3kiOiAiMHMiLAogICJmaWxlQ2hlY2tGcmVxdWVuY3kiOiAiMHMiLAogICJodHRwQ2hlY2tGcmVxdWVuY3kiOiAiMHMiLAogICJ0bHNDaXBoZXJTdWl0ZXMiOiBbCiAgICAiVExTX0VDREhFX1JTQV9XSVRIX0FFU18yNTZfR0NNX1NIQTM4NCIsCiAgICAiVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0IiwKICAgICJUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2IiwKICAgICJUTFNfRUNESEVfRUNEU0FfV0lUSF9BRVNfMTI4X0dDTV9TSEEyNTYiCiAgXSwKICAidGxzTWluVmVyc2lvbiI6ICJWZXJzaW9uVExTMTIiLAogICJyb3RhdGVDZXJ0aWZpY2F0ZXMiOiB0cnVlLAogICJzZXJ2ZXJUTFNCb290c3RyYXAiOiB0cnVlLAogICJhdXRoZW50aWNhdGlvbiI6IHsKICAgICJ4NTA5IjogewogICAgICAiY2xpZW50Q0FGaWxlIjogIi9ldGMva3ViZXJuZXRlcy9rdWJlbGV0LWNhLmNydCIKICAgIH0sCiAgICAid2ViaG9vayI6IHsKICAgICAgImNhY2hlVFRMIjogIjBzIgogICAgfSwKICAgICJhbm9ueW1vdXMiOiB7CiAgICAgICJlbmFibGVkIjogZmFsc2UKICAgIH0KICB9LAogICJhdXRob3JpemF0aW9uIjogewogICAgIndlYmhvb2siOiB7CiAgICAgICJjYWNoZUF1dGhvcml6ZWRUVEwiOiAiMHMiLAogICAgICAiY2FjaGVVbmF1dGhvcml6ZWRUVEwiOiAiMHMiCiAgICB9CiAgfSwKICAiY2x1c3RlckRvbWFpbiI6ICJjbHVzdGVyLmxvY2FsIiwKICAiY2x1c3RlckROUyI6IFsKICAgICIxNzIuMzAuMC4xMCIKICBdLAogICJzdHJlYW1pbmdDb25uZWN0aW9uSWRsZVRpbWVvdXQiOiAiMHMiLAogICJub2RlU3RhdHVzVXBkYXRlRnJlcXVlbmN5IjogIjBzIiwKICAibm9kZVN0YXR1c1JlcG9ydEZyZXF1ZW5jeSI6ICIwcyIsCiAgImltYWdlTWluaW11bUdDQWdlIjogIjBzIiwKICAidm9sdW1lU3RhdHNBZ2dQZXJpb2QiOiAiMHMiLAogICJzeXN0ZW1DZ3JvdXBzIjogIi9zeXN0ZW0uc2xpY2UiLAogICJjZ3JvdXBSb290IjogIi8iLAogICJjZ3JvdXBEcml2ZXIiOiAic3lzdGVtZCIsCiAgImNwdU1hbmFnZXJSZWNvbmNpbGVQZXJpb2QiOiAiMHMiLAogICJydW50aW1lUmVxdWVzdFRpbWVvdXQiOiAiMHMiLAogICJtYXhQb2RzIjogMjUwLAogICJwb2RQaWRzTGltaXQiOiA0MDk2LAogICJrdWJlQVBJUVBTIjogNTAsCiAgImt1YmVBUElCdXJzdCI6IDEwMCwKICAic2VyaWFsaXplSW1hZ2VQdWxscyI6IGZhbHNlLAogICJldmljdGlvblByZXNzdXJlVHJhbnNpdGlvblBlcmlvZCI6ICIwcyIsCiAgImZlYXR1cmVHYXRlcyI6IHsKICAgICJBUElQcmlvcml0eUFuZEZhaXJuZXNzIjogdHJ1ZSwKICAgICJDU0lNaWdyYXRpb25BV1MiOiBmYWxzZSwKICAgICJDU0lNaWdyYXRpb25BenVyZUZpbGUiOiBmYWxzZSwKICAgICJDU0lNaWdyYXRpb25HQ0UiOiBmYWxzZSwKICAgICJDU0lNaWdyYXRpb252U3BoZXJlIjogZmFsc2UsCiAgICAiRG93bndhcmRBUElIdWdlUGFnZXMiOiB0cnVlLAogICAgIlBvZFNlY3VyaXR5IjogdHJ1ZSwKICAgICJSb3RhdGVLdWJlbGV0U2VydmVyQ2VydGlmaWNhdGUiOiB0cnVlCiAgfSwKICAibWVtb3J5U3dhcCI6IHt9LAogICJjb250YWluZXJMb2dNYXhTaXplIjogIjUwTWkiLAogICJzeXN0ZW1SZXNlcnZlZCI6IHsKICAgICJlcGhlbWVyYWwtc3RvcmFnZSI6ICIxR2kiCiAgfSwKICAibG9nZ2luZyI6IHsKICAgICJmbHVzaEZyZXF1ZW5jeSI6IDAsCiAgICAidmVyYm9zaXR5IjogMCwKICAgICJvcHRpb25zIjogewogICAgICAianNvbiI6IHsKICAgICAgICAiaW5mb0J1ZmZlclNpemUiOiAiMCIKICAgICAgfQogICAgfQogIH0sCiAgInNodXRkb3duR3JhY2VQZXJpb2QiOiAiMHMiLAogICJzaHV0ZG93bkdyYWNlUGVyaW9kQ3JpdGljYWxQb2RzIjogIjBzIgp9Cg==
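(The source above is a data URL, which is exactly the encoding the fix had to handle. Its payload can be inspected by stripping the prefix and base64-decoding; a sketch:)

$ oc get mc 99-worker-generated-kubelet \
    -o=jsonpath={.spec.config.storage.files[0].contents.source} \
    | sed 's/^data:text\/plain;charset=utf-8;base64,//' | base64 -d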
$ oc get mcp -w
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-88653b9010cf401816f02bbd0ca6067e   False     True       False      3              0                   0                     0                      4h21m
worker   rendered-worker-8b12364529c4327d42a8be257d60994b   False     True       False      3              0                   0                     0                      4h21m
worker   rendered-worker-8b12364529c4327d42a8be257d60994b   False     True       False      3              0                   1                     0                      4h23m
worker   rendered-worker-8b12364529c4327d42a8be257d60994b   False     True       False      3              1                   1                     0                      4h23m
...
master   rendered-master-02e4f67b541447fc98be1b4058db1142   True      False      False      3              3                   3                     0                      4h53m
worker   rendered-worker-0e244d0bc1579c3ed29531b1fb49336a   True      False      False      3              3                   3                     0                      4h53m

$ oc compliance rerun-now scansettingbindings ocp4-kubelet-configure-tls-cipher-test
Rerunning scans from 'ocp4-kubelet-configure-tls-cipher-test': test-node-master, test-node-worker
Re-running scan 'openshift-compliance/test-node-master'
Re-running scan 'openshift-compliance/test-node-worker'

$ oc get ccr
NAME                                                    STATUS   SEVERITY
test-node-master-kubelet-configure-tls-cipher-suites    PASS     medium
test-node-worker-kubelet-configure-tls-cipher-suites    PASS     medium

$ oc get ccr test-node-master-kubelet-configure-tls-cipher-suites -o=jsonpath={.instructions}
Run the following command on the kubelet node(s):
$ sudo grep tlsCipherSuites /etc/kubernetes/kubelet.conf
Verify that the set of ciphers contains only the following: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
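(The instructions above can also be followed from outside the node by running the grep through a debug pod instead of dumping the whole file; a sketch using one of this cluster's node names:)

$ oc debug node/ip-10-0-152-227.us-east-2.compute.internal -- \
    chroot /host grep -A 6 tlsCipherSuites /etc/kubernetes/kubelet.conf

The full-file dump below shows the same thing end to end: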
"Localhost") Starting pod/ip-10-0-152-227us-east-2computeinternal-debug ... To use host binaries, run `chroot /host` { "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", "staticPodPath": "/etc/kubernetes/manifests", "syncFrequency": "0s", "fileCheckFrequency": "0s", "httpCheckFrequency": "0s", "tlsCipherSuites": [ "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" ], "tlsMinVersion": "VersionTLS12", "rotateCertificates": true, "serverTLSBootstrap": true, "authentication": { "x509": { "clientCAFile": "/etc/kubernetes/kubelet-ca.crt" }, "webhook": { "cacheTTL": "0s" }, "anonymous": { "enabled": false } }, "authorization": { "webhook": { "cacheAuthorizedTTL": "0s", "cacheUnauthorizedTTL": "0s" } }, "clusterDomain": "cluster.local", "clusterDNS": [ "172.30.0.10" ], "streamingConnectionIdleTimeout": "0s", "nodeStatusUpdateFrequency": "0s", "nodeStatusReportFrequency": "0s", "imageMinimumGCAge": "0s", "volumeStatsAggPeriod": "0s", "systemCgroups": "/system.slice", "cgroupRoot": "/", "cgroupDriver": "systemd", "cpuManagerReconcilePeriod": "0s", "runtimeRequestTimeout": "0s", "maxPods": 250, "podPidsLimit": 4096, "kubeAPIQPS": 50, "kubeAPIBurst": 100, "serializeImagePulls": false, "evictionPressureTransitionPeriod": "0s", "featureGates": { "APIPriorityAndFairness": true, "CSIMigrationAWS": false, "CSIMigrationAzureFile": false, "CSIMigrationGCE": false, "CSIMigrationvSphere": false, "DownwardAPIHugePages": true, "PodSecurity": true, "RotateKubeletServerCertificate": true }, "memorySwap": {}, "containerLogMaxSize": "50Mi", "systemReserved": { "ephemeral-storage": "1Gi" }, "logging": { "flushFrequency": 0, "verbosity": 0, "options": { "json": { "infoBufferSize": "0" } } }, "shutdownGracePeriod": "0s", "shutdownGracePeriodCriticalPods": "0s" } Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Compliance Operator bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:4657 |